it's facebook, they probably issued a takedown request for all their logged peers
it is chefskiss.tiff that I keep "having conversations"[0] where people tell me that "AI" is "well-suited" to "data extraction that traditional tools struggle with" and then datapoints like this keep. coming. up.
[0] - I....am not inviting these. the promptfans just Defend their twisted little hearts away, unasked. it is tedious.
and the engineers I know who are still avoiding it work noticeably slower
yep yep! as we all know, velocity is all that matters! crank that handle, produce those features! the factory must flow!!
and you fucking know what? it's not even just me being a snide motherfucker, this rant is literally fucking supported by data:
The survey found that 75.9% of respondents (of roughly 3,000* people surveyed) are relying on AI for at least part of their job responsibilities, with code writing, summarizing information, code explanation, code optimization, and documentation making up the top five types of tasks that rely on AI assistance. Furthermore, 75% of respondents reported productivity gains from using AI.
...
As we just discussed in the above findings, roughly 75% of people report using AI as part of their jobs and report that AI makes them more productive.
And yet, in this same survey we get these findings:
if AI adoption increases by 25%, time spent doing valuable work is estimated to decrease 2.6%
if AI adoption increases by 25%, estimated throughput delivery is expected to decrease by 1.5%
if AI adoption increases by 25%, estimated delivery stability is expected to decrease by 7.2%
and that's a report sponsored and managed right from the fucking lying cloud company, no less. a report they sponsor, run, manage, and publish is openly admitting this shit. that is how much this shit doesn't fucking work the way you sell it as working.
but no, we should trust your driveby bullshit. motherfucker.
there’s a lot of people (especially here, but not only here) who have had the insight to see this being the case, but there’s also been a lot of boosters and promptfondlers (i.e. people with a vested interest) putting out claims that their precious word vomit machines are actually thinking
so while this may confirm a known doubt, rigorous scientific testing (and disproving) of the claims is nonetheless a good thing
let them eat stock
Unfortunately in this case the problem is you (as a non-frequenter of this sub (which is explicitly about dunking on these fools)) coming in with no context, although I’d agree with you in principle otherwise
Also it sounds like that game-EA thing could do with a sneer on techtakes
that looks like someone used win9x mspaint to make a flag, fucked it up, and then fucked it up even more on the saving throw
ah yes, the well-known EULA that every human has clicked on when they start searching from the prominent search box on the android device they have just purchased. the EULA which clearly lays out google's responsibilities as a de facto caretaker and distributor of information which may cause harm unto humans, and which limits their liability.
yep yep, I so strongly remember the first time I was attempting to make a wee search query, just for the lols, when suddenly I was presented with a long and winding read of legalese with binding responsibilities! oh, what a world.
.....no, wait. it's the other one.
it's more insidious: it's not giving it away. it's subverting the "hmm UBI is good actually" argument and saying that people should receive compute instead of money, with a quiet "oh and naturally someone will have to pay us for providing them that compute"
changing thermometers

look at the cute little ai, it thinks it's people! becoming a legal liability and everything! adorbs
"thought process" lol.