[-] jsomae@lemmy.ml 33 points 4 weeks ago* (last edited 4 weeks ago)

I'd just like to point out that, from the perspective of somebody watching AI develop for the past 10 years, completing 30% of automated tasks successfully is pretty good! Ten years ago they could not do this at all. Setting aside all the other issues with AI, I think what irritates us all about the AI hype people is the claim that these systems can be right 100% of the time -- Amazon's new CEO actually said they would be able to achieve 100% accuracy this year, lmao. But being able to do 30% of tasks successfully is already useful.

[-] Shayeta@feddit.org 30 points 4 weeks ago

It doesn't matter if you need a human to review. AI has no way of distinguishing between success and failure. Either way a human will have to review 100% of those tasks.

[-] jsomae@lemmy.ml 12 points 4 weeks ago

Right, so this is really only useful in cases where either it's vastly easier to verify an answer than to posit one, or a conventional program can verify the result of the AI's output.

[-] MangoCats@feddit.it 5 points 4 weeks ago

It’s usually vastly easier to verify an answer than posit one, if you have the patience to do so.

I'm envisioning a world where multiple AI engines create and check each others' work... the first thing they need to make work to support that scenario is probably fusion power.

[-] zbyte64@awful.systems 3 points 4 weeks ago

It’s usually vastly easier to verify an answer than posit one, if you have the patience to do so.

I usually write 3x the code to test the code itself. Verification is often harder than implementation.

[-] jsomae@lemmy.ml 4 points 3 weeks ago* (last edited 3 weeks ago)

It really depends on the context. Sometimes there are domains that require solving problems in NP, but where it turns out that most of these problems are actually not hard to solve by hand with a bit of tinkering. SAT solvers might completely fail, but humans can do it. Often it turns out that this means there's a better algorithm that can exploit commonalities in the data. But a brute force approach might just be to give it to an LLM and then verify its answer. Verifying NP problems is easy.

(This is speculation.)
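
To make the asymmetry concrete, here's a toy sketch (not anyone's real pipeline; `verify_sat` and the example clauses are made up for illustration): checking a proposed SAT assignment is a single pass over the clauses, even when producing one is hard.

```python
# Toy illustration: verifying a candidate SAT assignment is cheap,
# even though finding a satisfying assignment may be hard.
def verify_sat(clauses, assignment):
    """clauses: list of clauses, each a list of ints (positive = var is true,
    negative = var is negated). assignment: dict mapping var -> bool."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 OR NOT x2) AND (x2 OR x3)
clauses = [[1, -2], [2, 3]]
candidate = {1: True, 2: False, 3: True}  # pretend an LLM proposed this
print(verify_sat(clauses, candidate))     # True -> accept; False -> reject/retry
```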

[-] MangoCats@feddit.it 3 points 4 weeks ago

Yes, but the test code "writes itself" - the path is clear, you just have to fill in the blanks.

Writing the proper product code in the first place, that's the valuable challenge.

[-] zbyte64@awful.systems 2 points 3 weeks ago* (last edited 3 weeks ago)

Maybe it is because I started out in QA, but I have to strongly disagree. You should assume the code doesn't work until proven otherwise, AI or not. And when it doesn't work, I find it is easier to debug your own code than someone else's, and that includes AI's.

[-] MangoCats@feddit.it 2 points 3 weeks ago

I've been R&D forever, so at my level the question isn't "does the code work?" We pretty much assume that will take care of itself, eventually. Our critical question is: "is the code trying to do something valuable, or not?" We make all kinds of stuff do what the requirements call for it to do, but so often those requirements are asking for worthless or even counterproductive things...

[-] zbyte64@awful.systems 1 points 3 weeks ago* (last edited 3 weeks ago)

Literally the opposite experience when I helped material scientists with their R&D. Something breaking in production would mean people who get paid 2x more than me were suddenly unable to do their jobs. But then again, our requirements made sense because we would literally look at a manual process to automate with the engineers. What you describe sounds like hell to me. There are greener pastures.

[-] MangoCats@feddit.it 2 points 3 weeks ago

Yeah, sometimes the requirements write themselves and in those cases successful execution is "on the critical path."

Unfortunately, our requirements are filtered from our paying customers through an ever-rotating cast of Marketing and Sales characters who, nominally, are our direct customers, so we make product for them. But they rarely have any clear or consistent vision of what they want; they just know they want new stuff, that's for sure.

[-] zbyte64@awful.systems 1 points 3 weeks ago

When requirements are "Whatever" then by all means use the "Whatever" machine: https://eev.ee/blog/2025/07/03/the-rise-of-whatever/

And then look for a better gig because such an environment is going to be toxic to your skill set. The more exacting the shop, the better they pay.

[-] MangoCats@feddit.it 2 points 3 weeks ago* (last edited 3 weeks ago)

The more exacting the shop, the better they pay.

That hasn't been my experience, but it sounds like good advice anyway. My experience has been that the more profitable the parent company, the better the job security and the better the pay too. Once "in," tune in to the culture and align with the people at your level and above who seem like they'll be sticking around long term. If the company isn't financially secure, all bets are off and you should be seeking, and taking, a better offer when you can find one.

I knocked around startups for 10/22 years (depending on how you characterize that one 12 year gig that ended with everybody laid off...) The pay was good enough, but job security just wasn't on the menu. Finally, one got bought by a big fish and I've been in the belly of the beast for 11 years now.

[-] MangoCats@feddit.it 7 points 4 weeks ago

I have been using AI to write (little, near trivial) programs. It's blindingly obvious that it could be feeding this code to a compiler and catching its mistakes before giving them to me, but it doesn't... yet.

[-] wise_pancake@lemmy.ca 2 points 3 weeks ago

Agents do that loop pretty well now, and Claude now uses your IDE's LSP to help it code and catch errors in flow. I think Windsurf and Cursor do that as well.

The tooling has improved a ton in the last 3 months.
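
Roughly, that loop looks something like this (a minimal sketch; `generate_code` is a hypothetical stand-in for the model call, and `gcc -fsyntax-only` is just one way to do the checking):

```python
# Sketch of a generate -> compile -> feed-errors-back loop.
# generate_code(prompt, feedback) is a hypothetical stand-in for the model.
import os
import subprocess
import tempfile

def generate_until_it_compiles(prompt, generate_code, max_attempts=3):
    feedback = ""
    for _ in range(max_attempts):
        source = generate_code(prompt, feedback)           # untrusted output
        with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as f:
            f.write(source)
            path = f.name
        result = subprocess.run(["gcc", "-fsyntax-only", path],
                                capture_output=True, text=True)
        os.unlink(path)
        if result.returncode == 0:
            return source                                   # compiles cleanly
        feedback = result.stderr                            # feed errors back in
    return None                                             # give up
```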

[-] Outbound7404@lemmy.ml 5 points 4 weeks ago

A human can review something close to correct a lot better than starting the task from zero.

[-] DreamlandLividity@lemmy.world 10 points 4 weeks ago

It is a lot harder to notice incorrect information in review than to make sure it is correct when writing it.

[-] loonsun@sh.itjust.works 3 points 4 weeks ago

Depends on the context. There is a lot of work in the scientific-methods community on using NLP to augment traditionally fully human processes, such as thematic analysis and systematic literature reviews, and you can have validation protocols there without 100% human review.

[-] MangoCats@feddit.it 3 points 4 weeks ago

harder to notice incorrect information in review than to make sure it is correct when writing it.

That depends entirely on your writing method and attention span for review.

Most people make stuff up off the cuff and skim anything longer than 75 words when reviewing, so the bar for AI improving over that is really low.

[-] MangoCats@feddit.it 5 points 4 weeks ago

In University I knew a lot of students who knew all the things but "just don't know where to start" - if I gave them a little direction about where to start, they could run it to the finish all on their own.

[-] MangoCats@feddit.it 13 points 4 weeks ago

being able to do 30% of tasks successfully is already useful.

If you have a good testing program, it can be.

If you use AI to write the test cases...? I wouldn't fly on that airplane.

[-] jsomae@lemmy.ml 4 points 3 weeks ago
[-] outhouseperilous@lemmy.dbzer0.com 6 points 4 weeks ago
[-] jsomae@lemmy.ml 16 points 4 weeks ago

I'm not claiming that the use of AI is ethical. If you want to fight back you have to take it seriously though.

[-] outhouseperilous@lemmy.dbzer0.com 8 points 4 weeks ago

It can't do 30% of tasks correctly. It can do tasks correctly as much as 30% of the time, and since it's LLM shit you know those numbers have been more massaged than any human in history has ever been.

[-] jsomae@lemmy.ml 7 points 4 weeks ago

I meant the latter, not "it can do 30% of tasks correctly 100% of the time."

[-] outhouseperilous@lemmy.dbzer0.com 5 points 4 weeks ago

You get how that's fucking useless, generally?

[-] jsomae@lemmy.ml 7 points 4 weeks ago

Yes, that's generally useless, and it shouldn't be shoved down people's throats. But 30% accuracy still has its uses, especially if the result can be programmatically verified.

[-] outhouseperilous@lemmy.dbzer0.com 1 points 4 weeks ago* (last edited 4 weeks ago)

Less broadly useful than 20 tons of mixed-texture human shit, and more ecologically devastating.

[-] jsomae@lemmy.ml 8 points 4 weeks ago

Are you just trolling, or do you seriously not understand how something that can do a task correctly with 30% reliability can be made useful if the result can be automatically verified?
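
Here's a toy sketch of the idea (the `generate` and `verify` functions below are made-up stand-ins, not any real API): with a cheap, reliable verifier, you just retry until a candidate passes.

```python
# Toy sketch: a generator that's right ~30% of the time plus a cheap exact
# verifier. Retrying until verification passes makes the combination usable.
import random

def solve_with_retries(task, generate, verify, max_attempts=10):
    for attempt in range(1, max_attempts + 1):
        candidate = generate(task)
        if verify(task, candidate):        # only verified answers get through
            return candidate, attempt
    return None, max_attempts

generate = lambda task: task if random.random() < 0.3 else "garbage"
verify = lambda task, answer: answer == task
print(solve_with_retries("42", generate, verify))
# Expected attempts per success: 1 / 0.3 ≈ 3.3; P(success within 10) ≈ 97%.
```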

[-] outhouseperilous@lemmy.dbzer0.com 3 points 4 weeks ago* (last edited 4 weeks ago)

It's not a magical 30%, factors apply. It's not even a mind that thinks and just isn't very good.

This isn't like a magical die that gives you truth on a 5 or a 6, and lies on 1, 2, 3, 7, and four.

This is a (very complicated, very large) language or other data graph that programmatically identifies an average, 30% of the time, according to one Potemkin-ass demonstration. Which means the more feasible that verification is, the easier it is to just use a simpler, cheaper tool that will give you a better, more reliable answer much faster.

And 20 tons of human shit has uses! If you know its provenance, there's all sorts of population-level public health surveillance you can do to get ahead of disease trends! It's also got some good agricultural stuff in it -- phosphorus and such, if you can extract it.

Stop. Just please fucking stop glazing these NERVE-ass fascist shit-goblins.

[-] jsomae@lemmy.ml 6 points 4 weeks ago

I think everyone in the universe is aware of how LLMs work by now; you don't need to explain it to someone just because they think LLMs are more useful than you do.

IDK what you mean by glazing but if by "glaze" you mean "understanding the potential threat of AI to society instead of hiding under a rock and pretending it's as useless as a plastic radio," then no, I won't stop.

[-] outhouseperilous@lemmy.dbzer0.com 3 points 4 weeks ago* (last edited 3 weeks ago)

It's absolutely dangerous, but it doesn't have to work even a little to do damage; hell, it already has. Your thing just makes it sound much more capable than it is. And it is not.

Also, it's not AI.

Edit: and in a comment replying to this one, one of your fellow fanboys proved

everyone knows how they work

Wrong

[-] jsomae@lemmy.ml 5 points 4 weeks ago
[-] outhouseperilous@lemmy.dbzer0.com 2 points 4 weeks ago

No, it matters. You're pushing the lie they want pushed.

[-] jsomae@lemmy.ml 3 points 3 weeks ago

Hitler liked to paint; that doesn't make painting wrong. The fact that big tech is pushing AI isn't evidence against the utility of AI.

That it's common parlance to call machine learning "AI" these days doesn't matter to me in the slightest. Do you have a definition of "intelligence"? Do you object when pathfinding is called AI? Or STRIPS? Or bots in a video game? Dare I say it, the main difference between those AIs and LLMs is their generality -- so why not just call it GAI at this point tbh. This is a question of semantics, so it really doesn't matter to the deeper question. Doesn't matter if you call it AI or not, LLMs work the same way either way.

[-] outhouseperilous@lemmy.dbzer0.com 1 points 3 weeks ago

Semantics, of course, famously never matter.

[-] jsomae@lemmy.ml 2 points 3 weeks ago
[-] jumping_redditor@sh.itjust.works 1 points 3 weeks ago

The industrial revolution could be seen as dangerous, yet it brought the biggest increase in the standard of living in centuries.

[-] MangoCats@feddit.it 3 points 4 weeks ago

As useless as a cubicle farm full of unsupervised workers.

[-] outhouseperilous@lemmy.dbzer0.com 5 points 4 weeks ago

Those are people who could be living their lives, pursuing their ambitions, whatever. They could get some shit done. Comparison not valid.

[-] Honytawk@feddit.nl 3 points 4 weeks ago

The comparison is about the correctness of their work.

Their lives have nothing to do with it.

[-] davidagain@lemmy.world 2 points 3 weeks ago

Human lives are the most important thing of all. Profits are irrelevant compared to human lives. I get that that's not how Bezos sees the world, but he's a monstrous outlier.

[-] outhouseperilous@lemmy.dbzer0.com 1 points 3 weeks ago

So, first, bad comparison.

Second: if that's the equivalent, why not do the one that makes the wealthy let a few pennies go to fall on actual people?

[-] amelia@feddit.org 4 points 3 weeks ago

I think this comment made me finally understand the AI hate circlejerk on lemmy. If you have no clue how LLMs work and you have no idea where "AI" is coming from, it just looks like another crappy product that was thrown on the market half-ready. I guess you can only appreciate the absolutely incredible development of LLMs (and AI in general) that happened during the last ~5 years if you can actually see it in the first place.

[-] jsomae@lemmy.ml 3 points 3 weeks ago

The notion that AI is half-ready is a really poignant observation actually. It's ready for select applications only, but it's really being advertised like it's idiot-proof and ready for general use.

[-] someacnt@sh.itjust.works 2 points 3 weeks ago

Thing is, they might achieve 99% accuracy given the speed of progress. Lots of brainpower is getting poured into LLMs. Honestly, it is soo scary. It could be replacing me...

[-] jsomae@lemmy.ml 1 points 3 weeks ago

yeah, this is why I'm #fuck-ai to be honest.
