this post was submitted on 03 Dec 2025
1179 points (100.0% liked)
Technology
AI even ruined AI. Up until this insane hype train, ML models were specialized tools to achieve their tasks. Now the whole field is dominated by LLMs and slopgen bullshit.
Yeah, that's the annoying thing. Generative AI is actually really useful... in SPECIFIC situations. Discovering new battery tech, new medicines, etc. are all good use cases, because it's basically a parrot and a blender combined, and most of these things are rehashes of existing technologies in new and novel ways.
It is not a fucking good replacement for a search engine for questions like "Why do farts smell?". It uses way too much energy for that, and it hallucinates bullshit.
Yeah. They solved protein folding with ML a few years back. And I like using it for things like noise removal in Lightroom.
But so much of it has been focused on useless (at best) bullshit that I just want the bubble to burst already.
I agree with the general sentiment here, but just wanted to clarify that they definitely didn't "solve protein folding" yet. AlphaFold is a significant improvement in structure prediction, and it generated a lot of hype, but some of the structures I've seen it put out are total nonsense.
It's good for optimisation problems, where you have a complex high-dimensional space to search and you're solving for some measurable quality.
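That kind of search can be sketched in a few lines. This is a toy illustration only (a plain stochastic hill-climb, not any real product's method), and the `quality` objective is a made-up stand-in for whatever measurable property you'd actually be optimizing:

```python
import random

def random_search(objective, dim, iters=2000, step=0.1, seed=0):
    """Minimal stochastic hill-climb: perturb the best point found so far
    and keep the perturbation whenever it scores better."""
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(dim)]
    best_score = objective(best)
    for _ in range(iters):
        # Gaussian perturbation of every coordinate of the current best point.
        cand = [x + rng.gauss(0, step) for x in best]
        score = objective(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

# Made-up "measurable quality": higher is better, maximized at the
# all-ones point (negative squared distance from it).
def quality(x):
    return -sum((xi - 1.0) ** 2 for xi in x)

best, score = random_search(quality, dim=20)
```

Real systems replace the blind perturbation with a learned model that proposes promising candidates, which is where the ML earns its keep — but the loop (propose, measure, keep the best) is the same shape.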
A lot of top researchers have already moved on from transformers.
Yann LeCun, Meta’s longtime chief AI scientist, quit and said LLMs are a “dead end” because scaling text-only models can’t produce real intelligence, and he's not the only one who thinks so. Lots of engineers understand the limitations of LLMs.
There was already a lot of ML bullshit from the big data bubble around 2010, well before ChatGPT, together with all the fuss about data scientists. But now it's 100 times worse.