this post was submitted on 13 May 2026
480 points (100.0% liked)
Technology
AI boosters crying into their computers: "but I put 'make no mistakes' in the prompt, how is this happening!!!"
Genuine curiosity:
You’re of course allowed to be mad at techbros and capitalism, but this feels like getting mad at the technology itself, which I can’t make sense of.
It’s a wonderful and fascinating technology that has real value and purpose when used correctly.
Are people conflating the techbros with the new tech, or are we actually mad at the tech itself?
Thanks so much in advance for any constructive answers
The article isn't about the technology. This "experiment" is pure techbro fantasy.
It's not quite techbro fantasy, the actual point of the whole thing is marketing.
It's worked quite well at that, the amount of coverage they've garnered from the stunt is remarkable. Bravo, to be honest
Yeah, LLMs are useful tools, though not the silver bullet the hype proclaims them to be. The tech bros tightly controlling LLMs and chasing insane profits with their closed models, data centers, and subscriptions are the main problem. Open models like Qwen 3.6 27B that are approaching frontier capabilities while running on consumer hardware are really the only thing that gives me any hope for the future of LLMs.
LLMs are a technological dead end. They aren't interesting in the slightest, as anything they can do is already done more effectively and efficiently with other tools.
Huh?
I think people just need to reset their expectations.
I asked one for help interpreting PCI policy application (credit card regulatory stuff). I gave it the situation, and it provided a good answer that our compliance team agreed with when I asked them about it.
That saved me a lot of time. I don't see how that's a dead end. Then I had it draft a response to the person asking questions; I tuned it a little to my liking and sent it. What might have taken me an hour before took 10 minutes. This seems like a helpful thing, not a bad thing. I'm not sure what other technology would have done that.
But you had to ask your compliance team to verify it. Now repeat that after your compliance team has been laid off. Good luck.
Gemini, remind me not to ask blargh any questions.
Also, Gemini, my daughter is asking for someone to play with her. Can you run around with the feather wand and have her chase it or something?
I think LLMs are an interesting technology. Of course, the output is inherently untrustworthy, and that rules out a ton of applications tech bros are trying to cram it into.
Do you have any examples?
Google search, up until about 5 years ago. Then they enshittified it in favor of AI summaries that regularly get shit wrong.
Scientific queries. LLMs return the answer best represented in their training data, so if a system or model was recently proven wrong, they still return the old, wrong answer.
If you make very specific queries about DNA or protein sequence, they usually generate fabrications that are completely wrong.
They tend to return answers trained on the Internet, which is an uncurated pile of dogshit when it comes to science.
First it's the tech bros using a tech for something it wasn't meant for and continuously lying about it. That causes a backlash and makes people hate the tech itself, because it's being used where it causes friction.
Yeah, it really sucks, because LLM tech itself is amazing. Quantifying language and ideas into what's basically a massive queryable concept map is a huge achievement. What do the tech giants decide to do with that achievement? Shove it into every little place it doesn't belong, making everyone hate it.
Oh well, I'll keep backing up the interesting local open-source models people make and playing with them in the corner.
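The "queryable concept map" idea above can be sketched with a toy example: concepts as vectors, queried by similarity. The vectors here are hand-made stand-ins (real LLM embeddings have thousands of dimensions and come from a trained model), but the query mechanics are the same:

```python
# Toy illustration of a "queryable concept map": words mapped to
# vectors, queried by cosine similarity. These vectors are invented
# placeholders, NOT real model embeddings.
import math

embeddings = {
    "cat":    [0.90, 0.80, 0.10],
    "kitten": [0.85, 0.90, 0.15],
    "dog":    [0.80, 0.30, 0.20],
    "engine": [0.10, 0.20, 0.90],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest(word, k=2):
    """Return the k concepts closest to `word` in the map."""
    scores = [(other, cosine(embeddings[word], vec))
              for other, vec in embeddings.items() if other != word]
    return [w for w, _ in sorted(scores, key=lambda t: t[1], reverse=True)[:k]]

print(nearest("cat"))  # related animal words rank above "engine"
```

Querying by geometric closeness instead of exact keywords is what makes these maps feel like they capture "ideas" rather than strings.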
This tech sucks balls. Stop trying to justify it.
I don’t know what “sucks balls” means in terms of technology.
Does that mean it doesn’t work well, or you hate it, or something else?
It means, Fuck Off, AI.
Was your reply generated by an LLM? Because you don't seem to have understood the joke, but you have confidently gone off on one.
Real value and purpose...give one example.
Summarization
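For context on what the summarization task even is, here is a minimal non-LLM sketch: a classic extractive approach that scores sentences by word frequency and keeps the top ones. It's a crude baseline, not how LLMs summarize (they generate new abstractive text), but it shows the task programmatically:

```python
# Minimal extractive summarizer: score each sentence by the average
# frequency of its (crudely filtered) words, keep the top n sentences.
# A classic non-LLM baseline; LLM summaries are abstractive instead.
import re
from collections import Counter

def summarize(text, n=1):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(w for w in words if len(w) > 3)  # crude stopword filter

    def score(sentence):
        toks = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in toks if len(t) > 3) / (len(toks) or 1)

    ranked = sorted(sentences, key=score, reverse=True)[:n]
    # Emit selected sentences in their original order.
    return " ".join(s for s in sentences if s in ranked)

doc = ("The model answered the compliance question correctly. "
       "The compliance team confirmed the model's answer. "
       "Lunch was pizza.")
print(summarize(doc, n=2))  # drops the off-topic pizza sentence
```

The gap between this word-counting baseline and what LLMs produce is exactly why summarization keeps coming up as their clearest legitimate use.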
context window smdh let’s invest more, just a startup cost 😅😰