Fixing all the shit AI breaks is going to create a lot of jobs
Not enough
"Your father and I are for the jobs the ~~asteroid~~ AI will create."
And then I get down voted for laughing when people say that they use AI for "general research" 🙄🙄🙄
I've had people legitimately post the answer they got from ChatGPT to answer someone's question and then get annoyed when people tell them it's wrong.
"I'm not sure, but ChatGPT says...."
No, fuck off, go back to grade school.
Only yesterday, I searched for a very simple figure: the number of public service agents in a specific administrative region. This is, obviously, public information, and there is a government site where you can get it. However, I didn't know the exact site, so I searched for it on Google.
Of course, the AI summary shows up first and gives me a confident answer, accurately mirroring my exact request. However, the number seems way too low to me, so I go check the first actual search result, the aforementioned official site. Google's shitty assistant took a sentence about a subgroup of agents and presented it as the total; the real number was clearly given just before, and was about 4 times that.
This is just a tidbit of information any human with the source would have identified in a second. How the hell are we supposed to trust AI for complex stuff after that?
The AI models can be hilariously bad even on their own terms.
Yesterday I asked Gemini for a population figure (because I was too lazy to look it up myself). First I asked it:
what is the combined population of place 1, place 2 and place 3
It answered:
Unfortunately, providing an absolutely precise, up-to-the-minute combined population figure is difficult. Population numbers are constantly changing.
To find these figures, I would recommend checking the Office for National Statistics (ONS) website. They provide detailed population statistics for local authorities in England.
On a whim, I asked it again as:
combined population of place 1, place 2 and place 3
And then it gave me the answer sweet as a nut.
Apparently I was being too polite with it, I guess?
I slapped a picture of a chart into Gemini because I didn’t know what the type of chart was called but I wanted to mention it in a Uni report. I was too lazy to go looking at chart types and thought that would be quicker.
I just asked it "What kind of chart is this?" and it ignored that, started analysing the chart instead, and began stating what the chart was about and giving insights into it. It didn't tell me what kind of chart it was, even though that was the only thing I asked.
Bear in mind that I deliberately cropped out any context to stop it from doing exactly that, just in case, so all I got from it was pure hallucination. It was just making shit up that I didn't ask for.
I switched to the reasoning model and asked again, then it gave me the info I wanted.
Gotta let it take the W on that first answer, honestly.
I searched for pictures of Uranus recently. Google gave me pictures of Jupiter, and then the AI description on top chided me, telling me that what was shown were pictures of Jupiter, not Uranus. 20 years ago it would have just worked.
Stupid that we have to do this, but add before:2022
and it filters out all the slop
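If you want to script that trick, here's a tiny sketch (the helper name and example query are mine, purely for illustration) of tacking Google's date operator onto a query:

```python
from urllib.parse import quote_plus

def pre_slop_search_url(query: str, cutoff: str = "2022") -> str:
    """Build a Google search URL with the before: date operator appended."""
    return "https://www.google.com/search?q=" + quote_plus(f"{query} before:{cutoff}")

print(pre_slop_search_url("pictures of uranus"))
# https://www.google.com/search?q=pictures+of+uranus+before%3A2022
```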
I'm shocked!
Shocked I tell you!
Only 60%‽
Blows my mind that it's so low.
While I do think it's simply bad at generating answers, because that's all that's really going on: generating the most likely next word, which works a lot of the time but can fail spectacularly...
What if we've created AI, but by training it on internet content, we're simply being trolled by the ultimate troll combination ever?
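For anyone who hasn't seen "generating the most likely next word" spelled out, here's a toy greedy-decoding sketch (the probability table is made up for illustration; real models score tens of thousands of tokens, but the loop is the same idea):

```python
# Toy next-word generation: repeatedly pick whichever word the (made-up)
# table says is most likely to follow the current one.
next_word_probs = {
    "the":    {"cheese": 0.4, "glue": 0.1, "answer": 0.5},
    "answer": {"is": 0.9, "sticks": 0.1},
    "is":     {"wrong": 0.6, "right": 0.4},
    "cheese": {"sticks": 0.7, "is": 0.3},
    "glue":   {"sticks": 0.9, "is": 0.1},
}

def generate(start: str, steps: int = 3) -> str:
    words = [start]
    for _ in range(steps):
        candidates = next_word_probs.get(words[-1])
        if not candidates:
            break  # nothing plausible to say next, so stop
        # greedy choice: always take the single most likely next word
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(generate("the"))  # -> "the answer is wrong"
```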
This is what happens when you train your magical AI on a decade+ of internet shitposting
They didn't learn from all the previous times someone tried to train a bot on the internet.
It's almost poetic how Tay.ai, Microsoft's earlier shitty AI, was also poisoned by internet trolling and became a Nazi on Twitter nearly a decade ago.
Training AI with internet content was always going to fail, as at least 60% of users online are trolls. It's even dumber than expecting you can have a child from anal sex.
My level of shitposting has increased dramatically ever since I learned that I’m not just trolling the person I replied to, but future generations to come. You gotta have a legacy you’re proud of before you kick the bucket, ya know what I mean?
Society grows great when old men plant shitposts whose shade they know they shall never sit in.
Because of what you just wrote, some dumbass is going to try to have a child through anal sex after doing a Google search.
They're not joking about a hypothetical. It was a real thing that happened.
they've been having sex the wrong way
that's subjective
There's no way this isn't bullshit. Please let this be bullshit...
I'm gonna go ahead and try without a Google search.
I believe in you, please name your child after me if it works out.
Know that If it doesn't work, I'm not giving up.
I believe in you, if you end up having twins please name them after this instance
There was that one time when an AI gave a pizza recipe including gluing the cheese down with Elmer's glue, because that was suggested as a joke on Reddit once.
There will never be such a thing as a useful LLM.
where do you think lawyers come from?
but you can, it's about as likely as having one from a thigh-job but is technically not impossible.
Who could have seen this coming? Definitely not the critics of LLM hyperscalers.
Move fast and break things, brah!
In the late 90s and early 2000s, internet search engines were designed to actually find relevant things ... it's what made Google famous
Since the 2010s, internet search engines have all been about monetizing, optimizing, directing, misdirecting, manipulating searches in order to drive users to the highest paying companies or businesses, groups or individuals that best knew how to use Search Engine Optimization. For the past 20 years, we've created an internet based on how we can manipulate everyone and everything in order to make someone money. The internet is no longer designed to freely and openly share information ... it's now just a wasteland of misinformation, disinformation, nonsense and manipulation because we are all trying to make money off one another in some way.
AI is just making all those problems bigger, faster and more chaotic. It's trying to make money for someone but doesn't know how yet ... they sure are working on figuring it out, though.
Not just the search engines, but the websites themselves as well. Gaming the search engines is now an entire profitable industry, not just people putting links to their friends' websites at the bottom of their webpage, or making a webring.
It's just been a race to the bottom. The search engines get worse, as do the websites, and the whole thing is exacerbated by people today being able to churn out entire websites by the hundreds. Anyone trying to do things without playing the game simply ends up buried under layers of rubbish.
The Sages of the modern day are the lucky few who know which old and boring sites to ask for an answer.
I'd say it's a reflection of society.
Oh man, that's too good. Thanks for sharing this. Now I kinda want to ask it about blue waffles, but I'm a little scared to.
Well, that’s less bad than 100% SEO optimized garbage with LLM generated spam stories around a few Amazon links.
The same technology Elon Musk wants to use to process your taxes everyone!
That guy is a moron.
But AI assistance in taxes is also being introduced where I live (Spain, which is currently governed by a coalition of socialist parties).
It's still not deployed, so I can't say how it will work, but the preliminary info seems promising. They're going to use a publicly trained AI project that has already been released.
The thing is, I don't think this is precisely a Musk idea. It's something that has probably been discussed by various tax agencies around the world in recent years. He's probably just parroting the idea and handing the project to one of his billionaire friends.
The same technology the billionaire class wants to use to eliminate payroll entirely.
Well yeah, they get their information from the Internet. Garbage in. Garbage out.
From the article...
Surprisingly, premium paid versions of these AI search tools fared even worse in certain respects. Perplexity Pro ($20/month) and Grok 3's premium service ($40/month) confidently delivered incorrect responses more often than their free counterparts.
Though these premium models correctly answered a higher number of prompts, their reluctance to decline uncertain responses drove higher overall error rates.
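To see how both of those can be true at once, with made-up numbers: if the free tier answers 120 of 200 prompts and gets 70 right (50 wrong, 80 declined), while the paid tier answers 190 and gets 90 right (100 wrong, 10 declined), the paid tier has more correct answers and a much worse error rate at the same time.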
60% of the time it works every time
To me it seems the title is misleading, as the research is very narrowly scoped. They provided news excerpts to the LLMs and asked for the title, the author, the publication date, and the URL. Is this something people actually do? I would be interested if they used some real-world examples.
I knew this whole hype was way overblown. This AI is "good", but not "replace every employee with it" good.