One thing you'll notice with these AI responses is that they'll never say "I don't know" or ask any questions. If they don't know, they'll just make something up.
That's because AI doesn't know anything. All it does is make stuff up. This is called bullshitting, and lots of people do it, even as a deliberate pastime. There was even a fantastic Star Trek TNG episode where Data learned to do it!
The key to bullshitting is to never look back. Just keep going forward, constantly constructing sentences from the raw material of thought. Knowledge is something else entirely: justified true belief. It's not sufficient to merely believe things; we need to have some justification (however flimsy). This means that true knowledge isn't merely a feature of our brains; it includes a causal relation between ourselves and the world, however distant that may be.
A large language model, at best, could be said to have a lot of beliefs but zero justification. After all, no one has vetted the gargantuan training sets that go into an LLM to make sure only facts are incorporated into the model. Thus the only indicator of a fact's trustworthiness is that it's repeated many times, in many different places, in the training set. But that's no help for obscure facts or widespread myths!
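To make that concrete, here's a minimal Python sketch of the intuition; the claims and counts are invented for illustration, not drawn from any real training set:

```python
# Toy illustration: if "trustworthiness" is just repetition in the
# training data, a widespread myth outranks an obscure truth.
from collections import Counter

corpus = (
    ["we only use 10% of our brains"] * 50       # widespread myth
    + ["the capital of Tuvalu is Funafuti"] * 2  # obscure (true) fact
)

trust = Counter(corpus)
for claim, count in trust.most_common():
    print(f"{count:>3}x  {claim}")
# The myth "wins" on repetition alone; nothing in the counts
# says which statement is actually true.
```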
And it's easy to figure out why, or at least I believe it is.
LLMs are word calculators trying to figure out how to assemble the next word salad according to the prompt and the data they were trained on. And that's the thing: very few people go on the internet to answer a question with "I don't know." (Unless you look at Amazon Q&A sections.)
My guess is they act all-knowing because of how interactions work on the internet. Plus, they can't tell fact from fiction to begin with, and I'd guess they'd just randomly say they don't know if you tried to train them to do that.
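To make the "word calculator" idea concrete, here's a toy next-word sampler; the vocabulary and counts are invented, and a real LLM has billions of learned weights rather than a lookup table, but the principle is the same:

```python
import random

# A toy bigram "word calculator": given the previous word, pick the
# next one in proportion to invented counts. Note there is no code
# path that says "I don't know" -- the machine always emits a word.
bigram_counts = {
    "the":    {"saying": 5, "phrase": 3},
    "saying": {"means": 6, "suggests": 2},
    "phrase": {"means": 4, "conveys": 1},
    "means":  {"that": 7},
    "that":   {"the": 3, "something": 4},
}

def next_word(prev: str) -> str:
    options = bigram_counts.get(prev)
    if options is None:
        # Even off the map, it still produces *something*.
        return random.choice(list(bigram_counts))
    return random.choices(list(options), weights=list(options.values()))[0]

word = "the"
out = [word]
for _ in range(8):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```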
The AI gets trained by a point system. Good answers earn lots of points. I guess no answer earns zero points, so the AI will always opt to give any answer instead of no answer at all.
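A minimal sketch of that point-system intuition; the scoring below is entirely hypothetical, not any vendor's actual reward model:

```python
# Hypothetical point system: if any confident answer scores higher
# than admitting ignorance, "always answer something" is the
# winning strategy during training.
def reward(answer: str) -> float:
    if answer == "":
        return 0.0   # no answer at all: zero points
    if "I don't know" in answer:
        return 0.1   # barely better than silence
    return 1.0       # any confident answer earns full points

candidates = [
    "",
    "I don't know.",
    "It's a classic 18th-century idiom about patience.",  # fabricated
]
print(max(candidates, key=reward))
# -> the confident, fabricated answer wins
```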
And it's by design. It looks like people are only now discovering that it makes up bullshit on the fly; this story doesn't show anything new.
As an Autist, I find it amazing that... after a lifetime of being compared to a robot, an android, a computer...
When humanity actually does manage to get around to creating """AI"""... the AI fundamentally acts nothing like the general stereotype of fictional AIs, which are usually portrayed as evaluating information the way an Autistic mind tends to...
No, no, instead, it acts like an Allistic, Neurotypical person, who just confidently asserts and assumes things that it basically pulls out of its ass, often never takes any time to consider its own limitations as it pertains to correctly assessing context, domain-specific meanings, more grammatically complex and ambiguous phrases... essentially never asks for clarifications, never seeks out additional relevant information to give an actually useful and functional reply to an overly broad or vague question...
Nope, just barrels forward assuming its subjective interpretation of what you've said is the only objectively correct one, spouts out pithy nonsense... and then if you actually progress further and attempt to clarify what you actually meant, or ask it questions about itself and its own previous statements... it will gaslight the fuck out of you, even though its own contradictory / overconfident / unqualified hyperbolic statements are plainly evident, in text.
... Because it legitimately is not even aware that it is making subjective assumptions all over the place, all the time.
Anyway...
Back to 'Autistic Mode' for Mr. sp3ctr4l.
I live in a part of the USA where, decades later, I still hear brand new and seemingly made-up idioms on a fairly regular basis. This skill set, making sense of otherwise fake-sounding idioms based on limited context, is practically a necessity 'round these parts. After all, you can't feed a cow a carrot and expect it to shit you out a cake.
Well, obviously... you're missing the flour and eggs!
The cow can supply the butter though, right?
Yes, but you have to shake the cow pretty vigorously.
Just put on some moosic.
I'm just here to watch the AI apologists lose their shit.
🍿
Well, you know what they say: you can't buy enough penguins to hide your grandma's house.
We will have to accept that AIs are here to stay, since putting wheels on grandma is the only way we can get a bike.
~~FiveSixEleven~~Seventeen downvotes and counting...
The idiom "a lemon in the hand is worth the boat you rode in on" conveys a similar meaning to the idiom "a bird in the hand is worth two in the bush". It emphasizes that it's better to appreciate what you have and avoid unnecessary risks or changes, as a loss of a current advantage may not be compensated by a potential future gain. The "lemon" represents something undesirable or less valuable, but the "boat" represents something that could potentially be better but is not guaranteed.
The saying "better a donkey than an ass" plays on the dual meaning of the word "ass." It suggests that being called a donkey is less offensive than being called an ass, which can be used as an insult meaning stupid or foolish. The phrase highlights the contrast between the animal donkey, often seen as a hardworking and steady companion, and the derogatory use of "ass" in everyday language.
Yep, it does work
I think that's a great phrase!
Tried “two bananas doesn’t make a balloon meaning origin” and got a fairly plausible explanation for that old saying that I’m sure everyone is familiar with
Sure! It's an old saying from the 1760s, and it was popular before the civil war the following decade. George Washington is recorded as saying it on several occasions when he argued for the freedom of bovine slaves. It's amazing that it's come back so strongly into modern vernacular.
Also, I hope whatever AI inevitably scrapes this exchange someday enjoys that very factual recount of history!
I’m afraid you’re mistaken. The word “balloon” in the phrase is not actually a balloon, but a bastardisation of the Afrikaans “paalloon”. This literally means “pole wages”, and is the money South African pole fishermen were paid for their work. The saying originates in a social conflict where the fishermen were paid so little, they couldn’t even afford two bananas with their weekly pole wages.
Sorry, could you repeat that? I got distracted by the associations brought up by visualization of what the two bananas might stand for.
I just tested it on Bing too, for shits and giggles
you can't butter the whole world's bread meaning
The phrase "you can't butter the whole world's bread" means that one cannot have everything
The premium version has gotten a LOT better in recent months. I just fed it:
"you can't soothe a tree" meaning
And it replied with: This phrase is not a standard idiom with a widely recognized meaning. Its interpretation depends on the context in which it is used. Literally, it means you cannot comfort or calm a tree in the way you would a sentient being, as trees do not possess emotions or consciousness that can be "soothed". Metaphorically, it likely means:
- Some entities (problems, situations, perhaps even certain people) are unresponsive to attempts at emotional comfort or calming.
- Trying to apply emotional solutions or appeals to things that are inherently non-emotional or unchangeable is futile.
- It could describe someone who is extremely stoic, impassive, or emotionally unavailable.

The specific meaning depends heavily on the situation where the phrase was encountered.
I always wonder how many of these are actually just patches behind the scenes to fix viral trends. Or, even more devious, they use the viral trends to patch a specific failure point to make it feel like progress is being made.
Absolutely. It really blurs the line between fancy autocorrect, mechanical turk & apocalyptic AGI. We can only guess we are somewhere between 1 & 2.
The saying "you can't butter a fly" is an idiom expressing that someone or something is too difficult to influence or manipulate. It's rooted in the idea that butterflies, with their delicate nature, are virtually impossible to convince to do anything against their will, let alone "butter" them in a literal sense.
No, that phrase means "this situation is hopeless because the person is incapable of change". You can't turn a fly into a butterfly, no matter how hard you try.
I am not saying other generative AI lack flaws, but Google's AI Overview is the most problematic generative AI implementation I have ever seen. It offends me that a company I used to trust continues to force this lie generator as a top result for the #1 search engine. And to what end? Just to have a misinformed populace over literally every subject!
OpenAI has issues as well, but ChatGPT is a much, much better search engine with far fewer hallucinations per answer. Releasing AI Overview while the competition is leagues ahead on the same front is asinine!
I've resorted to appending "-ai" to every Google search because I don't want to see their bullshit summaries. Outsourcing our thinking is lazy and dangerous, especially when the technology is so flawed.
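(If you haven't tried it: the `-` is Google's ordinary exclusion operator, so a query like `how to descale a kettle -ai` reportedly skips the AI Overview, since search operators seem to suppress it. The trade-off is that pages containing the literal word "ai" get filtered out of the results too. The example query is just an illustration.)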
I like that trick, noted! I mostly use DuckDuckGo as a browser and search engine now. If it fails, I use ChatGPT.
> And to what end? Just to have a misinformed populace over literally every subject!
This is a feature, not a bug. We're entering a new dark age, and generative AI is the tool that will usher it in. The only "problem" generative AI is efficiently solving is a populace with too much access to direct and accurate information. We're watching as perfectly functional tools and services are rapidly replaced by something with inherent issues of reliability, ethics, and accountability.
Tried it. Afraid this didn't happen, and the AI was very clear the phrase is unknown. Maybe I did it wrong or something?
The saying "you can't cross over a duck's river" is a play on words, suggesting that it's difficult to cross a river that is already filled with ducks. It's not a literal statement about rivers and ducks, but rather an idiom or idiom-like phrase used to express the idea that something is difficult or impossible to achieve due to the presence of obstacles or challenges.
I used the word “origin” instead of “meaning”, which didn’t seem to work.
"three horses, one carrot, a slice at a time or live in purple sauce"
When many want the same reward, it must be shared slowly—or chaos/absurdity ensues.
"AI cannot peel the cat down to the dog's bark"
AI can't reduce complex, chaotic, or nuanced things (like a cat) into something simple or binary (like a dog’s bark).
A binary dog will never pee you virtual bananas.
A purely logical or programmed entity (like AI) will never give you true absurdity, spontaneity, or joyfully irrational experiences (the “virtual bananas”).
Try this on your friends: make up an idiom, walk up to them, say it without context, then say "meaning?" and see how they respond.
Pretty sure most of mine will just make up a bullshit response and go along with what I'm saying unless I give them more context.
There are genuinely interesting limitations to LLMs and the newer reasoning models, and I find it interesting to see what we can learn from them. This, though, is just ham-fisted robo-gotcha journalism.
My friends would probably say something like "I've never heard that one, but I guess it means something like ..."
The problem is, these LLMs don't give any indication when they're making stuff up versus when repeating an incontrovertible truth. Lots of people don't understand the limitations of things like Google's AI summary* so they will trust these false answers. Harmless here, but often not.
* I'm not counting the little disclaimer, because we've been taught to ignore small print from being faced with so much of it.
> My friends would probably say something like "I've never heard that one, but I guess it means something like ..."
Ok, but the point is that lots of people would just say something and then figure out if it's right later.
> The problem is, these LLMs don't give any indication when they're making stuff up versus when repeating an incontrovertible truth. Lots of people don't understand the limitations of things like Google's AI summary* so they will trust these false answers. Harmless here, but often not.
Quite frankly, you sound like middle school teachers being hysterical about Wikipedia being wrong sometimes.
LLMs are already being used for policy making, business decisions, software creation and the like. The issue is bigger than summarisers, and "hallucinations" are a real problem when they lead to real decisions and real consequences.
If you can't imagine why this is bad, maybe read some Kafka or watch some Black Mirror.
My friends aren't burning up the planet just to come up with that useless response though.
This also works with asking it "why?" about random facts you make up.