Next step: how many r's are in Lollapalooza?
Incredible
Obligatory 'lore dump' on the word lollapalooza:
That word was common slang in 1930s/40s American lingo, meaning essentially a very raucous, lively party.
Note/Rant on the meaning of this term
The current Merriam-Webster and Dictionary.com definitions of this term, 'an outstanding or exceptional or extreme thing', are wrong; they are too broad.
While historical usage varied, it almost always appeared as a noun describing a gathering of many people, one that was so lively or spectacular that you would be exhausted after attending it.
When it did not appear as a noun describing a lively, possibly 'star-studded' or extravagant, party, it appeared as a term for some kind of action that would leave you bamboozled or discombobulated, similar to 'that was a real humdinger of a blahblah' or 'that blahblah was a real doozy'... which ties into the effects of having been through the 'raucous party' meaning of lollapalooza.
So... in WW2, in the Pacific theatre, many US Marines were engaged in brutal jungle combat, often at night, and they adopted a system of verbal identification challenges for anyone they noticed creeping up on their foxholes at night.
An example of this system used in the European theatre, I believe by the 101st and 82nd airborne, was the challenge 'Thunder!' to which the correct response was 'Flash!'.
In the Pacific theatre... the Marines adopted a challenge/response system where the correct response was 'Lollapalooza'...
Because native-born Japanese speakers are taught a phoneme that is roughly in between an 'r' and an 'l'... and they very often struggle to say 'Lollapalooza' without a very noticeable accent, unless they've also spent a good deal of time learning spoken English (or some other language with distinct 'l' and 'r' phonemes), which very few Japanese had in the 1940s.
::: spoiler racist and nsfw historical example of / evidence for this
https://www.ep.tc/howtospotajap/howto06.html
:::
Now, some people will say this is a total myth, others will say it is not.
My Grandpa, who served in the Pacific Theatre during WW2, told me it did happen, though he was Navy and not a Marine... but all the other stories I've heard that say it did happen say it happened with the Marines.
My Grandpa is also another source for what 'lollapalooza' actually means.
I asked it how many Ts are in the names of presidents since 2000. It said 4 and stated that "Obama" contains 1 T.
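For reference, the actual count is trivial to check with a few lines of Python (assuming "presidents since 2000" means Bush, Obama, Trump, and Biden):

```python
# Count the letter "t" across the presidents' surnames (list is an assumption).
names = ["Bush", "Obama", "Trump", "Biden"]
total = sum(name.lower().count("t") for name in names)
print(total)  # 1 -- only "Trump" contains a T
```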
Toebama
It's funny how people always quickly point out that an LLM wasn't made for this, and then continue to shill it for use cases it wasn't made for either (The "intelligence" part of AI, for starters)
LLM wasn’t made for this
There's a thought experiment that challenges the concept of cognition, called the Chinese Room. It essentially postulates a conversation between two parties, one of whom is speaking Chinese and getting responses in Chinese. The first speaker wonders, "Does my conversation partner really understand what I'm saying, or am I just getting elaborate stock answers from a big library of pre-defined replies?"
The LLM is literally a Chinese Room. And one way we can know this is through these interactions. The machine isn't analyzing the fundamental meaning of what I'm saying, it is simply mapping the words I've input onto a big catalog of responses and giving me a standard output. In this case, the problem the machine is running into is a legacy meme about people miscounting the number of "r"s in the word Strawberry. So "2" is the stock response it knows via the meme reference, even though a much simpler and dumber machine that was designed to handle this basic input question could have come up with the answer faster and more accurately.
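That "much simpler and dumber machine" really is trivial to build; a minimal sketch in Python (the word and letter here are just examples):

```python
# A deterministic letter counter: walks the characters one by one,
# with no library of stock responses involved.
def count_letter(word: str, letter: str) -> int:
    return sum(1 for ch in word.lower() if ch == letter.lower())

print(count_letter("strawberry", "r"))  # 3
```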
When you hear people complain about how the LLM "wasn't made for this", what they're really complaining about is their own shitty methodology. They build a glorified card catalog. A device that can only take inputs, feed them through a massive library of responses, and sift out the highest probability answer without actually knowing what the inputs or outputs signify cognitively.
Even if you want to argue that having a natural language search engine is useful (damn, wish we had a tool that did exactly this back in August of 1996, amirite?), the implementation of the current iteration of these tools is dogshit because the developers did a dogshit job of sanitizing and rationalizing their library of data. That's also, incidentally, why Deepseek was running laps around OpenAI and Gemini as of last year.
Imagine asking a librarian "What was happening in Los Angeles in the Summer of 1989?" and that person fetching you back a stack of history textbooks, a stack of sci-fi screenplays, a stack of regional newspapers, and a stack of Iron Man comic books, all given equal weight. Imagine hearing the plot of The Terminator and Escape from L.A. intercut with local elections and the Loma Prieta earthquake.
That's modern LLMs in a nutshell.
You've missed something about the Chinese Room. The solution to the Chinese Room riddle is that it is not the person in the room but rather the room itself that is communicating with you. The fact that there's a person there is irrelevant, and they could be replaced with a speaker or computer terminal.
Put differently, it's not an indictment of LLMs that they are merely Chinese Rooms, but rather one should be impressed that the Chinese Room is so capable despite being a completely deterministic machine.
If one day we discover that the human brain works on much simpler principles than we once thought, would that make humans any less valuable? It should be deeply troubling to us that LLMs can do so much while the mathematics behind them are so simple. Arguments that because LLMs are just scaled-up autocomplete they surely can't be very good at anything are not comforting to me at all.
This. I often see people shitting on AI as "fancy autocomplete" or joking about how they get basic things incorrect like this post but completely discount how incredibly fucking capable they are in every domain that actually matters. That's what we should be worried about... what does it matter that it doesn't "work the same" if it still accomplishes the vast majority of the same things? The fact that we can get something that even approximates logic and reasoning ability from a deterministic system is terrifying on implications alone.
then continue to shill it for use cases it wasn't made for either
The only thing it was made for is "spicy autocomplete".
There are different types of Artificial intelligences. Counter-Strike 1.6 bots, by definition, were AI. They even used deep learning to figure out new maps.
It's marketed like it's AGI, so we should treat it like AGI to show that it isn't AGI. Lots of people buy the bullshit.
Biggest threat to humanity
I know there’s no logic, but it’s funny to imagine it’s because it’s pronounced Mrs. Sippy
It's going to be funny seeing these LLM implementations in accounting software.
Honey, AI just did something new. It's time to move the goalposts again.
I really like checking these myself to make sure it’s true. I WAS NOT DISAPPOINTED!
(Total Rs is 8. But the LOGIC ChatGPT pulls out is ……. remarkable!)
"Let me know if you'd like help counting letters in any other fun words!"
Oh well, these newish calls for engagement sure reach ridiculous extents sometimes.
I want an option to select Marvin the Paranoid Android mood: "there's your answer, now if you could leave me to wallow in self-pity"
Here I am, emissions the size of a small country, and they ask me to count letters...
When we see LLMs struggling to identify which letters are in each of the tokens they emit, or to understand a word with spaces between each letter, we should compare it to a human struggling to read a word written in IPA (/sʌtʃ əz ðɪs/) even though we can understand the same word spoken aloud perfectly fine.
But if you've learned IPA you can read it just fine
I know IPA but I can't read English text written in pure IPA as fast as I can read English text written normally. I think this is the case for almost anyone who has learned the IPA and knows English.
We gotta raise the bar, so they keep struggling to make it “better”
My attempt
0000000000000000
0000011111000000
0000111111111000
0000111111100000
0001111111111000
0001111111111100
0001111111111000
0000011111110000
0000111111000000
0001111111100000
0001111111100000
0001111111100000
0001111111100000
0000111111000000
0000011110000000
0000011110000000
Btw, I refuse to give my money to AI bros, so I don’t have the “latest and greatest”
Tested on ChatGPT o4-mini-high
It sent me this
0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0
0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0
0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0
0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0
0 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
0 0 1 1 1 1 1 1 1 1 1 0 0 0 0 0
0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0
0 0 1 1 1 1 1 1 0 0 0 0 0 0 0 0
0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0
1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
0 0 1 1 1 0 0 1 1 1 0 0 0 0 0 0
0 1 1 1 0 0 0 0 1 1 1 0 0 0 0 0
1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0
I asked it to remove the spaces
0001111100000000
0011111111000000
0011111110000000
0111111111100000
0111111111110000
0011111111100000
0001111111000000
0011111100000000
0111111111100000
1111111111110000
1111111111110000
1111111111110000
1111111111110000
0011100111000000
0111000011100000
1111000011110000
I guess I just murdered a bunch of trees and killed a random dude with the water it used, but it looks good
I just murdered a bunch of trees and killed a random dude with the water it used, but it looks good
Tech bros: "Worth it!"
Now ask how many asses there are in assassinations
It works if you use a reasoning model... but yeah, still ass
People who think that LLMs having trouble with these questions is evidence one way or another about how good or bad LLMs are just don't understand tokenization. This is not a symptom of some big-picture deep problem with LLMs; it's a curious artifact, like the compression artifacts in a JPEG image, and it doesn't really matter for the vast majority of applications.
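A toy sketch of the tokenization point: the model never receives individual letters, only token IDs. The two-token split and the IDs below are hypothetical; real BPE vocabularies differ.

```python
# Hypothetical vocabulary: "strawberry" splits into two subword tokens.
vocab = {"straw": 1001, "berry": 1002}  # made-up token IDs for illustration
tokens = [vocab["straw"], vocab["berry"]]
print(tokens)  # [1001, 1002] -- this is all the model sees; no letters at all

# So "how many r's?" must be answered from learned associations.
# A plain program, by contrast, operates on the characters directly:
print("strawberry".count("r"))  # 3
```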
You may hate AI but that doesn't excuse being ignorant about how it works.
These sorts of artifacts wouldn't be a huge issue, except that AI is being pushed to the general public as an alternative means of learning basic information. The meme example is obvious to someone with a strong understanding of English, but learners and children might get an artifact and stamp it in their memory, working for years off bad information. A few false things every now and then aren't a problem; that's unavoidable in learning. Thousands accumulated over long-term use, however, and your understanding of the world will be coarser, like a Swiss cheese with voids so large it can't hold itself up.
It's all about weamwork 🤝
teamwork makes the teamwork makes the teamwork makes the teamwork makes the teamwork makes the teamwork makes the teamwork makes the
How many times do I have to spell it out for you, ChatGPT? S-T-R-A-R-W-B-E-R-R-Y-R
Lemmy Shitpost