AGI achieved 🤖 (lemmy.dbzer0.com)
[-] cyrano@lemmy.dbzer0.com 214 points 4 days ago

Next step how many r in Lollapalooza

[-] sexy_peach@feddit.org 93 points 4 days ago
[-] altkey@lemmy.dbzer0.com 22 points 4 days ago

Apparently, this robot is Japanese.

[-] sp3ctr4l@lemmy.dbzer0.com 17 points 4 days ago* (last edited 4 days ago)

Obligatory 'lore dump' on the word lollapalooza:

That word was a common slang term in the 1930s/40s American lingo that meant... essentially a very raucous, lively party.

Note/Rant on the meaning of this term

The current Merriam-Webster and Dictionary.com definitions of this term, 'an outstanding or exceptional or extreme thing', are wrong; they are too broad.

While historical usage varied, it almost always appeared as a noun describing a gathering of many people, one that was so lively or spectacular that you would be exhausted after attending it.

When it did not appear as a noun describing a lively, possibly also 'star-studded' or extravagant, party, it appeared as a term for some action that would leave you bamboozled or discombobulated, similar to 'that was a real humdinger of a blahblah' or 'that blahblah was a real doozy'... which ties back into the after-effects of the 'raucous party' meaning of lollapalooza.

So... in WW2, in the Pacific theatre... many US Marines were engaged in brutal jungle combat, often at night, and they adopted a system of verbal challenge-and-response identification checks if they noticed someone creeping up on their foxholes at night.

An example of this system in the European theatre, used I believe by the 101st and 82nd Airborne, was the challenge 'Thunder!', to which the correct response was 'Flash!'.

In the Pacific theatre... the Marines adopted a challenge/response system where the correct response was 'Lollapalooza'...

Because native-born Japanese speakers are taught a phoneme that is roughly in between an 'r' and an 'l'... and they very often struggle to say 'Lollapalooza' without a very noticeable accent, unless they've also spent a good deal of time learning spoken English (or some other language with distinct 'l' and 'r' phonemes), which very few Japanese had in the 1940s.

::: spoiler racist and nsfw historical example of / evidence for this

https://www.ep.tc/howtospotajap/howto06.html

:::

Now, some people will say this is a total myth, others will say it is not.

My Grandpa, who served in the Pacific Theatre during WW2, told me it did happen, though he was Navy and not a Marine... but the other stories I've heard that say it did happen all place it with the Marines.

My Grandpa is also another source for what 'lollapalooza' actually means.

[-] Korhaka@sopuli.xyz 14 points 3 days ago

I asked it how many Ts are in the names of presidents since 2000. It said 4 and stated that "Obama" contains 1 T.

[-] TheOakTree@lemm.ee 7 points 2 days ago
[-] RedstoneValley@sh.itjust.works 135 points 4 days ago

It's funny how people always quickly point out that an LLM wasn't made for this, and then continue to shill it for use cases it wasn't made for either (The "intelligence" part of AI, for starters)

[-] UnderpantsWeevil@lemmy.world 47 points 4 days ago* (last edited 4 days ago)

LLM wasn’t made for this

There's a thought experiment that challenges the concept of cognition, called The Chinese Room. What it essentially postulates is a conversation between two people, one of whom is speaking Chinese and getting responses in Chinese. And the first speaker wonders "Does my conversation partner really understand what I'm saying or am I just getting elaborate stock answers from a big library of pre-defined replies?"

The LLM is literally a Chinese Room. And one way we can know this is through these interactions. The machine isn't analyzing the fundamental meaning of what I'm saying, it is simply mapping the words I've input onto a big catalog of responses and giving me a standard output. In this case, the problem the machine is running into is a legacy meme about people miscounting the number of "r"s in the word Strawberry. So "2" is the stock response it knows via the meme reference, even though a much simpler and dumber machine that was designed to handle this basic input question could have come up with the answer faster and more accurately.
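The "much simpler and dumber machine" the comment describes really is trivial to build; a minimal sketch in Python (the function name is my own):

```python
# A deterministic letter-counter: no statistics, no token vocabulary,
# just direct inspection of the characters in the string.
def count_letter(word: str, letter: str) -> int:
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))    # prints 3
print(count_letter("Lollapalooza", "l"))  # prints 4
```

Unlike an LLM, this gives the same correct answer every time, because it actually looks at the characters rather than pattern-matching against training data.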

When you hear people complain about how the LLM "wasn't made for this", what they're really complaining about is their own shitty methodology. They build a glorified card catalog. A device that can only take inputs, feed them through a massive library of responses, and sift out the highest probability answer without actually knowing what the inputs or outputs signify cognitively.

Even if you want to argue that having a natural language search engine is useful (damn, wish we had a tool that did exactly this back in August of 1996, amirite?), the implementation of the current iteration of these tools is dogshit because the developers did a dogshit job of sanitizing and rationalizing their library of data. That's also, incidentally, why DeepSeek was running laps around OpenAI and Gemini as of last year.

Imagine asking a librarian "What was happening in Los Angeles in the Summer of 1989?" and that person fetching you back a stack of history textbooks, a stack of sci-fi screenplays, a stack of regional newspapers, and a stack of Iron Man comic books, all given equal weight. Imagine hearing the plots of The Terminator and Escape from L.A. intercut with local elections and the Loma Prieta earthquake.

That's modern LLMs in a nutshell.

[-] jsomae@lemmy.ml 9 points 3 days ago

You've missed something about the Chinese Room. The solution to the Chinese Room riddle is that it is not the person in the room but rather the room itself that is communicating with you. The fact that there's a person there is irrelevant, and they could be replaced with a speaker or computer terminal.

Put differently, it's not an indictment of LLMs that they are merely Chinese Rooms, but rather one should be impressed that the Chinese Room is so capable despite being a completely deterministic machine.

If one day we discover that the human brain works on much simpler principles than we once thought, would that make humans any less valuable? It should be deeply troubling to us that LLMs can do so much while the mathematics behind them are so simple. Arguments that because LLMs are just scaled-up autocomplete they surely can't be very good at anything are not comforting to me at all.

[-] kassiopaea 4 points 3 days ago

This. I often see people shitting on AI as "fancy autocomplete" or joking about how they get basic things incorrect like this post but completely discount how incredibly fucking capable they are in every domain that actually matters. That's what we should be worried about... what does it matter that it doesn't "work the same" if it still accomplishes the vast majority of the same things? The fact that we can get something that even approximates logic and reasoning ability from a deterministic system is terrifying on implications alone.

[-] merc@sh.itjust.works 12 points 3 days ago

then continue to shill it for use cases it wasn't made for either

The only thing it was made for is "spicy autocomplete".

[-] REDACTED@infosec.pub 27 points 4 days ago

There are different types of Artificial intelligences. Counter-Strike 1.6 bots, by definition, were AI. They even used deep learning to figure out new maps.

[-] BarrelAgedBoredom@lemm.ee 25 points 4 days ago

It's marketed like it's AGI, so we should treat it like AGI to show that it isn't AGI. Lots of people buy the bullshit.

[-] VirgilMastercard@reddthat.com 152 points 4 days ago

Biggest threat to humanity

[-] idiomaddict@lemmy.world 93 points 4 days ago

I know there’s no logic, but it’s funny to imagine it’s because it’s pronounced Mrs. Sippy

[-] cyrano@lemmy.dbzer0.com 45 points 4 days ago

It's going to be funny seeing those implementations of LLMs in accounting software.

[-] bitjunkie@lemmy.world 7 points 2 days ago

Deep reasoning is not needed to count to 3.

[-] sheetzoos@lemmy.world 4 points 2 days ago

Honey, AI just did something new. It's time to move the goalposts again.

[-] qx128@lemmy.world 31 points 3 days ago

I really like checking these myself to make sure it’s true. I WAS NOT DISAPPOINTED!

(Total Rs is 8. But the LOGIC ChatGPT pulls out is ……. remarkable!)

[-] Zacryon@feddit.org 26 points 3 days ago

"Let me know if you'd like help counting letters in any other fun words!"

Oh well, these newish engagement prompts sure reach ridiculous extremes sometimes.

[-] filcuk@lemmy.zip 18 points 3 days ago

I want an option to select a Marvin the Paranoid Android mood: "there's your answer, now if you could leave me to wallow in self-pity"

[-] localhost443@discuss.tchncs.de 9 points 3 days ago

Here I am, emissions the size of a small country, and they ask me to count letters...

[-] jsomae@lemmy.ml 3 points 2 days ago* (last edited 2 days ago)

When we see LLMs struggling to identify the letters in the tokens they emit, or failing to understand a word with spaces between each letter, we should compare it to a human struggling to read a word written in IPA (/sʌtʃ əz ðɪs/), even though we can understand the same word spoken aloud perfectly fine.

[-] GandalftheBlack@feddit.org 2 points 2 days ago

But if you've learned IPA you can read it just fine

[-] jsomae@lemmy.ml 1 points 1 day ago

I know IPA but I can't read English text written in pure IPA as fast as I can read English text written normally. I think this is the case for almost anyone who has learned the IPA and knows English.

[-] MrLLM@ani.social 16 points 3 days ago

We gotta raise the bar, so they keep struggling to make it “better”

My attempt

0000000000000000
0000011111000000
0000111111111000
0000111111100000
0001111111111000
0001111111111100
0001111111111000
0000011111110000
0000111111000000
0001111111100000
0001111111100000
0001111111100000
0001111111100000
0000111111000000
0000011110000000
0000011110000000

Btw, I refuse to give my money to AI bros, so I don’t have the “latest and greatest”
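For anyone who wants to eyeball grids like these, a quick sketch that renders a 0/1 bitmap as terminal block art (the rows here are a shortened subset of the grid above):

```python
# Render a 0/1 bitmap as block art: '1' -> full block, '0' -> space.
bitmap = [
    "0000011111000000",
    "0000111111111000",
    "0001111111111100",
    "0000011111110000",
    "0000111111000000",
    "0001111111100000",
    "0000111111000000",
    "0000011110000000",
]
for row in bitmap:
    print(row.replace("1", "█").replace("0", " "))
```

The same two `replace` calls work on any of the grids in this thread.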

[-] ipitco@lemmy.super.ynh.fr 25 points 3 days ago* (last edited 3 days ago)

Tested on ChatGPT o4-mini-high

It sent me this

0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0
0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0
0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0
0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0
0 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
0 0 1 1 1 1 1 1 1 1 1 0 0 0 0 0
0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0
0 0 1 1 1 1 1 1 0 0 0 0 0 0 0 0
0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0
1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
0 0 1 1 1 0 0 1 1 1 0 0 0 0 0 0
0 1 1 1 0 0 0 0 1 1 1 0 0 0 0 0
1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0

I asked it to remove the spaces


0001111100000000
0011111111000000
0011111110000000
0111111111100000
0111111111110000
0011111111100000
0001111111000000
0011111100000000
0111111111100000
1111111111110000
1111111111110000
1111111111110000
1111111111110000
0011100111000000
0111000011100000
1111000011110000

I guess I just murdered a bunch of trees and killed a random dude with the water it used, but it looks good

[-] xavier666@lemm.ee 1 points 2 days ago

I just murdered a bunch of trees and killed a random dude with the water it used, but it looks good

Tech bros: "Worth it!"

[-] ICastFist@programming.dev 33 points 4 days ago

Now ask how many asses there are in assassinations

[-] notdoingshittoday@lemmy.zip 64 points 4 days ago
[-] LodeMike@lemmy.today 14 points 3 days ago

Man AI is ass at this

*laugh track*

[-] Rin@lemm.ee 16 points 3 days ago

It works if you use a reasoning model... but yeah, still ass

[-] jsomae@lemmy.ml 13 points 3 days ago* (last edited 3 days ago)

People who think that LLMs having trouble with these questions is evidence one way or another about how good or bad LLMs are just don't understand tokenization. This is not a symptom of some big-picture deep problem with LLMs; it's a curious quirk, like compression artifacts in a JPEG image, but it doesn't really matter for the vast majority of applications.
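To illustrate the tokenization point: models operate on subword chunks, not characters. The split below is hypothetical (real BPE vocabularies are model-specific), but it shows why a per-letter question asks about units the model never directly observes:

```python
# Hypothetical subword split -- actual tokenizers learn their own,
# model-specific vocabularies.
tokens = ["straw", "berry"]

# The model receives opaque token IDs for these chunks, not the
# individual characters, so it never directly "sees" the three r's.
# A plain string operation over the joined characters does:
word = "".join(tokens)
print(word.count("r"))  # prints 3
```
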

You may hate AI but that doesn't excuse being ignorant about how it works.

[-] untorquer@lemmy.world 22 points 3 days ago

These sorts of artifacts wouldn't be a huge issue except that AI is being pushed to the general public as an alternative means of learning basic information. The meme example is obvious to someone with a strong understanding of English, but learners and children might get an artifact and stamp it in their memory, working for years off bad information. A few false things every now and then aren't a problem; that's unavoidable in learning. Thousands accumulated over long-term use, however, and your understanding of the world becomes coarser, like Swiss cheese with voids so large it can't hold itself up.

[-] besselj@lemmy.ca 55 points 4 days ago

It's all about weamwork 🤝

[-] burgerpocalyse@lemmy.world 22 points 4 days ago

teamwork makes the teamwork makes the teamwork makes the teamwork makes the teamwork makes the teamwork makes the teamwork makes the

[-] LanguageIsCool@lemmy.world 18 points 3 days ago

How many times do I have to spell it out for you chargpt? S-T-R-A-R-W-B-E-R-R-Y-R

this post was submitted on 11 Jun 2025
875 points (100.0% liked)

Lemmy Shitpost
