
Ouch.

top 25 comments
[-] dis_honestfamiliar@lemmy.world 56 points 4 days ago

I guess that's what happens when the AI is trained on Reddit data.

[-] qjkxbmwvz@startrek.website 30 points 4 days ago
[-] VubDapple@lemmy.world 18 points 4 days ago
[-] shittydwarf@lemmy.dbzer0.com 17 points 4 days ago

Thanks for the gold kind stranger!

[-] carotte 6 points 4 days ago

well, that’s enough internet for today!

[-] oleorun@real.lemmy.fan 8 points 4 days ago

Something something hell in a cell with shitty watercolour announcers table

[-] SendMePhotos@lemmy.world 5 points 4 days ago

Damn Loch Ness monster

[-] FooBarrington@lemmy.world 6 points 4 days ago

I did Nazi that coming!!!

[-] workerONE@lemmy.world 8 points 4 days ago* (last edited 4 days ago)

So much this

[-] pixxelkick@lemmy.world 34 points 4 days ago

On the original thread of questions, it went on for a long time and had multiple questions about psychological, emotional, and physical abuse.

LLMs get more and more off the rails as their context gets longer (longer convo); most folks have probably noticed by now that a long-running convo gets a little... schizophrenic feeling as it drags on.

Combine a very long convo with a lot of tokens and a subject matter of discussing and defining types of abuse, and I can see how the LLM would eventually generate a response like that randomly when it goes off the rails.
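The drift the commenter describes is one reason many chat frontends cap how much history they resend on each turn. A minimal illustrative sketch of that mitigation (hypothetical code, not Gemini's actual behavior; the `trim_context` helper and message format are assumptions for illustration):

```python
# Hypothetical mitigation for long-conversation drift: cap the context
# window, keeping the system prompt plus only the most recent turns.

def trim_context(messages, max_turns=20):
    """Keep the first (system) message and the last `max_turns` messages."""
    if len(messages) <= max_turns + 1:
        return messages
    return [messages[0]] + messages[-max_turns:]

# Simulate a long conversation: 1 system message + 100 chat messages.
history = [{"role": "system", "content": "You are a helpful tutor."}]
for i in range(50):
    history.append({"role": "user", "content": f"question {i}"})
    history.append({"role": "assistant", "content": f"answer {i}"})

trimmed = trim_context(history, max_turns=20)
print(len(trimmed))  # 21: system prompt + last 20 messages
```

Trimming trades long-range memory for stability; it doesn't prevent bad outputs, but it keeps the model from compounding drift over hundreds of accumulated turns.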

[-] ininewcrow@lemmy.ca 14 points 4 days ago* (last edited 4 days ago)

This happened to me and my friends this summer. The three of us were talking about AI technology, and one friend, an engineer, wanted to demonstrate it all, so he opened ChatGPT on his phone and we started asking random questions. We were just having fun, taking turns asking about food, birds, geology, houses, construction, math equations, medicine, the meaning of life, and a bunch of other silly things. After about half an hour it went off the rails and started giving bizarre answers that tried to combine everything we had been asking about up to that point. Completely crazy responses, like a meaning-of-life explanation that involved birds, peanuts, and how a bicycle works. We wanted to record the responses because they were so off the wall, but by the time we started recording the audio we were disconnected, the conversation reset, and everything went back to normal.

[-] bane_killgrind@slrpnk.net 14 points 4 days ago

There is a new conversational space beyond that which is known to man. It is a space as vast as your mom and as timeless as corporate greed. It is the middle ground between light and shadow, between the observed and the deduced, and it lies between the pit of man's assumptions and the summit of his hubris. This is the dimension of hallucination. It is an area which we call, "The Twilight Zone."

[-] Peppycito@sh.itjust.works 4 points 4 days ago

Your comment went off the rails in your second paragraph so you might want to take a Turing test.

[-] pixxelkick@lemmy.world 10 points 4 days ago
[-] bricklove@midwest.social 12 points 4 days ago

A simple "wrong" would have done just fine

[-] LifeInMultipleChoice@lemmy.dbzer0.com 2 points 3 days ago* (last edited 3 days ago)

Did you read through it? It was a remarkable answer by Gemini, but it was also cool to see how they were using the LLM to minimize putting any thought into the work.

"...put in paragraphs, add more, add more, add these key terms, put back in paragraphs, add more."

Okay, I guess I know all about this subject now.

[-] Nougat@fedia.io 13 points 4 days ago

The easy part is making a program that can pretend to be human. The hard part is getting it to not be an asshole.

[-] elvith@feddit.org 9 points 4 days ago

How do you pretend to be human, without being an asshole? Isn’t that the essence of humankind?

[-] Spacehooks@reddthat.com 5 points 4 days ago

Need to base AI off of a Canadian. Worked for the pentaverate AI.

[-] iAvicenna@lemmy.world 6 points 3 days ago

Did the AI chatbot think it was having a conversation with Elon?

[-] dumbass@leminal.space 10 points 4 days ago

How bad at doing homework is she that the AI had a mental breakdown trying to teach her!?

[-] desktop_user 7 points 4 days ago

They really should have shared the entire token context. I get hating on LLMs, but context matters.

[-] donuts@lemmy.world 12 points 4 days ago

That's less "seeking help on homework" than "having it do your work for you".

But it's incredibly bad.

[-] donuts@lemmy.world 5 points 4 days ago

At least it's a better headline than the last article I read about it. That one said something along the lines of "during back-and-forth conversation about challenges and solutions for aging adults...", as if we all couldn't see literal homework questions being pasted in one by one.

this post was submitted on 17 Nov 2024
189 points (100.0% liked)

Weird News - Things that make you go 'hmmm'

914 readers

Rules:

  1. News must be from a reliable source. No tabloids or sensationalism, please.

  2. Try to keep it safe for work. Contact a moderator before posting if you have any doubts.

  3. Titles of articles must remain unchanged; however, extraneous information like "Watch:" or "Look:" can be removed. Titles with trailing, non-relevant information can also be edited so long as the headline's intent remains intact.

  4. Be nice. If you've got nothing positive to say, don't say it.

Violators will be banned at mod's discretion.

Communities We Like:

-Not the Onion

-And finally...

founded 1 year ago