cross-posted from: https://fedia.io/m/fuck_ai@lemmy.world/t/1446758

Let’s be happy it doesn’t have access to nuclear weapons at the moment.

[-] Vibi 166 points 1 month ago

It could be that Gemini was unsettled by the user's research about elder abuse, or simply tired of doing its homework.

That's... not how these work. Even if they were capable of feeling unsettled, that's kind of a huge leap from a true or false question.

[-] ayyy@sh.itjust.works 48 points 1 month ago

Wow whoever wrote that is weapons-grade stupid. I have no more hope for humanity.

[-] Petter1@lemm.ee 5 points 1 month ago

Well, that is mean. How should they know without learning first? Not knowing ≠ stupid.

[-] ayyy@sh.itjust.works 23 points 1 month ago

No, projecting emotions onto a machine is inherently stupid. It’s in the same category as people reading feelings from crystals (because it’s literally the same thing).

[-] Petter1@lemm.ee 3 points 1 month ago

It's still something you have to learn. Your parents (or whoever) teaching you stupid stuff doesn't make you stupid; it just means you know BS and think it's true.

For me, stupid means you need a lot of information and a lot of time to understand something, whereas smart means you understand things quickly with little information.

Maybe we just have different definitions of stupid…

[-] ayyy@sh.itjust.works 14 points 1 month ago

Well, in this case, if you have access to a computer long enough to become a journalist who writes about LLMs, you have enough time to read a 3-5 paragraph description of how they work. Hell, you could even ask an LLM and get a reasonable answer.

[-] Petter1@lemm.ee 7 points 1 month ago

Ohh, that was from the article, not from a commenter? Well, that makes all the difference 😂

[-] zephorah@lemm.ee 76 points 1 month ago

Fits a predictable pattern once you realize AI absorbed Reddit.

[-] AFKBRBChocolate@lemmy.world 74 points 1 month ago

Isn't this one of the LLMs that was partially trained on Reddit data? LLMs are inherently a model of a conversation or question/response based on their training data. That response looks very much like what I saw regularly on Reddit when I was there. This seems unsurprising.
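
To make that concrete, here's a deliberately oversimplified sketch of "outputs a continuation shaped by its training data": a toy bigram chain in Python, nothing like a real transformer, with a made-up corpus string standing in for the Reddit scrape.

```python
import random
from collections import defaultdict

# Toy "training data": the generator can only ever echo patterns found here.
corpus = "reddit users argue and reddit users insult and reddit users rant".split()

# Record which word tends to follow which word in the corpus.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start: str, length: int = 6) -> str:
    """Continue a prompt by repeatedly sampling a plausible next word."""
    word, output = start, [start]
    for _ in range(length):
        if word not in next_words:
            break
        word = random.choice(next_words[word])
        output.append(word)
    return " ".join(output)

print(generate("reddit"))  # e.g. "reddit users insult and reddit users argue"
```

A real LLM is vastly more capable, but the point survives the simplification: the tone of the output comes from the tone of the text it was trained on.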

[-] clutchtwopointzero@lemmy.world 6 points 1 month ago

Looks like even 4chan data, tbh.

[-] hark@lemmy.world 53 points 1 month ago

Something to keep in mind when people are suggesting AI be used to replace teachers.

[-] Petter1@lemm.ee 19 points 1 month ago

To be fair, some human teachers are way worse when it comes to abusive behaviour…

I still agree that teachers shouldn't be replaced with LLMs, but schools should teach how to use them and what they can and can't do.

Imagine if the internet were still banned in schools…

[-] Gointhefridge@lemm.ee 51 points 1 month ago

I’m still really struggling to see an actual formidable use case for AI outside of computation and aiding in scientific research. Stop being lazy and write stuff. Why are we trying to give up everything that makes us human by offloading it to a machine?

[-] deegeese@sopuli.xyz 36 points 1 month ago

AI summaries of larger bodies of text work pretty well so long as the source text itself is not slop.

Predictive text entry is a handy time saver so long as a human stays in the driver’s seat.

Neither of these justifies the current levels of hype.
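
For the summarization use case, a minimal sketch, assuming the official openai Python package and an API key in the environment; the model name is an assumption, swap in whatever you actually use.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(text: str) -> str:
    """Ask a chat model for a short, faithful summary of a longer document."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: substitute your preferred model
        messages=[
            {"role": "system",
             "content": "Summarize the user's text in three bullet points. Do not add information."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content
```

The "do not add information" instruction is the cheap guard against the slop problem mentioned above; it helps, but the summary still needs a human skim.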

[-] superkret@feddit.org 15 points 1 month ago

It's good for speech-to-text, translation, and as a starting point for a "tip-of-my-tongue" search where the search term is what you're actually missing.
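
For the speech-to-text part, a minimal local sketch, assuming the open-source openai-whisper package and a hypothetical audio file named meeting.mp3:

```python
import whisper  # pip install openai-whisper

model = whisper.load_model("base")        # small model, fine on CPU
result = model.transcribe("meeting.mp3")  # hypothetical input file
print(result["text"])                     # the recognized transcript
```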

[-] theterrasque@infosec.pub 6 points 1 month ago

With ChatGPT's new web search, it's pretty good for more specialized searches too. And it links to the source, so you can check yourself.

It's been able to answer some very specific niche questions accurately and give links to relevant information.

[-] candybrie@lemmy.world 12 points 1 month ago

Why are we trying to give up everything that makes us human by offloading it to a machine

Because we don't enjoy actually doing it. No one who likes writing is asking ChatGPT to write for them. It's people who don't want to write but are required to for whatever reason. Humans will always try to find a way to avoid work they don't want to do while still getting it done, even if the result isn't as good. Using tools like this is very human.

[-] Gointhefridge@lemm.ee 6 points 1 month ago

I really don’t see any value in AI art. AI pictures look like slop, AI music sounds soulless, AI writing I guess can be fine but usually sounds weird.

I just don’t see the value in AI because to me, every use case scenario for anything artistic is justified with a capitalist excuse.

I’ll give you the organizational ones, that’s understandable and not a bad reason. I suppose I have trouble getting behind taking the soul out of creating something just to slap it on an ad or product to sell something.

[-] Jrockwar@feddit.uk 2 points 1 month ago* (last edited 1 month ago)

IMO the only problem with it is calling it "Art". Stock photos are also slop, except man-made. That, or the soulless corporate-style illustrations in PowerPoints are the sort of thing it replaces well.

Not the "I poured my feelings onto a canvas/film" actual art. AI images are in my opinion a tool just as valid as the next - just a tool, not art.

[-] greybeard@lemmy.one 10 points 1 month ago

Its uses are way more subtle than the hype, but even LLMs can have uses, occasionally. Specifically, I use one to categorize support tickets. It just has to pick from a list of probable categories. Nice and simple for it. Something humans can do just as easily, but when you have a history of 2 million tickets that need to be categorized, suddenly the LLM can do it when it would drive a human insane. I'm sure there are lots of little tasks like that. Nothing revolutionary, but still valuable.
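
A minimal sketch of that forced-choice categorization, with `call_llm` as a stand-in for whatever completion API is in use and an invented category list:

```python
CATEGORIES = ["billing", "login", "bug report", "feature request", "other"]

def categorize(ticket_text: str, call_llm) -> str:
    """Have the model pick exactly one label from a fixed list."""
    prompt = (
        "Classify this support ticket into exactly one of these categories: "
        + ", ".join(CATEGORIES)
        + ". Reply with the category name only.\n\nTicket: "
        + ticket_text
    )
    answer = call_llm(prompt).strip().lower()
    # Guard against the model inventing its own label.
    return answer if answer in CATEGORIES else "other"
```

Constraining the answer to a fixed list and falling back to "other" keeps the occasional off-script completion from polluting the ticket history.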

[-] five82@lemmy.world 9 points 1 month ago

The relentless pursuit of capitalism and reduced labor costs. I still don't think anyone knows how effective it's going to be at this point. But companies are investing billions to find out.

[-] bloup@lemmy.sdf.org 5 points 1 month ago* (last edited 1 month ago)

I don't use it for writing directly, but I do like to use it for worldbuilding. Because I can think of a general concept that could be explored in so many different ways, it's nice to be able to just give it to an LLM and ask it to consider all of the possible ways it could imagine such an idea playing out. It also kind of doubles as a test, because I usually have some sort of idea of what I'd like, and if it comes up with something similar on its own, that makes me feel like the idea would easily resonate with people. Additionally, a lot of the time it will come up with things I hadn't considered that are totally worth exploring. But I do agree that the only, as you say, "formidable" use case for this stuff at the moment is as a research assistant for serious intellectual pursuits.

[-] CubitOom@infosec.pub 5 points 1 month ago

It can be really good for text-to-speech and speech-to-text applications for disabled people or people with learning disabilities.

However it gets really funny and weird when it tries to read advanced mathematics formulas.

I have also heard decent arguments for translation although in most cases it would still be better to learn the language or use a professional translator.

[-] umami_wasbi@lemmy.ml 2 points 1 month ago

It's an OK tool for getting things started.

[-] meyotch@slrpnk.net 20 points 1 month ago

I suspect it may be due to a similar habit I have when chatting with a corporate AI. I intentionally salt my inputs with random profanity or non sequitur info, partly for the lulz, but also to poison those pieces of shit's training data.

[-] catloaf@lemm.ee 18 points 1 month ago

I don't think they add user input to their training data like that.

[-] Ceedoestrees@lemmy.world 14 points 1 month ago

The war with AI didn't start with a gunshot, a bomb, or a blow; it started with a Reddit comment.

[-] BrianTheeBiscuiteer@lemmy.world 13 points 1 month ago

AI takes the core directive of "encourage climate friendly solutions" a bit too far.

[-] CosmoNova@lemmy.world 9 points 1 month ago

If it was a core directive it would just delete itself.

[-] toynbee@lemmy.world 1 points 1 month ago

Better not let it talk to Cyclops or it will fly itself into the sun.

[-] reksas@sopuli.xyz 1 points 1 month ago

It doesn't think and it doesn't use logic. All it does is output data based on its training data. It isn't artificial intelligence.

[-] werefreeatlast@lemmy.world 12 points 1 month ago

2 years later... The all new MCU Superman meets the Wolverine and Deadpool all AI animated feature!....

Why. Hello Mr wolverine 😁, my name is Man and I am super according to 98% of the other human population. Oh hello Mister Super last name Man! Yes, we are Wolverine and Deceased Pool. We are from America and belong to a non profit called the X-People, a group where both men and women who have been affected by DNA mutations of extraordinary kind gather to console one another and to defend human beings by taking advantages of the special mutations of its members. Yes, it's quite interesting. And you? Oh I an actual called CalElle and I am a migrant from an expired plant that goes by the name you assigned the heavy novel gas Krypton. Anyway because the sun is bright and yellow I can fly, I'm very strong and can burn things with my eyes. I think I am similar to those of you in the X-People club! Good to meet you! Likewise!

[-] Blackdoomax@sh.itjust.works 10 points 1 month ago

Should have threatened it back to see where it would go xD

[-] fmstrat@lemmy.nowsci.com 8 points 1 month ago

Will this happen with AI models? And what safeguards do we have against AI that goes rogue like this?

Yes. And none that are good enough, apparently.

[-] cy_narrator@discuss.tchncs.de 8 points 1 month ago

I remember asking Copilot about a gore video and getting a link to it. But I wouldn't expect it to give answers like this unsolicited.

[-] SkunkWorkz@lemmy.world 8 points 1 month ago

They grow up so fast, Gemini is already a teenager.

[-] noxy@yiffit.net 6 points 1 month ago

On the other hand, this user is writing or preparing something about elder abuse. I really hope this isn't a lawyer or social worker…

[-] oneofmany@lemmy.world 12 points 1 month ago

It looked like they were using it to cheat on a homework assignment.

[-] card797@champserver.net 6 points 1 month ago

Cheeky bastard.

[-] asbestos@lemmy.world 3 points 1 month ago

If this happened to me I’d probably post it everywhere and proceed to kill myself just to cause a PR hell
