362 points (100.0% liked) · submitted 17 Jun 2024 by boem@lemmy.world to c/technology@lemmy.world
top 20 comments
[-] cupcakezealot 42 points 4 months ago

common eu w

[-] FaceDeer@fedia.io 37 points 4 months ago

So Meta's AIs will mainly reflect non-EU cultural values.

[-] brsrklf@jlai.lu 40 points 4 months ago

EU cultural values include resisting corporations doing whatever they want with our data. Let's see Meta try to reflect those.

[-] FaceDeer@fedia.io 6 points 4 months ago

So you want Meta's AI to have values that don't include resisting corporations doing whatever they want with your data?

This is a seriously double-edged sword. The training data is what gives these AIs their capabilities and their biases.

[-] brsrklf@jlai.lu 14 points 4 months ago

Anyway, no matter which part of the world it's trained on, we're talking about 2024 Facebook content. We've seen what Reddit does to an AI.

Can't wait for meta's cultured AI to share its wisdom with us.

[-] FaceDeer@fedia.io 2 points 4 months ago* (last edited 4 months ago)

Reddit is actually extremely good for AI. It's a vast trove of examples of people talking to each other.

When it comes to factual data, there are better sources, sure, but factual data has never been the key deficiency of AI. We've long had search engines for that kind of thing. What AIs had trouble with was human interaction, which is what Reddit and Facebook are all about. These datasets train the AI to communicate.

If the Fediverse were larger, we'd be a significant source of AI training material too. I'd be surprised if it's not being collected already.

[-] lemmyvore@feddit.nl 5 points 4 months ago

Training AI is not some noble endeavor that must be done no matter what. It's a commercial grab that needs to balance utility with consumer rights.

[-] No_Change_Just_Money@feddit.de 23 points 4 months ago

Mate, it takes the combined discussions of the internet

I would not expect any sort of values

[-] brbposting@sh.itjust.works 4 points 4 months ago

I would expect plenty of deeply-held values:

Rule 34

Shitposts

CSAM

Disingenuous partisan mis/disinformation

Worst hot takes imaginable

[-] dmtalon@infosec.pub 23 points 4 months ago

It's crazy how much further ahead Europe is in privacy protection.

All these companies need to be held responsible for what they do with our data, and for what it costs them when they lose control of it. Either figure out how to safeguard it or suffer painful consequences. Or perhaps only store what's necessary for us to interact.

[-] Grippler@feddit.dk 29 points 4 months ago* (last edited 4 months ago)

But then again, we also have pretty much every EU group pushing for super invasive chat control. It's ridiculous how schizophrenic they are on the subject of digital privacy.

[-] sugar_in_your_tea@sh.itjust.works 8 points 4 months ago

Yup, the EU isn't a role model for the world or anything. They have some good laws, and those should be replicated elsewhere, but don't assume that just because they got a few things right, they don't mess up in other really important ways.

[-] echodot@feddit.uk 10 points 4 months ago

For some reason a lot of parts of Europe seem to want to elect hard-right, borderline neo-Nazis. In many cases, not even borderline.

God knows what the appeal is, since the hard right aren't particularly interested in protecting their own people, only in protecting their wallets. That's not a concern the vast majority of the populace are really going to empathise with.

[-] Grippler@feddit.dk 5 points 4 months ago

Even parties to the left are pro this surveillance bullshit.

[-] sugar_in_your_tea@sh.itjust.works 5 points 4 months ago

That's apparently a thing everywhere.

I'm in the US, and people here just seem to be okay with the TSA, NSA, CBP, etc all going through your stuff. I was complaining about BS stoplight cameras on a trip to another state, and my parents and cousin seemed to want more of them, despite them largely just harassing law-abiding citizens by shortening yellow-light durations and ticketing people for pulling too far forward... They also seem interested in facial recognition in stores and whatnot.

I don't get it. If they did an ounce of research, they'd see that these don't actually reduce crime or protect anyone, they just drive revenue and harass people. I mention "privacy" and they pull the "nothing to hide" argument.

People seem to want their privacy violated. I just don't get it.

[-] sunzu@kbin.run 3 points 4 months ago

They are telling you what they care about; take notice.

I imagine once they get a local AI grifter, they'll change their tune too.

[-] lemmyvore@feddit.nl 2 points 4 months ago

It's not the same groups and entities pushing these things. It looks contradictory because it all ends up submitted to the same legislative bodies but that's par for the course in a functional democracy.

[-] themurphy@lemmy.ml 1 points 4 months ago* (last edited 4 months ago)

Yeah, it seems weird, but there are also points where the two aren't related at all.

One is a company using user data it never said it would use for this purpose, and trying to do it illegally anyway. It literally sells the data by making a product out of it. It's also a private company with stakeholders.

The other is the EU scanning messages, but not selling them.

So it's basically about who you trust.

[-] Duke_Nukem_1990@feddit.de 13 points 4 months ago

Don't worry, maybe Meta can eventually just buy the inevitable leaks resulting from the general chat surveillance the EU so vehemently tries to push through.

[-] autotldr@lemmings.world 5 points 4 months ago

This is the best summary I could come up with:


And while this climb down has been cheered by privacy advocates, Meta called it "a step backwards for European innovation" that will cause "further delays bringing the benefits of AI to people in Europe."

"We're disappointed by the request from the Irish Data Protection Commission (DPC), our lead regulator, on behalf of the European DPAs, to delay training our large language models (LLMs) using public content shared by adults on Facebook and Instagram  — particularly since we incorporated regulatory feedback and the European DPAs have been informed since March," the social network said in a statement on Friday.

Without a steady diet of EU information, Meta's AI systems won't be able to "accurately understand important regional languages, cultures or trending topics on social media," the American goliath said at the time.

"In order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset," Almond continued.

"We will continue to monitor major developers of generative AI, including Meta, to review the safeguards they have put in place and ensure the information rights of UK users are protected."

Privacy group noyb had filed complaints with various European DPAs about Meta's LLM training plans, and its chair Max Schrems on Friday said while the organization welcomed the news, it "will monitor this closely."


The original article contains 589 words, the summary contains 231 words. Saved 61%. I'm a bot and I'm open source!
