submitted 1 year ago by L4s@lemmy.world to c/technology@lemmy.world

TikTok ran a deepfake ad of an AI MrBeast hawking iPhones for $2 — and it's the 'tip of the iceberg'::As AI spreads, it brings new challenges for influencers like MrBeast and platforms like TikTok aiming to police unauthorized advertising.

top 50 comments
[-] AdmiralShat@programming.dev 211 points 1 year ago

Everyone with a brain has been saying this would happen for the last decade, and yet there was no legislation put in place to target this behavior

Why does every law need to be reactionary? Why can't we see a situation developing and get ahead of it by legislating the very obvious things it can be used for?

[-] camr_on@lemmy.world 62 points 1 year ago* (last edited 1 year ago)

How about a real answer:

All but a few of our legislators have no idea how technology or the Internet works. Anything about the Internet that is obvious to the crowd on Lemmy will probably never cross the radar of a geriatric legislator who never even needs to write their own emails because an aide will do it

[-] KairuByte@lemmy.dbzer0.com 49 points 1 year ago

So, the first reason is that the law likely already covers most cases where someone is using deepfakes. Using it to sell a product? Fraud. Using it to scam someone? Fraud. Using it to make the person say something they didn’t? Likely falls into libel.

The second reason is that current legislators don’t even understand how the internet works, are likely amazed that cell phones exist without the use of magic, and half of them likely have dementia. Good luck getting them to even properly understand the problem, never mind come up with a solution that isn’t terrible.

[-] Pxtl@lemmy.ca 8 points 1 year ago

The problem is that realistically this kind of tort law is hilariously difficult to enforce.

Like, 25 years ago we were pirating like mad, and it was illegal! But enforcing it meant suing individual people for piracy, so it was unenforceable.

Then the DMCA was introduced, which defined how platforms were responsible for policing IP crime. Now every platform heavily automates copyright enforcement.

Because there, it was big moneybags who were being harmed.

But somebody trying to empty out everybody's Gramma's chequing account with fraud? Nope, no convenient platform enforcement system for that.

[-] p03locke@lemmy.dbzer0.com 18 points 1 year ago

> yet there was no legislation put in place to target this behavior

Why is the solution to every problem outlawing something?

"We need to do something about prostitution. Let's outlaw it!"

"We need to do something about alcohol. Let's outlaw it!"

"We need to do something about drugs. Let's outlaw them!"

"We need to do something about gambling. Let's outlaw it!"

All of it... a bunch of miserable failures, which have put good people in prison and turned our whole country into a goddamn police state. You can't outlaw technology without international treaties to make sure every other country follows suit. That barely works with nuclear weapons, and only because two cities were destroyed by the bombs and we spent at least a couple of decades afraid of a nuclear apocalypse.

What the hell do you think is going to happen if we make moves on AI? China takes the lead, does what it wants, and suddenly, it's the far superior superpower. The end.

Hell, how do we know this isn't China propaganda running on China's propaganda platform?

[-] ABCDE@lemmy.world 70 points 1 year ago

Why is legislation = outlawing to you?

[-] SatansMaggotyCumFart@lemmy.world 37 points 1 year ago

Fear mongering.

[-] AdmiralShat@programming.dev 57 points 1 year ago* (last edited 1 year ago)

How is making it illegal to steal a person's face and make them say things they never agreed to going to make China an AI super power?

Not gonna lie, my dude, you have a lukewarm take

[-] urist 15 points 1 year ago

It’s already illegal to impersonate someone to steal money. It’s called fraud.

AI is going to cause huge problems (I am really worried about how things are going to shake out) but I’m also not convinced writing special laws about it is going to change anything. We do need to make sure our current laws don’t have loopholes that AI can somehow exploit.

[-] halvo317@sh.itjust.works 50 points 1 year ago

At no point did you come anywhere close to anything that can be considered a rational thought

[-] Catoblepas 12 points 1 year ago

I award you no points, and may God have mercy on your soul.

[-] Ghostalmedia@lemmy.world 24 points 1 year ago

Or you could propose alternative solutions instead of building a weird straw man with sex workers and gin.

[-] Atomic@sh.itjust.works 13 points 1 year ago

Are you seriously comparing alcohol to people stealing someone's likeness to commit fraud?

[-] PocketRocket@lemmy.world 69 points 1 year ago

Oh boy. This is all moving very quickly. People already fall for simple SMS scams; I can only imagine just how many more will fall victim to this trash in the months and years to come.

[-] CeeBee@lemmy.world 27 points 1 year ago

People have already been falling for scams that "Elon Musk" was promoting. Naturally I'm talking about these crypto schemes run by scammers on YouTube using a deepfake of Musk. It's been happening for about two years now.

[-] FilthyHands@sh.itjust.works 19 points 1 year ago* (last edited 1 year ago)

Bill Gates has been giving away his fortune to some lucky email recipients every year now since the days when you had to pay for the internet by the hour.

[-] slaacaa@lemmy.world 18 points 1 year ago

Just imagine fans getting a FaceTime call from “Taylor Swift,” explaining they've won half-price tickets to an exclusive fan event. Then “Taylor” has to drop out to make the other calls, but will leave them a link for the purchase - only valid for 15 minutes, since of course many others are waiting for this opportunity.

[-] GnuLinuxDude@lemmy.ml 64 points 1 year ago* (last edited 1 year ago)

This is the entire basis for using an ad blocker like uBlock Origin. It is purely defensive. You don't know what an advertising (malvertising) network will deliver, and neither does the website you're on (TikTok, Google, Yahoo, eBay, etc etc etc). With generative AI, video ads, and the lack of content checking on advertising networks, this will just get worse and worse. I mean, why spend money on preventing this? The targeted ads and user data collection is where the money's at, baby!

Related note, installing uBO on my dad's PC some 8 years ago was far more effective than any kind of virus scanner or whatever. Allowing commerce on the Internet was a mistake. That's the root of all this bullshit, anyway.

[-] Moneo@lemmy.world 26 points 1 year ago

Fuck ads in general. I don't care if they're legitimate or not; I don't want to be mentally assaulted every time I try to browse a website.

[-] p03locke@lemmy.dbzer0.com 10 points 1 year ago

> Allowing commerce ~~on the Internet~~ was a mistake. That’s the root of all this bullshit, anyway.

That's more accurate.

[-] TropicalDingdong@lemmy.world 59 points 1 year ago

Bro I still don't know who MrBeast is.

[-] herr@lemmy.world 33 points 1 year ago* (last edited 1 year ago)

Currently the largest and most successful YouTuber on the platform (by a wide margin). He started out doing challenge videos about himself (24h in ice, that kinda stuff) that he'd invite friends to as goofy sidekicks, causing mischief and making his challenges a little harder/more interesting.

These days his stuff has transformed into a media powerhouse, though all of it still kinda falls into the challenge category - now with far higher stakes and involving other people in competitions against each other. Think "kids vs adults - the group with the most people still in the game after 5 days wins $500k" - where several days (sometimes months) of filming all get cut down to one 10-20 minute video.

There are also just "look at this thing" videos like "$1 to $10,000,000 car", where he and his friends check out increasingly expensive cars until they eventually get a whole bridge cordoned off to drive the most expensive car in the world.

He does some philanthropy, like his "plant 10 million trees" campaign, and makes money through sponsorship deals and advertising his own brands. They're currently running their own line of (fair trade?) chocolate bars that are available (in most places?) in the US, which kids will buy because of the brand recognition, leaving them with a ton of profit.

[-] echodot@feddit.uk 8 points 1 year ago

How is he simultaneously so famous and yet no one knows who he is? I feel more people would know who Linus is than him. Until about a year ago I'd never even heard the name.

[-] TORFdot0@lemmy.world 18 points 1 year ago

I’ve never watched a Mr Beast video, but it sounds like he makes a lot of content that would mainly appeal to zoomers, which explains his apparent high popularity but low cultural impact

[-] kureta@lemmy.ml 9 points 1 year ago

sampling bias?

[-] PocketRocket@lemmy.world 10 points 1 year ago

If memory serves (this being knowledge I gleaned from a podcast), he's a YouTuber who has carved out a popular niche in philanthropy of sorts. All for views, of course, but some philanthropy nonetheless. Very popular with, I want to say, Gen Alpha-aged kids. A lot of people have imitated the content style in the last few years. So I guess there's instant brand recognition and trust there for a lot of people.

[-] Matriks404@lemmy.world 38 points 1 year ago* (last edited 1 year ago)

The more I hear about AI-generated content and other crap being posted online these days, the more I wonder if I should just start reading books instead, maybe even learn to play a musical instrument and leave the virtual world altogether.

[-] KingThrillgore@lemmy.ml 31 points 1 year ago

Butlerian jihad sounds like a good idea rn

[-] camr_on@lemmy.world 14 points 1 year ago

Yeah but then I need to hire a mentat to keep my shit straight

[-] DragonTypeWyvern@literature.cafe 12 points 1 year ago

No.

We need the AIs to make the Men of Gold so we can compete with the murder orgy space elves.

[-] rez_doggie@lemmy.world 28 points 1 year ago* (last edited 1 year ago)

Came to watch a fake MrBeast, left dissatisfied. Came back to post a link: https://twitter.com/MrBeast/status/1709046466629554577

[-] Pixelologist@lemmy.dbzer0.com 12 points 1 year ago

Click blewlowaugh now

[-] scorpious@lemmy.world 15 points 1 year ago

Ban TikTok already. Can someone tell me why this isn’t a good idea?

[-] Boogiepop@lemmy.world 29 points 1 year ago

So you see this as specifically a TikTok problem and not a tech problem? Do you think it won't/hasn't happened elsewhere, and will only ever be a TikTok problem? I don't use TikTok, or care about it, but I feel like every problem with it is endemic to social media platforms run by businesses atm.

[-] Asudox@lemmy.world 14 points 1 year ago

And that is why we need a pixel poisoner but for videos.

[-] KairuByte@lemmy.dbzer0.com 19 points 1 year ago

I’m not familiar with the term, and Google shows nothing that makes sense in context. Can you explain the concept?

[-] Omniraptor@lemm.ee 10 points 1 year ago* (last edited 1 year ago)

Here specifically it's a technique for altering images so that they look distorted to the "perception" of generative neural networks and become unusable as training data, while still being recognizable to a human.

The general term is https://en.wikipedia.org/wiki/Adversarial_machine_learning#Data_poisoning

One example of a tool that does this is https://glaze.cs.uchicago.edu/ but I have doubts about its imperceptibility
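
For a concrete sense of the underlying idea - not Glaze's actual method, which is considerably more sophisticated - here's a minimal FGSM-style sketch in Python. It nudges an image's pixels along the model's loss gradient so the picture still looks normal to a person but shifts what a neural network "sees". PyTorch/torchvision are assumed to be installed, and the file name and epsilon value are purely illustrative.

```python
# Minimal FGSM-style adversarial perturbation sketch (illustrative only;
# Glaze/Nightshade use different, more careful optimization objectives).
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# A generic pretrained classifier stands in for "the model we want to confuse".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
img = preprocess(Image.open("artwork.png").convert("RGB")).unsqueeze(0)  # hypothetical file
img.requires_grad_(True)

# Use the model's own top prediction as the label, then get the gradient
# of the loss with respect to the input pixels.
logits = model(img)
label = logits.argmax(dim=1)
F.cross_entropy(logits, label).backward()

# FGSM step: move every pixel a tiny amount in the direction that most
# increases the loss. Around 2/255 is roughly imperceptible to humans.
epsilon = 2 / 255
poisoned = (img + epsilon * img.grad.sign()).clamp(0, 1).detach()

# `poisoned` looks essentially identical to a person, but the model's
# prediction/features for it have shifted - the core idea behind
# adversarial perturbations and image "poisoning".
```

Doing the same thing for video means perturbing every frame and surviving re-encoding and compression, which is part of why a "pixel poisoner but for videos" is a much harder ask.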

[-] SoaringDE@feddit.de 9 points 1 year ago

Yeah, I'm at a loss as well. Is it a way to prove the source of a video?

[-] akilou@sh.itjust.works 13 points 1 year ago

Can Mr Beast sue TikTok over this?

[-] autotldr@lemmings.world 12 points 1 year ago

This is the best summary I could come up with:


TikTok ran an advertisement featuring an AI-generated deepfake version of MrBeast claiming to give out iPhone 15s for $2 as part of a 10,000 phone giveaway.

The sponsored video, which Insider viewed on the app on Monday, looked official as it included MrBeast's logo and a blue check mark next to his name.

Two days ago, Tom Hanks posted a warning to fans about a promotional video hawking a dental plan that featured an unapproved AI version of himself.

"Realism, efficiency, and accessibility or democratization means that now this is essentially in the hands of everyday people," Henry Ajder, an academic researcher and expert in generative AI and deepfakes, told Insider.

Not all AI-generated ad content featuring celebrities is inherently bad, as a recent campaign coordinated between Lionel Messi and Lay's demonstrates.

"If someone releases an AI-generated advert without disclosure, even if it's perfectly benign, I still think that should be labeled and should be positioned to an audience in the way that they can understand," Ajder said.


The original article contains 518 words, the summary contains 168 words. Saved 68%. I'm a bot and I'm open source!

[-] 9thSun@midwest.social 11 points 1 year ago

I read this title last night and thought it was a story about AI making an amalgamation of MrBeast and Stephen Hawking to shill iPhones

[-] DingoBilly@lemmy.world 8 points 1 year ago

Lol, who gives a fuck. If you're a massive influencer being deepfaked, then who cares - fuck your brand being damaged, I'd just call it part of having a job like that. If you're a person who buys stuff because an influencer is telling you to, then you're also a moron and you'll be scammed regardless.

[-] DudeDudenson@lemmings.world 8 points 1 year ago* (last edited 1 year ago)

While I understand and partially agree with your sentiment, the problem is TikTok just casually deepfaking people in general without their consent and without being clear about it.

Especially relevant for Americans, since it's a Chinese company doing it. They could literally have deepfaked influencers running political destabilization campaigns on their platform, and no one seems to care.

[-] admin@lemmy.my-box.dev 9 points 1 year ago

I sincerely doubt TikTok made the ad themselves, though.

this post was submitted on 04 Oct 2023
524 points (100.0% liked)
