
The actor told an audience in London that AI was a “burning issue” for actors.

[-] ShittyBeatlesFCPres@lemmy.world 149 points 1 year ago
[-] anteaters@feddit.de 43 points 1 year ago

Poor man's voice was stolen and now while he cannot use it anymore you make mean jokes :(

[-] FaceDeer@kbin.social 63 points 1 year ago

His voice wasn't stolen, it's still right where he left it.

[-] Th4tGuyII@kbin.social 45 points 1 year ago

If you made a painting for me, and then I started making copies of it without your permission and selling them off, while I might not have stolen the physical painting, I have stolen your art.

Just because they didn't rip his larynx out of his throat doesn't mean you can't steal someone's voice.

[-] drekly@lemmy.world 18 points 1 year ago

Well, I just printed a picture of the Mona Lisa.

Did I steal the Mona Lisa? Or did I just copy it? Reproduce it?

[-] stopthatgirl7@kbin.social 36 points 1 year ago

You’re also not causing da Vinci to potentially miss out on jobs by copying it. You’re also not taking away his ability to say no to something he doesn’t want to be associated with.

[-] drekly@lemmy.world 14 points 1 year ago* (last edited 1 year ago)

That's fine. I'm not arguing this is a bad thing, I'm just being pedantic about the word theft.

Having your voice used to say things you didn't say is a terrifying prospect. Combined with deepfaking, it's taken one step further.

But is it technically theft?

[-] stopthatgirl7@kbin.social 9 points 1 year ago* (last edited 1 year ago)

Yes, actually. In the same way as copyright infringement or identity theft could be considered so.

Bette Midler v. Ford Motor Co. (1988)

[-] Dkarma@lemmy.world 6 points 1 year ago

Wow the court obviously got this one wrong. Imitation is in no way stealing someone's voice.

[-] NoneOfUrBusiness@kbin.social 5 points 1 year ago

The idea obviously doesn't apply to the public domain.

[-] andthenthreemore@startrek.website 11 points 1 year ago

We're getting into semantics, but it's counterfeit, not stolen.

It would be more like if you made a painting for me, and I then used it to replicate your artistic style, made new paintings without your permission, and passed them off as your work.

[-] irmoz@reddthat.com 16 points 1 year ago* (last edited 1 year ago)
[-] uriel238 55 points 1 year ago

When I read "striking actor Stephen Fry" my brain responded "Why yes he is! Rather!"

Am I a bad person?

[-] RizzRustbolt@lemmy.world 11 points 1 year ago

Found the Hugh Laurie!

[-] drislands@lemmy.world 10 points 1 year ago

Oh man I didn't realize that WASN'T what it meant until I read your comment...

[-] nickwitha_k@lemmy.sdf.org 6 points 1 year ago

I think he would be rather pleased.

[-] mycroft@lemmy.world 54 points 1 year ago* (last edited 1 year ago)

I think it's important to remember how this used to happen.

AT&T paid voice actors to record phoneme groups in the '90s and 2000s and has been using those recordings to train voice models for decades now. There are about a dozen AT&T voices we're all super familiar with because they're on all those IVR/PBX replacement systems we talk to instead of humans now.

The AT&T voice actors were paid for their time and not offered royalties, but they were told that their voices would be used to generate synthetic computer voices.

This was a consensual exchange of work, not super great long term as there's no royalties or anything and it's really just a "work for hire" that turns into a product... but that aside -- the people involved all agreed to what they were doing and what their work would be used for.

The problem at the root of all the generative tools is ultimately one of consent. We don't permit the arbitrary copying of things that are perceived to be owned by people, nor do we think it's appropriate to use people's "image, likeness, voice, or written works" without their consent.

Artists tell politicians to stop using their music all the time, etc. But ultimately, until we really get a ruling on what constitutes "derivative" works, nothing will happen. An AI is effectively a derivative work of all the content that makes up the vectors that represent it, so it seems a no-brainer, but because it's "radio on the internet" we're not supposed to be mad at Napster for building its whole business on breaking the law.

[-] Squids@sopuli.xyz 12 points 1 year ago* (last edited 1 year ago)

I think a more interesting (and less dubious) example of this would be Vocaloid and, to a greater extent, CeVIO AI.

Vocaloid is a synth bank where instead of the notes being musical instruments, they're phonemes which have been recorded and then packaged into a product which you pay for, which means royalties are involved (I think there might also be a thing with royalties for big performances and whatnot?). CeVIO AI takes this a step further by using AI to better smooth together the phonemes and make pitching sound more natural (or not - it's an instrument, you can break it in interesting ways if you try hard enough). And obviously, the singers consented to that specific thing and get paid for it. They gave Yamaha/Sony/the general public a specific character voice and permission to use that specific voice.

(There are FOSS voicebanks, but that adds a different layer of complication; I think a lot of them were recorded before the idea of an "AI bank" was even a possibility. And while a paid voice bank is a proprietary thing, the open source alternatives are literally just a big file of .WAVs, so it's much easier to go outside their intended purposes.)
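Since those open voicebanks really are just directories of phoneme .WAVs, the crudest possible "synthesis" is literal concatenation. A minimal sketch using only Python's stdlib `wave` module (the function name and file paths are made up for illustration; real engines like Vocaloid crossfade, pitch-shift, and time-stretch each unit rather than butt-splicing them like this):

```python
import wave

def concat_phonemes(phoneme_files, out_path):
    """Naively join pre-recorded phoneme WAVs end to end.

    This is the crude baseline a real vocal synth improves on:
    no crossfading, no pitch correction, just raw splices.
    All inputs must share the same sample rate/width/channels.
    """
    out = None
    for path in phoneme_files:
        with wave.open(path, "rb") as seg:
            frames = seg.readframes(seg.getnframes())
            if out is None:
                # Copy format parameters from the first segment;
                # the frame count in the header is patched on close.
                out = wave.open(out_path, "wb")
                out.setparams(seg.getparams())
            out.writeframes(frames)
    if out is not None:
        out.close()
```

Even this toy version makes the consent question concrete: whoever holds the recordings can make the voice "say" any phoneme sequence, which is exactly why the licensing terms on the bank matter.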

[-] driving_crooner@lemmy.eco.br 11 points 1 year ago

I don't think permits and consent alone can be relied on in a labor relationship, because of the unbalanced position of power employees and employers have with each other. Could the workers really negotiate better working conditions? They really can't, not without a union anyway.

[-] banneryear1868@lemmy.world 45 points 1 year ago* (last edited 1 year ago)

Studios basically want to own the personas of their actors so they can decouple the actual human from it and just use their images. There's been a lot of weird issues with this already in videogames with body capture and voice acting, and contracts aren't read through properly or the wording is vague, and not all agents know about this stuff yet. It's very dystopian to think your whole appearance and persona can be taken from you and commodified. I remember when Tupac's hologram performed at Coachella in 2012 and thinking how fucked up that was. You have these huge studios and event promoters appropriating his image to make money, and an audience effectively watching a performance of technological necromancy where a dead person is re-animated.

[-] Metatronz@lemmy.world 10 points 1 year ago

Did Tupac's estate agree? Or receive compensation?

[-] Glytch@ttrpg.network 25 points 1 year ago

Who cares if his estate agreed to it? HE didn't. His estate shouldn't have the right to make money off of things he never actually did.

Let the dead stay dead, it's just an excuse to not pay new, living artists.

[-] AngryCommieKender@lemmy.world 6 points 1 year ago

It took me a minute to realize that you said Tupac, not Tuvok.

[-] shyguyblue@lemmy.world 5 points 1 year ago

Great, another holodeck episode...

[-] Damage@feddit.it 27 points 1 year ago

"it wasn't me planning the terrorist attack over the phone, it was someone stealing my voice with an AI"

[-] ripcord@kbin.social 19 points 1 year ago

This is, unfortunately, the world we are about to be in.

[-] autotldr@lemmings.world 25 points 1 year ago

This is the best summary I could come up with:


Among those warning about the technology’s potential to cause harm is British actor and author Stephen Fry, who told an audience at the CogX Festival in London on Thursday about his personal experience of having his identity digitally cloned without his permission.

Speaking at a news conference as the strike was announced, union president Fran Drescher said AI “poses an existential threat” to creative industries, and said actors needed protection from having “their identity and talent exploited without consent and pay.”

As AI technology has advanced, doctored footage of celebrities and world leaders—known as deepfakes—has been circulating with increasing frequency, prompting warnings from experts about artificial intelligence risks.

At a U.K. rally held in support of the SAG-AFTRA strike over the summer, Emmy-winning Succession star Brian Cox shared an anecdote about a friend in the industry who had been told “in no uncertain terms” that a studio would keep his image and do what they liked with it.

Oscar winner Matthew McConaughey told Salesforce CEO Marc Benioff during a panel event at this year’s Dreamforce conference that he had concerns about the rise of AI in Hollywood.

A spokesperson for the Alliance of Motion Picture and Television Producers (AMPTP), the entertainment industry’s official collective bargaining representative, was not available for comment when contacted by Fortune.


The original article contains 911 words, the summary contains 213 words. Saved 77%. I'm a bot and I'm open source!

[-] maggio@discuss.tchncs.de 5 points 1 year ago
[-] nxfsi@lemmy.world 21 points 1 year ago

Don't worry, ""artists"" only complain about ai when open source ai gets released.

[-] Sneptaur@pawb.social 56 points 1 year ago

Get your head out of your ass. Their voices are their art, and replicating them is not only disturbing, it's morally wrong. Especially if you do so for profit.

[-] nxfsi@lemmy.world 13 points 1 year ago

Nobody complained about copyright when Microsoft had the only image AI in the game; only when the open source Stable Diffusion came out did they start screeching about how AI was "stealing their jobs".

[-] GunnarRunnar@kbin.social 31 points 1 year ago

Fuck off. The tech got popular and public got educated on what makes it work.

[-] nxfsi@lemmy.world 7 points 1 year ago

So years of Microsoft advertising DALL-E did nothing to educate the public about how AI works, but they're suddenly all experts the week after Stable Diffusion comes out?

[-] Enigma@sh.itjust.works 18 points 1 year ago

No, they didn't, because I'd literally never heard of it until your comment. And I understand that my experience is anecdotal, but I guarantee I'm not the only one, or even one of only a couple thousand. You severely overestimate how knowledgeable the general public is on AI. Most haven't even heard of ChatGPT, and that's in the news, let alone expecting everyone to be interested enough to actually educate themselves on it.

Like you’re the only person in this thread that’s even mentioned Microsoft’s version, yet you think “the public” knows about it?

[-] Sekoia 29 points 1 year ago

Uh no people definitely did. Mostly the people that actually knew how this shit worked. But even laypeople complained when it was just Dall-E and Midjourney.

[-] ShadowRam@kbin.social 14 points 1 year ago

What are you talking about? When MS had the only image AI in the game, it was garbage and couldn't do anything useful. Of course no one was threatened.

But after researchers got their hands on Nvidia 3000-series cards and finally had access to the hardware, more advanced research papers started spilling out, which has caused this crazy leap in AI tech.

Now the image/audio AI is advanced enough to be useful, hence the threats now.

[-] c0mbatbag3l@lemmy.world 5 points 1 year ago

It's only wrong when done for profit.

Otherwise you're just using their material as data for an algorithm and a personal use case.

[-] Sneptaur@pawb.social 5 points 1 year ago

I don't know what someone would use AI art for personally, aside from trying to make some sort of porn or something for themselves.

[-] c0mbatbag3l@lemmy.world 4 points 1 year ago

Use the voices for a film project or machinima if you want, or use the picture generation models to create wallpapers. It's not my fault you insist on being obtuse about this by pretending you can't figure out a use case that isn't based around making money.

[-] pjhenry1216@kbin.social 48 points 1 year ago

AI can very easily be abused, and I don't see how this is related to the tech being open sourced or not. Fighting to ensure you aren't exploited is fine, and I support anyone fighting against exploitation.

[-] SickPanda@lemmy.world 5 points 1 year ago* (last edited 1 year ago)

They downvoted him because he spoke the truth.

It's funny how all (or at least most) of the parents of those "artists" told them to do/learn something real, and now they're paying the price for their bad choice.

I've argued with someone about how pictures made by Stable Diffusion are not Art, while there are literally "paintings" where the "artist" just jizzed on the canvas and had it declared Art. I trolled him by sending him multiple generated anime pictures and asked him which one is "Art", because he said he could recognize Art. He chose one and fell into the trap.

[-] MargotRobbie@lemmy.world 8 points 1 year ago

See, I'm pulling the smartest move right now: AI can't take your job if you use AI to take your own job first.

Besides, I think Hollywood is pretty behind on tech overall. The current state-of-the-art voice generator quality is still pretty bad; it'll be a very long time before it can replace actors in quality (if ever). If you train the AI voice on audiobooks, the generated voice is going to sound like someone narrating an audiobook, which really doesn't sound natural for dialogue at all.

I think the key point, then, isn't to ban generative transformer-based AI: once the tech is out of its box, you can't exactly put it back in again. (heh) The real question to ask is: who should own this technology so that it does good and helps people in the world, instead of being used to take away people's livelihoods?

[-] monobot@lemmy.ml 6 points 1 year ago

Since it is paywalled I can only guess from the title.

I don't understand the problem. He was paid for reading books, and now we all have his voice. What did he expect?

Is there an AI imitating his voice making money? Is it being represented with his name? If not, what would be the difference from some person imitating his voice? Would that be stealing too?

Basically, I don't see any problem with me buying those books, training a local model, and giving it other books to read. That can't be illegal, right?

Giving it to other people while using his name would definitely be fraud. But stealing? I don't know.

Selling it to other people under other name... I don't see a problem.

But then we come to AI-generated images, and I do start thinking that way. Though if they can find someone that looks like him, and another person sounding like him... are they all good?

this post was submitted on 18 Sep 2023
824 points (100.0% liked)