submitted 1 year ago by L4s@lemmy.world to c/technology@lemmy.world

AI one-percenters seizing power forever is the real doomsday scenario, warns AI godfather::The real risk of AI isn't that it'll kill you. It's that a small group of billionaires will control the tech forever.


This is why we need large-scale open-source AI efforts, even if it scares the everliving shit out of me.

[-] uriel238 10 points 1 year ago

AI safety experts are worried that capitalists will be too eager to get AGI first and will discard caution (friendly AI principles) for mad science.

And I, for one, welcome our new robot overlords!

[-] PsychedSy@sh.itjust.works 6 points 1 year ago

If we have to choose between corporations or the government ruling us with AI I think I'm gonna just take a bullet.

[-] Kedly@lemm.ee 1 points 1 year ago

Anarchy will never exist as anything but the exception to the rule. Governments are a form of power that the population can at least influence; a weaker government will always mean stronger nobility or corporations.

[-] PsychedSy@sh.itjust.works 1 points 1 year ago

We're failing at influencing now.

You may think you're choosing the best yoke, but I'd prefer none.

[-] Kedly@lemm.ee 1 points 1 year ago

Maybe in the future we can go back to smaller tribes/groups of people that take care of each other, but in the world as it exists today? An entity will come by sooner or later to conquer said group. We influence our government FAR better than we influence a corporation or dictator. Right now we need an equalizing big power, and at least with democratic governments, these big powers at least have to pretend to work for their people. Which, again, corporations and dictators do not

[-] zbyte64 4 points 1 year ago

Any AI safety expert who believes these oligarchs are going to get AGI, and not some monkey's paw, is also drinking the Kool-Aid.

[-] uriel238 1 points 1 year ago

Actually AI safety experts are worried that corporations are just interested in getting technology that achieves specific ends, and don't care that it is dangerous or insufficiently tested. Our rate of industrial disasters kinda demonstrates their views regarding risk.

For now, we are careening towards giving smart drones autonomy to detect, identify, target and shoot weapons at enemies long before they're smart enough to build flat-packed furniture from the IKEA visual instructions.

[-] frezik@midwest.social 8 points 1 year ago

I've been thinking about how to do that. The code for most AI is pretty basic and uninteresting; it's mostly preprocessing input into something usable. Companies could open source their entire code base without letting anything important out.

The dataset is the real problem. Say you want to classify fruit to check if it's ripe enough for harvesting. You'll need a whole lot of pictures of your preferred fruit, both ripe and not ripe. You'll want people who know the fruit to classify those images, and then you can feed them into a model. It's a lot of work, and it needs to attract a bunch of people to volunteer their time. Largely the sort of people who haven't traditionally been a part of open source software.
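Once volunteers have labeled the images, you still have to reconcile their disagreements before training. A common approach is majority voting across labelers; here's a minimal pure-Python sketch (all names and the "ripe"/"unripe" labels are illustrative, not from any real project):

```python
from collections import Counter

def aggregate_labels(votes):
    """Majority-vote aggregation of volunteer labels.

    votes: dict mapping image id -> list of labels from volunteers.
    Returns a dict mapping image id -> winning label, skipping any
    image whose top label lacks a strict majority (e.g. a tie).
    """
    consensus = {}
    for image_id, labels in votes.items():
        label, count = Counter(labels).most_common(1)[0]
        if count * 2 > len(labels):  # strict majority required
            consensus[image_id] = label
    return consensus

votes = {
    "img_001": ["ripe", "ripe", "unripe"],
    "img_002": ["unripe", "ripe"],        # tie: no consensus, dropped
    "img_003": ["ripe", "ripe", "ripe"],
}
print(aggregate_labels(votes))  # {'img_001': 'ripe', 'img_003': 'ripe'}
```

Real crowdsourcing pipelines get fancier (weighting labelers by their track record, sending ties back out for more votes), but the basic reconciliation step looks like this.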

[-] pinkdrunkenelephants@lemmy.cafe 2 points 1 year ago

If we set up some kind of blockchain to just pay people to honestly differentiate between pictures, it could be done.

[-] echodot@feddit.uk 11 points 1 year ago

There is no problem in this world so serious that someone will not suggest blockchain as a potential solution.

[-] Corkyskog@sh.itjust.works 3 points 1 year ago

You're being hyperbolic and silly. Find me a solution to mass shootings or racism using blockchain.

[-] ICastFist@programming.dev 4 points 1 year ago

Nah, using reCAPTCHA is the way to get free labor for that training

[-] errer@lemmy.world 6 points 1 year ago

Might be one of the key democratizing forces us plebs will have…I do suggest people try out some of the open solutions out there already just to have that skill in their back pockets (e.g. GPT4All).

[-] r3df0x@7.62x54r.ru 3 points 1 year ago

Yep. As dangerous as that could be, it's better than centralizing it. There are already systems like GPT4All that come with good models that are slower than things like ChatGPT but work similarly well.

this post was submitted on 30 Oct 2023
539 points (100.0% liked)