submitted 1 year ago by L4s@lemmy.world to c/technology@lemmy.world

AI one-percenters seizing power forever is the real doomsday scenario, warns AI godfather::The real risk of AI isn't that it'll kill you. It's that a small group of billionaires will control the tech forever.

[-] treefrog@lemm.ee 194 points 1 year ago

Business Insider warning about late stage capitalism feels more than a little ironic.

[-] CrabAndBroom@lemmy.ml 58 points 1 year ago

As does being warned of technological oligarchs monopolizing AI by someone who works for fucking Meta.

[-] Peanutbjelly@sopuli.xyz 10 points 1 year ago* (last edited 1 year ago)

Not to mention he's the reason we can all fuck around with llama models in the first place. Props to Yann and the other Meta AI researchers. Also eager to see the future JEPA stuff.

If only openAI was so open.

[-] uriel238 34 points 1 year ago

Today on PBS, we got an insider warning from a lifelong Republican that the fascism got out of hand and is going for full autocracy, even though he'd been pushing pro-fash policies for the last thirty years.

Everyone thinks The One Ring will be theirs to control.

[-] Mirshe@lemmy.world 9 points 1 year ago

And in other news, the Leopards Eating Faces Party continues to eat faces, confusing Leopards Eating Faces voters...

[-] MirthfulAlembic@lemmy.world 7 points 1 year ago

Was that the Adam Kinzinger one? It's a low bar, but I'll give him a modicum of credit for saying his vote against the first impeachment was cowardice and that he'd vote for Biden in 2024 if Trump is the Republican nominee. Doesn't totally feel like a lesson learnt that he still considers himself a Republican, though.

[-] p03locke@lemmy.dbzer0.com 11 points 1 year ago

They should rename themselves to Business Balls Deep Insider.

This is why we need large-scale open-source AI efforts, even if it scares the everliving shit out of me.

[-] uriel238 10 points 1 year ago

AI safety experts are worried that capitalists will be too eager to get AGI first and will discard caution (friendly AI principles) for mad science.

And I, for one, welcome our new robot overlords!

[-] PsychedSy@sh.itjust.works 6 points 1 year ago

If we have to choose between corporations or the government ruling us with AI I think I'm gonna just take a bullet.

[-] zbyte64 4 points 1 year ago

Any AI safety expert who believes these oligarchs are going to get AGI and not some monkey's paw is also drinking the Kool-Aid.

[-] frezik@midwest.social 8 points 1 year ago

I've been thinking about how to do that. The code for most AI is pretty basic and uninteresting; it's mostly massaging the input into something usable. Companies could open-source their entire code base without letting anything important out.

The dataset is the real problem. Say you want to classify fruit to check if it's ripe enough for harvesting. You'll need a whole lot of pictures of your preferred fruit where it's both ripe and not ripe. You'll want people who know the fruit to classify those images, and then you can feed it into a model. It's a lot of work, and needs to attract a bunch of people to volunteer their time. Largely the sort of people who haven't traditionally been a part of open source software.
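The labeling-then-training pipeline described above can be sketched in a few lines. This is a toy, assuming made-up numeric features (hue, firmness) in place of real fruit photos and scikit-learn as the classifier; a real system would extract features from volunteer-labeled images:

```python
# Toy sketch of the workflow above: volunteers label fruit as ripe/unripe,
# then a classifier learns from those labels. Real systems would use image
# features; here we fake two simple features so the example is self-contained.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)

def fake_fruit(ripe):
    # Ripe fruit: redder hue, softer. Unripe: greener, firmer.
    hue = random.gauss(0.8 if ripe else 0.3, 0.1)
    firmness = random.gauss(0.2 if ripe else 0.9, 0.1)
    return [hue, firmness]

# "Volunteer labels": 1 = ripe, 0 = not ripe
X = [fake_fruit(True) for _ in range(100)] + [fake_fruit(False) for _ in range(100)]
y = [1] * 100 + [0] * 100

model = LogisticRegression().fit(X, y)
print(model.predict([[0.85, 0.15]])[0])  # clearly ripe sample -> 1
```

The model is the easy part; as the comment says, the real cost is producing those 200 trustworthy labels.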

[-] pinkdrunkenelephants@lemmy.cafe 2 points 1 year ago

If we set up some kind of blockchain to just pay people to honestly differentiate between pictures, it could be done.

[-] echodot@feddit.uk 11 points 1 year ago

There is no problem in this world so serious that someone will not suggest blockchain as a potential solution.

[-] Corkyskog@sh.itjust.works 3 points 1 year ago

You're being hyperbolic and silly. Find me a solution to mass shootings or racism that uses blockchain.

[-] ICastFist@programming.dev 4 points 1 year ago

Nah, using Recaptcha is the way to get free labor for that training

[-] errer@lemmy.world 6 points 1 year ago

Might be one of the key democratizing forces us plebs will have…I do suggest people try out some of the open solutions out there already just to have that skill in their back pockets (e.g. GPT4All).

[-] r3df0x@7.62x54r.ru 3 points 1 year ago

Yep. As dangerous as that could be, it's better than centralizing it. There are already systems like GPT4All that ship with good models that are slower than things like ChatGPT but work similarly well.

[-] clearleaf@lemmy.world 33 points 1 year ago

At first the fear mongering was about how AI is so good that you'll be able to replace your entire workforce with it for a fraction of the cost, which would be sooo horrible. Pwease investors pwease oh pwease stop investing in my company uwu

Now they're straight up saying that the people who invest the most in AI will dominate the world. If tech companies were really all that scared of AI they would be calling for more regulations yet none of these people ever seem to be interested in that at all.

[-] Sharklaser@sh.itjust.works 28 points 1 year ago

I think you've spotted the grift here. AI investment has faltered quickly, so a final pump before the dump. Get the suckers thinking it's a no-brainer and dump the shitty stock. Business insider caring for humanity lol

[-] Pohl@lemmy.world 4 points 1 year ago

Either ML is going to scale in an unpredictable way, or it is a complete dead end when it comes to artificial intelligence. The “godfathers” of ai know it’s a dead end.

Probabilistic computing based on statistical models has value and will be useful. Pretending it is a world changing AI tech was a grift from day 1. The fact that art, that cannot be evaluated objectively, was the first place it appeared commercially should have been the clue.

[-] ricdeh@lemmy.world 5 points 1 year ago

Probabilistic computing based on statistical models has value and will be useful. Pretending it is a world changing AI tech was a grift from day 1.

That is literally modelling how your and all our brains work, so no, neuromorphic computing / approximate computing is still the way to go. It's just that neuromorphic computing does not necessarily equal LLMs. Paired with powerful mixed analogue and digital signal chips based on photonics, we will hopefully at some point be able to make neural networks that can scale the simulation of neurons and synapses to a level that is on par with or even superior to the human brain.

[-] Pohl@lemmy.world 3 points 1 year ago

A claim that we have a computing model that shares a design with the operation of a biological brain is philosophical and conjecture.

If we had a theory of mind that was complete, it would simply be a matter of counting up the number of transistors required to approximate varying degrees of intelligence. We do not. We have no idea how the computational meat we all possess enables us to translate sensory input into a continuous sense of self.

It is totally valid to believe that ML computing is a match to the biological model and that it will cross a barrier at some point. But it is a belief that does not support itself with empirical evidence. At least not yet.

[-] Restaldt@lemm.ee 5 points 1 year ago* (last edited 1 year ago)

A claim that we have a computing model that shares a design with the operation of a biological brain is philosophical and conjecture

Mathematical actually. See the 1943 McCulloch and Pitts paper for why Neural networks are called such.

We use logic and math to approximate neurons
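For reference, a McCulloch-Pitts unit is simple enough to write out in full. This is a minimal sketch (the function name is mine) of the 1943 model: binary inputs, fixed weights, a hard threshold, no learning:

```python
# McCulloch-Pitts neuron (1943): fires (outputs 1) iff the weighted sum
# of its binary inputs reaches the threshold. Weights and threshold are
# fixed by hand; there is no learning rule in the original model.
def mp_neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Basic logic gates fall out directly:
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], 1)
NOT = lambda a:    mp_neuron([a],    [-1],   0)

print(AND(1, 1), OR(0, 1), NOT(1))  # 1 1 0
```

McCulloch and Pitts showed networks of these units can compute any logical function, which is why "neural network" stuck as the name.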

[-] frezik@midwest.social 5 points 1 year ago* (last edited 1 year ago)

ML isn't a dead end. I mean, if your target is strong AI at human-like intelligence, then maybe, maybe not. If your goal is useful tools for getting shit done, then ML is already a success. Almost every push for AI in the last 60 years has borne fruit, even if it didn't meet its final end goal.

[-] Pohl@lemmy.world 3 points 1 year ago

That’s pretty much what I meant. ML has a lot of value, promising that it will deliver artificial intelligence is probably hogwash.

Useful tools? yes. AI? No. But never let the truth get in the way of an investor bonanza.

[-] SCB@lemmy.world 4 points 1 year ago

I love hearing these takes.

"TVs are just a fad. All the good content is on radio!"

"The Internet is just a sandbox for nerds. No normal person will use it."

"AI is just a grift. It won't ever be useful."

Lmao sure Jan.

[-] Sharklaser@sh.itjust.works 10 points 1 year ago

AI has been, is, and will be very useful, but it's in an overhype phase poised for a drop. I don't think you understood what I was saying.

[-] SCB@lemmy.world 2 points 1 year ago* (last edited 1 year ago)

it's in an overhype phase poised for a drop

AI isn't a stock.

a final pump before the dump.

This is not how investment capital works.

I understood what you were saying.

[-] SmoothIsFast@citizensgaming.com 6 points 1 year ago

AI isn't a stock.

No, but investments into AI from companies have completely ballooned the stock prices of certain companies, which are due for a correction.

a final pump before the dump.

This is not how investment capital works.

This is exactly how investment capital works. You pump to gain value on the upside, then dump while getting into short positions to profit from the creation and implosion of a bubble. Before the bubble bursts, they drum up public support to unload the bags on. Rinse, repeat, and move on; it's the playbook for VC.

[-] SCB@lemmy.world 3 points 1 year ago* (last edited 1 year ago)

This is exactly how investment capital works. You pump to gain value on the upside, then dump while getting into short positions to profit

Investment capital and stock purchases are different things.

VC means "venture capitalist" and the "venture" part is when you invest in a private (i.e. "doesn't have stock") company. You may have confused this with VCs wanting to get in early before an IPO (Initial Public Offering), so they can get out big and early.

So no, this is not how any of this works.

Very few AI companies are publicly traded, because the industry is still almost entirely startups (hence the capital interest)

Contrary to what GME cultists will tell you, pump and dumps are pretty rare outside of crypto. Crypto is vulnerable to it because there are no fundamentals and market value is based entirely on speculation.

[-] SmoothIsFast@citizensgaming.com 2 points 1 year ago

Investment capital and stock purchases are different things.

No shit; it's why, if you are in the market to offset risk, you are going to open a short position through your family office once that company eventually tries to IPO, which is able to skirt reporting requirements via equity swaps. As I said, investment capital is used for pumping and artificially moving goalposts so that post-IPO you can have untenable growth targets that justify your newfound short exposure for those "fundamentals" you describe.

VC means "venture capitalist" and the "venture" part is when you invest in a private (i.e. "doesn't have stock") company. You may have confused this with VCs wanting to get in early before an IPO (Initial Public Offering), so they can get out big and early.

Yes, as I explained, they use private investment before public scrutiny to create untenable growth targets, generate hype around the IPO, cash out, and short the fucker to the ground, essentially doubling any gains. It's extremely commonplace, which is why it's always a trope around VC firms and their valuations in any business-type media.

So no, this is not how any of this works.

Yes, once again, this is exactly how this all works.

Very few AI companies are publicly traded, because the industry is still almost entirely startups (hence the capital interest)

How many times does one need to state it: the bubble is not in the AI tech itself, it's in those companies introducing AI into their workflow, where the asset bubble is occurring as they inflate the gains AI will actually bring. The VC funds are there so that the AI companies can be sized up with untenable growth projections and valuations that prevent long-term growth, allowing companies like Microsoft to come in and scoop up the IP for significantly cheaper than developing it themselves, cough OpenAI cough cough.

Contrary to what GME cultists will tell you, pump and dumps are pretty rare outside of crypto. Crypto is vulnerable to it because there are no fundamentals and market value based is entirely on speculation.

Fuck, you Wall Street shills are always so predictable. No one is a cultist with GME; we are all household investors who saw a completely overshorted company and invested, continued to investigate the market, and are now trying to apply regulatory pressure as individuals to make sure we have the same access and ability to operate in our financial markets as any Wall Street company does. Pump and dumps are not rare outside of crypto; for fuck's sake, just listen to Cramer for a week and you'll probably identify 10 stocks being gamed as he tries to sell his viewers on that "investment". Granted, most money on your general stock market is siphoned off using off-exchange trading, PFOF, and ETF share fabrication under market-making exemptions.

[-] Peanutbjelly@sopuli.xyz 9 points 1 year ago* (last edited 1 year ago)

You're conflating polarized opinions of very different people and groups.

That being said your antagonism towards investors and wealthy companies is very sound as a foundation.

Hinton only gave his excessive worry after he left his job. There is no reason to suspect his motives.

LeCun is on the opposite side and believes the danger is in companies hoarding the technology. He is why the open community has gained so much traction.

OpenAI are simultaneously being criticized for putting AI out for public use, as well a for not being open enough about the architecture, or allowing the public to actually have control of the state of AI developments. That being said they are leaning towards more authoritarian control from united governments and groups.

I'm mostly geared towards yann lecun and being more open despite the risks, because there is more risk and harm from hindering development of or privatizing the growth of AI technology.

The reality is that every single direction they try is heavily criticized because the general public has jumped onto a weird AI hate train.

See artists still complaining about adobe AI regardless of the training data, and hating on the open model community despite giving power to the people who don't want to join the adobe rent system.

[-] worldsayshi@lemmy.world 3 points 1 year ago

I've heard some of them are calling for regulation, the kind that favours them.

[-] echo64@lemmy.world 21 points 1 year ago

God, I can't stand these people who are basically only worried about AI's effect on the stock market. No normal person would even notice. We have more realistic issues with AI.

[-] TragicNotCute@lemmy.world 7 points 1 year ago

Sure AI is going to kill us all, but what about the Dow?!

[-] SailorMoss@sh.itjust.works 8 points 1 year ago* (last edited 1 year ago)

Raytheon is going to make a killing selling terminators!!! BUY!BUY!BUY!

[-] autotldr@lemmings.world 8 points 1 year ago

This is the best summary I could come up with:


He named OpenAI's Sam Altman, Google DeepMind's Demis Hassabis, and Anthropic's Dario Amodei in a lengthy weekend post on X.

"Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment," LeCun wrote, referring to these founders' role in shaping regulatory conversations about AI safety.

That's significant since, as almost everyone who matters in tech agrees, AI is the biggest development in technology since the microchip or the internet. Altman, Hassabis, and Amodei did not immediately respond to Insider's request for comment.

Thanks to @RishiSunak & @vonderleyen for realizing that AI xrisk arguments from Turing, Hinton, Bengio, Russell, Altman, Hassabis & Amodei can't be refuted with snark and corporate lobbying alone.

In March, more than 1,000 tech leaders, including Elon Musk, Altman, Hassabis, and Amodei, signed a letter calling for a minimum six-month pause on AI development.

Those risks include worker exploitation and data theft that generates profit for "a handful of entities," according to the Distributed AI Research Institute (DAIR).


The original article contains 768 words, the summary contains 163 words. Saved 79%. I'm a bot and I'm open source!

[-] jayrodtheoldbod@midwest.social 7 points 1 year ago

Well we know that, but anybody who does anything less than clap and sing about it gets treated like trash by the huge wave of people who immediately trusted the crazy thing with their lives. It's the fucking iPhone all over again. So hooray for AI.

[-] Lucidlethargy@sh.itjust.works 5 points 1 year ago

Yeah, my own Dad calls me an "activist" now (in a derogatory manner). I never leave my house most days... But okay. I'm an activist because I think AI is a tangible threat to the working class. I've said only a few sentences to my Dad about it. But yeah... I guess I'm the problem for not finding some creative way to profit off LLMs yet.

[-] jcdenton@lemy.lol 4 points 1 year ago

No one can fucking run it locally right now; only people who have 1%er money can run it.

[-] SupraMario@lemmy.world 20 points 1 year ago

Uhh what? You can totally run LLMs locally.

[-] MooseBoys@lemmy.world 11 points 1 year ago

Inference, yes. Training, no. Derived models don’t count.

[-] Jeremyward@lemmy.world 7 points 1 year ago

I have Llama 2 running on localhost, you need a fairly powerful GPU but it can totally be done.

[-] SailorMoss@sh.itjust.works 4 points 1 year ago

I’ve run one of the smaller models on my i7-3770 with no GPU acceleration. It is painfully slow but not unusably slow.

[-] jcdenton@lemy.lol 2 points 1 year ago

To get the same level as something like ChatGPT?

this post was submitted on 30 Oct 2023
539 points (100.0% liked)

Technology

59407 readers
2472 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS