submitted 1 year ago by L4s@lemmy.world to c/technology@lemmy.world

AI one-percenters seizing power forever is the real doomsday scenario, warns AI godfather: The real risk of AI isn't that it'll kill you. It's that a small group of billionaires will control the tech forever.

[-] Sharklaser@sh.itjust.works 28 points 1 year ago

I think you've spotted the grift here. AI investment has faltered quickly, so it's a final pump before the dump. Get the suckers thinking it's a no-brainer and dump the shitty stock. Business Insider caring for humanity, lol.

[-] SCB@lemmy.world 4 points 1 year ago

I love hearing these takes.

"TVs are just a fad. All the good content is on radio!"

"The Internet is just a sandbox for nerds. No normal person will use it."

"AI is just a grift. It won't ever be useful."

Lmao sure Jan.

[-] Sharklaser@sh.itjust.works 10 points 1 year ago

AI has been, is, and will be very useful, but it's in an over-hyped phase poised for a drop. I don't think you understood what I was saying.

[-] SCB@lemmy.world 2 points 1 year ago* (last edited 1 year ago)

it's in an over-hyped phase poised for a drop

AI isn't a stock.

a final pump before the dump.

This is not how investment capital works.

I understood what you were saying.

[-] SmoothIsFast@citizensgaming.com 6 points 1 year ago

AI isn't a stock.

No, but corporate investment in AI has completely ballooned the stock prices of certain companies, and that's due for a correction.

a final pump before the dump.

This is not how investment capital works.

This is exactly how investment capital works. You pump to gain value on the upside, then dump while getting into short positions to profit from the creation and implosion of a bubble. Before the bubble bursts, they drum up public support to unload the bags on. Rinse, repeat, and move on; it's the VC playbook.....

[-] SCB@lemmy.world 3 points 1 year ago* (last edited 1 year ago)

This is exactly how investment capital works. You pump to gain value on the upside, then dump while getting into short positions to profit

Investment capital and stock purchases are different things.

VC means "venture capitalist" and the "venture" part is when you invest in a private (i.e. "doesn't have stock") company. You may have confused this with VCs wanting to get in early before an IPO (Initial Public Offering), so they can get out big and early.

So no, this is not how any of this works.

Very few AI companies are publicly traded, because the industry is still almost entirely startups (hence the capital interest)

Contrary to what GME cultists will tell you, pump and dumps are pretty rare outside of crypto. Crypto is vulnerable to it because there are no fundamentals and market value is based entirely on speculation.

[-] SmoothIsFast@citizensgaming.com 2 points 1 year ago

Investment capital and stock purchases are different things.

No shit. That's why, if you're in the market to offset risk, you open a short position through your family office once that company eventually IPOs, which lets you skirt reporting requirements via equity swaps. As I said, investment capital is used for pumping and for artificially moving the goalposts, so that post-IPO you have untenable growth targets that justify your newfound short exposure on those "fundamentals" you describe.

VC means "venture capitalist" and the "venture" part is when you invest in a private (i.e. "doesn't have stock") company. You may have confused this with VCs wanting to get in early before an IPO (Initial Public Offering), so they can get out big and early.

Yes, as I explained: they use private investment, before public scrutiny, to create untenable growth targets, generate hype around the IPO, cash out, and short the fucker into the ground, essentially doubling any gains. It's extremely commonplace, which is why VC firms and their valuations are a constant trope in any business-type media.

So no, this is not how any of this works.

Yes, once again, this is exactly how this all works.

Very few AI companies are publicly traded, because the industry is still almost entirely startups (hence the capital interest)

How many times does one need to state that the bubble is not in the AI tech itself? It's in the companies introducing AI into their workflows; that's where the asset bubble is occurring, because they're inflating the gains AI will actually bring. The VC funds are there so the AI companies can be sized up with untenable growth projections and valuations that prevent long-term growth, allowing companies like Microsoft to come in and scoop up the IP for significantly less than it would cost to develop it themselves, cough OpenAI cough cough.

Contrary to what GME cultists will tell you, pump and dumps are pretty rare outside of crypto. Crypto is vulnerable to it because there are no fundamentals and market value is based entirely on speculation.

Fuck, you Wall Street shills are always so predictable. No one with GME is a cultist; we're all household investors who saw a completely over-shorted company and invested, kept investigating the market, and are now trying to apply regulatory pressure as individuals to make sure we have the same access to, and ability to work in, our financial markets as any Wall Street firm. Pump and dumps are not rare outside of crypto; for fuck's sake, just listen to Cramer for a week and you'll probably identify 10 stocks being gamed as he tries to sell his viewers on that "investment". Granted, most money in the general stock market is siphoned off using off-exchange trading, PFOF, and ETF share fabrication under market-making exemptions.

[-] SCB@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

Imagine knowing so little but writing so much.

Bonus funny points for calling me a "Wall Street shill" in a thread about an interaction that specifically does not involve the stock market.

No one with GME is a cultist; we're all household investors

Lmao this explains so much. TO THE MOOOOOOON

[-] SmoothIsFast@citizensgaming.com 1 points 1 year ago

Imagine knowing so little but writing so much.

Lmao, OK bud. If that were true, you could refute this "nonsense", right? Or do you not want me pointing to the constant fines handed out over this shit? Do you want me to explain the legal loopholes in the regulations governing our markets, which you'll want to point to as proof this isn't a widespread issue?

Bonus funny points for calling me a "Wall Street shill" in a thread about an interaction that specifically does not involve the stock market.

Bonus points for sneaking in a GME diversion that had no relevance in the first place, to try and make it seem like anyone talking about market reform is some cultist. It's a tell for meltdowners and shills, lmao.

Lmao this explains so much. TO THE MOOOOOOON

Damn, I really love all the space in your brain I get to take up rent-free. It's so spacious in here with your lack of critical thinking skills, ahhhhh, so comfy.

[-] Pohl@lemmy.world 4 points 1 year ago

Either ML is going to scale in an unpredictable way, or it is a complete dead end when it comes to artificial intelligence. The "godfathers" of AI know it's a dead end.

Probabilistic computing based on statistical models has value and will be useful. Pretending it is a world-changing AI tech was a grift from day 1. The fact that art, which cannot be evaluated objectively, was the first place it appeared commercially should have been the clue.

[-] frezik@midwest.social 5 points 1 year ago* (last edited 1 year ago)

ML isn't a dead end. I mean, if your target is strong AI with human-like intelligence, then maybe, maybe not. If your goal is useful tools for getting shit done, then ML is already a success. Almost every push for AI in the last 60 years has borne fruit, even if it didn't meet its final end goal.

[-] Pohl@lemmy.world 3 points 1 year ago

That's pretty much what I meant. ML has a lot of value; promising that it will deliver artificial intelligence is probably hogwash.

Useful tools? Yes. AI? No. But never let the truth get in the way of an investor bonanza.

[-] ricdeh@lemmy.world 5 points 1 year ago

Probabilistic computing based on statistical models has value and will be useful. Pretending it is a world-changing AI tech was a grift from day 1.

That is literally modelling how your brain, and all our brains, work, so no, neuromorphic computing / approximate computing is still the way to go. It's just that neuromorphic computing does not necessarily equal LLMs. Paired with powerful mixed analogue and digital signal chips based on photonics, we will hopefully at some point be able to make neural networks that can scale the simulation of neurons and synapses to a level on par with, or even superior to, the human brain.

[-] Pohl@lemmy.world 3 points 1 year ago

A claim that we have a computing model that shares a design with the operation of a biological brain is philosophical conjecture.

If we had a theory of mind that was complete, it would simply be a matter of counting up the number of transistors required to approximate varying degrees of intelligence. We do not. We have no idea how the computational meat we all possess enables us to translate sensory input into a continuous sense of self.

It is totally valid to believe that ML computing is a match to the biological model and that it will cross a barrier at some point. But it is a belief that is not supported by empirical evidence. At least not yet.

[-] Restaldt@lemm.ee 5 points 1 year ago* (last edited 1 year ago)

A claim that we have a computing model that shares a design with the operation of a biological brain is philosophical conjecture

Mathematical, actually. See the 1943 McCulloch and Pitts paper for why neural networks are called that.

We use logic and math to approximate neurons.
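To make that concrete, here's a minimal sketch of a McCulloch-Pitts style unit in Python; the weights and thresholds are picked by hand purely for illustration:

```python
# A McCulloch-Pitts style neuron: a weighted sum of binary inputs
# compared against a threshold. Pure logic and arithmetic, no learning.

def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of the inputs reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With hand-picked weights and thresholds, single units act like logic gates.
def AND(a, b): return mp_neuron([a, b], weights=[1, 1], threshold=2)
def OR(a, b):  return mp_neuron([a, b], weights=[1, 1], threshold=1)
def NOT(a):    return mp_neuron([a],    weights=[-1],   threshold=0)

print([AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
print([OR(a, b)  for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 1]
print([NOT(a)    for a in (0, 1)])                               # [1, 0]
```

Modern networks replace the hard threshold with smooth activations and learn the weights from data, but the basic unit hasn't changed much since that paper.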

[-] SmoothIsFast@citizensgaming.com 1 points 1 year ago

We have also recently trained a model against a small fly or worm (I can't remember which), and it behaved identically to the original organism. It's the complicated networks, which essentially have multiple sub-networks, that are our current weak spot.

[-] SmoothIsFast@citizensgaming.com 1 points 1 year ago

We have literally created small organism brains with neural networks that behaved nearly identically, using ML and neural nets. Idk, but I would call that pretty damn good empirical evidence. We do not know the specific mechanism by which a brain chemically generates its weights, so to speak, and computes, but we understand that at its simplest it is a neuron with a weight (or sensitivity, whatever you want to call it), and depending on that weight it produces an output pretty damn consistently.

The brain is multiple networks working simultaneously with the ability to self-learn. That architecture is what is missing from our current ML models if you want general artificial intelligence, and we are also missing foundational algorithms for choosing weights, rather than randomly assigning them and hoping for the best, to facilitate memory and cleaner network integration. You need specialized networks for each critical function (motor control, emotional regulation, etc.), then you need a system that can interpret or create weights in a way that lets you imprint an "image", for lack of a better term, to create memories. Consciousness would then just be the network that interprets each network's output and decides which systems need to be engaged next, or whether an end state was reached (rough sketch at the bottom of this comment). Which, imo, is clearly demonstrated by split-brain individuals.

If we had a theory of mind that was complete, it would simply be a matter of counting up the number of transistors required to approximate varying degrees of intelligence.

I think this is where we fundamentally fall short: interpreting how the brain calculates and uses chemical weights, so to speak, to vary its output. If we can't judge that efficiency, we can't just count all the transistors and say "it's this smart", because the model could literally be trained to output the letter "s" for everything even if it's the size of ChatGPT. I think we could very well state the capacity and limits of our brains by counting the number of neurons, but whether a brain reaches that potential depends on how efficiently it was trained, and that is where approximating intelligence becomes insanely difficult.
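As a toy illustration of the layout described above, specialized sub-networks feeding one coordinating network, here's a minimal Python sketch. Every name, size, and weight in it is made up for illustration, and nothing learns anything:

```python
import random

class SubNetwork:
    """Toy 'sub-network': a fixed random linear map with a ReLU-style squash."""
    def __init__(self, n_inputs, n_outputs, seed):
        rng = random.Random(seed)
        # Weights are assigned randomly: exactly the naive starting point
        # complained about above, with no principled way to choose them.
        self.weights = [[rng.uniform(-1, 1) for _ in range(n_inputs)]
                        for _ in range(n_outputs)]

    def forward(self, inputs):
        return [max(0.0, sum(w * x for w, x in zip(row, inputs)))
                for row in self.weights]

# Specialized networks for separate critical functions (hypothetical names).
motor_net   = SubNetwork(n_inputs=4, n_outputs=3, seed=1)
emotion_net = SubNetwork(n_inputs=4, n_outputs=3, seed=2)

# The "coordinator" reads every sub-network's output and decides which
# system to engage next: the role the comment above assigns to consciousness.
coordinator = SubNetwork(n_inputs=6, n_outputs=2, seed=3)

sensory_input = [0.2, 0.9, 0.1, 0.5]
combined = motor_net.forward(sensory_input) + emotion_net.forward(sensory_input)
decision = coordinator.forward(combined)
print("engage motor system" if decision[0] >= decision[1] else "engage emotional system")
```

It's just the wiring diagram, with random weights standing in for the foundational weight-choosing algorithms we don't have yet.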

[-] Sharklaser@sh.itjust.works 1 points 1 year ago

Neural networks have been phenomenal in the results they've achieved, outdoing support vector machines, random forests, Markov models, etc. But I do wonder whether there's a bias towards them because they can mimic what the brain does, like the other post said, and where the limits are.

For example, in medicine we want to spot unknown correlations to improve things like drug discovery and stratified medicine, or to find strange patterns of disease within a population that suggest unknown factors at play... There might be a mathematical model better than convolutional neural networks that doesn't mimic the brain, but maybe we'd need an AI to develop it, like Deep Thought in The Hitchhiker's Guide to the Galaxy!
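For what it's worth, that kind of head-to-head is cheap to run on small tabular data. A quick scikit-learn sketch (a bundled toy dataset and near-default settings, so the exact numbers don't mean much):

```python
# Compare a small neural network against an SVM and a random forest
# on scikit-learn's bundled breast-cancer classification dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "neural net": make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0)),
    "SVM": make_pipeline(StandardScaler(), SVC(random_state=0)),
    "random forest": RandomForestClassifier(random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)                           # train on one split
    print(f"{name}: {model.score(X_test, y_test):.3f}")   # accuracy on the held-out split
```

Don't read much into the scores themselves; the point is that the comparison is easy to run, and the harder question (the one you're raising) is which model families keep working when the data and the patterns get messier.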
