I feel like these people aren't even really worried about superintelligence as much as hyping their stock portfolio that's deeply invested in this charlatan ass AI shit.
There's some useful AI out there, sure, but superintelligence is not around the corner, and pretending it is is just another way to hype the stock price of the companies claiming it is.
looks dubious
Altman and a few others, maybe. But this is a broad collection of people. Like, the computer science professors on the signatory list there aren't running AI companies. And this isn't saying that it's imminent.
EDIT: I'll also add that while I am skeptical about a ban on development, which is what they are proposing, I do agree with the "superintelligence does represent a plausible existential threat to humanity" message. It doesn't need OpenAI to be a year or two away from implementing it for that to be true.
In my eyes, it would be better to accelerate work on AGI safety rather than try to slow down AGI development. I think that the Friendly AI problem is a hard one. It may not be solvable, but I am not convinced that it is definitely unsolvable. The simple fact is that today, we have a lot of unknowns. Worse, a lot of unknown unknowns, to steal a phrase from Rumsfeld. We don't have a great consensus on what the technical problems to solve are, or what any fundamental limitations are. We do know that we can probably develop superintelligence, but we don't know whether developing superintelligence will lead to a technological singularity, and there are some real arguments that it might not; the singularity is one of the major "very hard to control, spirals out of control" scenarios.
And while AGI promises massive disruption and risk, it also has enormous potential. The harnessing of fire permitted humanity to destroy at almost unimaginable levels. Its use posed real dangers that killed many, many people. Just this year, some guy with a lighter wiped out $25 billion in property here in California. Yet it also empowered and enriched us to an incredible degree. If we had said "forget this fire stuff, it's too dangerous", I would not be writing this comment today.
You realize that even if these individuals aren't personally working at AI companies, most if not all of them have dumped all kinds of money into investing in those companies, right? That's part of why the stocks for those companies are so obscenely high: people keep pouring money into them because of the current insane returns on investment.
I have no doubt Wozniak, for example, has dumped money into AI despite not being involved with it on a personal level.
So yes, they are literally invested in promoting the idea that AGI is just around the corner to hype their own investment cash cows.
What do these people stand to profit from getting what they're asking for? They're advocating pulling the plug on that cash cow.