submitted 4 months ago* (last edited 4 months ago) by BigMuffin69@awful.systems to c/sneerclub@awful.systems

[-] Shitgenstein1@awful.systems 23 points 4 months ago

A year and two and a half months since his Time magazine doomer article.

No shutdowns of large AI training; in fact, it has only expanded. No ceiling on compute power. No multinational agreements to regulate GPU clusters or to first-strike rogue datacenters.

Just another note in a panic that accomplished nothing.

[-] fartsparkles@sh.itjust.works 16 points 4 months ago

It’s also a bunch of brainfarting drivel that could be summarized as:

Before we accidentally make an AI capable of posing an existential risk to human safety, perhaps we should first figure out how to build effective safety measures.

Or

Read Asimov’s I, Robot. Then note that in our reality, we’ve not yet invented the Three Laws of Robotics.

[-] Shitgenstein1@awful.systems 15 points 4 months ago

Before we accidentally make an AI capable of posing an existential risk to human safety

It's cool to know that this isn't a real concern, and therefore to have a clear vantage on how all the downstream anxiety is really a piranha pool of grifts for venture bucks and ad clicks.

this post was submitted on 15 Jun 2024
31 points (100.0% liked)

SneerClub

983 readers

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

founded 1 year ago