submitted 1 year ago* (last edited 1 year ago) by froztbyte@awful.systems to c/techtakes@awful.systems

archive

"There's absolutely no probability that you're going to see this so-called AGI, where computers are more powerful than people, in the next 12 months. It's going to take years, if not many decades, but I still think the time to focus on safety is now," he said.

just days after poor lil sammyboi and co went out and ran their mouths! the horror!

Sources told Reuters that the warning to OpenAI's board was one factor among a longer list of grievances that led to Altman's firing, as well as concerns over commercializing advances before assessing their risks.

Asked if such a discovery contributed..., but it wasn't fundamentally about a concern like that.

god I want to see the boardroom leaks so bad. STOP TEASING!

“What we really need are safety brakes. Just like you have a safety brake in an elevator, a circuit breaker for electricity, an emergency brake for a bus – there ought to be safety brakes in AI systems that control critical infrastructure, so that they always remain under human control,” Smith added.

this appears to be a vaguely good statement, but I'm gonna (cynically) guess that it's more steered by the fact that MS now repeatedly burned their fingers on human-interaction AI shit, and is reaaaaal reticent about the impending exposure

wonder if they'll release a business policy update about usage suitability for *GPT and friends

[-] gerikson@awful.systems 11 points 1 year ago

Surprise level zero.

The idea that anyone would take "alignment alarmists" seriously is ludicrous. They love to compare themselves to the concerned atomic scientists, but those people were a) plugged in to the system in a way these dorks aren't and b) could actually point to a real fucking atomic bomb.

The people who were worried about nuclear tech prior to the Manhattan Project were more worried that actual fascists would get to the tech first.

[-] swlabr@awful.systems 10 points 1 year ago

Why does it always have to be fascists? Can't it be an eldritch force without a scrutable motivation?

[-] froztbyte@awful.systems 8 points 1 year ago

paging Dr Stross. Dr Stross to the author room please

[-] froztbyte@awful.systems 6 points 1 year ago

(Which I mean as a pun not as a tag)

[-] Shitgenstein1@awful.systems 9 points 1 year ago* (last edited 1 year ago)

Surprise level zero after so-called effective altruists uncritically adopted the Californian ideology, whether about AI alignment or anything else, and refused any deep critique of capitalism, only to suddenly see the entrepreneurial interests ditch them as soon as their humanistic PR actually threatened the bottom line.

[-] mountainriver@awful.systems 7 points 1 year ago

In response to the last sentence, there's an HG Wells story from before World War One with pilots tossing nukes from biplanes. (The nukes have smaller explosions but keep on burning for decades.) There's also Karel Capek's The God Machine from the 1920s, where an inventor creates a machine that transforms matter into energy, but in the process creates a byproduct of God (turns out God is in all matter, but not all energy), leading to all sorts of problems.

But neither Wells nor Capek took their own writing seriously enough to create a cult around it.

this post was submitted on 01 Dec 2023