Need to make a primal scream without gathering footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can’t escape them, I would love to sneer at them.

[-] froztbyte@awful.systems 10 points 4 months ago

bit of a combo-sneer this morning

CNBC put out an article uncritically repeating yet another round of lieboy pulling the exact same shit. I don't recognize the authors offhand, so I'm not sure whether they're the usual bootlickers

openai is going in hard, hiring an ex-NSA person who was appointed by the walking, talking racist mop (via dan gillmor). for the .... safety role! ah yes, I'm sure we'll all be so very surprised by this attempted consolidation of power.

[-] dgerard@awful.systems 8 points 4 months ago

to their credit, they did manage to get past the editor:

This is all far-out stuff even for Musk, who is notorious for making ambitious promises to investors and customers that don’t pan out — from developing software that can turn an existing Tesla into a self-driving vehicle with an upload, to EV battery swapping stations.

[-] froztbyte@awful.systems 9 points 4 months ago

yeah, fair call on that. I just live in hope that we can get to a point where “lying fucker with history of failures and grandiose statements has made another ridiculous grandiose statement probably composed of lies” can be the actual type of headline, instead of this constant simping bullshit

[-] dgerard@awful.systems 10 points 4 months ago
[-] froztbyte@awful.systems 7 points 4 months ago

lol at the “delved” there. Wonder if it’s some intentional contrarianism

[-] slopjockey@awful.systems 10 points 4 months ago
[-] Soyweiser@awful.systems 10 points 4 months ago* (last edited 4 months ago)

immediately know which represents pragmatism vs esoteric theory

What neither of them is doing is 'The Stare'. Moore can do it, Crowley could do it, Rasputin could do it. Get these low-budget farces out of here, and give me a proper Stare.

Moldbug tries, but he mostly just looks like he farted and is trying to figure out if you noticed.

[-] 200fifty@awful.systems 9 points 4 months ago

Ah yes, pragmatists, well known for their constantly sunny and optimistic outlook on the future, consequences be damned (?)

[-] o7___o7@awful.systems 10 points 4 months ago* (last edited 4 months ago)

Lawrence Lessig falls victim to the siren song of the blarney engines. Also, lol cnn

Many people refer to concerns about the technology as a question of “AI safety.” That’s a terrible term to describe the risks that many people in the field are deeply concerned about. Some of the leading AI researchers, including Turing Prize winner Yoshua Bengio and Sir Geoffrey Hinton, the computer expert and neuroscientist sometimes referred to as “the godfather of AI,” fear the possibility of runaway systems creating not just “safety risks,” but catastrophic harm.

And while the average person can’t imagine how anyone could lose control of a computer (“just unplug the damn thing!”), we should also recognize that we don’t actually understand the systems that these experts fear.

Companies operating in the field of AGI — artificial general intelligence, which broadly speaking refers to the theoretical AI research attempting to create software with human-like intelligence, including the ability to perform tasks that it is not trained or developed for — are among the least regulated, inherently dangerous companies in America today. There is no agency that has legal authority to monitor how the companies develop their technology or the precautions they are taking.

https://www.cnn.com/2024/06/06/opinions/artificial-intelligence-risks-chat-gpt-lessig/index.html

[-] dgerard@awful.systems 10 points 4 months ago

i ran the NVidia CEO's press conference through my ChatGPT based translator and it came out "lol this is gonna bomb in two quarters but holy shit it's fun while it lasts and i can def get a few scheduled insider sales done, now where's the coke"

[-] saucerwizard@awful.systems 9 points 4 months ago
this post was submitted on 10 Jun 2024

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community
