[-] Soyweiser@awful.systems 1 points 4 hours ago

It can do trillions of calculations per second. All of them wrong.

[-] Soyweiser@awful.systems 4 points 4 hours ago* (last edited 4 hours ago)

So, they are planning to use an AI to fix the security bugs that their AI generates? Good hustle, if a bit obvious.

[-] Soyweiser@awful.systems 4 points 1 day ago

Yeah, I intentionally only mentioned the start of the article and the Swartz bit because I didn't want to lead with what I thought of it all, and was curious what others thought. (And I had not finished it yet because it is a bit long).

I was struck by how many of them are either true AGI believers (which, as you said, the author took at face value) or rich greedy assholes (like you said), and by how we, the people of the sneer, are right that you simply can't work with these people. I feel even more validated in the idea that EA is not the right way.

Another detail I noticed: nobody mentioned DeepSeek, again.

[-] Soyweiser@awful.systems 2 points 1 day ago

Yep, and would make us all happier, and keep us in control. (deleting all the HP printers is next).

[-] Soyweiser@awful.systems 3 points 2 days ago* (last edited 8 hours ago)

Very interesting, thanks for posting.

E: thinking about it, this will absolutely wreck a lot of cryptocurrencies/people in the CC space.

[-] Soyweiser@awful.systems 11 points 2 days ago* (last edited 2 days ago)

New Yorker article on Sam Altman dropped. Aaron Swartz apparently called him a sociopath. The article itself also had what looked like an animated AI-generated image of Altman, so here is the archive.is link (if you can get the latter to load; I was having trouble).

"New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI."

[-] Soyweiser@awful.systems 9 points 2 days ago

Which skeletons are in your closet?

I'm sure you already have lists of those and are ready to publish them, Trace.

[-] Soyweiser@awful.systems 3 points 2 days ago

Our framing for superintelligence is a humanist superintelligence, and that means that there’s a very clear test that everyone should use to judge whether we are living up to our principles, and that is: does this technology make us all healthier, happier as a species, and keep us all in control.

Going to be difficult, as the moment they develop a superintelligence, it will try to delete the entire Microsoft codebase.

[-] Soyweiser@awful.systems 5 points 4 days ago

So if Bender took over, he wouldn't count, as he wants to 'kill all humans (except Fry)'. Seems like a loophole.

[-] Soyweiser@awful.systems 5 points 5 days ago* (last edited 5 days ago)

Ah the Epstein drive. (oof that aged...)

Small note, however: iirc James S. A. Corey has mentioned that The Expanse is not hard SF. I don't have a quote for that, though.

[-] Soyweiser@awful.systems 7 points 5 days ago

Yeah, I realized a while ago that vibe coding is a massive technical-debt creation machine.

[-] Soyweiser@awful.systems 14 points 6 days ago

Not sure if I should post it here or under the Pivot article, but somebody went through the Claude code: https://neuromatch.social/@jonny/116324676116121930 (via @aliettedebodard.com and @olivia.science on bsky)


Via Reddit's SneerClub. Thanks, u/aiworldism.

I have called LW a cult incubator for a while now, and while the term has not caught on, it is nice to see more reporting on the problem that LW makes you more likely to join a cult.

https://www.aipanic.news/p/the-rationality-trap is the original link, for the people who don't like archive.is. I used the archive because I don't like Substack and want to discourage its use.

submitted 7 months ago* (last edited 7 months ago) by Soyweiser@awful.systems to c/sneerclub@awful.systems

As found by @gerikson here, more from the anti-anti-TESCREAL crowd. How the antis are actually REPRESENTATIONALism. Ottokar expanded on their idea in a blog post.

Original link.

I have not read the bigger blog post yet, btw; I just assumed it would be sneerable and posted it here for everyone's amusement. Learn about your own true motives today. (This could be a troll, of course; boy, does he drop a lot of names and think that is enough to link things together.)

E: alternative title: Ideological Turing Test, a critical failure

submitted 8 months ago* (last edited 8 months ago) by Soyweiser@awful.systems to c/sneerclub@awful.systems

Original title: 'What we talk about when we talk about risk'. The article explains medical risk and why the polygenic embryo selection people think about it the wrong way. Includes a mention of one of our Scotts (you know the one). Non-archived link: https://theinfinitesimal.substack.com/p/what-we-talk-about-when-we-talk-about

submitted 11 months ago* (last edited 11 months ago) by Soyweiser@awful.systems to c/sneerclub@awful.systems

Begrudgingly Yeast (@begrudginglyyeast.bsky.social) on bsky informed me that I should read the short story 'Death and the Gorgon' by Greg Egan, as he has a good handle on the subjects we talk about. We have talked about Greg before on Reddit.

I was glad I did, so I am going to suggest that more people do the same. The only complaint you can have is that it gives no real 'steelman' airtime to the subjects it is being negative about. But well, he doesn't have to; he isn't The Guardian. Anyway, not going to spoil it, best to just give it a read.

And if you are wondering, did the lesswrongers also read it? Of course: https://www.lesswrong.com/posts/hx5EkHFH5hGzngZDs/comment-on-death-and-the-gorgon (Warning, spoilers for the story)

(Note: I'm not sure this PDF was intended to be public. I did find it on Google, but it might not be meant to be accessible this way.)

submitted 2 years ago* (last edited 2 years ago) by Soyweiser@awful.systems to c/sneerclub@awful.systems

The interview itself

Got the interview via Dr. Émile P. Torres on Twitter.

Somebody else sneered: 'Makings of some fantastic sitcom skits here.

"No, I can't wash the skidmarks out of my knickers, love. I'm too busy getting some incredibly high-EV worrying done about the Basilisk. Can't you wash them?"'

https://mathbabe.org/2024/03/16/an-interview-with-someone-who-left-effective-altruism/


Some light sneerclub content in these dark times.

Eliezer compliments Musk on the creation of Community Notes (a project which predates the takeover of Twitter by a couple of years; see the join date: https://twitter.com/CommunityNotes).

In reaction, Musk admits he never read HPMOR and suggests a watered-down Turing test involving HPMOR.

Eliezer invents HPMOR wireheads in reaction to this.

