[-] Soyweiser@awful.systems 3 points 2 hours ago

Somebody on bsky mentioned this is probably because Disney wants to be seen on the stock market as a tech company, and not as a cartoon/theme-park company.

(See how Tesla went from cars, to self-driving, and now to robots.)

[-] Soyweiser@awful.systems 4 points 2 hours ago

But paying workers gives workers money, which means you lose. Whereas buying a robot gives another capitalist/company money, which means they win. They are the most class-conscious class.

[-] Soyweiser@awful.systems 3 points 2 hours ago* (last edited 1 hour ago)

Nah, he has brainrot. He deadnamed and misgendered Rebecca Heineman in his eulogy of her. Transphobia really does seem to make people worse at thinking.

(Not a big shock, considering how bad his answer was to the 'epic own' of the person asking him if he would hire more women. He said: "we are having a hard time hiring all the people that we want. It doesn't matter what they look like.", which sounds like a great own, but leaves one big question: why aren't you trying to educate more people in what you want, then? What are you doing to fix that problem, if it is such a big problem for you? (Which then leads to: how will you ensure that this teaching system has women in it, etc.))

E: and if that doesn't convince people he sucks: https://web.archive.org/web/20230528051421/https://www.giantbomb.com/john-carmack/3040-4576/forums/never-meet-your-heroes-john-carmack-throws-lot-in--1911724/ (this article no longer seems to exist live).

[-] Soyweiser@awful.systems 2 points 2 hours ago

What? That doesn't even make sense. I don't type what I'm thinking. (My only real usage of LLMs was trying to break them, so imagine the output of that. And you could then imagine what you think I was trying to attempt in messing with the system, but then you would be doing the work. Say you see me send the same message twice. A logical conclusion would be that I was testing whether it gave different results when prompted the same way twice. However, it is more likely I just made a copy-paste error and accidentally sent the wrong text the second time. So the person reading the logs is doing a lot of the work here.) Ignoring all that, he also didn't think of the next case: people using an LLM to fake chat logs to optimize getting hired. Good way to hire a lot of North Koreans.

[-] Soyweiser@awful.systems 2 points 2 hours ago

If only people had stuffed Land in a locker instead of buying drugs off him.

[-] Soyweiser@awful.systems 2 points 4 hours ago* (last edited 4 hours ago)

Fun detail about this hype about 'innovation' (there was not just a poster but also an email): aircraft carriers use physical chits and a table for their planning. (Bonus: can't be hacked or EMPed.) So it's going to be fun to see if he will try to get rid of that, and we might see a carrier sunk in the Venezuelan/Taiwan wars.

[-] Soyweiser@awful.systems 4 points 5 hours ago

Two things. First, thinking the LLM stuff will help in robotics doesn't seem to fly: LLMs are based on the whole internet and all books, a massive amount of data, and for menial tasks that data doesn't exist yet. (It is also harder to get; creating text is easy.)

And the story about how humanoid bots are great for working in a warehouse also seems wrong to me, as one of the problems we have all had is that if you are carrying things, the big box you are carrying obscures part of your vision. Different designs would be better for that (even a humanoid robot with eyes on the backs of its hands, for example). Such a lack of imagination.

[-] Soyweiser@awful.systems 9 points 3 days ago

I forgot to mention it last week, but this is Scott Adams shit. The stuff which made him declare that Trump would win in a landslide in 2016 due to movie rules. IIRC he also claimed he was right about that, despite Trump not winning in a landslide; the sort of goalpost-moving BS which he judges others harshly for (even in situations where it doesn't apply).

So up next, Yud will claim some pro-AI people want him dead, and after that Yud will try to convince people he can bring people to orgasm by words alone. I mean, those are the 'Adams-like' genre tropes now.

[-] Soyweiser@awful.systems 8 points 3 days ago

She also said she basically wants to focus less on the sort of 'callout' content which does well on YT and more on actual physics stuff. Which is great, and it's also good she realized how slippery a slide that sort of content is for your channel.

(I mentioned before how sad it is to see 'angry gamer culture war' channels be stuck in that sort of content, as when they do non-rage stuff, nobody watches them. (I mean sad for them in an 'if I were them' way, btw; don't get me wrong, fuck 'em for choosing that path (and fuck the system for the fact that they are now financially stuck in it, and that it made this an available path in the first place, while making it hard for LGBT people to make a channel about their experiences). So many people hurt/radicalized for a few clicks and ad money.))

[-] Soyweiser@awful.systems 2 points 3 days ago

The empathy bit was added by people talking about the study; sorry if that wasn't clear.

[-] Soyweiser@awful.systems 5 points 4 days ago* (last edited 4 days ago)

Ah, prophet-maxxing. 'they have no hope of understanding and I have no hope of explaining in 30 seconds'

'The first and oldest reason I stay sane is that I am an author, and above tropes. Going mad in the face of the oncoming end of the world is a trope.'

This guy wrote this. (Note: I don't think there is anything wrong with looking like a nerd (I mean, I have a mirror somewhere, so I don't want to be a hypocrite on this), but looking like one while saying you are above tropes is something. There is also HPMOR.)

[-] Soyweiser@awful.systems 4 points 4 days ago* (last edited 4 days ago)

New preprint just dropped, and I noticed some seemingly pro-AI people talking about it and concluding that people who have more success with genAI have better empathy, are more social, and have theory of mind. (I will not put those random people on blast. I also have not read the paper itself (aka I didn't do the minimum actually required research, so be warned); I just wanted to give people a heads-up on it.)

But yes, that does describe the AI pushers: social people who have good empathy and theory of mind. (Also, oh god, genAI runs on fairy rules, you just gotta believe it is real. (I'm joking a bit here; it is probably fine, as it helps if you understand where a model is coming from and realize its limitations, and the research seems to be talking about humans + genAI vs just genAI.))

14 points

Via reddit's sneerclub. Thanks, u/aiworldism.

I have called LW a cult incubator for a while now, and while the term has not caught on, it is nice to see more reporting on the problem that LW makes you more likely to join a cult.

https://www.aipanic.news/p/the-rationality-trap is the original link, for the people who don't like archive.is. I used the archive because I don't like Substack and want to discourage its use.

21 points, submitted 3 months ago* (last edited 3 months ago) by Soyweiser@awful.systems to c/sneerclub@awful.systems

As found by @gerikson here, more from the anti-anti-TESCREAL crowd. How the antis are actually R9PRESENTATIONALism. Ottokar expanded on their idea in a blog post.

Original link.

I have not read the bigger blog post yet, btw; I just assumed it would be sneerable and posted it here for everyone's amusement. Learn about your own true motives today. (This could be a troll, of course; boy, does he drop a lot of names and think that is enough to link things.)

E: alternative title: Ideological Turing Test, a critical failure

15 points, submitted 4 months ago* (last edited 4 months ago) by Soyweiser@awful.systems to c/sneerclub@awful.systems

Original title: 'What we talk about when we talk about risk'. The article explains medical risk and why the polygenic embryo selection people think about it the wrong way. Includes a mention of one of our Scotts (you know the one). Non-archived link: https://theinfinitesimal.substack.com/p/what-we-talk-about-when-we-talk-about

11 points, submitted 7 months ago* (last edited 7 months ago) by Soyweiser@awful.systems to c/sneerclub@awful.systems

Begrudgingly Yeast (@begrudginglyyeast.bsky.social) on bsky informed me that I should read this short story called 'Death and the Gorgon' by Greg Egan, as he has a good handle on the subjects we talk about. We have talked about Greg before on Reddit.

I was glad I did, so I am going to suggest that more people here do it. The only complaint you can have is that it gives no real 'steelman' airtime to the subjects it is being negative about. But well, he doesn't have to; he isn't the Guardian. Anyway, not going to spoil it; best to just give it a read.

And if you are wondering, did the lesswrongers also read it? Of course: https://www.lesswrong.com/posts/hx5EkHFH5hGzngZDs/comment-on-death-and-the-gorgon (Warning, spoilers for the story)

(Note: I'm not sure this PDF was intended to be public. I did find it via Google, but it might not be meant to be accessible this way.)

12 points, submitted 2 years ago* (last edited 2 years ago) by Soyweiser@awful.systems to c/sneerclub@awful.systems

The interview itself

Got the interview via Dr. Émile P. Torres on twitter.

Somebody else sneered: 'Makings of some fantastic sitcom skits here.

"No, I can't wash the skidmarks out of my knickers, love. I'm too busy getting some incredibly high-EV worrying done about the Basilisk. Can't you wash them?"'

https://mathbabe.org/2024/03/16/an-interview-with-someone-who-left-effective-altruism/

19 points

Some light sneerclub content in these dark times.

Eliezer compliments Musk on the creation of Community Notes, a project which predates the takeover of twitter by a couple of years (see the join date: https://twitter.com/CommunityNotes).

In reaction, Musk admits he never read HPMOR, and he suggests a watered-down Turing test involving HPMOR.

Eliezer invents HPMOR wireheads in reaction to this.
