1

This is worth delurking for.

A ficus-lover on the forums for iNaturalist (where people crowdsource identifications of nature pics) is clearly brain-poisoned by LW or their ilk, and perforce doesn't understand why the bug-loving iNat crew don't agree that

inaturalist should be a market, so that our preferences, as revealed through our donations, directly influence the supply of observations and ids.

Personally, I have spent enough time on iNat that I can identify a Rat when I see one.

I can't capture the glory of this in a few pull quotes; you'll have to go there to see the batshit.

(h/t hawkpartys @ tumblr)

2

In a recent Hard Fork (Hard Hork?) episode, Casey Newton and Kevin Roose described attending the recent "The Curve" conference -- a conference in Berkeley organized and attended mostly by our very best friends. When asked about the most memorable session he attended at this conference, Casey said:

That would have been a session called If Anyone Builds It, Everyone Dies, which was hosted by Eliezer Yudkowsky. Eliezer is sort of the original doomer. For a couple of decades now, he has been warning about the prospects of superintelligent AI.

His view is that there is almost no scenario in which we could build a superintelligence that wouldn't either enslave us or hurt us, kill all of us, right? So he's been telling people from the beginning, we should probably just not build this. And so you and I had a chance to sit in with him.

People fired a bunch of questions at him. And we should say, he's a really polarizing figure, and I think is sort of on one extreme of this debate. But I think he was also really early to understanding a lot of harms that have bit by bit started to materialize.

And so it was fascinating to spend an hour or so sitting in a room and hearing him make his case.

[...]

Yeah, my case for taking these folks seriously, Kevin, is that this is a community that, over a decade ago, started to make a lot of predictions that just basically came true, right? They started to look at advancements in machine learning and neural networks and started to connect the dots. And they said, hey, before too long, we're going to get into a world where these models are incredibly powerful.

And all that stuff just turned out to be true. So, that's why they have credibility with me, right? Everything they believe... you know, we could hit some sort of limit that they didn't see coming.

Their model of the world could sort of fall apart. But as they have updated it bit by bit, and as these companies have made further advancements and they've built new products, I would say that this model of the world has basically held so far. And so, if nothing else, I think we have to keep this group of folks in mind as we think about, well, what is the next phase of AI going to look like for all of us?

5

amazing to watch American right-wingers flounder at the incredibly obvious questions from UK media guys, even Piers fucking Morgan on a podcast

Morgan, as a fellow right-winger, asks Thiel about Mangione quoting Thiel.

watch Thiel melting in the excruciating video clip

7
TPOT hits the big time! (sfstandard.com)
9
submitted 2 weeks ago* (last edited 2 weeks ago) by swlabr@awful.systems to c/sneerclub@awful.systems

Abstracted abstract:

Frontier models are increasingly trained and deployed as autonomous agents, which significantly increases their potential for risks. One particular safety concern is that AI agents might covertly pursue misaligned goals, hiding their true capabilities and objectives – also known as scheming. We study whether models have the capability to scheme in pursuit of a goal that we provide in-context and instruct the model to strongly follow. We evaluate frontier models on a suite of six agentic evaluations where models are instructed to pursue goals and are placed in environments that incentivize scheming.

I saw this posted here a moment ago and reported it*, and it looks to have been purged. I am reposting it to allow us to sneer at it.

*

10
submitted 2 weeks ago* (last edited 2 weeks ago) by dgerard@awful.systems to c/sneerclub@awful.systems
11
submitted 2 weeks ago* (last edited 2 weeks ago) by blakestacey@awful.systems to c/sneerclub@awful.systems

The UCLA news office boasts, "Comparative lit class will be first in Humanities Division to use UCLA-developed AI system".

The logic the professor gives completely baffles me:

"Normally, I would spend lectures contextualizing the material and using visuals to demonstrate the content. But now all of that is in the textbook we generated, and I can actually work with students to read the primary sources and walk them through what it means to analyze and think critically."

I'm trying to parse that. Really and truly I am. But it just sounds like this: "Normally, I would [do work]. But now, I can actually [do the same work]."

I mean, was this person somehow teaching comparative literature in a way that didn't involve reading the primary sources and, I'unno, comparing them?

The sales talk in the news release is really going all in on selling that undercoat.

Now that her teaching materials are organized into a coherent text, another instructor could lead the course during the quarters when Stahuljak isn’t teaching — and offer students a very similar experience. And with AI-generated lesson plans and writing exercises for TAs, students in each discussion section can be assured they’re receiving comparable instruction to those in other sections.

Back in my day, we called that "having a book" and "writing a lesson plan".

Yeah, going from lecture notes and slides to something shaped like a book is hard. I know because I've fuckin' done it. And because I put in the work, I got the benefit of improving my own understanding by refining my presentation. As the old saying goes, "Want to learn a subject? Teach it." Moreover, doing the work means that I can take a little pride in the result. Serving slop is the cafeteria's job.

(Hat tip.)

14

featuring nobody's favourite e/acc bro, BasedBeffJezos

https://pivot-to-ai.com/2024/12/01/does-ai-startup-extropic-actually-do-anything/

15

https://nonesense.substack.com/p/lesswrong-house-style

Given that they are imbeciles given, occasionally, to dangerous ideas, I think it’s worth taking a moment now and then to beat them up. This is another such moment.

16
submitted 1 month ago* (last edited 1 month ago) by GorillasAreForEating@awful.systems to c/sneerclub@awful.systems
19

oh yes, this one is about our very good friends

21

Would've been way better if the author hadn't felt the need to occasionally hand it to siskind for what amounts to keeping the mask on, even while he notes several instances where scotty openly discusses how maintaining a respectable facade is integral to his agenda of infecting polite society with neoreactionary fuckery.

22
submitted 1 month ago* (last edited 1 month ago) by Shitgenstein1@awful.systems to c/sneerclub@awful.systems
24

I haven't read the whole thread yet, but so far the choice line is:

I like how you just dropped the “Vance is interested in right authoritarianism” like it’s a known fact to base your entire point on. Vance is the clearest demonstration of a libertarian the republicans have in high office. It’s an absurd ad hominem that you try to mask in your wall of text.


SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]
