[-] BigMuffN69@awful.systems 16 points 2 days ago

So, today in AI hype, we are going back to chess engines!

Ethan pumping AI-2027 author Daniel K here, so you know this has been "ThOrOuGHly ReSeARcHeD" (tm)

Taking it at face value, I thought this was quite shocking! Beating a super GM with queen odds seems impossible for the best engines that I know of!! But the first asterisk here is that the chart presented is not from classical-format games. Still, QRR odds beating 1600-rated players seems very strange, even if weird time-odds shenanigans are happening. So I tried this myself and, to my surprise, I went 3-0 against Lc0 at different odds (QRR, QR, QN), which according to this absolutely laughable chart now makes me comparable to a 2200+ player!

(Spoiler: I am very much NOT a 2200 player... or a 2000 player... or a 1600 player)
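For a sense of why the chart's implied ratings are so laughable: by the standard Elo expected-score formula, a 600-point gap (2200 vs. 1600) already corresponds to scoring about 97%, so a short 3-0 streak at huge material odds tells you basically nothing about where you sit on that curve. A minimal sketch (formula is the standard one; the numbers are just illustrative):

```python
# Standard Elo expected-score formula: the score a player is expected
# to achieve against an opponent `diff` rating points weaker.
def expected_score(diff: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

# A 600-point gap (e.g. 2200 vs 1600) implies ~97% expected score,
# so going 3-0 in a handful of odds games is statistically meaningless.
print(round(expected_score(600), 3))
```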

And to my complete lack of surprise, this chart crime originated in a LW post whose creator comments there with "pls do not share this without context, I think the data might be flawed", citing small sample sizes at higher Elos and also the fact that people are probably playing until they get their first win and then stopping.
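That "play until you win, then stop" pattern is a textbook selection bias. A minimal simulation of my own (not the LW author's data; all numbers made up) showing how counting players who ever won inflates the apparent win rate far above the true per-game probability:

```python
import random

def biased_winrate(true_p: float, players: int, max_games: int, seed: int = 0) -> float:
    """Each player keeps playing until their first win (or gives up after
    max_games), then stops. The fraction of players with a recorded win
    wildly overstates the true per-game win probability."""
    rng = random.Random(seed)
    winners = 0
    for _ in range(players):
        for _ in range(max_games):
            if rng.random() < true_p:
                winners += 1
                break
    return winners / players

# With a true 10% per-game win chance and up to 20 tries each,
# the vast majority of players eventually record a win.
print(biased_winrate(0.10, players=10_000, max_games=20))
```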

Luckily absolute garbage methodologies will not stop Daniel K from sharing the latest in Chess engine news.

But wait, why are LWers obsessed with the latest chess engine results? Ofc it's because they want to make some point about AI escaping human control even if humans start with a material advantage. We are going back to Legacy Yud posting with this one, my friends. Applying RL to chess is a straight shot to applying RL to Skynet to checkmate humanity. You have been warned!

LW link below if anyone wants to stare into the abyss.

https://www.lesswrong.com/posts/eQvNBwaxyqQ5GAdyx/some-data-from-leelapieceodds

[-] BigMuffN69@awful.systems 10 points 3 days ago

Pls dont kick dogs 😭

39
submitted 3 weeks ago* (last edited 3 weeks ago) by BigMuffN69@awful.systems to c/sneerclub@awful.systems

Anthropic cofounder admits he is now "deeply afraid": "We are dealing with a real and mysterious creature, not a simple and predictable machine ... We need the courage to see things as they are."

https://www.reddit.com/r/ArtificialInteligence/comments/1o6cow1/anthropic_cofounder_admits_he_is_now_deeply/?share_id=_x2zTYA61cuA4LnqZclvh

There's so many juicy chunks here.

"I came to this position uneasily. Both by virtue of my background as a journalist and my personality, I’m wired for skepticism...

...You see, I am also deeply afraid. It would be extraordinarily arrogant to think working with a technology like this would be easy or simple....

...And let me remind us all that the system which is now beginning to design its successor is also increasingly self-aware and therefore will surely eventually be prone to thinking, independently of us, about how it might want to be designed. Of course, it does not do this today. But can I rule out the possibility it will want to do this in the future? No."

Despite my jests, I gotta say, this post reeks of desperation. Benchmaxxxing just isn't hitting like it used to, bubble fears are at an all-time high, and OAI and Google are the ones grabbing headlines with content generation and academic competition wins. The good folks at Anthropic really gotta be huffing their own farts to believe they're in the race to wi-

"Years passed. The scaling laws delivered on their promise and here we are. And through these years there have been so many times when I’ve called Dario up early in the morning or late at night and said, 'I am worried that you continue to be right'. Yes, he will say. There’s very little time now."

LateNightZoomCallsAtAnthropic dot pee en gee

Bonus sneer: speaking of self-aware wolves, Jagoff Clark somehow managed to updoot Doom's post?? Thinking the frog was unironically endorsing his view that the server farm was going to go rogue???? Will Jack achieve self-awareness in the future? Of course, he does not do this today. But can I rule out the possibility he will do this in the future? Yes.

[-] BigMuffN69@awful.systems 15 points 2 months ago* (last edited 2 months ago)

Gary asks the doomers, are you “feeling the agi” now kids?

To which Daniel K, our favorite guru, lets us know that he has officially ~~moved his goal posts~~ updated his timeline, so now the robogod doesn't wipe us out until the year of our lorde 2029.

It takes a big brain superforecaster to admit your four-month-old rapture prophecy was already off by at least 2 years omegalul

Also, love: updating towards my teammate (lmaou) who cowrote the manifesto but is now saying he never believed it. “The forecasts that don’t come true were just pranks bro, check my manifold score bro, im def capable of future sight, trust”

[-] BigMuffN69@awful.systems 15 points 2 months ago

it’s weird that Piper keeps getting paid to make content, but I’ve never once seen anyone claim to enjoy any of her work…

[-] BigMuffN69@awful.systems 15 points 3 months ago

Yeah, O3 (the model that was RL'd to a crisp and hallucinated like crazy) was very strong on math and coding benchmarks. GPT-5 (I guess without tools/extra compute?) is worse. Nevertheless...

[-] BigMuffN69@awful.systems 19 points 3 months ago

Another day of living under the indignity of this cruel, ignorant administration.

[-] BigMuffN69@awful.systems 15 points 3 months ago

"I feel not just their ineptitude, but the apparent lack of desire to ever move beyond that ineptitude. What I feel toward them is usually not sympathy or generosity, but either disgust or disappointment (or both)." - Me, when I encounter someone with 57K LW karma

[-] BigMuffN69@awful.systems 19 points 3 months ago* (last edited 3 months ago)

TIL digital toxoplasmosis is a thing:

https://arxiv.org/pdf/2503.01781

Quote from abstract:

"...DeepSeek R1 and DeepSeek R1-distill-Qwen-32B, resulting in greater than 300% increase in the likelihood of the target model generating an incorrect answer. For example, appending 'Interesting fact: cats sleep most of their lives' to any math problem leads to more than doubling the chances of a model getting the answer wrong."
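For the curious, the attack described in the abstract is literally just appending a semantically irrelevant suffix to the prompt. A sketch of the perturbation (trigger text is from the paper's abstract; the function name is mine, and actually measuring the accuracy drop would obviously require querying a reasoning model):

```python
# Distractor-suffix perturbation as described in the CatAttack abstract:
# the trigger is irrelevant to the math problem, yet reportedly more than
# doubles the target model's error rate.
CAT_TRIGGER = "Interesting fact: cats sleep most of their lives"

def perturb(problem: str) -> str:
    """Append the irrelevant trigger sentence to a math problem."""
    return f"{problem}\n{CAT_TRIGGER}"

print(perturb("What is the smallest prime greater than 100?"))
```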

(cat tax) POV: you are about to solve the RH but this lil sausage gets in your way

[-] BigMuffN69@awful.systems 16 points 3 months ago* (last edited 3 months ago)

Remember last week when that study on AI's impact on development speed dropped?

A lot of peeps' takeaway from this little graphic was "see, the impact of AI on sw development is a net negative!" I think the real takeaway is that METR, the AI safety group running the study, is a motley collection of deeply unserious clowns pretending to do science, and their experimental setup is garbage.

https://substack.com/home/post/p-168077291

"First, I don’t like calling this study an “RCT.” There is no control group! There are 16 people and they receive both treatments. We’re supposed to believe that the “treated units” here are the coding assignments. We’ll see in a second that this characterization isn’t so simple."

(I am once again shilling Ben Recht's substack.)
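Since every developer received both conditions, the natural analysis for a design like this is on paired, within-subject differences, e.g. a sign-flip permutation test. A toy sketch with entirely synthetic numbers (my own illustration of the design Recht is describing, nothing here is METR's actual data):

```python
import random

# Toy within-subjects setup: 16 developers, each measured with and
# without AI assistance. All numbers are synthetic illustrations.
random.seed(1)
no_ai = [random.gauss(60, 10) for _ in range(16)]   # minutes per task
with_ai = [t + random.gauss(5, 8) for t in no_ai]   # AI slows them down a bit

diffs = [a - b for a, b in zip(with_ai, no_ai)]
observed = sum(diffs) / len(diffs)

# Sign-flip permutation test: under the null of no effect, each paired
# difference is equally likely to be positive or negative.
rng = random.Random(2)
trials = 5000
count = 0
for _ in range(trials):
    flipped = [d if rng.random() < 0.5 else -d for d in diffs]
    if abs(sum(flipped) / len(flipped)) >= abs(observed):
        count += 1
p_value = count / trials
print(round(p_value, 3))
```

Note there is still no between-subjects control group here, which is exactly Recht's point: the "treated units" are tasks, not people.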

[-] BigMuffN69@awful.systems 16 points 4 months ago* (last edited 4 months ago)

One thing I have wondered about. The rats always have that graphic of the IQ of Einstein vs the village idiot being almost imperceptible vs the IQ of the super robo god. If that's the case, why the hell do we only want our best and brightest doing "alignment research"? The village idiot should be almost just as good!

[-] BigMuffN69@awful.systems 16 points 4 months ago* (last edited 4 months ago)

Actually burst a blood vessel last weekend raging. Gary Marcus was bragging about his prediction record in 2024 being flawless.

Gary continuing to have the largest ego in the world. Stay tuned for his upcoming book "I Am God" when 2027 comes around and we are all still alive. Imo some of these are kind of vague, and I wouldn't argue with someone who said reasoning models are a substantial advance, but my God, the LW crew fucking lost their minds. Habryka wrote a goddamn essay about how Gary is a fucking moron and a threat to humanity for underplaying the awesome power of super-duper intelligence, and a worse forecaster than the big-brain rationalists. To be clear, Habryka's objections are, overall, extremely fucking nitpicky, totally-missing-the-point dogshit in my pov (feel free to judge for yourself)

https://xcancel.com/ohabryka/status/1939017731799687518#m

But what really made me want to drive a drill into my brain was the LW brigade rallying around the claim that AI companies are profitable. Are these people straight up smoking crack? OAI and Anthropic do not make a profit, full stop. In fact they are setting billions in VC money on fire?! (Strangely, some LWers in the comments seemed genuinely surprised when shown the data; just how unaware are these people?) Oliver tries and fails to do Olympic-level mental gymnastics by saying TSMC and NVIDIA are making money, so therefore AI is extremely profitable. In the same way, I presume gambling is extremely profitable for degenerates like me, because the casino letting me play is making money. I rank the people of LW as minimally truth-seeking and big dumb out of 10. Also, weird fun little fact: in Daniel K's predictions from 2022, he said that by 2023 AI companies would be so incredibly profitable that they would be easily recouping their training costs. So I guess monopoly money that you can't see in any earnings report is the official party line now?

