[-] lurker@awful.systems 2 points 5 hours ago

I wonder if one of the reasons Pete Hegseth is going so hard after Anthropic is that he and other idiots in the Pentagon unironically believe shit like AI 2027 and so want to soft-nationalize the frontier companies to control the coming AGI.

That is absolutely the reason, or at least part of it. See: Pete Hegseth Got His Happy Meal and how AGI-is-nigh doomers own-goaled themselves

[-] lurker@awful.systems 4 points 22 hours ago* (last edited 22 hours ago)

Reading comments cause I was bored, and had the misfortune to stumble upon this horribly formatted piece of work allegedly written by Claude

[-] lurker@awful.systems 16 points 1 day ago

I mean, after the Epstein Files you have to be either deliberately ignorant or incredibly dense not to realise the rich get off easily

[-] lurker@awful.systems 12 points 1 day ago* (last edited 1 day ago)

the Pentagon's CTO has AI psychosis now. sighhhhhhhhh

The whole argument can just be countered with "if the Pentagon believes Claude is sentient and a danger to the military, then why make a deal with OpenAI to use ChatGPT, another LLM similar to Claude? Wouldn't that also be in danger of becoming sentient? And why are Pete Hegseth and Donald Trump planning to force Anthropic to comply after 6 months if they believe Claude shouldn't be in the military?? Why did you ask Anthropic to let you use Claude for mass surveillance and autonomous weapons if you believed it was sentient and a danger??"

It just reeks of bullshit. "uhm actually we made Anthropic a supply chain risk because Claude is actually very dangerous and not because we're doing banana republic shit to anyone who disagrees with us. we are a very responsible and safe government. please dont impeach trump."

71
submitted 1 day ago* (last edited 1 day ago) by lurker@awful.systems to c/techtakes@awful.systems
[-] lurker@awful.systems 5 points 3 days ago* (last edited 3 days ago)

to follow this one up: there is now a new study about AI agents being dogshit at keeping code working over the long term

60

Originally posted in the Stubsack, but decided to make it its own post because why not

[-] lurker@awful.systems 7 points 4 days ago* (last edited 3 days ago)

Anthropic is suing the Pentagon

This whole saga is a resounding “everyone sucks here”, but I’m gonna have to side with Anthropic on this one because at least they have some incredibly basic standards, which is far more than I can say for the current government and OpenAI. Though the real best outcome is if the government and the AI industry destroy each other

(this has now been deemed high-quality enough for its own post)

[-] lurker@awful.systems 5 points 6 days ago

I fucking hope it’s soon

[-] lurker@awful.systems 30 points 1 week ago* (last edited 1 week ago)

Incredibly ballsy move to keep using their tech after you literally branded them a supply chain threat and implied you would take legal action against them, but that’s this administration for ya

(they did say there would be a six-month phase out period after which if Anthropic still didn’t comply, they’d force them to, but still)

12
submitted 1 month ago* (last edited 4 weeks ago) by lurker@awful.systems to c/sneerclub@awful.systems

this was already posted on reddit sneerclub, but I decided to crosspost it here so you guys wouldn’t miss out on Yudkowsky calling himself a genre savvy character, and him taking what appears to be a shot at the Zizians

29
submitted 1 month ago* (last edited 1 month ago) by lurker@awful.systems to c/sneerclub@awful.systems

originally posted in the thread for sneers not worth a whole post, then I changed my mind and decided it is worth a whole post, cause it is pretty damn important

Posted on r/HPMOR roughly one day ago

full transcript:

Epstein asked to call during a fundraiser. My notes say that I tried to explain AI alignment principles and difficulty to him (presumably in the same way I always would) and that he did not seem to be getting it very much. Others at MIRI say (I do not remember myself / have not myself checked the records) that Epstein then offered MIRI $300K; which made it worth MIRI's while to figure out whether Epstein was an actual bad guy versus random witchhunted guy, and ask if there was a reasonable path to accepting his donations causing harm; and the upshot was that MIRI decided not to take donations from him. I think/recall that it did not seem worthwhile to do a whole diligence thing about this Epstein guy before we knew whether he was offering significant funding in the first place, and then he did, and then MIRI people looked further, and then (I am told) MIRI turned him down.

Epstein threw money at quite a lot of scientists and I expect a majority of them did not have a clue. It's not standard practice among nonprofits to run diligence on donors, and in fact I don't think it should be. Diligence is costly in executive attention, it is relatively rare that a major donor is using your acceptance of donations to get social cover for an island-based extortion operation, and this kind of scrutiny is more efficiently centralized by having professional law enforcement do it than by distributing it across thousands of nonprofits.

In 2009, MIRI (then SIAI) was a fiscal sponsor for an open-source project (that is, we extended our nonprofit status to the project, so they could accept donations on a tax-exempt basis, having determined ourselves that their purpose was a charitable one related to our mission) and they got $50K from Epstein. Nobody at SIAI noticed the name, and since it wasn't a donation aimed at SIAI itself, we did not run major-donor relations about it.

This reply has not been approved by MIRI / carefully fact-checked, it is just off the top of my own head.

[-] lurker@awful.systems 18 points 1 month ago

it’s all coming together. every single techbro and current government moron, they all loop back around to epstein in the end

33

I searched for “eugenics” on yud’s xcancel (i will never use twitter, fuck you elongated muskrat) because I was bored, and got flashbanged by this gem. yud, genuinely, what are you talking about
