[-] Architeuthis@awful.systems 10 points 3 months ago* (last edited 3 months ago)

The arguments the review makes against the book are that it doesn't make the case for LLMs being capable of independent agency, that it reduces all material concerns of an AI takeover to broad claims of ASI being indistinguishable from magic, and that its proposed solutions are dumb and unenforceable (again with the global GPU prohibition and the unilateral bombing of rogue datacenters).

Their note towards the end, that the x-risk framing is a cognitive short-circuit causing the faithful to ignore more pressing concerns like the impending climate catastrophe in favor of a mostly fictitious problem like AI doom, isn't really part of their core thesis against the book.

[-] Architeuthis@awful.systems 10 points 6 months ago* (last edited 6 months ago)

Yet, under Aron Peterson’s LinkedIn posts about these video clips, you can find the usual comments about him being “a Luddite”, being “in denial” etc.

And then there's this:

transcript

From: Rupert Breheny
Bio: Cobalt AI Founder | Google 16 yrs | International Keynote Speaker | Integration Consultant AI
Comment: Nice work. I've been playing around myself. First impressions are excellent. These are crisp, coherent images that respect the style of the original source. Camera movements are measured, and the four candidate videos generated are generous. They are relatively fast to render but admittedly do burn through credits.

From: Aron Peterson (Author)
Bio: My body is 25% photography, 25% film, 25% animation, 25% literature and 0% tolerating bs on the internet.
Comment: Rupert Breheny are you a bot? These are not crisp images. In my review above I have highlighted these are terrible.

[-] Architeuthis@awful.systems 10 points 6 months ago* (last edited 6 months ago)

Hey now, there's plenty of generalization going on with LLM networks, it's just that we've taken to calling it hallucinations these days.

[-] Architeuthis@awful.systems 10 points 7 months ago* (last edited 7 months ago)

Given the volatility of the space I don't think it could have been doing much better; I doubt it's getting out of alpha before the bubble bursts and things settle down a bit, if at all.

Automatic PR generation sounds like something that would need a prompt and a ten-line script rather than LangChain, but it also seems both questionable and unnecessary.
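For illustration, roughly the kind of ten-line script I mean; a minimal sketch assuming the OpenAI Python client and an OPENAI_API_KEY in the environment, with the model name and base branch as placeholders:

```python
import subprocess
from openai import OpenAI

# Collect the branch's changes relative to main (base branch is an assumption).
diff = subprocess.run(
    ["git", "diff", "main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout

client = OpenAI()  # picks up OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Write a concise pull request description for this diff."},
        {"role": "user", "content": diff[:100_000]},  # crude context-size guard
    ],
)
print(response.choices[0].message.content)
```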

If someone wants to know an LLM's opinion on what the changes in a branch are meant to accomplish they should be encouraged to ask it themselves, no need to spam the repository.

[-] Architeuthis@awful.systems 10 points 8 months ago* (last edited 8 months ago)

It's pick-me Objectivism, only more overtly culty the closer you get to it irl. Imagine Scientology if it were organized around AI doomerism and naive utilitarianism while posing as a get-smart-quick scheme.

Its main function (besides getting the early adopters laid) is to provide court philosophers for the technofeudalist billionaire class, while grooming talented young techies into a wide variety of extremist thought, both old and new, mostly by fostering contempt for established epistemological authority in the same way QAnon types insist people do their own research, i.e. as a euphemism for only paying attention to ingroup-approved influencers.

It seems to have both a sexual harassment problem and a suicide problem, with a lot of irresponsible scientific racism and drug abuse in the mix.

[-] Architeuthis@awful.systems 10 points 8 months ago* (last edited 8 months ago)

SMBC using the ratsphere as comics fodder, part the manyeth:

transcription

Retrofuturistic-Looking Ghost: SCROOOOOGE! I am the ghost of Christmas Extreme Future! Why! Why did you not find a way to indicate to humans 400 generations from now where toxic waste was storrrrrrrred! Look how Tiny Tim's cyborg descendant has to make costly RNA repaaaaaaairs!

Byline: The Longtermist version of A Christmas Carol is way better.

bonus

transcription

Scrooge: I tried, but no, no, I just don't give a shit.

[-] Architeuthis@awful.systems 10 points 9 months ago

Could be an SSC-type situation: you write an interminable pretend research post in a superficially serious manner on an obviously flawed premise, and let the algorithm help it find its audience, mostly people who won't read it but will be left with the impression that the premise is at least defensible.

This will be made considerably easier once Siskind puts it in his regular link roundup with a cheeky comment about how he doesn't really truly endorse this sort of thing.

[-] Architeuthis@awful.systems 10 points 9 months ago* (last edited 9 months ago)

I don't think his having previously done unspecified PR work for companies that include alleged AI startups is the smoking gun that the mastopost presents it as.

Going through a Zitron long-form article and leaving with the impression that he's playing favorites between AI companies seems like a major failure of reading comprehension.

[-] Architeuthis@awful.systems 10 points 2 years ago

weight classes are for wokies

This used to be a Joe Rogan staple: no weight classes, no time limits, and a ring the size of a basketball court.

It's really just the umpteenth reiteration of the meathead mantra of "I'd do really well in [popular combat sport] if it weren't for those pesky rules holding me back."

[-] Architeuthis@awful.systems 10 points 2 years ago* (last edited 2 years ago)

To be precise, it was about measuring the size and distribution of all sorts of skull irregularities (the proverbial 'bumps') and mapping them to various traits; it's basically palm reading for the head.

Siskind is just being his usual disingenuous self: 'everyone always uses skull shape' (to indicate that my intellectual precursors were clowns) is obviously referencing phrenology, which he then immediately motte-and-baileys into a claim about the correlation of cranial capacity and IQ.

Except for the motte-and-bailey sleight of hand to work, the claim shift shouldn't happen in the same sentence; otherwise it's extremely obvious that you are claiming one thing while carrying water for the other (phrenology), which is probably why he ended up deleting the post.

[-] Architeuthis@awful.systems 10 points 2 years ago

HPMOR is so obviously and unequivocally terrible that I can't help thinking I must be missing something significant about it, like how it could be scratching a very specific itch in young people on the spectrum.

As always, all bets are off if it happens to be the first long-form literature someone has read.

[-] Architeuthis@awful.systems 10 points 2 years ago* (last edited 2 years ago)

How did Sam and Caroline get into taking high doses of ADHD medication? We think it was via Scott Alexander Siskind, the psychiatrist behind the rationalist blog Slate Star Codex.

Siskind occasionally writes up particular psychiatric drugs as public education. One popular piece was “Adderall Risks: Much More Than You Wanted To Know” from December 28, 2017.

Not to cast further aspersions or anything, but Siskind did write a sort of follow-up (titled "the psychopharmacology of FTX" or something like that, if you feel like googling it) where he explicitly denies ever having met the FTX psychiatrist/dealer, even though a) he admits they actually worked in the same hospital for a time and, perhaps more tellingly, b) no one asked.

Also, according to the birdsite, the FTX psychiatrist may in fact have been a huge creep.
