In other news, bodhidave reported a case of Google AI and ChatGPT making identical citation fuck-ups:
In other news, I've stumbled across some AI slop trying to sell a faux-nostalgic image of the 1980s:
Unsurprisingly, it's getting walloped in the quotes - there are people noting how it misrepresents the '80s, people noting how much the '80s sucked and how its worst aspects are getting repeated today, people noting the video's whiter than titanium dioxide, people suggesting there are suicidal undertones to it, and a few comparisons to San Junipero from Black Mirror here and there.
Personally, this whole thing has negative nostalgic value to me - I was born in 2000, well after the decade ended (temporally and culturally), and the faux-nostalgic uncanny-valley vibe this slop has reminds me more of analog horror than anything else.
Xe Iaso's chimed in on the GPT-5 fallout, giving her thoughts on chatbots' use as assistants/therapists.
So state-owned power company Vattenfall here in Sweden is gonna investigate building "small modular reactors" as a response to the government's planned buildout of nuclear.
Either Rolls-Royce or GE Vernova are in the running.
Note that this is entirely dependent on the government guaranteeing a certain level of revenue ("risk sharing"), and, of course, on that guarantee surviving an eventual new government.
Apparently Eliezer is actually against throwing around P(doom) numbers? https://www.lesswrong.com/posts/4mBaixwf4k8jk7fG4/yudkowsky-on-don-t-use-p-doom
The objections to using P(doom) are relatively reasonable by lesswrong standards... but this is in fact once again all Eliezer's fault. He started a community centered around 1) putting overconfident probability "estimates" on subjective, uncertain things and 2) the need to make a friendly AI-God, so he really shouldn't be surprised that people combine the two. He has also regularly expressed his certainty that we are all going to die to Skynet in terms of ridiculously overconfident probabilities, so he shouldn't be surprised that other people followed suit.
"wRiTiNg WiTh LlM iS nOt A sHaMe" https://awful.systems/post/5390645
not even hn is having it lmao
also claims that llms are good for proofreading; clearly didn't do that themselves
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community