Guys according to LW you’re reading Omelas all wrong (just like LeGuin was wrong)
https://www.lesswrong.com/posts/n83HssLfFicx3JnKT/omelas-is-perfectly-misread
Do we have a word for people that are kind of like… AI concern trolls? Like they say they are critical of AI, or even against AI, but only ever really put forward pro-AI propaganda, especially in response to actual criticisms of AI. Kind of centrists or (neo) libs. But for AI.
Bonus points if they also for some reason say we should pivot to more nuclear power, because in their words, even though AI doesn’t use as much electricity as we think, we should still start using more nuclear power to meet the energy demands. (ofc this is bullshit)
E: Maybe it's just sealion
sealAIon
Tyler Cowen saying some really weird shit about an AI 'actress'.
(For people who might wonder why he is relevant: see his 'see also' section on wikipedia.)
E: And you might wonder, rightfully imho, that this cannot be real, that this must be an edit. https://archive.is/vPr1B I have bad news.
The Wikipedia editors are on it.
image description
screenshot of Tyler Cowen's Wikipedia article, specifically the "Personal life" section. The concluding sentence is "He also prefers virgin actresses."
Been tangentially involved in a discussion about how much LLMs have improved in the past year (I don’t care), but now that same space has a discussion of how annoying the stupid pop-up chat boxes are on websites. Don’t know what the problem is, they’ve gotten so much better in the past year?
I mean that's the fundamental problem, right? No matter how much better it gets, the things it's able to do aren't really anything people need or want. Like, if I'm going to a website looking for information it's largely because I don't want to deal with asking somebody for the answer. Even a flawless chatbot that can always provide the information I need - something that is far beyond the state of the art and possibly beyond some fundamental limitation of the LLM structure - wouldn't actually be preferable to just navigating a smooth and well-structured site.
Yup, exactly. The chatbot, no matter how helpful, exists in a context of dark patterned web design, incredibly bad resource usage and theft. Its purpose is to make the customer’s question go away, so not even the fanatics are interested.
See also how youtube tutorials have mostly killed (*) text based tutorials/wikis and are just inferior to good wikis/text based ones. Both because listening to a person talk is a linear experience while text allows for easy scrolling, and because most people are just bad at yt tutorials. (shoutout to the one which had annoyingly long random pauses in/between sentences even at 2x speed).
This is not helped by the fact that youtube is now a source of revenue, while updating a wiki/tutorial often is not. So the incentives are all wrong. A good example of this is the gaming wiki fextralife: see this page on dragons dogma 2 npcs: https://dragonsdogma2.wiki.fextralife.com/NPCs (the game has been out for over a year, if the weirdness doesn't jump out at you). But the big thing for fextralife is their youtube tutorials, and the wiki used to have an autoplaying link to their streams. This isn't a wiki, it is an advertisement for their youtube and livestreams.

And while this is a big example, the problem persists with smaller youtubers, who suffer from an extreme 'publish, do not deviate from your niche, or perish'. They can't put in the time to update things, because they need to publish a new video (on their niche, branching out is punished) soon or not pay rent. (For people out there who play videogames and/or watch youtube, this is also why somebody like the spiffing brit long ago went from 'I exploit games' to 'I grind, and if you grind enough in this single player game you become op': the content must flow, but eventually you will run out of good new ideas. It is also why he tried to push his followers into doing risky cryptocurrency related 'cheats' (follow Elon, and if he posts a word that can be cryptocoined, pump and dump it for a half hour).)
*: They still exist but tend to be very bad quality, even worse now people are using genAI to seed/update them.
People can't just have a hobby anymore, can they?
Nope. And tbh, did some dd2 recently, and for a very short while I was tempted to push the edit button, but then I remembered that fextralife just tries to profit off my wiki editing labor. (I still like the idea of wikis, but do not have the fortitude and social calm to edit a mainstream one like wikipedia). (I did a quick check, and yeah I also really hate the license fextralife/valnet uses "All contributions to any Fextralife.com Wiki fall under the below Contribution Agreement and become the exclusive copyrighted property of Valnet.", and their editor sucks ass (show me the actual code not this wysiwyg shit)).
Gross :/ how come people have to be like that?
It wouldn't be so bad if they just didn't care and stopped maintaining, but their site is one of the first ones you get. Which is a regular problem with these things.
The Guardian shat out its latest piece of AI hype, violating Betteridge's Law of headlines by asking "Parents are letting little kids play with AI. Are they wrong?"
Jeff "Coding Horror" Atwood is sneering — at us! On Mastodon:
bad news "AI bubble doomers". I've found the LLMs to be incredibly useful … Is it overhyped? FUCK Yes. … But this is NOTHING like the moronic Segway (I am still bitter about that crap), Cryptocurrency, … and the first dot-com bubble … If you find this uncomfortable, I'm sorry, but I know what I know, and I can cite several dozen very specific examples in the last 2-3 weeks where it saved me, or my team, quite a bit of time.
T. chatbot booster rhetoric. So what are those examples, buddy? Very specifically? He replies:
a friend confided he is unhoused, and it is difficult for him. I asked ChatGPT to summarize local resources to deal with this (how do you get ANY id without a valid address, etc, chicken/egg problem) and it did an outstanding, amazing job. I printed it out, marked it up, and gave it to him.
Um hello‽ Maybe Jeff doesn't have a spare room or room to sublet, but surely he can spare a couch or a mailbox? Let your friend use your mailing address. Store some of their stuff in your garage. To use the jargon of hackers, Jeff should be a better neighbor. This is a common issue for unhoused folks and they cannot climb back up the ladder into society without some help. Jeff's reinvented the Hulk tacos meme but they can't even eat it because printer paper tastes awful.
The "unhoused friend" story is about as likely to be true as the proverbial Canadian girlfriend story. "You wouldn't know her."
jeff’s follow-up after the backlash clarifies: you wouldn’t know her because he donated right under the limit to incur a taxable event and didn’t establish a trust like a normal millionaire and also the LLM printout only came pointlessly after months of research and financially supporting the unhoused friend and also you’re no longer allowed to ask publicly about the person he brought up in public, take it to email
Alex, I'll take "Things that never happened" for $1000.
I really hope atwood’s unhoused friend got the actual infrastructural support you mentioned (a temporary mailing address and an introduction letter emailed to an employer is only slightly more effort than generating slop, jeff, please) but from direct experience with philanthropists like him, I’m fairly sure Jeff now considers the matter solved forever
Thanks for this.
there's been something that's really rubbed me the wrong way about jeff in the last few years. he was annoying before and had some insights, but lately I've been using him as a sort of jim crameresque tech-take-barometer.
What really soured me was when he started picking fights with some python people a few years back, because someone dared post that a web framework (couldn't dig up the link) was a greater contribution to the world than S/O. His response was pretty horrid, to the point where various python leaders were telling him to stop being a massive dick, because he was trying to be a bully with this "do you know who I am" attitude: he personally had not heard of the framework, so it wasn't acshually at all that relevant compared to S/O.
and now this combined with his stupid teehee I am giving away my wealth guise look how altruistic I am really is a bit eugh
"Provide an overview of local homeless services" sounds like a standard task for a volunteer or a search engine, but yes "you can use my address for mail and store some things in my garage and I will email some contacts about setting you up with contract work" would be a better answer than just handing out secondhand information! Many "amazing things AI can do" are things the Internet + search engines could do ten years ago.
I would also like to hear from the friend "was this actually helpful?"
Friend: "I have a problem"
Me, with a stack of google printouts: "My time to shine!".
E: ow god, I thought there were multiple examples and the friend one was just a random one. No, it was the first example: 'I gave my friend a printout, which saved me time'. Also, as I assume the friend is still unhoused and hasn't actually used the printout yet, he doesn't know if it actually helped. Atwood isn't a 'helping the unhoused' expert. He just assumed it was a good source. The story ends when he hands over the paper.
Also very funny that he is also going 'you just need to know how to ask questions the right way, which I learned by building stackoverflow'. Yeah euh, that is not a path a lot of people can follow.
It's even worse when I read the whole thread: Atwood claims to have $140 million, and the best he can do for "a friend" who is homeless is handing out some printouts with a few sections highlighted? And he thinks this makes him look good because he promises to give away half his wealth one day?
Like Clinton starting a go fund me for a coworker with cancer, the rich and their money are not voluntarily parted.
This also shows problems with the "effective altruist" approach. Donating to the local theater or "to raise awareness of $badThing" might not be the best way of using funds, but when a friend needs help now, you have the resources to help them, and you say "no, that might not be as efficient as creating a giant charity to help strangers one day" something is wrong.
A bit odd to start out throwing shade at the Segway considering that the concept has been somewhat redeemed with e-bikes and e-scooters.
Some Rat content got shared on HN, and the rats there are surprised and outraged not everyone shares their deathly fear of the AI god:
https://news.ycombinator.com/item?id=45451971
"Stop bringing up Roko's Basilisk!!!" they sputter https://news.ycombinator.com/item?id=45452426
"The usual suspects are very very worried!!!" - https://news.ycombinator.com/item?id=45452348 (username 'reducesuffering checks out!)
"Think for at least 5 seconds before typing." - on the subject of pulling the plug on a hostile AI - https://news.ycombinator.com/item?id=45452743
https://news.ycombinator.com/item?id=45453386
nobody mentioned this particular incident, dude just threw it into the discussion himself
incredible how he rushes to assure us that this was "a really hot 17 year old"
Always fun trawling thru comments
Government banning GPUs: absolutely necessary: https://news.ycombinator.com/item?id=45452400
Government banning ICE vehicles - eh, a step too far: https://news.ycombinator.com/item?id=45440664
Amusing to see him explaining to you the connection between Bay Area rationalists and AI safety people.
The original article is a great example of what happens when one only reads Bostrom and Yarvin. Their thesis:
If you claim that there is no AI-risk, then which of the following bullets do you want to bite?
- If a race of aliens with an IQ of 300 came to Earth, that would definitely be fine.
- There’s no way that AI with an IQ of 300 will arrive within the next few decades.
- We know some special property that AI will definitely have that will definitely prevent all possible bad outcomes that aliens might cause.
Ignoring that IQ doesn't really exist beyond about 160-180 depending on population choice, this is clearly an example of rectal philosophy that doesn't stand up to scrutiny. (1) is easy, given that the people verified to be high-IQ are often wrong, daydreaming, and otherwise erring like humans; Vos Savant and Sidis are good examples, and arguably the most impactful high-IQ person, Newton, could not be steelmanned beyond Sherlock Holmes: detached and aloof, mostly reading in solitude or being hedonistic, occasionally helping answer open questions but usually not even preventing or causing crimes. (2) is ignorant of previous work, as computer programs which deterministically solve standard IQ tests like RPM and SAT have been around since the 1980s yet are not considered dangerous or intelligent. (3) is easy; linear algebra is confined in the security sense, while humans are not, and confinement definitely prevents all possible bad outcomes.
Frankly I wish that they'd understand that the capabilities matter more than the theory of mind. Fnargl is one alien at 100 IQ, but he has a Death Note and goldlust, so containing him will almost certainly result in deaths. Containing a chatbot is mostly about remembering how systemctl works.
"Think for at least 5 seconds before typing." - on the subject of pulling the plug on a hostile AI - https://news.ycombinator.com/item?id=45452743
Read that last one against my better judgment, and found a particularly sneerable line:
And in this case we're talking about a system that's smarter than you.
Now, I'm not particularly smart, but I am capable of a lot of things AI will never achieve. Like knowing something is true, or working out a problem, or making something which isn't slop.
Between this rat and Saltman spewing similar shit on Politico, I have seen two people try to claim text extruders are smarter than living, thinking human beings. Saltman I can understand (he is a monorail salesman who lies constantly), but seeing someone who genuinely believes this shit is just baffling. Probably a consequence of chatbots destroying their critical thinking and mental acuity.
Let's not forget the perennial favorite "humans are just stochastic parrots too durr" https://news.ycombinator.com/item?id=45452238
to be scrupulously fair, the submission is flagged, and most of the explicit rat comments are downvoted
There have been a lot of cases in history of smart people being bested by the dumbest people around, who just had more guns/a gun/copious amounts of meth/a stupid idea and got lucky once, etc.
I mean, if they are so smart, why are they stuck in a locker?
It's practically a proverb that you don't ask a scientist to explain how a "psychic" is pulling off their con, because scientists are accustomed to fair play; you call a magician.
In today’s torment nexus development news… you know how various cyberpunky type games let you hack into an enemy’s augmentations and blow them up? Perhaps you thought this was stupid and unrealistic, and you’d be right.
Maybe that’s the wrong example. How about a cursed evil ring that, once you put it on, you can’t take off and that wracks you with pain? Who hasn’t wanted one of those?
Happily, hard working torment nexus engineers have brought that dream one step closer, by building “smart rings” powered by lithium polymer batteries. Y’know, the things that can go bad, and swell up and catch fire? And that you shouldn’t puncture, because that’s a fire risk too, meaning cutting the ring off is somewhat dangerous? Fun times abound!
https://bsky.app/profile/emily.gorcen.ski/post/3m25263bs3c2g
image description
A pair of tweets, containing the text
Daniel aka ZONEofTECH on x.com: “Ahhh…this is…not good. My Samsung Galaxy Ring’s battery started swelling. While it’s on my finger 😬. And while I’m about to board a flight 😬 Now I cannot take it off and this thing hurts. Any quick suggestions
Update:
- I was denied boarding due to this (been travelling for ~47h straight so this is really nice 🙃). Need to pay for a hotel for the night now and get back home tomorrow👌
- was sent to the hospital, as an emergency
- ring got removed
You can see the battery all swollen. Won’t be wearing a smart ring ever again.
TERF obsessed with AI finds out the "degenerate" ani skin for grok has an X account, loses her shit
https://xcancel.com/groks_therapist/status/1972848657625198827#m
then follows up with this wall of text
https://xcancel.com/groks_therapist/status/1973127375107006575#m
I got bored and flipped to the replies. The first was this by "TERFs 'r' us":
Excellent overview!
This is transhumanism.
This is going to destroy humanity, @elonmusk.
Put the breaks on!
I hate transhumanism because it's eugenics for 1990s Wired magazine.
You hate it because it has "trans" in the name.
We are not the same.
Anybody else notice that the Ani responses seem to follow a formula, depending on the... sentiment I guess... of the input? All the defensive responses start with "hey", and end with crude rebukes. It all seems like xAI made an Eliza that will either flirt or swear.
Also I can guarantee that "her" system prompt includes the phrases "truth-seeking" "fun loving" and "kinda hot".
funny thing is she literally talks to ani like a terf talks to a trans woman including saying 'at least I'm a real woman'
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community