[-] blakestacey@awful.systems 30 points 3 months ago

Don't worry; this post is not going to be cynical or demeaning to you or your AI companion.

If you're worried that your "AI companion" can be demeaned by pointing out the basic truth about it, then you deserve to be demeaned yourself.

[-] blakestacey@awful.systems 29 points 3 months ago* (last edited 3 months ago)

The 1950s and ’60s are the middle and end of the Golden Age of science fiction

Incorrect. As everyone knows, the Golden Age of science fiction is 12.

Asimov’s stories were often centered around robots, space empires, or both,

OK, this actually calls for a correction on the facts. Asimov didn't combine his robot stories with his "Decline and Fall of the Roman Empire but in space" stories until the 1980s. And even by the '50s, his robot stories were very unsubtly about how thoughtless use of technology leads to social and moral decay. In The Caves of Steel, sparrows are exotic animals you have to go to the zoo to see. The Earth's petroleum supply is completely depleted, and the subway has to be greased with a bioengineered strain of yeast. There are ration books for going to the movies. Not only are robots taking human jobs, but a conspiracy is deliberately stoking fears about robots taking human jobs in order to foment unrest. In The Naked Sun, the colony world of Solaria is a eugenicist society where one of the murder suspects happily admits that they've used robots to reinvent the slave-owning culture of Sparta.

[-] blakestacey@awful.systems 29 points 4 months ago

Bringing over aio's comment from the end of last week's stubsack:

This week the Wikimedia Foundation tried to gather support for adding LLM summaries to the top of every Wikipedia article. The proposal was overwhelmingly rejected by the community, but the WMF hasn't gotten the message, saying that the project has been "paused". It sounds like they plan to push it through regardless.

Way down in the linked wall o' text, there's a comment by "Chaotic Enby" that struck me:

Another summary I just checked, which caused me a lot more worries than simple inaccuracies: Cambrian. The last sentence of that summary is "The Cambrian ended with creatures like myriapods and arachnids starting to live on land, along with early plants.", which already sounds weird: we don't have any fossils of land arthropods in the Cambrian, and, while there has been a hypothesis that myriapods might have emerged in the Late Cambrian, I haven't heard anything similar being proposed about arachnids. But that's not the worrying part.

No, the issue is that nowhere in the entire Cambrian article are myriapods or arachnids mentioned at all. Only one sentence in the entire article relates to that hypothesis: "Molecular clock estimates have also led some authors to suggest that arthropods colonised land during the Cambrian, but again the earliest physical evidence of this is during the following Ordovician". This might indicate that the model is relying on its own internal knowledge, and not just on the contents of the article itself, to generate an "AI overview" of the topic instead.

Further down the thread, there's a comment by "Gnomingstuff" that looks worth saving:

There was an 8-person community feedback study done before this (a UI/UX test using the original Dopamine summary), and the results are depressing as hell. The reason this was being pushed to prod sure seems to be the cheerleading coming from 7 out of those 8 people: "Humans can lie but AI is unbiased," "I trust AI 100%," etc.

Perhaps the most depressing is this quote -- "This also suggests that people who are technically and linguistically hyper-literate like most of our editors, internet pundits, and WMF staff will like the feature the least. The feature isn't really "for" them" -- since it seems very much like an invitation to ignore all of us, and to dismiss any negative media coverage that may ensue (the demeaning "internet pundits").

Sorry for all the bricks of text here, this is just so astonishingly awful on all levels and everything that I find seems to be worse than the last.

Another comment by "CMD" evaluates the summary of the dopamine article mentioned there:

The first sentence is in the article. However, the second sentence mentions "emotion", a word that while in a couple of reference titles isn't in the article at all. The third sentence says "creating a sense of pleasure", but the article says "In popular culture and media, dopamine is often portrayed as the main chemical of pleasure, but the current opinion in pharmacology is that dopamine instead confers motivational salience", a contradiction. "This neurotransmitter also helps us focus and stay motivated by influencing our behavior and thoughts". Where is this even from? Focus isn't mentioned in the article at all, nor is influencing thoughts. As for the final sentence, depression is mentioned a single time in the article in what is almost an extended aside, and any summary would surely have picked some of the examples of disorders prominent enough to be actually in the lead.

So that's one of five sentences supported by the article. Perhaps the AI is hallucinating, or perhaps it's drawing from other sources like any widespread LLM. What it definitely doesn't seem to be doing is taking existing article text and simplifying it.

[-] blakestacey@awful.systems 30 points 7 months ago* (last edited 7 months ago)

Are we actually going with vibe coding as the name for this behavior? Surely we could introduce an alternative that is more disparaging and more dramatic, like bong-rip coding or shart coding.

[-] blakestacey@awful.systems 31 points 7 months ago

Hashemi and Hall (2020) published research demonstrating that convolutional neural networks could distinguish between "criminal" and "non-criminal" facial images with a reported accuracy of 97% on their test set. While this paper was later retracted for ethical concerns rather than methodological flaws,

That's not really a sentence that should begin with "While", now, is it?

it highlighted the potential for facial analysis to extend beyond physical attributes into behavior prediction.

What the fuck is wrong with you?

[-] blakestacey@awful.systems 29 points 8 months ago

Working in the field of genetics is a bizarre experience. No one seems to be interested in the most interesting applications of their research. [...] The scientific establishment, however, seems to not have gotten the memo. [...] I remember sitting through three days of talks at a hotel in Boston, watching prominent tenured professors in the field of genetics take turns misrepresenting their own data [...] It is difficult to convey the actual level of insanity if you haven’t seen it yourself.

Like Yudkowsky writing about quantum mechanics, this is cult shit. "The scientists refuse to see the conclusion in front of their faces! We and we alone are sufficiently Rational to embrace the truth! Listen to us, not to scientists!"

Gene editing scales much, much better than embryo selection.

"... Mister Bond."

The graphs look like they were made in Matplotlib, but on another level, they're giving big crayon energy.

[-] blakestacey@awful.systems 33 points 9 months ago

When I got back home and regaled my friends with my mountain stories, one of my friends joked that I should work for Elon and Vivek at DOGE and help America get off its current crash to defaulting on its own debt. So I reached out to some people and got in.

What a fucking idiot. Also a fascist collaborator, but importantly, a fucking idiot.

[-] blakestacey@awful.systems 33 points 1 year ago

shot:

The upper bound for how long to pause AI is only a century, because “farming” (artificially selecting) higher-IQ humans could probably create competent IQ 200 safety researchers.

It just takes C-sections to enable huge heads and medical science for other issues that come up.

chaser:

Indeed, the bad associations ppl have with eugenics are from scenarios much less casual than this one

going full "villain in a Venture Bros. episode who makes the Monarch feel good by comparison":

Sure, I don't think it's crazy to claim women would be lining up to screw me in that scenario

[-] blakestacey@awful.systems 31 points 1 year ago

Too much posting by racist assholes, for sure.

[-] blakestacey@awful.systems 29 points 1 year ago

There is an entire second Earth right here on Earth.

"... Mister Bond."

[-] blakestacey@awful.systems 34 points 1 year ago

Some of Kurzweil's predictions in 1999 about 2019:

A $1,000 computing device is now approximately equal to the computational ability of the human brain. Computers are now largely invisible and are embedded everywhere. Three-dimensional virtual-reality displays, embedded in glasses and contact lenses, provide the primary interface for communication with other persons, the Web, and virtual reality. Most interaction with computing is through gestures and two-way natural-language spoken communication. Realistic all-encompassing visual, auditory, and tactile environments enable people to do virtually anything with anybody regardless of physical proximity. People are beginning to have relationships with automated personalities as companions, teachers, caretakers, and lovers.

Also:

Three-dimensional nanotube lattices are now a prevalent form of computing circuitry.

And:

Autonomous nanoengineered machines can control their own mobility and include significant computational engines.

And:

"Phone" calls routinely include high-resolution three-dimensional images projected through the direct-eye displays and auditory lenses. Three-dimensional holography displays have also emerged. In either case, users feel as if they are physically near the other person. The resolution equals or exceeds optimal human visual acuity. Thus a person can be fooled as to whether or not another person is physically present or is being projected through electronic communication.

And:

The all-enveloping tactile environment is now widely available and fully convincing. Its resolution equals or exceeds that of human touch and can simulate (and stimulate) all of the facets of the tactile sense, including the sensing of pressure, temperature, textures, and moistness. Although the visual and auditory aspects of virtual reality involve only devices you have on or in your body (the direct-eye lenses and auditory lenses), the "total touch" haptic environment requires entering a virtual reality booth. These technologies are popular for medical examinations, as well as sensual and sexual interactions with other human partners or simulated partners. In fact, it is often the preferred mode of interaction, even when a human partner is nearby, due to its ability to enhance both experience and safety.

And:

Automated driving systems have been found to be highly reliable and have now been installed in nearly all roads.

And:

The type of artistic and entertainment product in greatest demand (as measured by revenue generated) continues to be virtual-experience software, which ranges from simulations of "real" experiences to abstract environments with little or no corollary in the physical world.

And:

The expected life span, which, as a result of the first Industrial Revolution (1780 through 1900) and the first phase of the second (the twentieth century), almost doubled from less than forty, has now substantially increased again, to over one hundred.

[-] blakestacey@awful.systems 32 points 1 year ago* (last edited 1 year ago)

The given link contains exactly zero evidence in favor of Orchestrated Objective Reduction — "something interesting observed in vitro using UV spectroscopy" is a far cry from anything having biological relevance, let alone significance for understanding consciousness. And it's not like Orch-OR deserves the lofty label of theory, anyway; it's an ill-defined, under-specified, ad hoc proposal to throw out quantum mechanics and replace it with something else.

The fact that programs built to do spicy autocomplete turn out to do spicy autocomplete has, as far as I can tell, zero implications for any theory of consciousness one way or the other.


In which a man disappearing up his own asshole somehow fails to be interesting.

submitted 2 years ago* (last edited 2 years ago) by blakestacey@awful.systems to c/techtakes@awful.systems

So, there I was, trying to remember the title of a book I had read bits of, and I thought to check a Wikipedia article that might have referred to it. And there, in "External links", was ... "Wikiversity hosts a discussion with the Bard chatbot on Quantum mechanics".

How much carbon did you have to burn, and how many Kenyan workers did you have to call the N-word, in order to get a garbled and confused "history" of science? (There's a lot wrong and even self-contradictory with what the stochastic parrot says, which isn't worth unweaving in detail; perhaps the worst part is that its statement of the uncertainty principle is a blurry JPEG of the average over all verbal statements of the uncertainty principle, most of which are wrong.) So, a mediocre but mostly unremarkable page gets supplemented with a "resource" that is actively harmful. Hooray.

Meanwhile, over in this discussion thread, we've been taking a look at the Wikipedia article Super-recursive algorithm. It's rambling and unclear, throwing together all sorts of things that somebody somewhere called an exotic kind of computation, while seemingly not grasping the basics of the ordinary theory the new thing is supposedly moving beyond.

So: What's the worst/weirdest Wikipedia article in your field of specialization?


The day just isn't complete without a tiresome retread of freeze peach rhetorical tropes. Oh, it's "important to engage with and understand" white supremacy. That's why we need to boost the voices of white supremacists! And give them money!


With the OpenAI clownshow, there's been renewed media attention on the xrisk/"AI safety"/doomer nonsense. Personally, I've had a fresh wave of reporters asking me naive questions (as well as some contacts from old hands who are on top of how to handle ultra-rich man-children with god complexes).

submitted 2 years ago* (last edited 2 years ago) by blakestacey@awful.systems to c/sneerclub@awful.systems

Flashback time:

One of the most important and beneficial trainings I ever underwent as a young writer was trying to script a comic. I had to cut down all of my dialogue to fit into speech bubbles. I was staring closely at each sentence and striking out any word I could.

"But then I paid for Twitter!"


AI doctors will revolutionize medicine! You'll go to a service hosted in Thailand that can't take credit cards, and pay in crypto, to get a correct diagnosis. Then another VISA-blocked AI will train you in following a script that will get a human doctor to give you the right diagnosis, without tipping that doctor off that you're following a script; so you can get the prescription the first AI told you to get.

Can't get mifepristone or puberty blockers? Just have a chatbot teach you how to cast Persuasion!


Yudkowsky writes,

How can Effective Altruism solve the meta-level problem where almost all of the talented executives and ops people were in 1950 and now they're dead and there's fewer and fewer surviving descendants of their heritage every year and no blog post I can figure out how to write could even come close to making more people being good executives?

Because what EA was really missing is collusion to hide the health effects of tobacco smoking.


Steven Pinker tweets thusly:

My friend & Harvard colleague Howard Gardner, offers a thoughtful critique of my book Rationality -- but undermines his cause, as all skeptics of rationality must do, by using rationality to make it.

"My colleague and fellow esteemed gentleman of Harvard neglects to consider the premise that I am rubber and he is glue."


Geoffrey "primalpoly" Miller tweets thusly:

Imagine you're single & want to use a dating app to find a good mate.

What's one question you wish everyone would answer in their dating app profile?

PS in my experience, the question 'What's the heritability of IQ?' tends to separate the wheat from the chaff.


In the far-off days of August 2022, Yudkowsky said of his brainchild,

If you think you can point to an unnecessary sentence within it, go ahead and try. Having a long story isn't the same fundamental kind of issue as having an extra sentence.

To which MarxBroshevik replied,

The first two sentences have a weird contradiction:

Every inch of wall space is covered by a bookcase. Each bookcase has six shelves, going almost to the ceiling.

So is it "every inch", or are the bookshelves going "almost" to the ceiling? Can't be both.

I've not read further than the first paragraph so there's probably other mistakes in the book too. There's kind of other 'mistakes' even in the first paragraph, not logical mistakes as such, just as an editor I would have... questions.

And I elaborated:

I'm not one to complain about the passive voice every time I see it. Like all matters of style, it's a choice that depends upon the tone the author desires, the point the author wishes to emphasize, even the way a character would speak. ("Oh, his throat was cut," Holmes concurred, "but not by his own hand.") Here, it contributes to a staid feeling. It emphasizes the walls and the shelves, not the books. This is all wrong for a story that is supposed to be about the pleasures of learning, a story whose main character can't walk past a bookstore without going in. Moreover, the instigating conceit of the fanfic is that their love of learning was nurtured, rather than neglected. Imagine that character, their family, their family home, and step into their library. What do you see?

Books — every wall, books to the ceiling.

Bam, done.

This is the living-room of the house occupied by the eminent Professor Michael Verres-Evans,

Calling a character "the eminent Professor" feels uncomfortably Dan Brown.

and his wife, Mrs. Petunia Evans-Verres, and their adopted son, Harry James Potter-Evans-Verres.

I hate the kid already.

And he said he wanted children, and that his first son would be named Dudley. And I thought to myself, what kind of parent names their child Dudley Dursley?

Congratulations, you've noticed the name in a children's book that was invented to sound stodgy and unpleasant. (In The Chocolate Factory of Rationality, a character asks "What kind of a name is 'Wonka' anyway?") And somehow you're trying to prove your cleverness and superiority over canon by mocking the name that was invented for children to mock. Of course, the Dursleys were also the start of Rowling using "physically unsightly by her standards" to indicate "morally evil", so joining in with that mockery feels ... It's aged badly, to be generous.

Also, is it just the people I know, or does having a name picked out for a child that far in advance seem a bit unusual? Is "Dudley" a name with history in his family — the father he honored but never really knew? His grandfather who died in the War? If you want to tell a grown-up story, where people aren't just named the way they are because those are names for children to laugh at, then you have to play by grown-up rules of characterization.

The whole stretch with Harry pointing out they can ask for a demonstration of magic is too long. Asking for proof is the obvious move, but it's presented as something only Harry is clever enough to think of, and as the end of a logic chain.

"Mum, your parents didn't have magic, did they?" [...] "Then no one in your family knew about magic when Lily got her letter. [...] If it's true, we can just get a Hogwarts professor here and see the magic for ourselves, and Dad will admit that it's true. And if not, then Mum will admit that it's false. That's what the experimental method is for, so that we don't have to resolve things just by arguing."

Jesus, this kid goes around with L's theme from Death Note playing in his head whenever he pours a bowl of breakfast crunchies.

Always Harry had been encouraged to study whatever caught his attention, bought all the books that caught his fancy, sponsored in whatever maths or science competitions he entered. He was given anything reasonable that he wanted, except, maybe, the slightest shred of respect.

Oh, sod off, you entitled little twit; the chip on your shoulder is bigger than you are. Your parents buy you college textbooks on physics instead of coloring books about rocketships, and you think you don't get respect? Because your adoptive father is incredulous about the existence of, let me check my notes here, literal magic? You know, the thing which would upend the body of known science, as you will yourself expound at great length.

"Mum," Harry said. "If you want to win this argument with Dad, look in chapter two of the first book of the Feynman Lectures on Physics.

Wesley Crusher would shove this kid into a locker.

