[-] Architeuthis@awful.systems 18 points 2 months ago* (last edited 2 months ago)

Saltman has a new blogpost out, 'Three Observations', that I feel too tired to sneer at properly, but I'm sure it will be featured in pivot-to-ai pretty soon.

Of note: he seems to admit chatbot abilities have plateaued for the current technological paradigm, by way of offering the "observation" that model intelligence is logarithmically dependent on the resources used to train and run it (i = log(r)), so it's officially diminishing returns from now on.
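For what it's worth, if you take the "observation" at face value, the diminishing returns fall straight out of the formula: each doubling of resources buys the same fixed increment of "intelligence". A throwaway sketch (the function and numbers are purely illustrative, not anything from the blogpost):

```python
import math

# The claimed scaling law i = log(r): "intelligence" as a function of
# the resources poured into training and inference. Illustrative only.
def intelligence(resources: float) -> float:
    return math.log(resources)

# Gain from each successive doubling of resources: always log(2) ~ 0.693,
# i.e. textbook diminishing returns on exponentially growing spend.
gains = [intelligence(2 ** (k + 1)) - intelligence(2 ** k) for k in range(5)]
print(gains)
```

So going from 1x to 2x resources buys exactly as much as going from 1000x to 2000x, which is a funny thing to put in writing while asking for exponentially increasing investment.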

Second observation is that when a thing gets cheaper it's used more, i.e. they'll be pushing even harder to shove it into everything.

Third observation is that

The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.

which is hilarious.

The rest of the blogpost appears to mostly be fanfiction about the efficiency of their agents that I didn't read too closely.

[-] Architeuthis@awful.systems 18 points 3 months ago* (last edited 3 months ago)

I mean, you could have answered by naming one fabled new ability LLMs suddenly 'gained' instead of being a smarmy tadpole, but you didn't.

[-] Architeuthis@awful.systems 19 points 3 months ago

What new AI abilities? LLMs aren't pokemon.

[-] Architeuthis@awful.systems 20 points 4 months ago

No shot is over two seconds, because AI video can’t keep it together longer than that. Animals and snowmen visibly warp their proportions even over that short time. The trucks’ wheels don’t actually move. You’ll see more wrong with the ad the more you look.

Not to mention the weird AI lighting that makes everything look fake and unnatural even in the ad's dreamlike context, and also that it's the most generic and uninspired shit imaginable.

[-] Architeuthis@awful.systems 20 points 7 months ago

"When asked about buggy AI [code], a common refrain is 'it is not my code,' meaning they feel less accountable because they didn't write it."

Strong "they cut all my deadlines in half and gave me an OpenAI API key, so fuck it" energy.

He stressed that this is not from want of care on the developer’s part but rather a lack of interest in “copy-editing code” on top of quality control processes being unprepared for the speed of AI adoption.

You don't say.

[-] Architeuthis@awful.systems 18 points 9 months ago* (last edited 9 months ago)

Ah yes, Alexander's unnumbered hordes, that endless torrent of humanity that is all but certain to have made a lasting impact on the sparsely populated subcontinent's collective DNA.

edit: Also, the absolute brain on someone who would think that before entertaining a random recent western ancestor like a grandfather or whateverthefuckjesus.

[-] Architeuthis@awful.systems 18 points 9 months ago

IKR like good job making @dgerard look like King Mob from the Invisibles in your header image.

If the article was about me I'd be making Colin Robinson feeding noises all the way through.

edit: Obligatory only 1 hour 43 minutes of reading to go then

[-] Architeuthis@awful.systems 19 points 9 months ago

It hasn't worked 'well' for computers since like the Pentium, what are you talking about?

The premise was pretty dumb too: if you notice that a (very reductive) technological metric has been rising more or less exponentially, you should probably assume we're still at the low-hanging-fruit stage of R&D and that it'll stabilize as the field matures, instead of proudly proclaiming that surely it'll approach infinity and break reality.

There's nothing smart or insightful about seeing a line in a graph trending upwards and assuming it's gonna keep doing that no matter what. Not to mention that this type of decontextualized wishful thinking is emblematic of the TREACLES mindset mentioned in the community's blurb, which you should check out.

So yeah, he thought up the Singularity which is little more than a metaphysical excuse to ignore regulations and negative externalities because with tech rupture around the corner any catastrophic mess we make getting there won't matter. See also: the whole current AI debacle.

[-] Architeuthis@awful.systems 19 points 10 months ago

Before we accidentally make an AI capable of posing existential risk to human being safety, perhaps we should find out how to build effective safety measures first.

You make his position sound way more measured and responsible than it is.

His 'effective safety measures' are something like a) solve ethics, b) hardcode the result into every AI, i.e. garbage philosophy meets garbage sci-fi.

[-] Architeuthis@awful.systems 19 points 10 months ago* (last edited 10 months ago)

So LLM-based AI is apparently such a dead end as far as non-spam and non-party-trick use cases are concerned that they are straight up rolling out anti-features that nobody asked for or wanted, just to convince shareholders that groundbreaking stuff is still going on and to somewhat justify the ocean of money they are diverting that way.

At least it's only supposed to work on PCs that incorporate so-called neural processing units, which, if I understand correctly, is going to be its own thing under a Windows PC branding.

edit: Yud must love that instead of his very smart and very implementable idea of the government enforcing strict regulations on who gets to own GPUs and bombing non-compliants we seem to instead be trending towards having special deep learning facilitating hardware integrated in every new device, or whatever NPUs actually are, starting with iPhones and so-called Windows PCs.

edit edit: the branding appears to be "Copilot+ PCs", not Windows PCs.

[-] Architeuthis@awful.systems 19 points 1 year ago* (last edited 1 year ago)

Sticking numbers next to things and calling it a day is basically the whole idea behind bayesian rationalism.

[-] Architeuthis@awful.systems 18 points 1 year ago* (last edited 1 year ago)

Hi, my name is Scott Alexander and here's why it's bad rationalism to think that widespread EA wrongdoing should reflect poorly on EA.

The assertion that having semi-frequent sexual harassment incidents go public is actually an indication of health for a movement, since it's evidence that there's no systemic coverup going on, and besides, everyone's doing it, is, uh, quite something.

But surely of 1,000 sexual harassment incidents, the movement will fumble at least one of them (and often the fact that you hear about it at all means the movement is fumbling it less than other movements that would keep it quiet). You’re not going to convince me I should update much on one (or two, or maybe even three) harassment incidents, especially when it’s so easy to choose which communities’ dirty laundry to signal boost when every community has a thousand harassers in it.

