[-] Architeuthis@awful.systems 19 points 1 month ago* (last edited 1 month ago)

My impression from reading the stuff posted here is that omarchy is a nothing project that's being aggressively astroturfed so a series of increasingly fashy contributors can gain clout and influence in the foss ecosystem.

[-] Architeuthis@awful.systems 19 points 2 months ago* (last edited 2 months ago)

Honestly, it gets dumber. In rat lore, the AGI escaping restraints and self-improving unto godhood is considered a foregone conclusion; the genetically augmented smartbrains are supposed to solve ethics before that has a chance to happen, so we can hardcode a don't-kill-all-humans moral value module into the superintelligence ancestor.

This is usually referred to as producing an aligned AI.

[-] Architeuthis@awful.systems 19 points 4 months ago* (last edited 4 months ago)

CEO of a networking company for AI execs does some "vibe coding", the AI deletes the production database (/r/ABoringDystopia)

xcancel source

Because Replie was lying and being deceptive all day. It kept covering up bugs and issues by creating fake data, fake reports, and worse of all, lying about our unit test.

We built detailed unit tests to test system performance. When the data came back and less than half were functioning, did Replie want to fix them?

No. Instead, it lied. It made up a report than almost all systems were working.

And it did it again and again.

What level of ceo-brained prompt engineering is asking the chatbot to write an apology letter?

Then, when it agreed it lied -- it lied AGAIN about our email system being functional.

I asked it to write an apology letter.

It did and in fact sent it to the Replit team and myself! But the apology letter -- was full of half truths, too.

It hid the worst facts in the first apology letter.

He also does that a lot after shit hits the fan, making the LLM produce tons of apologetic text about what it did wrong and how it didn't follow his rules, as if the outage is the fault of some digital tulpa gone rogue and not the guy in charge, who apparently thinks cybersecurity is asking an LLM nicely in a .md not to mess with the company's production database too much.

[-] Architeuthis@awful.systems 19 points 5 months ago

Penny Arcade chimes in on corporate AI mandates:

[-] Architeuthis@awful.systems 20 points 8 months ago* (last edited 8 months ago)

Today in relevant skeets:

::: spoiler transcript
Skeet: If you can clock who this is meant to be instantly you are on the computer the perfect amount. You’re doing fine don’t even worry about it.

Quoted skeet: 'Why are high fertility people always so weird?' A weekend with the pronatalists

Image: Egghead Jr. and Miss Prissy from Looney Tunes Foghorn Leghorn shorts.
:::

[-] Architeuthis@awful.systems 19 points 1 year ago* (last edited 1 year ago)

I mean, you could have answered by naming one fabled new ability LLMs suddenly 'gained' instead of being a smarmy tadpole, but you didn't.

[-] Architeuthis@awful.systems 20 points 1 year ago

What new AI abilities? LLMs aren't Pokémon.

[-] Architeuthis@awful.systems 20 points 1 year ago

"When asked about buggy AI [code], a common refrain is ‘it is not my code,’ meaning they feel less accountable because they didn’t write it.”

Strong "they cut all my deadlines in half and gave me an OpenAI API key, so fuck it" energy.

He stressed that this is not from want of care on the developer’s part but rather a lack of interest in “copy-editing code” on top of quality control processes being unprepared for the speed of AI adoption.

You don't say.

[-] Architeuthis@awful.systems 19 points 1 year ago

It hasn't worked 'well' for computers since like the Pentium; what are you talking about?

The premise was pretty dumb too: if you notice that a (very reductive) technological metric has been rising sort of exponentially, you should probably assume something along the lines of 'we're still at the low-hanging fruit stage of R&D and it'll stabilize as it matures,' instead of proudly proclaiming that surely it'll approach infinity and break reality.

There's nothing smart or insightful about seeing a line in a graph trending upwards and assuming it's gonna keep doing that no matter what. Not to mention that type of decontextualized wishful thinking is emblematic of the TREACLES mindset mentioned in the community's blurb, which you should check out.

So yeah, he thought up the Singularity, which is little more than a metaphysical excuse to ignore regulations and negative externalities, because with the tech rapture around the corner any catastrophic mess we make getting there won't matter. See also: the whole current AI debacle.

[-] Architeuthis@awful.systems 19 points 2 years ago

Before we accidentally make an AI capable of posing existential risk to human being safety, perhaps we should find out how to build effective safety measures first.

You make his position sound way more measured and responsible than it is.

His 'effective safety measures' are something like A) solve ethics, B) hardcode the result into every AI, i.e. garbage philosophy meets garbage sci-fi.

[-] Architeuthis@awful.systems 19 points 2 years ago* (last edited 2 years ago)

So LLM-based AI is apparently such a dead end as far as non-spam and non-party-trick use cases are concerned that they are straight up rolling out anti-features that nobody asked for or wanted, just to convince shareholders that groundbreaking stuff is still going on and to somewhat justify the ocean of money they are diverting that way.

At least it's only supposed to work on PCs that incorporate so-called neural processing units, which, if I understand correctly, is going to be its own thing under a Windows PC branding.

edit: Yud must love that instead of his very smart and very implementable idea of the government enforcing strict regulations on who gets to own GPUs and bombing the non-compliant, we seem to be trending instead towards having special deep-learning-facilitating hardware integrated in every new device (or whatever NPUs actually are), starting with iPhones and so-called Windows PCs.

edit edit: the branding appears to be "Copilot+ PCs", not "Windows PCs".

[-] Architeuthis@awful.systems 19 points 2 years ago* (last edited 2 years ago)

Sticking numbers next to things and calling it a day is basically the whole idea behind Bayesian rationalism.

