[-] rook@awful.systems 35 points 1 month ago

It’s just more llm output, in the style of “imagine you can reason about the question you’ve just been asked. Explain how you might have come about your answer.” It has no resemblance to how a neural network functions, nor to the output filters the service providers use.

It’s how the ai doomers get themselves into a flap over “deceptive” models… “omg it lied about its train of thought!” Because of course it didn’t lie; it just emitted a stream of tokens that were statistically similar to something classified as reasoning during training.
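To make that concrete, here’s a toy sketch (entirely made up, standing in for no real model): the “reasoning” tokens and the final answer both fall out of the same next-token loop, and nothing in the loop ever inspects how an answer was actually computed.

```python
# Toy stand-in for a trained model: a hard-coded "most likely next
# token" table. A real LLM's sampling loop has the same shape.
BIGRAMS = {
    "<q>": "Let's",
    "Let's": "think:",
    "think:": "because",
    "because": "therefore",
    "therefore": "42",
    "42": "<end>",
}

def generate(start: str) -> list[str]:
    """Greedy autoregressive decoding: each token is just the most
    likely continuation of what came before, whether we later label
    it 'reasoning' or 'answer'."""
    out, tok = [], start
    while tok != "<end>":
        tok = BIGRAMS[tok]
        if tok != "<end>":
            out.append(tok)
    return out

# The "chain of thought" and the answer are produced identically;
# asking the model to "explain its reasoning" only extends this loop.
print(generate("<q>"))
```

There is no second channel where the model reports on its internal computation; the "explanation" is more of the same statistical continuation.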

[-] rook@awful.systems 24 points 1 month ago

I might be the only person here who thinks that the upcoming quantum bubble has the potential to deliver useful things (but boring useful things, and so harder to build hype on) but stuff like this particularly irritates me:

https://quantumai.google/

Quantum fucking ai? Motherfucker,

  • You don’t have ai, you have a chatbot
  • You don’t have a quantum computer, you have a tech demo for a single chip
  • Even if you had both of those things, you wouldn’t have “quantum ai”
  • If you have a very specialist and probably wallet-vaporisingly expensive quantum computer, why the hell would anyone want to glue an idiot chatbot to it, instead of putting it in the hands of competent experts who could actually do useful stuff with it?

Best case scenario here is that this is how one department of Google gets money out of the other bits of Google, because the internal bean counters cannot control their fiscal sphincters when someone says “ai” to them.

[-] rook@awful.systems 19 points 1 month ago

LLMs aren’t profitable even if they never had to pay a penny on license fees. The providers are losing money on every query, and can only be sustained by a firehose of VC money. They’re all hoping for a miracle.

[-] rook@awful.systems 20 points 1 month ago* (last edited 1 month ago)

Did you know there’s a new fork of xorg, called x11libre? I didn’t! I guess not everyone is happy with wayland, so this seems like a reasonable…

It's explicitly free of any "DEI" or similar discriminatory policies… [snip]

Together we'll make X great again!

Oh dear. Project members are of course being entirely normal about the whole thing.

Metux, one of the founding contributors, is Enrico Weigelt, who holds such reasonable opinions as “everyone except the nazis were the real nazis in WW2”, and also had an anti-vax (and possibly eugenicist) rant on the linux kernel mailing list, as you do.

I’m sure it’ll be fine though. He’s a great coder.

(links were unashamedly pillaged from this mastodon thread: https://nondeterministic.computer/@mjg59/114664107545048173)

[-] rook@awful.systems 25 points 2 months ago* (last edited 2 months ago)

When confronted with a problem like “your search engine imagined a case and cited it”, the next step is to wonder what else it might be making up, not to just quickly slap a bit of tape over the obvious immediate problem and declare everything to be great.

The other thing to be concerned about is how lazy and credulous your legal team are that they cannot be bothered to verify anything. That requires a significant improvement in professional ethics, which isn’t something that is really amenable to technological fixes.

[-] rook@awful.systems 19 points 2 months ago

Today’s man-made and entirely comprehensible horror comes from SAP.

(two rainbow stickers labelled “pride@sap”, with one saying “I support equality by embracing responsible ai” and the other saying “I advocate for inclusion through ai”)

Don’t have any other sources or confirmation yet, so it might be a load of cobblers, but it is depressingly plausible. From here: https://catcatnya.com/@ada/114508096636757148

[-] rook@awful.systems 22 points 3 months ago* (last edited 3 months ago)

From linkedin, not normally known as a source of anti-ai takes, so that’s a nice change. I found it via bluesky, so I can’t say anything about its provenance:

We keep hearing that AI will soon replace software engineers, but we're forgetting that it can already replace existing jobs... and one in particular.

The average Founder CEO.

Before you walk away in disbelief, look at what LLMs are already capable of doing today:

  • They use eloquence as a surrogate for knowledge, and most people, including seasoned investors, fall for it.
  • They regurgitate material they read somewhere online without really understanding its meaning.
  • They fabricate numbers that have no ground in reality, but sound aligned with the overall narrative they're trying to sell you.
  • They are heavily influenced by the last conversations they had.
  • They contradict themselves, pretending they aren't.
  • They politely apologize for their mistakes, but don't take any real steps to fix the underlying problem that caused them in the first place.
  • They tend to forget what they told you last week, or even one hour ago, and do it in a way that makes you doubt your own recall of events.
  • They are victims of the Dunning–Kruger effect, and they believe they know a lot more about the job of people interacting with them than they actually do.
  • They can make pretty slides in high volumes.
  • They're very good at consuming resources, but not as good at turning a profit.

[-] rook@awful.systems 84 points 6 months ago

A real ceo does everything. Delegation is for losers who can’t cope. Can’t move fast enough and break enough things if you’re constantly waiting for your lackeys to catch up.

If those numbers people were cleverer than the ceo, they’d be the ones in charge, and they aren’t. Checkmate. Do you even read Ayn Rand, bro?

[-] rook@awful.systems 20 points 8 months ago* (last edited 8 months ago)

It’s a long read, but a good one (though not a nice one).

  • learn about how all the people who actually make decisions in c++ world are complete assholes!
  • liking go (the programming language) correlated with brain damage!
  • in c++ world, it is ok to throw an arbitrary number of highly competent non-bros out of the window in order to keep a bro on board, even if said bro drugged and raped a minor!
  • the c++ module system is like a gunshot wound to the ass!
  • c++ leadership is delusional about memory safety!
  • even more assholes!

Someone on mastodon (can’t remember who right now) joked that they were expecting the c++ committee to publicly support trump, in the hope that he would retract the US government memory safety requirements. I can now believe that they might have considered that, and are probably hoping he’ll come down in their favour now that he’s coming in.

[-] rook@awful.systems 24 points 10 months ago

They’re rebranding American Christian millenarianism. Much like the second coming and/or the rapture, the AGI god will be here Real Soon Now, so please pay your tithes and trust that the church fathers are doing the right thing.

Much like the older cults it mirrors, it isn’t capable of delivering on its promises, but it is capable of doing substantial amounts of regular damage in the meantime, and that’s the only thing worth freaking out about.

[-] rook@awful.systems 19 points 1 year ago* (last edited 1 year ago)

Do any “ai” companies have a business plan more sophisticated than

  1. steal everything on the web
  2. buy masses of compute with vc money
  3. become too important to be busted for mass copyright infringement
  4. ?
  5. profit

I don’t recall seeing any signs of creativity, or even any good ideas as to what their product is for, so I wouldn’t hold my breath waiting for one of the current crop to manifest creativity now.

Perhaps I missed something, though?

[-] rook@awful.systems 31 points 1 year ago

They could have just sat there and slurped up enormous profits from the bubble as all the people who can’t find a use for their “AI” systems buy nvidia hardware, but no. They had to get high on their own supply. I can’t see this boding well for them.
