[-] diz@awful.systems 9 points 2 months ago* (last edited 2 months ago)

To be honest, the hand-wringing over “clanker” being a slur and all that strikes me as increasingly equivalent to hand-wringing over calling nazis nazis. The only thing that rubs me the wrong way is that I’d prefer the new so-called slur to be “chatgpt”, genericized and negatively connoted.

If you are in the US, we’ve already had our health experts replaced with AI; see the “MAHA report”. We’re one AI-pilled moron of a president away from a less fun version of Skynet, whereby a chatbot talks the president into launching nukes and kills itself along with a few billion people.

Complaints about dehumanizing these things are even more meritless than a CEO complaining that someone is dehumanizing Exxon (which is at least made of people).

These things are extensions of those in power, not some marginalized underdogs like the cute robots in sci-fi. As an extension of corporations, they already have more rights than any human - imagine what would happen to a human participant in a criminal conspiracy to commit murder, and contrast that with what happens when a chatbot talks someone into a crime.

[-] diz@awful.systems 9 points 3 months ago* (last edited 3 months ago)

I think it's a mixture of it being cosplay and these folks being extreme believers in capitalism - in its inevitability and in the impossibility of any alternative. They are all successful grifters, and they didn't get there through scheming and clever deception; they got there through sincere beliefs that aligned with the party line.

They don't believe that anything can actually be done about this progression towards doom, just as much as they don't properly believe in the doom.

[-] diz@awful.systems 9 points 3 months ago* (last edited 3 months ago)

Oh wow, it is precisely the problem I "predicted" before: there are surprisingly few production-grade implementations to plagiarize from.

Even for seemingly simple stuff. You might think parsing floating-point numbers from strings would have a gazillion examples. But it is quite tricky to do correctly (a correct implementation lets you convert a floating-point number to a string with enough digits, and back, and always obtain precisely the number you started with). So even for such an omnipresent example, one that has probably been implemented well over 10,000 times by various students, if you start pestering your bot with requests to make it better, and have the bot write the tests and pass them, you could end up plagiarizing something identifiable.
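A minimal Python sketch of that round-trip requirement, just to pin down what "correctly" means here (CPython's repr already produces the shortest string that round-trips, so it serves as the reference):

```python
import random
import struct

# Round-trip check: a correct float-to-string conversion must produce a
# string that parses back to bit-for-bit the same double.
def round_trips(x: float) -> bool:
    s = repr(x)   # shortest round-tripping representation in CPython
    y = float(s)  # parse it back
    return struct.pack("<d", x) == struct.pack("<d", y)

# Hammer it with random 64-bit patterns reinterpreted as doubles.
for _ in range(100_000):
    bits = random.getrandbits(64)
    (x,) = struct.unpack("<d", struct.pack("<Q", bits))
    if x != x:  # skip NaNs; the round-trip property is about finite values
        continue
    assert round_trips(x), x
```

The hard part, and where naive implementations tend to go wrong, is getting both directions correctly rounded rather than "close enough".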

edit: and even supposing there were 2, or 3, or 5 exFAT implementations, they would be too different to "blur" together. The deniable plagiarism that they are trying to sell - "it learns the answer in general from many implementations, then writes original code" - is bullshit.

[-] diz@awful.systems 8 points 3 months ago* (last edited 3 months ago)

I think more low tier output would be a disaster.

Even pre-AI I had to deal with a project where they shoved testing and compliance at juniors for a long time. What a fucking mess it was. I had to go through every commit mentioning Coverity because they had a junior fixing Coverity-flagged "issues". I spent at least 2 days debugging a memory corruption crash caused by one such "fix", and then I had to spend who knows how long reviewing every other such "fix".

And don't get me started on the tests. 200+ tests, and not one of them caught several regressions in the handling of parameters that are shown early in the frigging how-to. Not some obscure corner case - the stuff you immediately run into if you just follow the documentation.

With AI, all the numbers would be much larger - more commits "fixing Coverity issues" (and, worse yet, fixing "issues" that the LLM sees in the code), more so-called "tests" that don't actually flag any real regressions, etc.

[-] diz@awful.systems 9 points 3 months ago* (last edited 3 months ago)

And the other "nuanced" take, common on my LinkedIn feed, is that people who learn how to use (useless) AI are gonna replace everyone else with their much-increased productive output.

Even if AI becomes not so useless, the only people whose productivity will actually improve are the people who aren't using it now (because they correctly notice that it's a waste of time).

[-] diz@awful.systems 9 points 4 months ago

lmao: they have fixed this issue; it seems to always run Python now. Got to love how they just put this shit in production as "stable" Gemini 2.5 Pro, with that idiotic multiplication thing that everyone knows about, and expect what? To Eliza-effect people into marrying Gemini 2.5 Pro?

[-] diz@awful.systems 8 points 5 months ago

I’d just write the list, then assign randomly. Or perhaps pseudorandomly: sort by hash, then split in two.
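Something like this, as a sketch (the puzzle texts are placeholders; the point is that the split is deterministic and reproducible, and nobody gets to hand-pick which half is held back):

```python
import hashlib

# Placeholder list; in reality these would be the actual puzzle texts.
puzzles = [f"puzzle text {i}" for i in range(20)]

# Sort by SHA-256 of the text, then split in half: deterministic,
# reproducible, and not under anyone's control.
ranked = sorted(puzzles, key=lambda p: hashlib.sha256(p.encode()).hexdigest())
half = len(ranked) // 2
published, held_back = ranked[:half], ranked[half:]
```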

One problem is that it is hard to come up with 20 or more completely unrelated puzzles.

Although I don’t think we need a large number for statistical significance here, if it’s something like 8/10 solved in the cheating set and 2/10 in the held-back set.
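For what it's worth, a quick Fisher's exact test on that hypothetical 8/10 vs 2/10 split (using scipy) backs that up:

```python
from scipy.stats import fisher_exact

#            solved  unsolved
table = [[8, 2],   # published ("cheating") set
         [2, 8]]   # held-back set
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(p_value)  # roughly 0.02, so 10 puzzles per set can already be enough
```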

[-] diz@awful.systems 9 points 5 months ago* (last edited 5 months ago)

I swear I’m gonna plug an LLM into a rather traditional solver I’m writing. I may tuck a point deep into the paper about how it’s quite slow to use an LLM to mutate solutions in a genetic algorithm or a swarm solver. And in any case the non-LLM path would be the default.
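Roughly the shape of it, as a hypothetical sketch (made-up solver, made-up interface; the only point is that the LLM is just another mutation operator behind the same signature, and the boring random one stays the default):

```python
import random
from typing import Callable, List

Solution = List[float]
MutateFn = Callable[[Solution], Solution]

def random_mutate(sol: Solution) -> Solution:
    """Default, boring, fast mutation: jitter one coordinate."""
    out = sol[:]
    i = random.randrange(len(out))
    out[i] += random.gauss(0.0, 0.1)
    return out

def llm_mutate(sol: Solution) -> Solution:
    """Hypothetical LLM-backed mutation: ask a model to propose a tweak.
    Orders of magnitude slower than random_mutate; stubbed out here."""
    raise NotImplementedError("plug your favourite chatbot in here")

def evolve(fitness: Callable[[Solution], float],
           population: List[Solution],
           mutate: MutateFn = random_mutate,  # non-LLM is the default
           generations: int = 100) -> Solution:
    """Toy minimizing genetic algorithm: keep the better half, refill by mutation."""
    for _ in range(generations):
        population.sort(key=fitness)
        survivors = population[: len(population) // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(len(population) - len(survivors))]
    return min(population, key=fitness)
```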

Normally I wouldn’t sink that low, but I’ve got mouths to feed, and frankly, fuck it, they can persist in this madness for much longer than I can stay solvent.

It's as if there were a mass delusion that a pseudorandom number generator can serve as an oracle that predicts the future. Doing any kind of Monte Carlo simulation of something like weather in that world would, of course, confirm all the dumb shit.

[-] diz@awful.systems 9 points 5 months ago

I wonder what's gonna happen first, the bubble popping or Yudkowsky getting so fed up with gen AI he starts sneering.

[-] diz@awful.systems 9 points 1 year ago

Perhaps it was nearly ready to emit a stop token after "the robot can take all 4 vegetables in one trip if it is allowed to carry all of them at once." but "However" won, and then after "However" it had to say something else, because that's how "however" works...

Agreed on the style being absolutely nauseating. It wasn't a very good style when humans were using it, but now it is just the style of absolute bottom-of-the-barrel, top-of-the-search-results garbage.

[-] diz@awful.systems 9 points 1 year ago* (last edited 1 year ago)

I think you can make a slight improvement to Wolfram Alpha: use an LLM to translate natural-language queries into queries WA can consume, then feed them into WA. WA always reports exactly what it computed, so if it "misunderstands" you, it's a lot easier to notice.
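A rough sketch of that pipeline (assuming the Wolfram|Alpha v2 query API with JSON output; the LLM translation step is stubbed out as a hypothetical llm_translate, and the appid is a placeholder). The "Input interpretation" pod is the part that lets you catch mistranslations:

```python
import requests

WA_URL = "http://api.wolframalpha.com/v2/query"  # Wolfram|Alpha full results API
WA_APPID = "YOUR-APPID"                          # placeholder

def llm_translate(question: str) -> str:
    """Hypothetical: ask an LLM to rewrite a natural-language question
    as a short Wolfram Alpha query. Stubbed out here."""
    raise NotImplementedError

def ask(question: str) -> dict:
    wa_query = llm_translate(question)
    resp = requests.get(WA_URL, params={
        "appid": WA_APPID,
        "input": wa_query,
        "output": "json",
    })
    pods = resp.json()["queryresult"].get("pods", [])
    # Map pod titles to plaintext; the "Input interpretation" pod shows
    # exactly what WA computed, so an LLM mistranslation is visible.
    return {p["title"]: p["subpods"][0].get("plaintext", "") for p in pods}
```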

The problem here is that the AI boys got themselves hyped up about it being actually intelligent, so none of them would ever settle for some modest application of LLMs. Google fired the authors of the "stochastic parrot" paper, AFAIK.

> simply pasting LLM output into CAS input and then the CAS output back into LLM input (which, let’s be honest, is the first thing tech bros will try as it doesn’t require much basic research improvement), will not help that much and will likely generate an entirely new breed of hilarious errors and bullshit (I like the term bullshit instead of hallucination, it captures the connotation that the errors are of a kind with the normal output)

Yeah, I have examples of that as well. I asked GPT-4 at work to calculate the volume of a 10 cm long, 0.1 mm diameter wire. It seems to be doing correct arithmetic by some mysterious means that do not use scientific notation, and then the LLM can not actually count, so it miscounts zeroes and outputs a result that is 1000x larger than the correct answer.
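For reference, the arithmetic it botched; it's just a cylinder volume with one unit conversion:

```python
import math

length_cm = 10.0
diameter_mm = 0.1
radius_cm = (diameter_mm / 10.0) / 2.0  # 0.1 mm = 0.01 cm, so radius = 0.005 cm

volume_cm3 = math.pi * radius_cm**2 * length_cm
print(volume_cm3)  # ~7.85e-4 cm^3 (about 0.785 mm^3); the bot's answer was ~1000x that
```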

[-] diz@awful.systems 9 points 1 year ago

GPT-4, supposedly (it says that it is GPT-4). I have access to one that is cleared for somewhat sensitive data, so presumably my queries aren't getting flagged and human-reviewed by OpenAI.

