[-] rook@awful.systems 18 points 1 month ago

KeePassXC (my password manager of choice) is “experimenting” with ai code assistants 🫩

https://www.reddit.com/r/KeePass/comments/1lnvw6q/comment/n0jg8ae/

I'm a KeePassXC maintainer. The Copilot PRs are a test drive to speed up the development process. For now, it's just a playground and most of the PRs are simple fixes for existing issues with very limited reach. None of the PRs are merged without being reviewed, tested, and, if necessary, amended by a human developer. This is how it is now and how it will continue to be should we choose to go on with this. We prefer to be transparent about the use of AI, so we chose to go the PR route. We could have also done it locally and nobody would ever know. That's probably how most projects work these days. We might publish a blog article soon with some more details.

The trace of petulance in the response (“we could have done it secretly, that’s how most projects do it”) is not the kind of attitude I’m happy to see attached to a security-critical piece of software.

[-] rook@awful.systems 23 points 1 month ago

Mmm. There’s certainly nothing else about any of the people or projects involved that’s likely to be a source of fuss, either.

No sir, nothing but apolitical dramaless software development as far as the eye can see.

[-] rook@awful.systems 22 points 2 months ago* (last edited 2 months ago)

Bluesky going to bat for that poor, downtrodden, victimised and underrepresented demographic, uh, ai slop posters?

https://bsky.app/profile/carrion.bsky.social/post/3m2kf3rottc2h

alt text: A screenshot of an email sent to a bluesky user, reading

Hi there, Your Bluesky account (@carrion.bsky.social) has created a list called "AI Slop Posters" that may violate our Community Guidelines. We've temporarily hidden this list from other users because it contains one or more of these issues.

  • Harmful language such as insults or slurs
  • Unverified claims
  • Appears intended to shame or abuse users

[-] rook@awful.systems 20 points 2 months ago

I was always faintly baffled by Ladybird… why, in this day and age, would you start a complex new project using a complex and deeply un-memory-safe language when you could just… not? I’m guessing Kling is one of those rockstar devs who is certain that they never make mistakes.

Swift is a surprisingly OK language. It’s just a shame it’s hitched to Apple, who seem to have real problems making dev tooling that isn’t awful. Maybe in a few years the cross-platform experience won’t suck.

[-] rook@awful.systems 20 points 2 months ago* (last edited 2 months ago)

In today’s torment nexus development news… you know how various cyberpunky type games let you hack into an enemy’s augmentations and blow them up? Perhaps you thought this was stupid and unrealistic, and you’d be right.

Maybe that’s the wrong example. How about a cursed evil ring that, once you put it on, you can’t take off and that wracks you with pain? Who hasn’t wanted one of those?

Happily, hard-working torment nexus engineers have brought that dream one step closer with “smart rings” powered by lithium polymer batteries. Y’know, the things that can go bad, and swell up and catch fire? And that you shouldn’t puncture, because that’s a fire risk too, meaning cutting the ring off is somewhat dangerous? Fun times abound!

https://bsky.app/profile/emily.gorcen.ski/post/3m25263bs3c2g

image description: A pair of tweets, containing the text

Daniel aka ZONEofTECH on x.com: “Ahhh…this is…not good. My Samsung Galaxy Ring’s battery started swelling. While it’s on my finger 😬. And while I’m about to board a flight 😬 Now I cannot take it off and this thing hurts. Any quick suggestions

Update:

  • I was denied boarding due to this (been travelling for ~47h straight so this is really nice 🙃). Need to pay for a hotel for the night now and get back home tomorrow👌
  • was sent to the hospital, as an emergency
  • ring got removed

You can see the battery all swollen. Won’t be wearing a smart ring ever again.

[-] rook@awful.systems 24 points 6 months ago

I might be the only person here who thinks the upcoming quantum bubble has the potential to deliver useful things (but boring useful things, and so harder to build hype on), but stuff like this particularly irritates me:

https://quantumai.google/

Quantum fucking ai? Motherfucker:

  • You don’t have ai, you have a chatbot
  • You don’t have a quantum computer, you have a tech demo for a single chip
  • Even if you had both of those things, you wouldn’t have “quantum ai”
  • If you have a very specialist and probably wallet-vaporisingly expensive quantum computer, why the hell would anyone want to glue an idiot chatbot to it, instead of putting it in the hands of competent experts who could actually do useful stuff with it?

Best-case scenario here is that this is how one department of Google gets money out of the other bits of Google, because the internal bean counters cannot control their fiscal sphincters when someone says “ai” to them.

[-] rook@awful.systems 19 points 6 months ago

LLMs wouldn’t be profitable even if they never had to pay a penny in license fees. The providers are losing money on every query, and can only be sustained by a firehose of VC money. They’re all hoping for a miracle.

[-] rook@awful.systems 20 points 6 months ago* (last edited 6 months ago)

Did you know there’s a new fork of xorg, called x11libre? I didn’t! I guess not everyone is happy with wayland, so this seems like a reasonable…

It's explicitly free of any "DEI" or similar discriminatory policies.. [snip]

Together we'll make X great again!

Oh dear. Project members are of course being entirely normal about the whole thing.

Metux, one of the founding contributors, is Enrico Weigelt, who holds such reasonable opinions as “everyone except the nazis were the real nazis in WW2”, and who also went on an anti-vax (and possibly eugenicist) rant on the Linux kernel mailing list, as you do.

I’m sure it’ll be fine though. He’s a great coder.

(links were unashamedly pillaged from this mastodon thread: https://nondeterministic.computer/@mjg59/114664107545048173)

[-] rook@awful.systems 19 points 7 months ago

Today’s man-made and entirely comprehensible horror comes from SAP.

(two rainbow stickers labelled “pride@sap”, with one saying “I support equality by embracing responsible ai” and the other saying “I advocate for inclusion through ai”)

Don’t have any other sources or confirmation yet, so it might be a load of cobblers, but it is depressingly plausible. From here: https://catcatnya.com/@ada/114508096636757148

[-] rook@awful.systems 23 points 7 months ago* (last edited 7 months ago)

From linkedin, not normally known as a source of anti-ai takes, so that’s a nice change. I found it via bluesky, so I can’t say anything about its provenance:

We keep hearing that AI will soon replace software engineers, but we're forgetting that it can already replace existing jobs... and one in particular.

The average Founder CEO.

Before you walk away in disbelief, look at what LLMs are already capable of doing today:

  • They use eloquence as a surrogate for knowledge, and most people, including seasoned investors, fall for it.
  • They regurgitate material they read somewhere online without really understanding its meaning.
  • They fabricate numbers that have no ground in reality, but sound aligned with the overall narrative they're trying to sell you.
  • They are heavily influenced by the last conversations they had.
  • They contradict themselves, pretending they aren't.
  • They politely apologize for their mistakes, but don't take any real steps to fix the underlying problem that caused them in the first place.
  • They tend to forget what they told you last week, or even one hour ago, and do it in a way that makes you doubt your own recall of events.
  • They are victims of the Dunning–Kruger effect, and they believe they know a lot more about the job of people interacting with them than they actually do.
  • They can make pretty slides in high volumes.
  • They're very good at consuming resources, but not as good at turning a profit.

[-] rook@awful.systems 20 points 1 year ago* (last edited 1 year ago)

It’s a long read, but a good one (though not a nice one).

  • learn about how all the people who actually make decisions in c++ world are complete assholes!
  • liking go (the programming language) correlated with brain damage!
  • in c++ world, it is ok to throw an arbitrary number of highly competent non-bros out of the window in order to keep a bro on board, even if said bro drugged and raped a minor!
  • the c++ module system is like a gunshot wound to the ass!
  • c++ leadership is delusional about memory safety!
  • even more assholes!

Someone on mastodon (can’t remember who right now) joked that they were expecting the c++ committee to publicly support trump, in the hope that he would retract the US government’s memory safety requirements. I can now believe that they might have considered that, and are probably hoping he’ll come down in their favour now that he’s coming in.

[-] rook@awful.systems 19 points 1 year ago* (last edited 1 year ago)

Do any “ai” companies have a business plan more sophisticated than:

  1. steal everything on the web
  2. buy masses of compute with vc money
  3. become too important to be busted for mass copyright infringement
  4. ?
  5. profit

I don’t recall seeing any signs of creativity, or even any good ideas about what their product is actually for, so I wouldn’t hold my breath waiting for one of the current crop to manifest some now.

Perhaps I missed something, though?
