[-] JFranek@awful.systems 6 points 3 weeks ago

I have some thoughts about this goober (Simon Willison) that I need to get out of my head:

First the positives:

  • I think he's actually an experienced software engineer.
  • I think he cares to check and test the LLM output.

But, by his own admission:

  • He uses LLMs for tasks he knows well (so the output is easier to check, with little negative impact on learning)
  • He works mostly on hobby projects (so no obligation to actually maintain the stuff)
  • He can choose not to use new libraries (a luxury you can't always afford in a professional setting)

Tl;dr: an experienced dev who uses clankers to churn out tons of technically functional hobby software and thinks this gives him the right to speak for all software engineers.

[-] JFranek@awful.systems 6 points 1 month ago

In my days, that was called micromanagement and was generally frowned upon.

[-] JFranek@awful.systems 6 points 1 month ago

"How dare you suggest that we pivoted to SlopTok and smut because of money if something that we totally cannot do right now is more lucrative?"

[-] JFranek@awful.systems 6 points 2 months ago

Slack CEO responded there that it was all a "billing mistake" and that they'll do better in the future and people are having none of it.

A rare orange site W, surprisingly heartwarming.

[-] JFranek@awful.systems 6 points 3 months ago

Was jumpscared on my YouTube recommendations page by a video from AI safety peddler Rob Miles and decided to take a look.

It talked about how it's almost impossible to detect whether a model was deliberately trained to produce some "bad" output (like vulnerable code) for some specific set of inputs.

Pretty mild as cult stuff goes, mostly anthropomorphizing and referring to such an LLM as a "sleeper agent". But maybe some of y'all will find it interesting.

link

[-] JFranek@awful.systems 6 points 3 months ago

It's two guys in London and one guy in San Francisco. In London there's presumably no OpenAI office; in SF, you can't be in two places at once, and Anthropic has more true believers/does more critihype.

Unrelated: a few minutes before writing this, a bona-fide cultist replied to the programming dev post. A cultist with the handle "BussyGyatt @feddit.org". Truly the dumbest timeline.

[-] JFranek@awful.systems 6 points 3 months ago

Yeah, it didn't even cross their mind that it could be wrong, because it looked ok.

[-] JFranek@awful.systems 6 points 3 months ago

That’s how you get a codebase that kinda sorta works in a way but is more evolved than designed, full of security holes, slow as heck, and disorganized to the point where it’s impossible to fix bugs, add features, or understand what’s going on.

Well, one of the ways *glancing at the code I'm responsible for, sweating profusely*

[-] JFranek@awful.systems 6 points 3 months ago* (last edited 3 months ago)

Yay! *pats myself on the back*

[-] JFranek@awful.systems 6 points 3 months ago

Second quote is classic "you must be prompting it wrong". No, it can't be that people who find a tool less useful will use it less often.

[-] JFranek@awful.systems 6 points 4 months ago

I've been recommended more Veo 3 fails by The Algorithm. Apparently even some promptfans think it sucks.

https://youtu.be/3lzMkigMvD8

You WILL believe what happened when they tried to replicate Google's demos using the exact same prompts.

[-] JFranek@awful.systems 6 points 1 year ago

Then there is John Michael Greer...

Wow, that's a name I haven't heard in a long time.

A regular contributor at UnHerd...

I did not know that, and I hate that it doesn't surprise me. I tended to dismiss his peak oil doomerism as wishing for some imagined "harmony with nature". This doesn't help with that bias.

