[-] HedyL@awful.systems 7 points 2 days ago

It's a bit tangential, but using ChatGPT to write a press release and then being unable to answer any critical questions about it is a little bit like following an app up a mountain in shorts and flip-flops without checking the weather first, and then being unable to climb back down once the inevitable thunderstorm starts.

[-] HedyL@awful.systems 43 points 3 weeks ago

New reality at work: Pretending to use AI while having to clean up after all the people who actually do.

[-] HedyL@awful.systems 21 points 3 weeks ago

Most searchers don’t click on anything else if there’s an AI overview — only 8% click on any other search result. It’s 15% if there isn’t an AI summary.

I can't get over that. An oligopolistic company imposes a source on its users that is very likely either hallucinating or plagiarizing or both, and most people seem to eat it up (out of convenience or naiveté, I assume).

[-] HedyL@awful.systems 27 points 1 month ago

... and just a few paragraphs further down:

The number of people tested in the study was n=16. That’s a small number. But it’s a lot better than the usual AI coding promotion, where n=1 ’cos it’s just one guy saying “I’m so much faster now, trust me bro. No, I didn’t measure it.”

I wouldn't call that "burying information".

[-] HedyL@awful.systems 25 points 1 month ago

Completely unrelated fact, but isn't the prevalence of cocaine use among U.S. adults also estimated at more than 1%?

(Referring to this, of course - especially the last part: https://pivot-to-ai.com/2025/06/05/generative-ai-runs-on-gambling-addiction-just-one-more-prompt-bro/)

[-] HedyL@awful.systems 29 points 1 month ago

Stock markets generally love layoffs, and they appear to love AI at the moment. To be honest, I'm not sure they thought beyond that.

[-] HedyL@awful.systems 23 points 1 month ago

And then we went back to “it’s rarely wrong though.”

I often wonder whether the people who claim that LLMs are "rarely wrong" somehow have access to an entirely different chatbot. The chatbots I tried were rarely correct about anything except the most basic questions (to which the answers could be found all over the internet anyway).

I'm not a programmer myself, but for some reason I got the chatbot to fail even in that area. I took a perfectly valid JSON file, removed one comma on purpose and then asked the chatbot to fix it. The chatbot came up with a number of things that were supposedly "wrong" with the file. Not one word about the missing comma, though.

I wonder how many people either never ask the chatbots any tricky questions (with verifiable answers) or, alternatively, never bother to verify the chatbots' output at all.
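For what it's worth, the broken-file test described above is trivial to reproduce and to verify mechanically. A minimal sketch in Python (the file contents here are my own made-up example): deliberately delete one delimiter from valid JSON, and the standard parser pinpoints the error immediately - exactly the check a chatbot's proposed "fix" fails if it never mentions the missing delimiter.

```python
import json

valid = '{"name": "test", "items": [1, 2, 3]}'
broken = valid.replace(",", "", 1)  # delete the first delimiter on purpose

# Any competent "fix" has to notice the document no longer parses;
# json.loads reports the exact position where the delimiter is missing.
try:
    json.loads(broken)
    parse_ok = True
except json.JSONDecodeError as err:
    parse_ok = False
    print(f"line {err.lineno}, column {err.colno}: {err.msg}")
```

Running this prints a parse error for the broken string while the original still loads fine, so the "tricky question with a verifiable answer" really is verifiable in one line.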

[-] HedyL@awful.systems 56 points 1 month ago

FWIW, I work in a field that is mostly related to law and accounting. Unlike with coding, there are no simple "tests" to check whether an AI's answer is correct. Of course, you could test it in court, but that is not something I would recommend (lol).

In my experience, chatbots such as Copilot are less than useless in a context like ours. For the more complex and unique questions (which make up most of what we deal with every day), they simply make up smart-sounding BS, including a lot of nonexistent laws etc. In the rare cases where a clear answer is already available in the legal commentaries, we want to quote it verbatim from the most reputable source, just to be on the safe side. We don't want an LLM to rephrase it, hide its sources and possibly introduce new errors. We don't need "plausible deniability" regarding plagiarism or anything like that.

Yet we are being pushed to "embrace AI" as well, and we are being told we need to "learn to prompt" etc. This is frustrating. My biggest fear isn't being replaced by an LLM, nor even by some "prompting genius". My biggest fear is being replaced by a person who pretends the AI's output is smart (rather than riddled with potentially hazardous legal errors), because in some workplaces that is apparently what's expected.

[-] HedyL@awful.systems 48 points 2 months ago

As usual with chatbots, I'm not sure whether what bothers me most is the wrongness of the answer itself or the self-confidence with which it is presented. I suspect it is the latter, because that is probably why so many people don't question wrong answers (especially when they're harder to check than a simple calculation).

[-] HedyL@awful.systems 35 points 2 months ago

Similar criticisms have probably been leveled at many other technologies in the past: computers in general, typewriters, pocket calculators etc. It is true that relying on these tools has probably contributed to a decline in skills such as memorization, handwriting or mental arithmetic. However, I believe there is an important difference with chatbots: typewriters (and computers) produce very readable text (much better than most people's handwriting), pocket calculators perform calculations just fine, and information retrieved online from a reputable source is no less correct than a memorized one (probably more so). The same can't be said about chatbots and LLMs. They aren't known to produce accurate or useful output in a reliable way - therefore, many of the skills lost by relying on them might not be replaced with anything better.

[-] HedyL@awful.systems 22 points 2 months ago

This somehow reminds me of a bunch of senior managers in corporate communications on LinkedIn who got all excited over the fact that with GenAI, you can replace the background of an image with something else! Never been seen before, of course! I assume that, in the past, these guys could never be bothered to look into tools as widespread as Canva, where a similar feature has been available for many years (predating the current GenAI hype, I believe, even if it may use some kind of AI technology under the hood - I honestly don't know). Such tools are only for the lowly peasants, I guess - and quite soon, AI is going to replace all the people who know where to click to find a feature like "background remover", anyway!

[-] HedyL@awful.systems 25 points 8 months ago

In any case, I think we have to acknowledge that companies are capable of turning a whistleblower's life into hell without ever physically laying a hand on them.


HedyL

joined 2 years ago