[-] HedyL@awful.systems 14 points 6 days ago* (last edited 6 days ago)

Maybe it's also considered sabotage if people (like me) prompt the AI with 5 to 10 different questions they are knowledgeable about, get wrong (but smart-sounding) answers every time (despite clearly worded prompts), and then refuse to keep trying. I guess we're expected to try and try again with different questions until one correct answer comes out, and then use that one to "evangelize" about the virtues of AI.

[-] HedyL@awful.systems 36 points 6 days ago

Refusing to use AI tools or output. Sabotage!

Definitely guilty of this. Refused to use AI-generated output when it was clearly hallucinated BS from start to finish (repeatedly!).

I work in the field of law/accounting/compliance, btw.

[-] HedyL@awful.systems 6 points 6 days ago

I believe that promptfondlers and boosters are particularly good at "kissing up", which may help their careers even during an AI winter. This is something we have to be prepared for, sadly. However, some of those people could still be in for a rude awakening if someone actually pays attention to the quality and usefulness of their work.

[-] HedyL@awful.systems 5 points 6 days ago

By the way, I know there is an argument that "low-skilled" jobs should not be eliminated because there are supposedly people who are unable to perform more demanding and varied tasks. But I believe this is partly a myth born of the industrial revolution, when a very large number of people were needed to do such jobs. In addition, this doesn't even address the fact that many of these jobs require some type of specific skill anyway (which isn't rewarded appropriately, though).

The best example to this day is immigrants who have to do "low-skilled" jobs even though they possess academic degrees from their home countries. In such cases, I believe that automation could even lead to the creation of more jobs that match their true skill levels.

Another problem is that, especially in countries like the US, low-wage jobs are used as a substitute for a reasonable social safety net.

AI (especially large language models) is, of course, a separate issue, because it is claimed that AI could replace highly skilled and creative workers, which, on the one hand, is used as a constant threat and, on the other hand, is not even remotely true according to current experience.

[-] HedyL@awful.systems 7 points 6 days ago

In my experience, the large self-service kiosks at McDonald's are pretty decent (unless they crash, which happens too often). Many people (including myself) use them voluntarily, because it is nice to have more control over your order and more visual information about it (prices, product images, nutritional information, allergens etc.). You don't even need to wait in line anymore if the staff brings your order directly to your table. You don't need to use any tricks to speak to a human either, because you can always go to the counter and order there instead. However, this only works because the kiosks are customer-friendly enough that you don't have to force most people to use them.

I know that even those kiosks probably aren't great in the sense that they may replace some jobs, at least over the short term. However, if customers truly like something, this might still lead to more demand and thus more jobs in other areas (people who carry your order to your table, people who prepare the food itself, people who code those apps — unless they are truly "vibe-coded" — people who maintain the kiosks, design their content, etc.).

However, the current "breed" of AI bots is a far cry from even that, in my impression. They are really primarily used as a threat to "uppity" labor, and who cares about the customers?

[-] HedyL@awful.systems 43 points 1 month ago

New reality at work: Pretending to use AI while having to clean up after all the people who actually do.

[-] HedyL@awful.systems 27 points 1 month ago

... and just a few paragraphs further down:

The number of people tested in the study was n=16. That’s a small number. But it’s a lot better than the usual AI coding promotion, where n=1 ’cos it’s just one guy saying “I’m so much faster now, trust me bro. No, I didn’t measure it.”

I wouldn't call that "burying information".

[-] HedyL@awful.systems 25 points 2 months ago

Completely unrelated fact, but isn't the prevalence of cocaine use among U.S. adults considered to be more than 1% as well?

(Referring to this, of course - especially the last part: https://pivot-to-ai.com/2025/06/05/generative-ai-runs-on-gambling-addiction-just-one-more-prompt-bro/)

[-] HedyL@awful.systems 29 points 2 months ago

Stock markets generally love layoffs, and they appear to love AI at the moment. To be honest, I'm not sure they thought beyond that.

[-] HedyL@awful.systems 56 points 2 months ago

FWIW, I work in a field that is mostly related to law and accounting. Unlike with coding, there are no simple "tests" to try out whether an AI's answer is correct or not. Of course, you could try these out in court, but this is not something I would recommend (lol).

In my experience, chatbots such as Copilot are less than useless in a context like ours. For more complex and unique questions (which is most of the questions we are dealing with every day), they simply make up smart-sounding BS (including a lot of nonexistent laws etc.). In the rare cases where a clear answer is already available in the legal commentaries, we want to quote it verbatim from the most reputable source, just to be on the safe side. We don't want an LLM to rephrase it, hide its sources and possibly introduce new errors. We don't need "plausible deniability" regarding plagiarism or anything like that.

Yet, we are being pushed to "embrace AI" as well, and we are being told we need to "learn to prompt" etc. This is frustrating. My biggest fear isn't being replaced by an LLM, nor by someone who is a "prompting genius" or whatever. My biggest fear is being replaced by a person who pretends that the AI's output is smart (rather than filled with potentially hazardous legal errors), because in some workplaces, this is what's expected, apparently.

[-] HedyL@awful.systems 48 points 2 months ago

As usual with chatbots, I'm not sure whether it is the wrongness of the answer itself that bothers me most or the self-confidence with which said answer is presented. I think it is the latter, because I suspect that is why so many people don't question wrong answers (especially when they're harder to check than a simple calculation).

[-] HedyL@awful.systems 35 points 2 months ago

Similar criticisms have probably been leveled at many other technologies in the past, such as computers in general, typewriters, pocket calculators etc. It is true that the use of these tools has probably contributed to a decline in skills such as memorization, handwriting or mental calculation. However, I believe there is an important difference with chatbots: typewriters (or computers) usually produce very readable text (much better than most people's handwriting), pocket calculators perform calculations just fine, and information retrieved online from a reputable source isn't any less correct than information that had been memorized (probably more so). The same can't be said about chatbots and LLMs. They aren't known to produce accurate or useful output in a reliable way, so many of the skills lost by relying on them might not be replaced with something better.

