[-] HedyL@awful.systems 1 points 32 minutes ago* (last edited 11 minutes ago)

Of course, it has long been known that some private investors will buy shares in any company simply because its name contains buzzwords like “.com” or “blockchain”. However, if a company invests half a million in an “.ai” company, shouldn't it at least make sure that the business model is actually AI-based?

Maybe, if we really wanted to replace something with AI, we should start with the VC investors themselves. In this case, we might not actually see any changes for the worse.

Edit: Of course, investors only bear part of the blame if fraud was involved. But the company apparently received a large part of its funding in 2023, after similar lies had reportedly surfaced as early as 2019. I find it hard to imagine that tech-savvy investors really had no chance to spot the problems earlier.

[-] HedyL@awful.systems 9 points 1 day ago

As all the book authors on the list were apparently real, I guess the "author" of this supplemental insert remembered to google the names and weed out any fake authors the AI had made up, but couldn't be bothered to do the same with the book titles (too much work for too little money, I suppose?). And actually reading these books before putting them on a list was probably too much to ask for...

It's also funny how some people seem to justify this by saying that the article is just “filler material” around ads. I don't know, but I believe most people don't buy printed newspapers in order to read nonsensical “filler material” garnished with advertising. The use of AI is a big problem in this case, but not the only one.

[-] HedyL@awful.systems 17 points 3 weeks ago

Reportedly, some corporate PR departments "successfully" use GenAI to increase the frequency of meaningless LinkedIn posts they push out. Does this count?

[-] HedyL@awful.systems 9 points 3 weeks ago

For me, everything increasingly points to the main “innovation” here being the circumvention of copyright regulations, with possibly very erroneous results. But who cares?

[-] HedyL@awful.systems 17 points 1 month ago

It's also worth noting that your new variation of this “puzzle” may be the first one that describes a real-world use case. This kind of problem is probably being solved all over the world all the time (with boats, cars and many other means of transportation). Many people who don't know any logic puzzles at all would come up with the right answer straight away. Of course, AI also fails at this because it generates its answers from training data, where physical reality doesn't exist.

[-] HedyL@awful.systems 17 points 1 month ago

This is particularly remarkable because - as David pointed out - being a pilot is not even one of those jobs that nobody would want to do. There is probably still an oversupply of suitable people who would pass all the screening tests and genuinely want to become pilots. Some of them would probably even work for a relatively average salary (as many did in the past outside the big airlines). The only problem for the airlines is that they can no longer count on enough people being willing (and able!) to shoulder the high training costs themselves, so they would have to hire somewhat less affluent candidates and pay for their training. AI probably looks a lot more appealing to them...

[-] HedyL@awful.systems 12 points 1 month ago

FWIW, years ago, some people who worked for a political think tank approached me for expert input. They subsequently published a report that cited many of the sources I had mentioned, but their recommendations in the report were exactly the opposite of what the cited sources said (and what I had told them myself). As far as I know, there was no GenAI at the time. I think these people were simply betting that no one would check the sources.

This is not to defend the use of AI, on the contrary - I think this shows quite well what sort of people would use such tools.

[-] HedyL@awful.systems 17 points 2 months ago

It is admittedly only tangential here, but it recently occurred to me that at school, there are usually no demerit points for wrong answers. You can therefore - to some extent - “game” the system by guessing as much as possible. However, my work is related to law and accounting, where wrong answers can - of course - have disastrous consequences. That's why I'm always alarmed when young coworkers confidently use chatbots whenever they can't answer a question themselves. I guess in such moments, they are just treating their job like a school assignment. I can well imagine that this will only get worse in the future, for the reasons described here.

[-] HedyL@awful.systems 25 points 5 months ago

In any case, I think we have to acknowledge that companies are capable of turning a whistleblower's life into hell without ever physically laying a hand on them.

[-] HedyL@awful.systems 21 points 7 months ago

Yes, even some influential people at my employer have started to peddle the idea that only “old-fashioned” people are still using Google, while all the forward-thinking people are prompting an AI. For this reason alone, I think that negative examples like this one deserve a lot more attention.

[-] HedyL@awful.systems 11 points 8 months ago

From the original article:

Crivello told TechCrunch that out of millions of responses, Lindy only Rickrolled customers twice.

Yes, but how many of them received other similarly "useful" answers to their questions?

[-] HedyL@awful.systems 12 points 1 year ago

I vividly remember how, in the days of GameStop, even normally reasonable people on the left bought into that "Sticking it to Wall Street" narrative. Sadly, I'm not surprised at how things turned out.
