Some of the comments on this topic remind me a bit of the days when people insisted that Google could only ever be the “good guy” because Google had been sued by big publishing companies in the past (and the big publishers didn't look particularly good in some of those cases). Now, conversely, some people seem to assume that Disney must always be the sole “bad guy”, no matter what the other side does or whom else it has harmed besides Disney.
I guess the main question here is: Would their business model remain profitable after paying licensing fees to Disney and possibly to a lot of other copyright holders?
From what I've heard, it's often also the people tasked with, among other things, ghostwriting LinkedIn posts for the C-suite (while not necessarily being highly paid or high in the pecking order themselves).
In the past, people had to put in a degree of genuine criminal effort to become halfway convincing scammers. Today, a certain amount of laziness is enough. I'm really glad that in at least one place there are now serious consequences for this.
This somehow reminds me of a bunch of senior managers in corporate communications on LinkedIn who got all excited over the fact that with GenAI you can replace the background of an image with something else! That had never been seen before, of course! I assume that, in the past, these people could never be bothered to look into tools as widespread as Canva, which has offered a similar feature for many years (since before the current GenAI hype, I believe, even if the feature may use some kind of AI technology - I honestly don't know). Such tools are only for the lowly peasants, I guess - and quite soon, AI is going to replace all the people who know where to click to find a feature like "background remover" anyway!
Reportedly, some corporate PR departments "successfully" use GenAI to increase the frequency of meaningless LinkedIn posts they push out. Does this count?
It's also worth noting that your new variation of this “puzzle” may be the first one that describes a real-world use case. Problems like this are probably being solved all over the world all the time (with boats, cars, and many other means of transportation). Many people who don't know any logic puzzles at all would come up with the right answer straight away. Of course, AI still fails at it, because it generates its answers from training data, in which physical reality doesn't exist.
This is particularly remarkable because - as David pointed out - being a pilot is not even one of those jobs that nobody wants to do. There is probably still an oversupply of suitable people who would pass all the screening tests and genuinely want to become pilots. Some of them would probably even work for a fairly average salary (as many did in the past outside the big airlines). The only problem for the airlines is probably that they can no longer count on enough people being willing (and able!) to shoulder the high training costs themselves. Therefore, airlines would have to hire somewhat less affluent candidates and pay for all of their training. However, AI probably looks a lot more appealing to them...
It is admittedly only tangential here, but it recently occurred to me that at school there are usually no demerit points for wrong answers. You can therefore - to some extent - “game” the system by guessing as much as possible. My work, however, involves law and accounting, where wrong answers can - of course - have disastrous consequences. That's why I'm always alarmed when young coworkers confidently turn to chatbots whenever they can't answer a question on their own. I guess in those moments they are treating their job like a school assignment. I can well imagine that this will only get worse in the future, for the reasons described here.
In any case, I think we have to acknowledge that companies are capable of turning a whistleblower's life into hell without ever physically laying a hand on them.
Yes, even some influential people at my employer have started to peddle the idea that only “old-fashioned” people are still using Google, while all the forward-thinking people are prompting an AI. For this reason alone, I think that negative examples like this one deserve a lot more attention.
It has also often been pointed out that toxic people (from school bullies and domestic abusers to cult leaders and dictators) appear to operate from similar playbooks. Of course, this has been reflected in many published works (both fiction and non-fiction) and can also be observed in real time on social media, in online forums, etc. I therefore don't find it surprising when a well-trained LLM "picks up" similar strategies (which, besides energy consumption, is another reason why I avoid using chatbots "just for fun", by the way).
Of course, "love bombing" is a key tool employed by most abusers, and chatbots appear to be particularly good at doing this, as you pointed out (by telling people what they want to hear, mirroring their thoughts back to them etc.).