My company is strongly pushing AI. There are a lot of experiments, demos, and effort from decently smart people on integrating it into our workflows. There have been some impressive wins, with AI tooling producing some things fast; I'm not in denial about this. And the SE department is tracking improved productivity (as measured by the number of tickets closed, I guess?).
The problem is I hate AI. I hate every fucking thing about it. Its primary purpose, regardless of whatever utility is gained, is spam. I think it's obvious how Google search results are spam, how spam songs and videos are being churned out, etc. But even bad results from AI that have to be discarded are, IMO, spam.
And that isn't even getting into the massive amounts of theft that went into the training data, or the immense amounts of electricity it takes to train and run all this crap. Nor the psychosis being inflicted on people who place their trust in these systems. Nor the fact that these tools are being used to empower authoritarian regimes to track vulnerable populations, both here (in the USA) and abroad. And all this AI shit serves to enrich the worst tech moguls and to displace people like artists, and people like me, a programmer.
I'm literally being told at my job that I should view myself basically as an AI babysitter, and that AI has been unambiguously proven out in the industry, so the time for wondering about it, experimenting with it, or opposing it is over. The only remaining fault or flaw is my (i.e., any given SE's) unwillingness to adapt and get on board.
Looking for advice from people who have had to navigate similar crap. Because I feel like I'm at a point where I must adapt or eventually get fired.
If you don't mind me asking, what do you do, and what kind of AI? Maybe it's the autism, but I find LLMs a bit limited and useless; other use cases aren't quite as bad.

Training AI for image recognition is a legitimately great use of it and extremely helpful, and it's already being used for such cases. I just installed a vision system on a few of my manufacturing lines. A bottling operation detects cap presence, as well as cross-threaded or un-torqued caps, based on how the angle and distance between the neck and the cap bottom look as each bottle passes the camera. Checking 10,000 bottles a day as they scroll past would be a mind-numbing task for a human.

The other line makes Fresnel lenses. Operators make the lenses and personally check each one for defects and power. Using a known background and training the AI on what distortion a good lens should produce is showing good progress, screening just as well as my operators. In this case it's doing what the human eye can't: determining magnification and diffraction visually.
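If anyone's curious, here's a rough sketch of the geometry half of the cap check in Python/OpenCV. To be clear, the ROI coordinates, thresholds, and tolerance are made-up illustrative values, and our actual system uses a trained model rather than hand-rolled edge detection; this just shows the "compare the cap-bottom angle to the neck angle" idea.

```python
# Minimal sketch of a cap-tilt check, assuming a fixed camera and known
# regions of interest (ROIs) for the cap bottom and bottle neck.
# All coordinates and thresholds below are hypothetical.
import cv2
import numpy as np

CAP_ROI = (slice(100, 160), slice(200, 440))   # hypothetical y/x window on cap bottom
NECK_ROI = (slice(170, 230), slice(200, 440))  # hypothetical y/x window on neck

def edge_angle(gray_roi):
    """Angle (degrees) of the dominant near-horizontal edge in the ROI."""
    edges = cv2.Canny(gray_roi, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=80, maxLineGap=10)
    if lines is None:
        return None
    # Take the longest detected line segment as the dominant edge.
    x1, y1, x2, y2 = max(lines[:, 0],
                         key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    return np.degrees(np.arctan2(y2 - y1, x2 - x1))

def check_cap(frame_bgr, max_tilt_deg=2.0):
    """Pass/reject a single frame: no cap, or cap edge tilted vs. the neck."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    cap_angle = edge_angle(gray[CAP_ROI])
    neck_angle = edge_angle(gray[NECK_ROI])
    if cap_angle is None:
        return "reject: no cap detected"
    if neck_angle is not None and abs(cap_angle - neck_angle) > max_tilt_deg:
        return "reject: cross-threaded or tilted cap"
    return "pass"
```

In practice you'd feed this frames from the line camera and trip the reject gate on anything that doesn't return "pass"; the trained model earns its keep on lighting variation and weird edge cases that fixed thresholds like these choke on.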
The AI in this case is, for all intents and purposes, Copilot writing all the code. It's beginning to be promoted as the first resort rather than a supplement.
I don't know much about Copilot, as work has made it optional and mostly for accessibility-related tasks: digging through the mass of extended Microsoft files in Teams, Outlook, and OneDrive to find and summarize topics; recording meeting notes (not that those are overly helpful compared to human-taken notes, due to a lack of context); and normalizing data, since every Power BI report is formatted however its owner saw fit.
Given its ability to make ridiculous errors confidently, I don't suppose it has the memory to be used more like a toddler helper? Small, frequent tasks that are pretty hard to fuck up, and once it can reliably do those through repetition and guidance on what counts as a passing result, tying more of them together?