My company is strongly pushing AI. There are a lot of experiments, demos, and effort from decently smart people aimed at integrating it into our workflows. There have been some impressive wins, with AI tooling producing some things fast. I am not in denial about this. And the SE department is tracking improved productivity (as measured by # of tickets being done, I guess?)
The problem is I hate AI. I hate every fucking thing about it. Its primary purpose, regardless of what utility is gained, is spam. I think it's obvious how google search results are spam, how spam songs and videos are being produced, etc. But even bad results from AI that have to be discarded, IMO, are spam.
And that isn't even getting into the massive amounts of theft that went into the training data, or the immense amounts of electricity it takes to train, run inference on, and operate all this crap. Nor the psychosis being inflicted on people who place their trust in these systems. Nor the fact that these tools are being used to empower authoritarian regimes to track vulnerable populations, both here (in the USA) and abroad. And all this AI shit serves to enrich the worst tech moguls and to displace people like artists and people like myself, a programmer.
I'm literally being told at my job that I should view myself basically as an AI babysitter, and that AI has been unambiguously proven in the industry, so the time for wondering about it, experimenting with it, or opposing it is over. The only fault and flaw is my (i.e. any given SE's) unwillingness to adapt and onboard.
Looking for advice from people who have had to navigate similar crap. Because I feel like I'm at a point where I must adapt or eventually get fired.
AI is pretty bad at most things you do that are actually valuable, so your critique definitely holds. It's bad for the environment, drives tech consolidation, and all round creates about as many problems as it claims to solve.
AI in the broader sense of neural networks is genuinely good at narrow tasks such as playing chess and detecting melanomas, but I'm going to give some tips specifically for LLMs.
Treat it as a dumb intern. You can ask it to find research papers, but you have to read them yourself to actually assess them. You can use it to draft an email, but you still have to proofread it. You can use it to write code, but expect bugs and unhandled edge cases.
I'm a software developer, and I use an LLM to create code generators and internal tooling (e.g. a thing that takes a JSON file and outputs SQL insert statements), or to look up docs. The AI has not increased my productivity per se, but the tooling I created with it has.
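To give a feel for the kind of tooling I mean, here's a minimal sketch of a JSON-to-SQL-insert generator. The table name, field handling, and quoting rules are all assumptions for illustration (real tools should use parameterized queries, not string building):

```python
import json

def json_to_inserts(table, records):
    """Turn a list of JSON objects into SQL INSERT statements.

    Illustrative only: quotes strings naively and is NOT safe
    against SQL injection. Use parameterized queries in practice.
    """
    statements = []
    for rec in records:
        cols = ", ".join(rec.keys())
        vals = ", ".join(
            "NULL" if v is None
            # numbers pass through unquoted; everything else gets quoted,
            # with single quotes doubled per SQL convention
            else str(v) if isinstance(v, (int, float)) and not isinstance(v, bool)
            else "'" + str(v).replace("'", "''") + "'"
            for v in rec.values()
        )
        statements.append(f"INSERT INTO {table} ({cols}) VALUES ({vals});")
    return statements

records = json.loads('[{"id": 1, "name": "Ada"}, {"id": 2, "name": "O\'Hara"}]')
for stmt in json_to_inserts("users", records):
    print(stmt)
```

The point isn't that this code is clever; it's that the LLM can bang out a throwaway utility like this in a minute, and the utility then saves you time repeatedly.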
Another use case is to ask for critique, you paste some code block in and ask it to review performance for example and it can spot the "use a hash map there" cases pretty easily.
That's my 2 cents on the topic.