Not really possible in an environment where the most useless person you know keeps telling everyone how AI made him twelve point eight times more productive, especially when within earshot of management.
A programmer automating his job is kind of his job, though. That's not so much the problem as the complete enshittification of software engineering that the culture surrounding these dubiously efficient and super sketchy tools seems to herald.
On the more practical side, enterprise subscriptions to the slop machines do come with assurances that your company's IP (meaning code and whatever else is accessible from your IDE that your copilot instance can and will ingest) and your prompts won't be used for training.
Hilariously, GitHub Copilot now has an option to prevent it from being too obvious about stealing other people's code, called the duplication detection filter:
If you choose to block suggestions matching public code, GitHub Copilot checks code suggestions with their surrounding code of about 150 characters against public code on GitHub. If there is a match, or a near match, the suggestion is not shown to you.
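For the curious: GitHub doesn't publish the actual matching algorithm, so everything below is a guess at the shape of it, not the real thing. A minimal sketch of a ~150-character window check against an index of public code (exact hashes only; a real "near match" would need fuzzier fingerprinting than this):

```python
import hashlib
import re

def normalize(code: str) -> str:
    # Collapse whitespace so trivial reformatting doesn't dodge the check.
    return re.sub(r"\s+", " ", code).strip()

def window_hashes(code: str, window: int = 150) -> set[str]:
    # Hash every `window`-character slice of the normalized text.
    text = normalize(code)
    return {
        hashlib.sha1(text[i:i + window].encode()).hexdigest()
        for i in range(max(1, len(text) - window + 1))
    }

def blocked(suggestion: str, surrounding: str, public_index: set[str]) -> bool:
    # Per the docs, the suggestion is checked together with its surrounding
    # code; any window that also appears in public code kills the suggestion.
    return bool(window_hashes(surrounding + suggestion) & public_index)
```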
Liuson told managers that AI “should be part of your holistic reflections on an individual’s performance and impact.”
who talks like this
Good parallel, the hands are definitely strategically hidden to not look terrible.
Like, assuming we could reach a sci-fi vision of AGI just as capable as a human being, the primary business case here is literally selling (or rather, licensing out) digital slaves.
Big deal, we'll just configure a few to be in a constant state of unparalleled bliss to cancel out the ones having a hard time of it.
Although I'd guess human-level problem solving needn't imply a human-analogous subjective experience in a way that would make suffering and angst meaningful for them.
Ed Zitron summarizes his premium post in the Better Offline subreddit: Why Did Microsoft Invest In OpenAI?
Summary of the summary: they fully expected OpenAI would've gone bust by now and MS would be looting the corpse for all it's worth.
So many low-hanging fruits. Unbelievable fruits. You wouldn’t believe how low they’re hanging.
In every RAG guide I've seen, the suggested system prompts always tended to include some more dignified variation of "Please for the love of god only and exclusively use the contents of the retrieved text to answer the user's question, I am literally on my knees begging you."
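For anyone who hasn't read one of these guides, the pattern they push is roughly the sketch below; the prompt wording and function names are invented, but gluing retrieved chunks into a system message and asking nicely really is the entire mechanism:

```python
def build_rag_prompt(question: str, retrieved_chunks: list[str]) -> list[dict]:
    # All the "begging" lives in the system message; nothing enforces it,
    # and the model remains free to ignore the context and confabulate.
    context = "\n\n".join(retrieved_chunks)
    return [
        {
            "role": "system",
            "content": "Answer ONLY from the context below. If the answer "
                       "is not in the context, say you don't know. Do not "
                       "use outside knowledge.\n\nContext:\n" + context,
        },
        {"role": "user", "content": question},
    ]
```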
Also, if reddit is any indication, a lot of people actually think that's all it takes and that the hallucination stuff is just people using LLMs wrong. I mean, it would be insane to pour so much money into something so obviously fundamentally flawed, right?
I'm not spending the additional 34 minutes apparently required to find out what in the world they think neural network training actually is, such that it could ever possibly involve strategy on the part of the network, but I'm willing to bet it's extremely dumb.
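For reference, training in its entirety is an optimizer doing arithmetic on the weights from the outside. A toy SGD loop like this one (made-up numbers, bare numpy) already contains every ingredient, and there is no step in it where the network gets consulted, let alone gets to strategize:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)             # the entire "network": three weights
x, y = rng.normal(size=3), 1.0     # one training example

for _ in range(100):
    pred = w @ x                   # forward pass
    grad = 2 * (pred - y) * x      # gradient of squared error w.r.t. w
    w -= 0.01 * grad               # the optimizer overwrites the weights;
                                   # nothing here asks the network anything
```

Scaling this up to a transformer changes the arithmetic, not the relationship.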
I'm almost certain I've seen EY catch shit on twitter (from actual ml researchers no less) for insinuating something very similar.
There's an actual explanation in the original article about some of the wardrobe choices. It's even dumber, and it involves effective altruism.
It is a very cold home. It’s early March, and within 20 minutes of being here the tips of some of my fingers have turned white. This, they explain, is part of living their values: as effective altruists, they give everything they can spare to charity (their charities). “Any pointless indulgence, like heating the house in the winter, we try to avoid if we can find other solutions,” says Malcolm. This explains Simone’s clothing: her normal winterwear is cheap, high-quality snowsuits she buys online from Russia, but she can’t fit into them now, so she’s currently dressing in the clothes pregnant women wore in a time before central heating: a drawstring-necked chemise on top of warm underlayers, a thick black apron, and a modified corset she found on Etsy. She assures me she is not a tradwife. “I’m not dressing trad now because we’re into trad, because before I was dressing like a Russian Bond villain. We do what’s practical.”
This was such a chore to read; it's basically quirk-washing TREACLES. This is like a major publication deciding to take an uncritical look at Scientology, focusing on the positive vibes and the camaraderie, while smack in the middle of Operation Snow White, which in fact I bet happened a lot at the time.
The doomer scene may or may not be a delusional bubble—we’ll find out in a few years
Fuck off.
The doomers are aware that some of their beliefs sound weird, but mere weirdness, to a rationalist, is neither here nor there. MacAskill, the Oxford philosopher, encourages his followers to be “moral weirdos,” people who may be spurned by their contemporaries but vindicated by future historians. Many of the A.I. doomers I met described themselves, neutrally or positively, as “weirdos,” “nerds,” or “weird nerds.” Some of them, true to form, have tried to reduce their own weirdness to an equation. “You have a set amount of ‘weirdness points,’ ” a canonical post advises. “Spend them wisely.”
The weirdness is eugenics and the repugnant conclusion, and abusing Bayes' rule to sidestep context and take epistemological shortcuts to cuckoo conclusions while fortifying a bubble of accepted truths that are strangely amenable to allowing rich people to do whatever the hell they want.
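To spell out the Bayes part: the rule itself is trivially true,

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

but nothing in it audits where the numbers come from. If you're allowed to eyeball P(H) and P(E|H) for propositions like "AI kills everyone", the posterior is whatever you walked in wanting it to be, now wearing a math costume.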
Writing a 7,000-8,000-word insider exposé on TREACLES without mentioning eugenics even once should be all but impossible, yet here we are.
I posted this article in the general chat at work the other day and one person became really defensive of ChatGPT, and now I keep wondering what stage of being groomed by AI they're currently at and whether it's reversible.