Could be this will be mostly a vehicle for them marketing their AI writing coach.
Maybe wapo is an AI startup now.
Sounds like they should nationalize OpenAI.
Midnight Pals is pretty great.
What is solving the data problem supposed to look like, exactly? A somewhat higher score on their already incredibly suspect benchmarks?
The data part of the whole hyperscaling thing seems predicated on the belief that the map will magically become the territory if only you map hard enough.
In a completely unprecedented turn of events, the word-prediction machine has a hard time predicting numbers.
https://www.wired.com/story/google-ai-overviews-says-its-still-2024/
I wonder how the inevitable presidential pardon will be handled if it comes before the movie is completed.
So many low-hanging fruits. Unbelievable fruits. You wouldn’t believe how low they’re hanging.
In every RAG guide I've seen, the suggested system prompts always tended to include some more dignified variation of "Please for the love of god only and exclusively use the contents of the retrieved text to answer the user's question, I am literally on my knees begging you."
Also, if reddit is any indication, a lot of people actually think that's all it takes and that the hallucination stuff is just people using LLMs wrong. I mean, it would be insane to pour so much money into something so obviously fundamentally flawed, right?
I'm not spending the additional 34 minutes apparently required to find out what in the world they think neural network training actually is, such that it could ever possibly involve strategy on the part of the network, but I'm willing to bet it's extremely dumb.
I'm almost certain I've seen EY catch shit on twitter (from actual ML researchers, no less) for insinuating something very similar.
There's an actual explanation in the original article about some of the wardrobe choices. It's even dumber, and it involves effective altruism.
It is a very cold home. It’s early March, and within 20 minutes of being here the tips of some of my fingers have turned white. This, they explain, is part of living their values: as effective altruists, they give everything they can spare to charity (their charities). “Any pointless indulgence, like heating the house in the winter, we try to avoid if we can find other solutions,” says Malcolm. This explains Simone’s clothing: her normal winterwear is cheap, high-quality snowsuits she buys online from Russia, but she can’t fit into them now, so she’s currently dressing in the clothes pregnant women wore in a time before central heating: a drawstring-necked chemise on top of warm underlayers, a thick black apron, and a modified corset she found on Etsy. She assures me she is not a tradwife. “I’m not dressing trad now because we’re into trad, because before I was dressing like a Russian Bond villain. We do what’s practical.”
This was such a chore to read; it's basically quirk-washing TREACLES. It's like a major publication deciding to take an uncritical look at Scientology, focusing on the positive vibes and the camaraderie, while smack in the middle of Operation Snow White, which in fact I bet happened a lot at the time.
The doomer scene may or may not be a delusional bubble—we’ll find out in a few years
Fuck off.
The doomers are aware that some of their beliefs sound weird, but mere weirdness, to a rationalist, is neither here nor there. MacAskill, the Oxford philosopher, encourages his followers to be “moral weirdos,” people who may be spurned by their contemporaries but vindicated by future historians. Many of the A.I. doomers I met described themselves, neutrally or positively, as “weirdos,” “nerds,” or “weird nerds.” Some of them, true to form, have tried to reduce their own weirdness to an equation. “You have a set amount of ‘weirdness points,’ ” a canonical post advises. “Spend them wisely.”
The weirdness is eugenics and the repugnant conclusion, and abusing Bayes' rule to sidestep context and take epistemological shortcuts to cuckoo conclusions while fortifying a bubble of accepted truths that are strangely amenable to letting rich people do whatever the hell they want.
Writing a 7,000-8,000-word insider exposé on TREACLES without mentioning eugenics even once should be all but impossible, yet here we are.
Kind of a nitpick, but there has never been anything other than AI for automated transcription: OCR and speech recognition have been fundamental use cases for neural networks all along, and handing dev kits to deaf kids is honestly kind of an honest mistake, well within the known limitations of that technology.
LLM-based audio transcription, however, does get goofy, because apparently when it mishears stuff it might compound the error by, you guessed it, making more shit up: "Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said"