Microsoft says Visual Studio is going to incorporate coding 'agents', possibly as soon as the next minor version. I can't really see them buying up car factories or beating Pokémon, but 'agent' as an AI marketing term is definitely part of the current hype cycle.
That IQ after a certain level somehow turns into mana points is a core rationalist assumption about how intelligence works.
Nice to know even pre-LLM AI techniques remain eminently fuckupable if you just put your mind to it.
Didn't mean to imply otherwise, just wanted to point out that the call is coming from inside the house.
He claims he was explaining what others believe, not what he believes.
"Others" as in, specifically, his AI2027 co-writer Daniel Kokotajlo, the actual ex-OpenAI researcher.
I'm pretty annoyed at having this clip spammed to several different subreddits, with the most inflammatory possible title, out of context, where the context is me saying "I disagree that this is a likely timescale but I'm going to try to explain Daniel's position" immediately before. The reason I feel able to explain Daniel's position is that I argued with him about it for ~2 hours until I finally had to admit it wasn't completely insane and I couldn't find further holes in it.
Pay no attention to this thing we just spent two hours exhaustively discussing that I totally wasn't into, it's not really relevant context.
Also, the title is inflammatory only if you already know him to be a ridiculous AI doomer; otherwise it's fine. Inflammatory would be titling the video "economically illiterate bald person thinks valuations force-buy car factories, and that China having biomedicine research is like Elon running SpaceX".
(Are there multiple AI Nobel Prize winners who are AI doomers?)
There's Geoffrey Hinton, I guess, even if his 2024 Nobel in (somehow) Physics seemed like a transparent attempt at trend-chasing on the part of the Nobel committee.
Also, add "obvious and overdetermined" to the pile of Siskindisms, next to "very non-provably not-correct".
Scoot makes the case that AGI could have murderbot factories up and running in a year if it wanted to: https://old.reddit.com/r/slatestarcodex/comments/1kp3qdh/how_openai_could_build_a_robot_army_in_a_year/
edit: Wrote it up
What is the analysis tool?
The analysis tool is a JavaScript REPL. You can use it just like you would use a REPL. But from here on out, we will call it the analysis tool.
When to use the analysis tool
Use the analysis tool for:
- Complex math problems that require a high level of accuracy and cannot easily be done with "mental math"
- To give you the idea, 4-digit multiplication is within your capabilities, 5-digit multiplication is borderline, and 6-digit multiplication would necessitate using the tool.
uh
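For what it's worth, the kind of call that prompt is gesturing at really is a one-liner in the REPL it describes; a minimal sketch with made-up operands (BigInt just to rule out any float-precision excuses):

```javascript
// Hypothetical "analysis tool" usage: paste the arithmetic into the
// JavaScript REPL instead of attempting it as "mental math".
const a = 123456n; // made-up 6-digit operands
const b = 654321n;
console.log((a * b).toString()); // 80779853376, exact
```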
In every RAG guide I've seen, the suggested system prompts tend to include some more dignified variation of "Please, for the love of god, only and exclusively use the contents of the retrieved text to answer the user's question, I am literally on my knees begging you."
Also, if reddit is any indication, a lot of people actually think that's all it takes and that the hallucination stuff is just people using LLMs wrong. I mean, it would be insane to pour so much money into something so obviously fundamentally flawed, right?
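For the curious, the pattern those guides suggest looks roughly like this; a generic sketch with made-up wording and variable names, not any particular library's template:

```javascript
// Generic sketch of the "only use the retrieved text" RAG system prompt;
// the retrieved passages would come from a vector search in a real setup.
const retrievedChunks = [
  "[1] ...first retrieved passage...",
  "[2] ...second retrieved passage...",
];

const systemPrompt = [
  "Answer the user's question using ONLY the retrieved passages below.",
  "If the passages do not contain the answer, say you don't know.",
  "Do not use outside knowledge and do not make anything up.",
  "",
  "Retrieved passages:",
  ...retrievedChunks,
].join("\n");
```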
excluding wages! (and probably also benefits, retirement, a cap on working hours per day, etc.)
Is that whole thing in the comments, about unions being bad because monopolies are bad and unions are just monopolies of labor, the latest in bootlicking theory? Hadn't really heard this take before.