users trade off decision quality against effort reduction
They should put that on the species' gravestone.
What if quantum, but magically more achievable at nearly current technology levels? Instead of qubits they have pbits (probabilistic bits, apparently), and this is supposed to help you fit more compute into the same data center.
Also they like to use the word thermodynamic a lot to describe the (proposed) hardware.
In a completely unprecedented turn of events, the word prediction machine has a hard time predicting numbers.
https://www.wired.com/story/google-ai-overviews-says-its-still-2024/
Scoot makes the case that AGI could have murderbot factories up and running in a year if it wanted to: https://old.reddit.com/r/slatestarcodex/comments/1kp3qdh/how_openai_could_build_a_robot_army_in_a_year/
edit: Wrote it up
What is the analysis tool?
The analysis tool is a JavaScript REPL. You can use it just like you would use a REPL. But from here on out, we will call it the analysis tool.
When to use the analysis tool
Use the analysis tool for:
- Complex math problems that require a high level of accuracy and cannot easily be done with "mental math"
- To give you the idea, 4-digit multiplication is within your capabilities, 5-digit multiplication is borderline, and 6-digit multiplication would necessitate using the tool.
uh
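For scale, the entire class of task that blurb is carving out is a one-liner in any JS REPL. A minimal sketch (my own example, not from the quoted prompt; BigInt just keeps the product exact):

```javascript
// 6-digit multiplication, the kind of thing the "analysis tool" blurb says
// is beyond mental math. BigInt keeps the product exact; plain Number would
// also be fine here, but silently loses precision past 2^53.
const a = 123456n;
const b = 654321n;
console.log((a * b).toString()); // 80779853376
```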
The post is using traditional orthodox frankincense-scented machine learning techniques, though; they aren't just asking an LLM.
This is AI from when we were using it to decide if an image is of a dog or a cat, not how to best disenfranchise all creatives.
Should be noted that it's mutual: Hanania has gone to great lengths to suck up to siskind, going back to at least the designer mouth bacteria thing.
The surface claim seems to be the opposite: he says that because of Moore's law AI rates will soon be at least 10x cheaper, and because of Mercury in retrograde this will cause usage to increase muchly. I read that as meaning we should expect to see chatbots pushed into even more places they shouldn't be, even though their capabilities have already stagnated as per observation one.
- The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.
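For what it's worth, putting the two rates in that quote on the same 18-month window (my back-of-the-envelope arithmetic, not from the linked post): 10x every 12 months compounds to about 31.6x over 18 months, versus Moore's 2x over the same window, and the quoted 150x is steeper still.

```javascript
// Comparing the growth rates in the quote over a common 18-month window.
// The 10x-per-12-months and 2x-per-18-months figures are from the quote;
// the compounding is just arithmetic.
const aiFactor18mo = Math.pow(10, 18 / 12); // 10x/year compounded over 18 months ≈ 31.6
const mooreFactor18mo = 2;                  // Moore's law: 2x every 18 months
console.log(aiFactor18mo.toFixed(1), mooreFactor18mo); // "31.6" 2
```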
Maybe EAs should ditch malaria nets and start subsidizing bum ticklers
disclaimer
(not a claim made by linked OP)
And all that stuff just turned out to be true
Literally what stuff, that AI would get somewhat better as technology progresses?
I seem to remember Yud specifically wasn't that impressed with machine learning and thought so-called AGI would come about through ELIZA-type AIs.
OpenAI Declares ‘Code Red’ as Google Threatens AI Lead
I just wanted to point out this tidbit:
Apparently a fortunate side effect of Google supposedly closing the gap is that it's a great opportunity to give up on agents without looking like complete clowns. And also make Pulse even more vapory.