Bruh. This is the moment I go full on Frank Grimes.
OAI announced their shiny new toy: DeepResearch (still waiting on DeeperSeek). A bot built off O3 which can crawl the web and synthesize information into expert level reports!
Noam is coming after you @dgerard, but don't worry he thinks it's fine. I'm sure his new bot is a reliable replacement for a decentralized repository of all human knowledge freely accessible to all. I'm sure this new system doesn't fail in any embarrassing wa-
After posting multiple examples of the model failing to understand which player is on which team (if only this information was on some sort of Internet Encyclopedia, alas), Professional AI bully Colin continues: "I assume that in order to cure all disease, it will be necessary to discover and keep track of previously unknown facts about the world. The discovery of these facts might be a little bit analogous to NBA players getting traded from team to team, or aging into new roles. OpenAI's "Deep Research" agent thinks that Harrison Barnes (who is no longer on the Sacramento Kings) is the Kings' best choice to guard LeBron James because he guarded LeBron in the finals ten years ago. It's not well-equipped to reason about a changing world... But if it can't even deal with these super well-behaved easy facts when they change over time, you want me to believe that it can keep track of the state of the system of facts which makes up our collective knowledge about how to cure all diseases?"
xcancel link if anyone wants to see some more glorious failure cases:
https://xcancel.com/colin_fraser/status/1886506507157585978#m
Neo-Nazi nutcase having a normal one.
It's so great that this isn't falsifiable in the sense that doomers can keep saying, well "once the model is epsilon smarter, then you'll be sorry!", but back in the real world: the model has been downloaded 10 million times at this point. Somehow, the diamondoid bacteria have not killed us all yet. So yes, we have found out the Yud was wrong. The basilisk is haunting my enemies, and she never misses.
Bonus sneer: "we are going to find out if Yud was right" Hey fuckhead, he suggested nuking data centers to prevent models better than GPT4 from spreading. R1 is better than GPT4, and it doesn't require a data center to run, so if we had acted on Yud's geopolitical plans for nuclear holocaust, billions would have been incinerated for absolutely NO REASON. How do you not look at this shit and go, yeah maybe don't listen to this bozo? I've been wrong before, but god damn, dawg, I've never been starvingInRadioactiveCratersWrong.
Actual message I got while renewing my insurance plan last night. Thank you for adding a shitty chat bot which will give me false information about my life and death decisions, bravo.
Smh, why do I feel like I understand the theology of their dumb cult better than its own adherents? If you believe that one day AI will foom into a 10 trillion IQ super being, then it makes no difference at all whether your ai safety researcher has 200 IQ or spends their days eating rocks like the average LW user.
Yann and co. just dropped Llama 3.1. Now there's an open source model on par with OAI and Anthropic, so who the hell is going to pay these nutjobs for access to their APIs when people can get roughly the same quality for free without the risk of having to give your data to a 3rd party?
These chuckle fucks are cooked.
my honest reaction:
Edit: Judit Polgár for ref if anyone wants to learn about one of the greatest of all time. Her dad claimed he was doing a nature/nurture experiment in order to prove that anyone could be great if they were trained to master a skill from a young age, so he taught his 3 daughters chess. Judit achieved the rank of number 8 in the world OVERALL and beat multiple world champions, including Kasparov, over her career.
idk it's almost like if more girls were encouraged to play chess and felt welcome in the community these apparent skill differences might disappear
How many rounds of training does it take before AlphaGo realizes the optimal strategy is to simply eat its opponent?
Found in the wilds^
Giganto brain AI safety 'scientist'
If AIs are conscious right now, we are monsters. Nobody wants to think they're monsters. Ergo: AIs are definitely not conscious.
Internet rando:
If furniture is conscious right now, we are monsters. Nobody wants to think they're monsters. Ergo: Furniture is definitely not conscious.
David, please I was trying to have a nice day.
It's true. ChatGPT is slightly sentient in the same way a field of wheat is slightly pasta.
Deep thinker asks why?
Thus spoketh the Yud: "The weird part is that DOGE is happening 0.5-2 years before the point where you actually could get an AGI cluster to go in and judge every molecule of government. Out of all the American generations, why is this happening now, that bare bit too early?"
Yud, you sweet naive smol uwu baby~~esian~~ boi, how gullible do you have to be to believe that a) t-minus 6 months to AGI kek (do people track these dog shit predictions?) b) the purpose of DOGE is just accountability and definitely not the weaponized manifestation of techno oligarchy ripping apart our society for the copper wiring in the walls?