this post was submitted on 12 Aug 2024
TechTakes
This came up in a podcast I listen to:
WaPo: "OpenAI illegally barred staff from airing safety risks, whistleblowers say"
archive link https://archive.is/E3M2p
While I'm not prepared to defend OpenAI here, I suspect this is just to shut up the most hysterical employees who still actually believe they're building the P(doom) machine.
I mean, if you play up the doom to hype yourself, dealing with employees who take that seriously feels like a deserved outcome.
Short story: it's smoke and mirrors.
Longer story: this is how software releases work now, I guess. A lot is riding on OpenAI's anticipated release of GPT-5. They have to keep promising enormous leaps in capability because everyone else has caught up and there's no more training data. So the next trick is claiming that their next batch of models has "solved" various problems people say you can't solve with LLMs, and that they'll be massively better without needing more data.
But, as someone with insider info, it's all smoke and mirrors.
The model that "solved" structured data is empirically worse at other tasks as a result, and I imagine the solution basically just looks like polling for multiple responses until the parser validates on the other end (so basically it's a price optimization, afaik).
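The "poll until the parser validates" trick described above can be sketched roughly like this. This is a minimal sketch under stated assumptions, not anyone's actual implementation; `call_model` is a hypothetical stand-in for a real LLM API call:

```python
import json

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call.

    A real implementation would hit a model endpoint and get back
    free-form text that may or may not be valid JSON. Here it just
    returns a canned response so the sketch is runnable.
    """
    return '{"name": "example", "count": 3}'

def structured_query(prompt: str, max_attempts: int = 5) -> dict:
    """Re-sample the model until its output parses as valid JSON."""
    for _ in range(max_attempts):
        raw = call_model(prompt)
        try:
            # The "parser validates on the other end" step: if this
            # succeeds, we accept the sample and stop polling.
            return json.loads(raw)
        except json.JSONDecodeError:
            # Invalid output: throw it away and sample again.
            continue
    raise ValueError("no parseable response within the retry budget")
```

The point of the sketch is that nothing about the model itself changed; you just pay for extra samples until one happens to parse, which is why it reads as a pricing knob rather than a capability.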
The next large model, launching with the new Q* change tomorrow, is "approaching AGI because it can now reliably count letters". But actually it's still just agents (Q* looks to be just a cost optimization of agents on the backend, that's basically it), because the only way it can count letters is by invoking agents and tool use to write a Python program and feed the text into that. Basically, it's all the things that already exist independently, wrapped up together. Interestingly, they're so confident in this model that they don't run the resulting Python themselves. It's still up to you, or one of those LLM wrapper companies, to execute the occasionally broken code to, um... checks notes, count the number of letters in a sentence.
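The division of labor described above looks something like this. Again a hedged sketch: `model_emits_code` is a hypothetical placeholder for the model's response, and the caller-side `exec` is the part the provider leaves to you or the wrapper company:

```python
def model_emits_code(question: str) -> str:
    """Hypothetical: instead of answering directly, the model
    responds with a small Python program for the caller to run."""
    # Canned output for a question like "how many r's in 'strawberry'?"
    return "result = 'strawberry'.count('r')"

def run_tool_call(question: str) -> int:
    """Caller-side execution of model-generated code.

    Note the provider never runs this code themselves; the caller
    executes possibly-broken model output and extracts the answer.
    (Running untrusted generated code via exec is exactly as risky
    as it sounds; a real wrapper would sandbox it.)
    """
    code = model_emits_code(question)
    scope: dict = {}
    exec(code, scope)
    return scope["result"]
```

Counting characters this way works, but every piece of it (code generation, tool execution, result extraction) already existed independently, which is the commenter's point.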
But, by rearranging what already exists and claiming it solved the fundamental issues, OpenAI can claim exponential progress, terrify investors into blowing more money into the ecosystem, and make true believers lose their mind.
Expect more of this around GPT-5, which they promise "is so scary they can't release it until after the elections". My guess? It's nothing different, but they have to create a story so that true believers will see it as something different.
Yeah, I'm in no doubt that the C-level and marketing team are goosing the numbers like crazy to keep the bubble from bursting, but I also think they're the ones most cognizant of the fact that ChatGPT is definitely not the Doom Machine. But I also believe they have employees who they can't fire, because those employees are True Believers and would spread a hell of a lot of doomspeak if they did.
Part of me suspects they probably also aren't the sharpest knives in OpenAI's drawer.
It can be both. Like, probably OpenAI is kind of hoping that this story spreads widely and is taken seriously, and has no problem suggesting, implicitly and explicitly, that their employees' stock is tied to how scared everyone is.
Remember when Altman almost got ousted and people got pressured not to walk? That their options were at risk?
Strange hysteria like this doesn't need just one reason. It just needs an input dependency and ambiguity; the rest takes care of itself.
Well, it's now yesterday's tomorrow and while there's an update I'm not seeing a Q* announcement.
My understanding is that it was renamed or rebranded to Strawberry, which is itself nebulous marketing: maybe it's the new larger model, or maybe it's GPT-5, or maybe...
it's all smoke and mirrors. I think my point is, they made some cost optimizations and mostly moved around things that existed, and they'll keep doing that.
OH
I first saw this, then later saw the "OpenAI employees tweeted 🍓" thing and thought the latter was them being cheeky dipshits about the former. Admittedly I didn't look deeper (because ugh),
but this is even more hilarious and dumb
I'm not seeing a Strawberry announcement either.