OpenAI has removed all the other models!
"Do not, my friends, become addicted to gpt 3. It will take hold of you, and you will resent its absence"
(Also typo).
Open Anus Impossibly? My friend, this is nothing new, that site has been around for at least 25 years
Different fruit, same result: https://lemmy.world/comment/18685606
@fubarx @dgerard in case you wondered how to spell blueberry with three b's, and why...
https://mastodon.social/@kjhealy/114990301650917094
Sam Altman is touting GPT-5 as a “Ph.D level expert.” You might expect a Ph.D could count.
So let’s try the very first question: how many R’s are there in the word strawberry? GPT-5 can do the specific word “strawberry.” Cool.
But I suspect they hard-coded that question, because it fails hard on other words: [ChatGPT]
(Seriously, this is extremely fucking basic stuff, how the fuck can you be so utterly shallow and creatively sterile to fuck this u- oh, yeah, I forgot OpenAI is full of promptfondlers and Business Idiots like Sam Altman.)
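For the record, the task the "Ph.D level expert" keeps flubbing is a one-liner in any programming language. A minimal sketch (the function name and example words are mine, not from any of the screenshots):

```python
# Deterministic letter counting -- the thing a token-predictor can't
# reliably do, because it sees tokens rather than characters.
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a single letter in a word."""
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # 3
print(count_letter("blueberry", "b"))   # 2, no matter what the chatbot says
```

The point being: hard-coding the famous "strawberry" answer while failing on every other word is exactly what you'd expect from patching benchmarks instead of fixing the underlying limitation.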
A while ago, I uploaded a .json file to a chatbot (MS Copilot, I believe). It was a perfectly fine .json, with just one semicolon removed (by me). The chatbot was unable to identify the problem. Instead, it claimed to have found various other "errors" in the file. Would be interesting to know if other models (such as GPT-5) would perform any better here, as to me (as a layperson) this sounds somewhat similar to the letter counting problem.
I've tested Gemini on this stuff. Sometimes it spots the syntax error and even suggests a more elegant rewrite. Sometimes it just completely shits itself.
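It's the same failure class: syntax validation is a solved, deterministic problem that any standard parser nails on the first try, no guessing required. A sketch of what the chatbot should have done (assuming a stray or missing delimiter, since strict JSON doesn't use semicolons; the helper name is mine):

```python
import json

def find_json_error(text: str):
    """Return (line, column, message) for the first syntax error,
    or None if the document parses cleanly."""
    try:
        json.loads(text)
        return None
    except json.JSONDecodeError as e:
        return (e.lineno, e.colno, e.msg)

broken = '{"a": 1 "b": 2}'  # missing comma between members
print(find_json_error(broken))       # pinpoints the exact position
print(find_json_error('{"a": 1}'))   # None: valid JSON
```

A real parser reports the precise line and column of the first error instead of hallucinating a list of problems that aren't there.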
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community