Oh god, that fall was really big. Trump's second term? The fast erosion of our rights and democracy? You might have a concussion. We're just in a second boring term of Joe Biden, with the usual liberal ineffectiveness. Don't you remember all the MAGAts crying on television and Facebook, many asking how to become homosexual for some weird reason?
Are we sure that OpenAI didn't use a quantum computer that's accessing facts from an alternate timeline (those lucky bastards)?
That's actually a hilarious idea. A true oracle device but it looks at a different universe.
The knowledge cutoff for GPT-5 is 2024, just so you know. Obviously, it would be better if it didn't hallucinate a response to fill in its own blanks. But it's software, so if you're going to use it, then please use it like software and not like it's magic.
In general I'm not too moved either way when somebody misuses AI and then posts gobsmacked about how bad it is. Really though, the blame is on AI companies for trying to push AI onto everyone rather than only to domain experts.
That's funny though because I know Copilot can google things and talk about them.
Like, a news story can appear that day, and you go "Did you hear about the guy that did X and Y?" and Copilot will google it and be like "Oh yeah you're referring to the news story that came out today about the guy that did X and Y. It was reported in Newspaper that Z was also involved" and then send you a link to the article.
So like... GPT-5 should be able to supplement its knowledge with basic searching; it just doesn't.
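For what it's worth, the search-then-answer pattern the comment describes is conceptually simple. Here's a minimal sketch (all names and the `search`/`llm` callables are made up for illustration, not any real API): fetch fresh results first, then have the model answer from those results instead of its stale training data.

```python
# Hypothetical sketch of search-augmented answering. The search() and llm()
# callables are illustrative stand-ins, not a real vendor API.

def answer_with_search(question: str, search, llm) -> str:
    """Ground the model's answer in current search results."""
    results = search(question)            # e.g. top snippets from a web search
    context = "\n".join(results)
    prompt = (
        "Answer using ONLY the sources below. If they don't cover it, say so.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return llm(prompt)

# Toy stand-ins so the sketch runs end to end:
fake_search = lambda q: ["Snippet: the event in question happened in January 2025."]
fake_llm = lambda p: "Per the sources: " + p.split("Snippet: ")[1].split("\n")[0]

print(answer_with_search("When did it happen?", fake_search, fake_llm))
```

The point of the pattern is that the model's parametric knowledge (frozen at the cutoff) stops being the source of truth; the retrieved snippets are.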
This is the fundamental problem with LLMs and all the hype.
People with technology experience can understand the limitations of the tech, and will be more skeptical of the output from them.
But your average person?
If they go to Google and ask if vaccines cause autism, and Google's AI search slop trough contains an answer they like, accurate or not, there will be exactly no second guessing. I mean, this is supposed to be a PhD-level person, and it was right about the other softball questions they asked, like what color the sky is. Surely it's right about this too, right?
Yeah. The average person just doesn't have a good intuition about AI, at least not yet. Maybe in a few years people will be burned by it and they'll start to grok its limits, but idk. I still blame the AI companies here.
start to grok its limits
Teehee
that was unintentional
If the knowledge cutoff for GPT-5 is 2024, it should absolutely not be commenting on current-day events and claiming accuracy.
This is not the defence you think it is. It still shows ChatGPT in an accurate and very negative light.
It seems as though you read the first sentence I wrote and not any of the sentences afterward.
Indeed I did. Especially the parts where you made excuses for it, saying...
it's software, so if you're going to use it then please use it like software and not like it's magic.
Nobody claimed it was magic. They gave it a very reasonable prompt that a grade 1 child could answer, and it failed. And this...
In general I'm not too moved either way when somebody misuses AI and then posts gobsmacked about how bad it is.
Again, you're claiming the prompt is misuse, "tHEyRe uSInG iT wRonG". Going on to say it's "the AI companies' fault really" for pushing it to everyone instead of just domain experts is again not getting the point. The AI should never respond with a confident answer to a prompt it has no idea about. That's nothing to do with the user or the targeted audience, that's just shit programming.
The AI should never respond with a confident answer to a prompt it has no idea about.
Agreed. But the technology isn't there yet. It's not shit programming, because the theory of how to solve this problem doesn't even exist yet. I mean, there are some attempts, but nobody has a good solution yet. It's like complaining that cars can't go 500 miles per hour when the technology limits them to 200 mph or so, and blaming this on bad car design when it's actually the user's expectation that's the problem. The user has been misled by the way things are presented by AI companies, so ultimately it's the AI companies' fault for overmarketing their product.
(Fuck cars btw).
They gave it a very reasonable prompt that a grade 1 child could answer, and it failed.
LLMs don't work like grade 1 children. The real problem is that AIs are being marketed in such a way that people expect them to be at least as good at anything a grade 1 child can do. But AIs are not humans. They can do some things better than any human, yet on other tasks they can be outperformed by a kindergartner. This is just how the technology is.
Blame expectations, blame marketing, fuck AI in general, but you've been totally misled if you're expecting it to be able to, say, count the number of letters in a word or break a kanji into components when all it sees are tokens: not letters, not characters.
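To make the tokens-not-letters point concrete, here's a toy illustration. The vocabulary and token splits below are completely made up (real tokenizers like OpenAI's BPE have vocabularies of tens of thousands of pieces), but the mechanism is the same: the model's input is a list of opaque IDs, so "how many r's are in strawberry?" is not something it can read off its input.

```python
# Toy greedy longest-match tokenizer. The vocabulary is invented for
# illustration; it is NOT any real model's tokenizer.

fake_vocab = {"str": 101, "aw": 102, "berry": 103}

def toy_tokenize(word: str) -> list[int]:
    """Split a word into token IDs by greedy longest match."""
    ids = []
    while word:
        for piece in sorted(fake_vocab, key=len, reverse=True):
            if word.startswith(piece):
                ids.append(fake_vocab[piece])
                word = word[len(piece):]
                break
        else:
            raise ValueError("no matching token piece")
    return ids

print(toy_tokenize("strawberry"))  # [101, 102, 103] -- three IDs, zero letters
```

By the time the text reaches the model, "strawberry" is just `[101, 102, 103]`; counting the letters inside those IDs would require the model to have memorized each piece's spelling, which is exactly the kind of task it fumbles.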
My best friend, in our late teens, once emphatically claimed that Eric Clapton wrote “I Shot The Sheriff” and that Bob Marley effectively stole the song from him. This was before the internet as we know it, so fact-checking took effort. He and I argued about this off and on for weeks. Until I wound up in a used record store and happened upon the Clapton album that had “I Shot The Sheriff”. Right there, plain as day, it stated “written by R. Marley.” So I bought the LP, even though I did not own a record player at the time, just so I could put it in front of his face and show him.
His reaction? “Well, I’ve seen a Cream album where it says he wrote it.” CLAPTON WASN’T WITH CREAM WHEN HE PUT OUT HIS COVER!11!
Similarly, my brother-in-law as a kid was quite assured that Elton John’s hit song was actually “Panty and the Jets” and refused to believe otherwise for years.
Both are pretty right-leaning guys these days and so maybe “confidently wrong” is just something that comes with a certain political persuasion? ChatGPT is just made in its makers’ image.
For those just joining us: The problem isn't that it doesn't know. The problem is that it confidently asserts a falsehood.
And yet there are still people in the thread claiming that "oh, ChatGPT's knowledge cuts off at the end of 2024, this prompt is using ChatGPT wrong", completely missing your point.
If ChatGPT doesn't know something it just lies about it, all while being passed off as doctorate-level intelligence.
Inb4 defenders say "an AI can't lie, it just asserts falsehoods as truth because it's having a scary dream/hallucination", as if semantics will save the day.
...And that people take the bait and anthropomorphize it, believing it is "reasoning" and "thinking".
It seems like people want to believe it because it makes the world more exciting and sci-fi for them. Even people who don't find GPT personally useful get carried away when talking about the geopolitical race to develop AGI first.
And I sort of understand why, because the alternative (and I think the real explanation) is so depressing: namely, that we are wasting all this money, energy, and attention on fool's gold.
If these things could not be induced to confidently lie, they would not be the target of billions of dollars of investments.
I wanna know how it breaks it down day by day. Is it gonna list every single day from the starting point?! That'd be funny.
OP is wrong and it's good that he admits it. Joe Biden is the actual president in 2025, and he will soon meet Steven Seagal the new Russian president about the war on Belgium.
GPT-5 probably has access to the real poll data from before Musk helped to "confirm" it. The cheezeit himself said that Musk knows the voting machines better than anyone.
(this is the same mentality that Trumpers used when Trump completely lost the election in 2020, and I will be damned if it doesn't feel right)
It's doing precisely what it's intended to do: telling you what it thinks you want to hear.
Bingo.
LLMs are increasingly getting a sycophancy bias, though that only applies here if you give them anything to go on in the chat history.
It makes benchmarks look better. Which are all gamed now anyway, but they're kinda all anyone has to go on.
Wait…
Take me with you
Wonder what else it’s wrong about, if it misses something this obvious. It seems to me ChatGPT is getting worse with every update.
While AI reasoning models will continue to improve, the data sets they pull from will continually get worse.
It's not "reasoning" anything, though. LLMs have no logic beyond what word is most likely to follow the previous. It's getting worse exactly because it's NOT reasoning.
Fuck Reddit and Fuck Spez.
Can't even pass the concussion protocol. Hey, maybe Altman meant it's as good as a PhD with CTE.
Ha, even text prediction thinks Trump is a loser.
Honestly, this is why you shouldn't trust statistical predictions of human behavior anyway.
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"