top 4 comments
[-] AVHeiro@awful.systems 22 points 3 months ago

It's gonna turn out to be a Filipino call center, isn't it.

[-] dgerard@awful.systems 17 points 3 months ago

we have finally achieved A Guy in India

[-] YourNetworkIsHaunted@awful.systems 2 points 3 months ago

That's one way to put a sentient being on the other end of the request, I guess.

[-] zbyte64@awful.systems 7 points 3 months ago* (last edited 3 months ago)

That’s OpenAI admitting that o1’s “chain of thought” is faked after the fact. The “chain of thought” does not show any internal processes of the LLM — o1 just returns something that looks a bit like a logical chain of reasoning.

I think it's fake "reasoning" but I don't know if (all of) OpenAI thinks that. They probably think hiding this data prevents CoT training data from being extracted. I just don't know how deep the stupid runs.
