[-] diz@awful.systems 12 points 1 month ago* (last edited 1 month ago)

This is what peak altruism looks like: being a lazy fuck with a cult, and incidentally helping hype up investment in the very unfriendly AI you're supposed to be saving the world from. All while being too lazy to learn anything about any actual AI technologies.

In all seriousness, all of his stuff is just extreme narcissism. Altruism is good, therefore he's the most altruistic person in the world. Smart is good, therefore he's the mostest smartest person. Their whole cult can be derived entirely from such self-serving axioms.

[-] diz@awful.systems 13 points 2 months ago* (last edited 2 months ago)

In the case of code, what I find most infuriating is that they didn't even need to plagiarize. Much open source code is licensed permissively enough, requiring only attribution.

Anthropic plagiarizes it when they prompt their tool to claim that it wrote the code from some sort of general knowledge, that it just learned from all the implementations, blah blah blah, to make their tool look more impressive.

I don't need that; in fact, it would be vastly superior to just "steal" from one particularly good implementation with a compatible license you can simply comply with. (And better yet to avoid copying the code at all and find a library if possible.) Why in the fuck even do copyright laundering on code that is under MIT or a similar license? The authors literally tell you that you can just use it.

[-] diz@awful.systems 13 points 4 months ago

It's called sarcasm.

[-] diz@awful.systems 11 points 4 months ago

Having worked in computer graphics myself, I can say it's spot on that this shit is uncontrollable.

I think the reason is fundamental: if you could control it more, you would push it too far from any of the training samples.

That being said, video enhancement, along the lines of applying this as a filter to 3D-rendered CGI or other video, could (to some extent) work. I think the perception of realism will fade as the look becomes more familiar; it is pretty bad at lighting, but in a new way.

[-] diz@awful.systems 12 points 4 months ago

Thing is, it has tool integration. Half of the time it uses Python to do the calculation. If it uses a tool, that means it writes a string that isn't shown to the user, which runs the tool, and the tool's results are appended to the stream.

What is curious is that, instead of the request for precision causing it to use the tool (or just any request to do math, for that matter), and the presence of the tool tokens then causing it to claim that a tool was used, the request for precision causes it to claim that a tool was used, directly.

Also, all of this is highly unnatural text, so it is coming either from fine-tuning or from training data contamination.
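
For the unfamiliar, here's a minimal sketch of what such a tool loop looks like. The plain-text TOOL_CALL protocol and the generate() stub are made up for illustration; real deployments use structured tool-call messages and an actual model:

```python
def generate(transcript: str) -> str:
    """Stand-in for the model: requests the tool once, then answers."""
    if "TOOL_RESULT:" not in transcript:
        return "TOOL_CALL python: (2 ** 0.5) * 10 ** 10"
    return "Using the Python tool, the precise value is shown above."

def run(user_prompt: str) -> str:
    transcript = user_prompt
    while True:
        out = generate(transcript)
        if out.startswith("TOOL_CALL python:"):
            expr = out[len("TOOL_CALL python:"):].strip()
            # The user never sees this exchange: the call and its
            # result are simply appended to the model's input stream.
            result = eval(expr, {"__builtins__": {}})  # toy only; never eval untrusted input
            transcript += f"\n{out}\nTOOL_RESULT: {result}\n"
        else:
            return out

print(run("What is sqrt(2) times ten billion, precisely?"))
```

The point is that the tool call and its result are actual tokens in the context, so "did the math with a tool" and "claims it did the math with a tool" can come apart exactly as described above.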

[-] diz@awful.systems 12 points 4 months ago

That's why I say "sack of shit" and not, say, "bastard".

[-] diz@awful.systems 12 points 4 months ago

Maybe he didn't read Dune; he just had AI summarize it.

[-] diz@awful.systems 13 points 4 months ago* (last edited 4 months ago)

Yeah, any time it's regurgitating an IMO problem it's proof it's almost superhuman, but any time it actually faces a puzzle with an unknown answer, "this is not what it is for."

[-] diz@awful.systems 12 points 5 months ago* (last edited 5 months ago)

I think it could work as a minor gimmick, like the terminal hacking minigame in Fallout. You have to convince the LLM to tell you the password, or you get to talk to a demented robot whose brain was fried by radiation exposure, or the like. Relatively inconsequential stuff, like being able to talk your way through or just shoot your way through.

Unfortunately, this shit is too slow and too huge to embed a local copy of in a game. You'd need to support a lot of different hardware. And running it in the cloud would cost too much.

[-] diz@awful.systems 13 points 5 months ago* (last edited 5 months ago)

It is as if there were people fantasizing about automaton mouths and lips and tongues and vocal cords, for some reason, coming up with all these fantasies of how it'll be when automatons can talk.

And then Edison invents the phonograph.

And then they stick their you know what in the gearing between the cylinder and the screw.

Except somehow more stupid, because these guys are worried about the AI apocalypse while boosting the AI hype that pays for this supposed apocalypse.

edit: If someone had said in the 1850s, "automatons won't be able to talk for another 150 years or longer because the vocal tract is too intricate," and some automaton fetishist claimed they'd be able to talk in 20 years, the phonograph shouldn't lend any credence whatsoever to the latter. What is different this time is that the phonograph was genuinely, extremely useful for what it was, while generative AI is not quite as useful, and they're going for the automaton fetishist money.

[-] diz@awful.systems 12 points 5 months ago* (last edited 5 months ago)

> When confronted with a problem like "your search engine imagined a case and cited it", the next step is to wonder what else it might be making up, not to just quickly slap a bit of tape over the obvious immediate problem and declare everything to be great.

Exactly. Even if you ensure the cited cases or articles are real, it will misrepresent what said articles say.

Fundamentally, it is just blah-blah-blah-ing until the point comes when a citation would be likely to appear, and then it blah-blah-blahs the citation based on the preceding text that it just made up. It plain should not be producing real citations. That it can produce real citations at all is deeply at odds with its pretense of reasoning, for example.

Ensuring the citation is real, RAG-ing the articles in there, having the AI rewrite drafts: none of these hacks does anything to address any of the underlying problems.

[-] diz@awful.systems 11 points 7 months ago* (last edited 7 months ago)

Yeah, exactly. There's no trick to it at all, unlike the original puzzle.

I also tested OpenAI's offerings a few months back with similarly nonsensical results: https://awful.systems/post/1769506

The all-vegetables, no-duck variant is solved correctly now, but I doubt it is due to improved reasoning as such; I think they may have augmented the training data with some variants of the river crossing. The river crossing is one of the best-known puzzles, and various people have been posting hilarious bot failures with variants of it. So it wouldn't be unexpected for their training data augmentation to include river crossing variants.

Of course, there are very many ways in which the puzzle can be modified, and their augmentation would only cover obvious stuff, like varying which items can be left with which items, or the number of spots on the boat.

