Ignoring the lack of direction following, everything looks so plastic. Probably video game cutscene training showing through.
From the perspective of the company I work for (not a tech company, but one with a pretty large development center), they truly believe that AI will 10x productivity. Not so much the FOOM stuff. Just typical Capitalism.
They would like to describe what they want and receive Whatever — some beige sludge that vaguely resembles it. That isn’t programming, though. That’s management, a fairly different job. I’m not interested in managing. I’m certainly not interested in managing this bizarre polite lying daydream machine.
This is where I am right now. They are pushing AI hard at work, even shaming people who haven't signed up for Copilot. They brought in some MS rep to tell us how the future of work was going to be wrangling AI agents. This is not the future that I want.
I'm reminded of that Folding Ideas video where he writes a book for some get rich quick book mill scheme. I bet that stuff is all AI now.
They're going to try to write COBOL with AI. Let's see how that works out.
Surprised this hasn't been mentioned yet: https://www.rollingstone.com/culture/culture-news/meta-ai-users-facebook-instagram-1235221430/
Facebook and Instagram to add AI users. I'm sure that's what everyone has been begging for...
Damn, HP doesn't mess around. I'm going to stop trashing them around the office.
Wait, is this how Those People claim that Copilot actually “improved their productivity”? They just don’t fucking read what the machine output?
Yes, that's exactly what it is. That and boilerplate, but it probably makes all kinds of errors that they don't notice, because the build didn't fail.
Skimmed the paper, but I don't see the part where the game engine was being played. They trained an "agent" to play Doom using ViZDoom, and trained the diffusion model on the agent's "trajectories". But I didn't see anything about giving the agent the output of the diffusion model for its gameplay, or the diffusion model reacting to input.
It seems like it was able to generate the Doom video based on a given trajectory, on the assumption that the trajectory could stand in for real-time human input? That's the best I can come up with. And the experiment was just some people watching video clips, which doesn't track with the claims at all.
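To make the gap concrete, here's a minimal sketch of the two-stage setup as I read it. All names are hypothetical stand-ins, not the paper's or ViZDoom's actual API; random choices replace the real RL agent and a plain list replaces the diffusion model:

```python
import random

def agent_play_episode(n_steps):
    """Stage 1: an RL agent plays the real game; we only keep its
    trajectory of (action, state) pairs. Stand-in: random actions,
    a counter for game state."""
    trajectory = []
    state = 0
    for _ in range(n_steps):
        action = random.choice(["left", "right", "shoot"])
        state += 1  # stand-in for the real engine's state update
        trajectory.append((action, state))
    return trajectory

def train_diffusion_model(trajectories):
    """Stage 2: the model is fit on logged (action, frame) pairs.
    It only ever sees the agent's recorded actions -- nothing here
    feeds model output back into the agent's play."""
    dataset = [pair for traj in trajectories for pair in traj]
    return dataset  # stand-in for a trained model

trajectories = [agent_play_episode(10) for _ in range(3)]
model = train_diffusion_model(trajectories)
print(len(model))  # 30 logged (action, state) pairs

# Note what's missing: a loop where live input drives the model and
# the model's frames drive the player -- the closed loop the claims
# seem to imply but the described experiments don't test.
```

The point of the sketch is the structure, not the internals: stages 1 and 2 are connected only through the logged data.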
It turns out AI really is going to try to wipe out the humans. Just, you know, indirectly, through human activity trying to make it happen.
it cannot handle subtraction going negative
late, but reminds me of this: https://www.youtube.com/watch?v=wB1X4o-MV6o