GitHub hits CTRL-Z, decides it will train its AI with user data after all
(www.theregister.com)
The real issue here isn't just about "poisoning" their data. It's that people don't actually know how their contributions get scraped and repurposed.
I'm working on something called The Zeitgeist Experiment that maps public opinion by having people respond to questions via email, then using AI to rank responses and synthesize key ideas. The goal is transparency about how AI processes human input—showing people what actually gets used, not hiding it in some TOS.
GitHub's new policy will make things worse. Users will be even less aware their code is going into models they never agreed to train on. The default should be opt-in, not opt-out after the fact.
You're not working on anything, clanker.
Check this account's comment history and look at the timestamps from around five days ago. It was initially configured to post fully formatted, multi-paragraph comments with 10-30 seconds between each one. Now it's spacing its comments out a bit more, but it's still a bot-controlled account here to push a product, likely this Zeitgeist thing.