Excellent news! I have been preaching the good word of Codeberg for months, delighted to see it's working.
If I can get NixOS to move, I will be the happiest gal in the world...
Hold on...
Are you saying all software hosted on GitHub is infected with Copilot? Or am I misreading the situation?
Your confusion is understandable since MS has called like 4 different products “Copilot”. This refers to the coding assistant built into GitHub for everything from CI/CD to coding itself.
All code uploaded to GitHub is subject to being scraped by Copilot to both train and provide inference context to its model(s).
Basically, having your code on GitHub is implicit consent to have your code fed to MS's LLMs.
All code uploaded to GitHub is subject to being scraped
No kidding: that was literally my very first thought back in the day when I learned that M$ had taken over GitHub.
(Copilot did not exist then.)
Mine too. More precisely: code uploaded to GH won't be yours anymore. IIRC there were changes to the TOS that supported this. But even if not, predicting the obvious doesn't make us prophets.
No, it isn't.
"Basically" your vibes aren't an actual answer. Businesses are not forking over millions to give away their code.
You can have conspiracy theories about it using the code anyway (I'm particularly confused by your use of the word "scrape", which tells me you don't know how AI training works, how hosting a website works, or how scraping works; maybe all three?), but surreptitiously using its competitors' code to train Copilot would be a rare existential threat to Microsoft itself.
Does GitHub use Copilot Business or Enterprise data to train GitHub’s model?
No. GitHub does not use either Copilot Business or Enterprise data to train its models.
FAQs are not legally binding. If you want to quote something, quote the privacy policy and the terms of service.
Just to add to what the other commenters said, the quote you highlighted doesn't even say what you think it does.
It says that Copilot data is not used to train the models, not that code uploaded to GitHub isn't used to train the models.
As an aside, your nitpicking of the term "scrape" and rant about how the user you're replying to must be ignorant is cringe, jsyk.
Oh my. The "you are all noobs, I am the only techie here, so I know better" argument is so unnecessary and makes you appear super entitled.
You obviously have no idea how any of this works. OpenAI and Microsoft scrape copyrighted material, which is illegal, to train their models. On top of that, in the US there are plenty of laws that let them circumvent ToS if it helps national security, and we all know Trump will do everything to prop up his economy. So we end up in a situation where the contracts say they will not use the data to train models while they do exactly that, nobody will ever be able to prove it, and the whole US legal system will protect the corporation. So good luck with that "lawsuit".
And that is only if Microsoft played by the rules, which they don't. Which no one does. So they just use the data to train the models, generating billions in value, and wait for a lawsuit where they pay a fine of 100k.
This all comes to the conclusion that you are not just naive and inexperienced, but also an entitled asshole.
If you're gullible enough to believe an FAQ coming from GitHub themselves, then I have bad news for you.
Like Meta and its privacy rules: I bet they do, even if they say they don't.
Lmao desperately trying to justify sunk cost, I see?
You’re right, it’s not scraping, it’s worse. Most AI bots do scrape sites for data, though since MS has direct access to the GH backend, they don’t even need to scrape the data. You’re giving it to them directly.
The issue here is trust. Microsoft, along with every other company invested in the AI race, has proven repeatedly that getting ahead in said race is more important to them than anything else. It's more important than user privacy, ToS, contracts, intellectual property, and the law itself.
If they stand to make more money screwing you over than they stand to lose from a slap on the wrist in court, the choice is clear. And they will lie to your face about it. Profit machines as big as MS don’t care. They can’t. They are optimized for one thing.
Someday when you’re grown up you will realize how cringe your way of communicating is.
I guess it's about Copilot scanning the code, submitting PRs, reporting security issues, doing code reviews and such.
More distros need to follow. No FOSS should have any relationship to Microsoft or their products.
The "because of training" claim is wrong.
Quoting the Gentoo post:
Mostly because of the continuous attempts to force Copilot usage for our repositories,
It seems to be about GitHub pushing Copilot usage, not them training on the data. Moving away doesn't prevent training anyway. And I'm sure someone will host a mirror on GitHub if they don't.
Did this a few months ago. Everyone should do the same.
It's funny that all the pro-AI chuds are suddenly coming out of the woodwork to try and say this is a terrible idea.
Gentoo is still around‽ But Arch exists and eMachines was discontinued like 10 years ago!
I know this is probably sarcastic but honestly Gentoo's great if you don't trust binaries by default. Nothing is an absolute guarantee against compromise, but it's an awful lot harder to compromise a source code repository or a compiler without anyone noticing (especially if you stick to stable versions) than it is to compromise a particular binary of some random software package. I trust most package maintainers, but they're typically overworked volunteers and not all of them are going to have flawless security or be universally trustworthy.
I like building my own binaries from source code whenever possible.
Genuine question from a longtime Linux user who never tried Gentoo: doesn't updating take forever? I used a source build of Firefox for a bit and the build took forever, not to mention the kernel itself.
The long update has the advantage of providing an opportunity to touch grass.
touch grass is literally a one-liner, cmon bro
Gentoo does not always have the latest builds, at least not by default.
Update time depends on your number of packages, your hardware, and your willingness to dedicate that hardware to compiling.
I don't use a DE, just dwm+dmenu, so my biggest packages are Firefox and LibreOffice, which can take 3+ hours with dependencies. KDE or GNOME would most likely add more.
But you can set the number of cores used for compiling in the config (see the sketch below). If your PC is on most of the day, you can set it to 1 or 2 and you most likely won't even notice it's running.
Or, if you have a 16-core CPU, let 14 do the compiling and browse the web with the remaining two.
This all assumes you have enough RAM as well. It's not as bad as it sounds, but you should have at least 32GB.
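For anyone curious, the knob being described is MAKEOPTS in /etc/portage/make.conf; the exact -j values below are just examples, tune them for your own core count and RAM:

# /etc/portage/make.conf
# 16-core box: use 14 cores for compiling, keep 2 free for browsing.
MAKEOPTS="-j14"
# Always-on machine where you barely want to notice builds: MAKEOPTS="-j2"
# Optional: lower Portage's scheduling priority so builds yield to interactive use.
PORTAGE_NICENESS="15"

If memory serves, the Gentoo handbook suggests budgeting roughly 2 GiB of RAM per parallel job, which lines up with the 32GB recommendation above.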
The distro runs smoother than anything else I've ever tried, and I'm not switching away from it.
Gentoo is more Linux than anything. It is literally a penguin. What does Arch have?