[-] arcine@jlai.lu 71 points 5 days ago

Excellent news! I have been preaching the good word of Codeberg for months; delighted to see it's working.

If I can get NixOS to move, I will be the happiest gal in the world...

[-] Lost_My_Mind@lemmy.world 96 points 5 days ago

Hold on ....

Are you saying all software hosted on github is infected with copilot? Or am I misreading the situation?

[-] renegadespork@lemmy.jelliefrontier.net 183 points 5 days ago* (last edited 5 days ago)

Your confusion is understandable since MS has called like 4 different products “Copilot”. This refers to the coding assistant built into GitHub for everything from CI/CD to coding itself.

All code uploaded to GitHub is subject to being scraped by Copilot to both train and provide inference context to its model(s).

Basically, having your code on GitHub is implicit consent to have it fed to MS's LLMs.

[-] Zwuzelmaus@feddit.org 64 points 5 days ago* (last edited 5 days ago)

All code uploaded to GitHub is subject to being scraped

No kidding: that was literally my very first thought back in the day, when I learned that M$ had taken over GitHub.

(Copilot did not exist then)

[-] A_norny_mousse@piefed.zip 13 points 5 days ago

Mine too. More precisely: code uploaded to GH won't be yours anymore. IIRC there were changes to the TOS that supported this. But even if not, predicting the obvious doesn't make us prophets.

[-] TheOctonaut@mander.xyz 11 points 5 days ago

No, it isn't.

"Basically" your vibes aren't an actual answer. Businesses are not forking over millions to give away their code.

You can have conspiracy theories about it using the code anyway (I'm particularly confused about your use of the word "scrape", which tells me you don't know how AI training works, how hosting a website works, or how scraping works - maybe all three?) but surreptitiously using its competitors' code to train Copilot would be a rare existential threat to Microsoft itself.

Does GitHub use Copilot Business or Enterprise data to train GitHub’s model?

No. GitHub does not use either Copilot Business or Enterprise data to train its models.

https://github.com/features/copilot#faq

[-] kilgore_trout@feddit.it 47 points 5 days ago

FAQs are not legally binding. If you want to quote something, quote the privacy policy and the terms of service.

[-] bearboiblake@pawb.social 28 points 5 days ago* (last edited 5 days ago)

Just to add to what the other commenters said, the quote you highlighted doesn't even say what you think it does.

It says that Copilot data is not used to train the models, not that code uploaded to GitHub isn't used to train the models.

As an aside, your nitpicking of the term "scrape" and your rant about how the user you're replying to must be ignorant are cringe, jsyk.

[-] zr0@lemmy.dbzer0.com 20 points 5 days ago

Oh my. The “you are all noobs, I am the only techie here, so I know it” argument is so unnecessary and makes you appear super entitled.

You obviously have no idea how any of this works. OpenAI and Microsoft scrape copyrighted material, which is illegal, to train their models. On top of that, in the US there are many laws that let them circumvent ToS if it helps national security, and we all know Trump will do everything to support his economy. So we end up with a situation where the contracts say they will not use the data to train models while they do exactly that, nobody will ever be able to prove it, and the whole US legal system will protect the corporation. So good luck with that "lawsuit".

But that's only if Microsoft played by the rules, which they don't. Which no one does. So they just use the data to train the models, generating billions in value, and wait for a lawsuit where they pay a fine of 100k.

All of which leads to the conclusion that you are not just naive and inexperienced, but also an entitled asshole.

[-] RichardDegenne@lemmy.zip 21 points 5 days ago

If you're gullible enough to believe an FAQ coming from GitHub themselves, then I have bad news for you.


Like Meta and its privacy rules, I bet they do even if they say they don't.


Lmao desperately trying to justify sunk cost, I see?

You’re right, it’s not scraping, it’s worse. Most AI bots do scrape sites for data, though since MS has direct access to the GH backend, they don’t even need to scrape the data. You’re giving it to them directly.

The issue here is trust. Microsoft, along with every other company invested in the AI race, has proven repeatedly that getting ahead in said race is more important to them than anything else. It's more important than user privacy, ToS, contracts, intellectual property, and the law itself.

If they stand to make more money screwing you over than they stand to lose from a slap on the wrist in court, the choice is clear. And they will lie to your face about it. Profit machines as big as MS don’t care. They can’t. They are optimized for one thing.

[-] ayyy@sh.itjust.works 4 points 4 days ago

Someday when you’re grown up you will realize how cringe your way of communicating is.

[-] ExLisper@lemmy.curiana.net 22 points 5 days ago

I guess it's about Copilot scanning the code, submitting PRs, reporting security issues, doing code reviews, and such.

[-] BlameTheAntifa@lemmy.world 23 points 4 days ago

More distros need to follow. No FOSS project should have any relationship with Microsoft or its products.

[-] Kissaki@feddit.org 7 points 4 days ago* (last edited 4 days ago)

The claim that this is because of training is wrong.

Quoting the Gentoo post:

Mostly because of the continuous attempts to force Copilot usage for our repositories,

It seems to be about GitHub pushing Copilot usage, not about them training on data. Moving away doesn't prevent training anyway, and I'm sure someone will host a mirror on GitHub if they don't.

[-] user28282912@piefed.social 4 points 3 days ago

Codeberg does actively try to prevent bot scraping.

[-] ArkHost@lemmy.world 20 points 5 days ago

Did this a few months ago. Everyone should do the same.

[-] gwl 4 points 4 days ago* (last edited 4 days ago)

It's funny that all the pro-AI chuds are suddenly coming out of the woodwork to say this is a terrible idea.

[-] baronvonj@piefed.social 14 points 5 days ago

Gentoo is still around‽ But Arch exists and eMachines was discontinued like 10 years ago!

[-] cecilkorik@piefed.ca 24 points 5 days ago

I know this is probably sarcastic but honestly Gentoo's great if you don't trust binaries by default. Nothing is an absolute guarantee against compromise, but it's an awful lot harder to compromise a source code repository or a compiler without anyone noticing (especially if you stick to stable versions) than it is to compromise a particular binary of some random software package. I trust most package maintainers, but they're typically overworked volunteers and not all of them are going to have flawless security or be universally trustworthy.

I like building my own binaries from source code whenever possible.

[-] bearboiblake@pawb.social 9 points 5 days ago

Genuine question from a longtime Linux user who has never tried Gentoo: doesn't updating take forever? I used a source build of Firefox for a bit and the build took forever, not to mention the kernel itself.

The long update has the advantage of providing an opportunity to touch grass.

[-] bearboiblake@pawb.social 10 points 5 days ago

touch grass is literally a one-liner, cmon bro

[-] msage@programming.dev 8 points 5 days ago

Gentoo does not always have the latest builds, not by default.

Updates depend on your amount of packages, hardware, and willingness to utilize that hardware for compiling.

I don't use a DE, just dwm+dmenu, so my biggest packages are Firefox and LibreOffice, which can take 3+ hours with dependencies. KDE or GNOME would most likely add more.

But you can set the number of cores used for compiling in the config (see the sketch below). If you have your PC on most of the day, you can set it to 1 or 2 and you most likely won't even notice it.

Or, if you have a 16-core CPU, let 14 of them do the compiling and browse the web with the remaining two.

This all assumes you have enough RAM as well. It's not as bad as it sounds, but you should have at least 32 GB.
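For reference, here's a rough sketch of what that looks like in /etc/portage/make.conf; the exact numbers are placeholders, so adjust them to your own core count and RAM:

```
# /etc/portage/make.conf (excerpt)

# Cap builds at 2 parallel make jobs and back off once system load passes 2
MAKEOPTS="-j2 -l2"

# Have emerge build one package at a time and respect the same load limit
EMERGE_DEFAULT_OPTS="--jobs=1 --load-average=2"

# Run compiles at low priority so the rest of the system stays responsive
PORTAGE_NICENESS="15"
```

On a 16-core box you'd flip that around to something like MAKEOPTS="-j14" and let the remaining cores handle the desktop.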

The distro is smooth, way smoother than anything else I've tried, and I'm not switching away from it.

[-] abbadon420@sh.itjust.works 14 points 5 days ago

Gentoo is more Linux than anything. It is literally a penguin. What does Arch have?
