Backdoors (lemmy.ml)
[-] CameronDev@programming.dev 195 points 5 months ago

To be fair, we only know of this one. There may well be other open source backdoors floating around with no detection. Was heartbleed really an accident?

[-] lemmyreader@lemmy.ml 100 points 5 months ago

True. And the "given enough eyeballs, all bugs are shallow" is a neat sounding thing from the past when the amount of code lines was not as much as now. Sometimes it is scary to see how long a vulnerability in the Linux kernel had been there for years, "waiting" to be exploited.

[-] RecluseRamble@lemmy.dbzer0.com 74 points 5 months ago

Still far better than a proprietary kernel made by a tech corp, carried hardly changed from release to release, even fewer people maintain, and if they do they might well be adding a backdoor themselves for their government agency friends.

[-] Vilian@lemmy.ca 19 points 5 months ago

True, open source can be flawed, but it's certainly less flawed than closed source alternatives

[-] xenoclast@lemmy.world 35 points 5 months ago

Yeah, he didn't find the right unmaintained project. There are many, many CS undergrads starting projects that will become unmaintained pretty soon.

[-] CodexArcanum@lemmy.world 169 points 5 months ago

I've gotten back into tinkering on a little Rust game project, it has about a dozen dependencies on various math and gamedev libraries. When I go to build (just like with npm in my JavaScript projects) cargo needs to download and build just over 200 projects. 3 of them build and run "install scripts" which are just also rust programs. I know this because my anti-virus flagged each of them and I had to allow them through so my little roguelike would build.

Like, what are we even supposed to tell "normal people" about security? "Yeah, don't download files from people you don't trust and never run executables from the web. How do I install this programming utility? Blindly run code from over 300 people and hope none of them wanted to sneak something malicious in there."

I don't want to go back to the days of hand-chiseling every routine into bare silicon, but I feel like there must be a better system we just haven't devised yet.

[-] Killing_Spark@feddit.de 30 points 5 months ago

Debian actually started to collect and maintain packages of the most important Rust crates. You can use those as a source for cargo.
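
A minimal sketch of what that looks like, assuming Debian's librust-*-dev packages, which unpack crate sources under /usr/share/cargo/registry (the exact path and package names may differ by release):

```toml
# ~/.cargo/config.toml — redirect crates.io lookups to the
# distro-maintained crate sources instead of downloading from the network
[source.crates-io]
replace-with = "debian"

[source.debian]
directory = "/usr/share/cargo/registry"
```

With a directory source in place, cargo resolves dependencies against whatever the distribution has packaged and audited, rather than whatever was most recently published upstream.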

[-] wolf@lemmy.zip 22 points 5 months ago

THIS.

I do not get why people don't learn from Node/NPM: if your language has no exhaustive standard library, the community ends up reinventing the wheel, and each real-world program ends up with hundreds (or thousands) of dependencies.

Instead of throwing new features at Rust, the maintainers should focus on growing a trusted standard library and improving tooling, but that is less fun, I assume.

[-] RegalPotoo@lemmy.world 18 points 5 months ago* (last edited 5 months ago)

It's a really wicked problem to be sure. There is work underway in a bunch of places around different approaches to this; take a look at SBoM (software bill-of-materials) and reproducible builds. Doesn't totally address the trust issue (the malicious xz releases had good gpg signatures from a trusted contributor), but makes it easier to spot binary tampering.
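
Reproducible builds make the binary-tampering half of this checkable by anyone: if independent rebuilds are byte-identical, a tampered artifact shows up as a digest mismatch. A minimal sketch of the comparison step (file paths and the published digest here are hypothetical):

```python
import hashlib

def artifact_digest(path: str) -> str:
    """Stream a build artifact and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published(path: str, published_hex: str) -> bool:
    """A reproducible build must rebuild to a byte-identical artifact,
    so the digests have to match exactly."""
    return artifact_digest(path) == published_hex.lower()
```

The point is that the check requires no trust in the distributor: anyone with the source and the build recipe can recompute the digest independently.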

[-] wizzim@infosec.pub 11 points 5 months ago* (last edited 5 months ago)

+1

Shameless plug for the OSS Review Toolkit project (https://oss-review-toolkit.org/ort/), which analyzes your package manifests, builds a dependency tree, and generates an SBOM for you. It can also check for vulnerabilities with the help of VulnerableCode.

It is mainly aimed at OSS compliance, though.

(I am a contributor)

[-] acockworkorange@mander.xyz 17 points 5 months ago

Do you really need to download new versions at every build? I thought it was common practice to use the oldest safe version of a dependency that offers the functionality you want. That way your project can run on less up to date systems.

[-] baseless_discourse@mander.xyz 37 points 5 months ago* (last edited 5 months ago)

Most software does not include detailed security fixes in its changelog for people to check, and many of these fixes are in dependencies, so they are unlikely to be documented by the software the end user actually sees.

So most of the time, the safest "oldest safe" version is just the latest version.
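
For what it's worth, Cargo's default requirement syntax already sits between the two extremes being discussed; a sketch with made-up crate versions:

```toml
# Cargo.toml — dependency requirements (illustrative versions)
[dependencies]
rand = "0.8"       # caret requirement: any compatible 0.8.x, so
                   # `cargo update` picks up point releases and their
                   # security fixes without breaking the API
libc = "=0.2.150"  # exact pin: fully reproducible, but frozen out of
                   # security fixes until someone bumps it by hand
```

The lockfile then records the exact resolved versions, so builds only move when you explicitly run `cargo update`.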

[-] treadful@lemmy.zip 23 points 5 months ago

Okay, but are you still going to audit 200 individual dependencies even once?

[-] KillingTimeItself@lemmy.dbzer0.com 111 points 5 months ago

Every time this happens I become inexplicably happy.

There's just something about a community doing its fucking job that gets me feeling so normal.

[-] hash0772@sh.itjust.works 96 points 5 months ago

Getting noticed by a person who is not a security researcher or even a programmer, because of a 300ms delay at startup, after doing all that work, would honestly be depressing.

[-] mariusafa@lemmy.sdf.org 92 points 5 months ago

I love the free software community. This is one of the things free software was created for. The community defends its users.

[-] CosmicCleric@lemmy.world 60 points 5 months ago* (last edited 5 months ago)

The problem I have with this meme post is that it gives a false sense of security, when it should not.

Open or closed source, human beings have to be very diligent and truly spend the time reviewing others code, even when their project leads are pressuring them to work faster and cut corners.

This situation was a textbook example of how that does not always happen. Granted, duplicity was involved, but still.

[-] GamingChairModel@lemmy.world 11 points 5 months ago* (last edited 5 months ago)

100%.

In many ways, distributed open source software presents more social attack surface, because the system itself is designed to be distributed, with a lot of people each handling a different responsibility. Almost every open source license includes an explicit disclaimer of warranty, with some language that says something like this:

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.

Well, bring together enough dependencies, and you'll see that certain widely distributed software packages depend on the trust of dozens, if not hundreds, of independent maintainers.

This particular xz vulnerability seems to have affected systemd and sshd, using what was a socially engineered attack on a weak point in the entire dependency chain. And this particular type of social engineering (maintainer burnout, looking for a volunteer to take over) seems to fit more directly into open source culture than closed source/corporate development culture.

In the closed source world, there might be fewer places to probe for a weak link (socially or technically), which makes certain types of attacks more difficult. In other words, it might truly be the case that closed source software is less vulnerable to certain types of attacks, even if detection/audit/mitigation of those types of attacks is harder for closed source.

It's a tradeoff, not a free lunch. I still generally trust open source stuff more, but let's not pretend it's literally better in every way.

[-] DingoBilly@lemmy.world 46 points 5 months ago

Immediately noticed even though the packages have been out for over a month?

Easily could have stolen a ton of information in that month.

[-] mlg@lemmy.world 42 points 5 months ago

Yeah but tbf it was deployed on mostly rolling release and beta releases.

No enterprise on prod is worried because they're still on RHEL 6 /s

[-] kopasz7@lemmy.world 18 points 5 months ago

Why the /s? We have been migrating our hosts to RHEL7 for months.

[-] DingoBilly@lemmy.world 12 points 5 months ago

Yeah, they got lucky. But it shows how susceptible systems are. Really makes you wonder how many systems are infected with something similar; this wouldn't be the first backdoor live in Linux systems.

[-] Draegur@lemm.ee 42 points 5 months ago

i feel like the mental gymnastics should end with a rake step

[-] veganpizza69@lemmy.world 48 points 5 months ago

It's about the complex rationalizations used to create excuses (pretexts).

The original is this:

[-] johannesvanderwhales@lemmy.world 12 points 5 months ago

Alright I won't argue about that specific version's point, but this is basically a template for constructing a strawman argument.

[-] pete_the_cat@lemmy.world 30 points 5 months ago

It is pretty funny, I bet he's kicking himself right now for it.

[-] yum@lemmy.eco.br 29 points 5 months ago

I just updated xz in my system. Thanks Lemmy!

[-] seaQueue@lemmy.world 31 points 5 months ago

OpenSSH backdoor via a trojaned release of liblzma

[-] the_seven_sins@feddit.de 20 points 5 months ago

Ever wondered why ${insert_proprietary_software_here} takes so long to boot?

[-] sus@programming.dev 18 points 5 months ago

because AbstractTransactionAwarePersistenceManagerFactoryProxyBean needs to spin up 32 electron instances (one for each thread) to ensure scalability and robustness and then WelcomeSolutionStrategyExecutor needs to parse 300 megabytes of javascript to facilitate rendering the "welcome" screen

[-] squaresinger@feddit.de 17 points 5 months ago

The only real downside on the open source side is that the fix is also public, and with it the recipe for exploiting the backdoor.

If there's a massive CVE on a closed source system, you get a super high-level description of the issue and that's it.

If there's one on an open source system, you get ready-made "proof of concepts" on github that any script kiddy can exploit.

And since not every software can be updated instantly, you are left with millions of vulnerable servers/PCs and a lot of happy script kiddies.

See, for example, Log4Shell.

[-] oce@jlai.lu 82 points 5 months ago* (last edited 5 months ago)

If your security relies on hidden information then it's at risk of being broken at any time by someone who will find the information in some way. Open source security is so much stronger because it works independently of system knowledge. See all the open source cryptography that secures the web for example.
Open source poc and fix increases awareness of issues and helps everyone to make progress. You will also get much more eyes to verify your analysis and fix, as well as people checking if there could other consequences in other systems. Some security specialists are probably going to create techniques to detect this kind of sophisticated attack in the future.
This doesn't happen with closed source.
If some company or administrator is too lazy to update their system, the fault is on them, not on the person who made all the information available for you to understand and fix the issue.
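
The web-crypto point is Kerckhoffs's principle in action: the algorithm is fully public and only the key is secret. A toy sketch with Python's standard library (the key and message are made up):

```python
import hashlib
import hmac

# Kerckhoffs's principle: HMAC-SHA256 is a completely public algorithm;
# security rests entirely on keeping the key secret, not the code.
def tag(key: bytes, message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, candidate: bytes) -> bool:
    # constant-time comparison, so timing doesn't leak the valid tag
    return hmac.compare_digest(tag(key, message), candidate)
```

Anyone can read exactly how the tag is computed, yet without the key they can neither forge nor verify one — openness and security are not in tension here.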

[-] prettybunnys@sh.itjust.works 11 points 5 months ago

Crowd sourcing vulnerability analysis and detection doesn’t make open source software inherently more secure.

Closed source software has its place and it isn’t inherently evil or bad.

This event shows the good and bad of the open source software world but says NOTHING about closed source software.

[-] oce@jlai.lu 30 points 5 months ago

Crowd sourcing vulnerability analysis and detection doesn’t make open source software inherently more secure.

It does, because many more eyes can find issues, as illustrated by this story.

Closed source isn't inherently bad, but it's worse than open source in many cases including security.

I think you're the only one here thinking publishing PoC is bad.

[-] SpaceMan9000@lemmy.world 44 points 5 months ago

Honestly, for closed source software the PoCs are also immediately available. Lots of threat actors just use patch diffing.

These days vulnerabilities are at times also patched alongside other, unrelated commits to conceal what exactly has changed.
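
The same idea works on anything diffable: compare the patched release against the unpatched one, and the fix (and therefore the bug) falls out of the changed lines. A toy sketch using Python's difflib (the two code snippets are invented):

```python
import difflib

# Two hypothetical releases of the same source file: v1.0.1 quietly
# adds a NULL check, which tells an attacker exactly where v1.0 is weak.
OLD = """\
int check_auth(const char *pw) {
    return strcmp(pw, stored) == 0;
}
"""
NEW = """\
int check_auth(const char *pw) {
    if (pw == NULL) return 0;
    return strcmp(pw, stored) == 0;
}
"""

def patch_diff(a: str, b: str) -> list[str]:
    """Unified diff between two releases; '+' lines point at the fix."""
    return list(difflib.unified_diff(
        a.splitlines(), b.splitlines(),
        fromfile="v1.0", tofile="v1.0.1", lineterm=""))
```

Real patch diffing is done on disassembled binaries with dedicated tooling, but the principle is the same: shipping the fix always ships information about the bug.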

[-] ozymandias117@lemmy.world 29 points 5 months ago

Even in open source, responsible disclosure is generally possible.

See, e.g., Spectre/Meltdown, where the researchers worked privately with senior Linux kernel developers for months so that patches were ready on all supported branches before the vulnerability was made public.

[-] DemSpud@lemmy.dbzer0.com 27 points 5 months ago

bUt gUyS WhAt aBoUt sEcUrItY ThRoUgH ObScUrItY??

[-] KillingTimeItself@lemmy.dbzer0.com 12 points 5 months ago

This is why we invented responsible disclosure, which is a thing even companies like Apple do. Although in this case this was the very beginning of what seemed to be a rollout, so if it does affect systems, it's not very many. And if they are affected, the solution is pretty obvious.

Don't be a dunce, report responsibly.

[-] _dev_null@lemmy.zxcvn.xyz 10 points 5 months ago

Was the transition into management easy for you, or was it a slow acceptance?

[-] possiblylinux127@lemmy.zip 10 points 5 months ago

I'm pretty sure there are plenty of ways to exploit proprietary systems. You can't stop the power of the keyboard.

this post was submitted on 30 Mar 2024
1585 points (100.0% liked)

linuxmemes
