this post was submitted on 29 Mar 2024
568 points (99.1% liked)


The malicious changes were submitted by JiaT75, one of the two main xz Utils developers with years of contributions to the project.

“Given the activity over several weeks, the committer is either directly involved or there was some quite severe compromise of their system,” an official with distributor OpenWall wrote in an advisory. “Unfortunately the latter looks like the less likely explanation, given they communicated on various lists about the ‘fixes’” provided in recent updates. Those updates and fixes can be found here, here, here, and here.

On Thursday, someone using the developer's name took to a developer site for Ubuntu to ask that the backdoored version 5.6.1 be incorporated into production versions because it fixed bugs that caused a tool known as Valgrind to malfunction.

“This could break build scripts and test pipelines that expect specific output from Valgrind in order to pass,” the person warned, from an account that was created the same day.
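
To make the warning concrete, a gate like the following is common in test pipelines: run a binary under Valgrind and fail unless the run is clean. This is only an illustrative sketch; the `--error-exitcode` flag and the `ERROR SUMMARY` line are real Valgrind behavior, but the command, file name, and gating policy are hypothetical:

```python
# Hypothetical CI gate of the kind the quoted warning is about: it keys
# on specific Valgrind output, so changes in that output break the build.
import re
import subprocess
import sys

def valgrind_is_clean(cmd: list[str]) -> bool:
    """Run cmd under Valgrind; True only if it reports zero errors."""
    proc = subprocess.run(
        ["valgrind", "--error-exitcode=1", *cmd],
        capture_output=True,
        text=True,
    )
    # Valgrind writes a line like "==1234== ERROR SUMMARY: 0 errors from
    # 0 contexts" to stderr; gate on that plus the exit code.
    summary = re.search(r"ERROR SUMMARY: (\d+) errors", proc.stderr)
    return proc.returncode == 0 and summary is not None and summary.group(1) == "0"

if __name__ == "__main__":
    # e.g. python valgrind_gate.py ./mytool --self-test
    if not valgrind_is_clean(sys.argv[1:]):
        sys.exit("Valgrind run was not clean; failing the pipeline.")
```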

One of the maintainers for Fedora said Friday that the same developer approached them in recent weeks to ask that Fedora 40, a beta release, incorporate one of the backdoored utility versions.

“We even worked with him to fix the valgrind issue (which it turns out now was caused by the backdoor he had added),” the Fedora maintainer said.

“He has been part of the xz project for two years, adding all sorts of binary test files, and with this level of sophistication, we would be suspicious of even older versions of xz until proven otherwise.”

[–] [email protected] 87 points 7 months ago* (last edited 7 months ago) (5 children)

From the article...

… Will Dormann, a senior vulnerability analyst at security firm Analygence, said in an online interview. “BUT that's only because it was discovered early due to bad actor sloppiness. Had it not been discovered, it would have been catastrophic to the world.”

Is auditing for security reasons ever done on any open source code? Is everyone just assuming that everyone else is doing it, and hence no one is really doing it?


EDIT: I'm not attacking open source, I'm a big believer in open source.

I'm just trying to start a conversation about a potential flaw that needs to be addressed.

Once the conversation was started, I was going to expand it by suggesting an open source project that does security audits on other open source projects.

Please put the pitchforks away.

[–] [email protected] 57 points 7 months ago (4 children)

You're committing a logical fallacy called affirming the consequent: you're assuming that just because the backdoor was caught under these particular conditions, these are the only conditions under which it would've been caught.

Even if the bad actor had not been sloppy, it would still have been entirely possible for the backdoor to be identified and fixed during a security audit performed by an enterprise-grade Linux distribution.

In this case it was caught especially early because the bad actor did not cover their tracks very well; but now that it has been caught this way, it can't be proven one way or the other whether the backdoor would have been found by other means.

[–] [email protected] 19 points 7 months ago (1 children)

Also, they are counting the hits and ignoring the misses. They are forgetting that sneaking a backdoor into an open source project is extremely difficult because people are reviewing the code and such a thing will be recognized. So people don't typically try to sneak backdoors in. Also, backdoors have been discovered in an amazing number of closed source projects where no one was even able to review the code.

[–] [email protected] 10 points 7 months ago* (last edited 7 months ago) (1 children)

They are forgetting that sneaking a backdoor into an open source project is extremely difficult because people are reviewing the code and such a thing will be recognized.

Everyone assumes what you have stated, but how often does it actually happen?

How many people actually do code reviews, how often, and how rigorously? Especially on large, high-volume projects?

[–] [email protected] 12 points 7 months ago (1 children)

Depends on the project, but for a lot of projects code review is mandatory before merging. For XZ the sole maintainer can do whatever they want.

[–] [email protected] 10 points 7 months ago* (last edited 7 months ago)

Depends on the project, but for a lot of projects code review is mandatory before merging. For XZ the sole maintainer can do whatever they want.

I've done plenty of code reviews in my time, and I know one thing: the busier you are, the faster you go through code reviews, and the greater the chance that things get missed.

I would hope that for the real serious shit (like security) the code reviews are always thorough and complete, but I know my fellow coding brethren, and we all know that's not always the case. Time is a precious resource, and managers don't always give you the time you need to do the job right.

Personally I use a distro backed indirectly by a corporation and hope that each release gets the thorough review it needs. But human nature is always a factor in these things, and honestly, there are times when everyone thinks everyone else is doing a certain task, and the task falls through the cracks.

[–] [email protected] 9 points 7 months ago (1 children)

Have those audits you allude to ever caught anything before it went live? Cuz this backdoor has been around for a month and Red Hat is affected, too. Plus this was the single owner of a package who is implicitly trusted; it's not like it was a random contributor whose PRs would get reviewed.

The code being open source helps people track it down once they try to debug an issue (here, a performance issue and crashes, because in their setup the memory layout was not what the backdoor was expecting), that's true. But what actually triggered the investigation was the bug; after that it was just a matter of time before it was traced back to the backdoor. You underestimate reverse engineers. Or maybe I'm just spoiled.
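
(To illustrate how that kind of bug surfaces: the investigation reportedly began with logins burning roughly half a second of extra time, and a crude timing harness is enough to confirm such a regression. The sketch below is hypothetical; the baseline figure and the command being timed are placeholders, not what was actually used.)

```python
# Illustrative only: measure the median startup cost of a command and
# compare it to a known-good baseline, the kind of quick check that can
# surface an unexplained half-second delay.
import statistics
import subprocess
import sys
import time

def median_startup_seconds(cmd: list[str], runs: int = 20) -> float:
    """Time `runs` executions of cmd and return the median wall-clock cost."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

if __name__ == "__main__":
    baseline = 0.05  # hypothetical "known good" startup cost, in seconds
    cmd = sys.argv[1:] or ["true"]  # placeholder command to time
    measured = median_startup_seconds(cmd)
    print(f"median startup: {measured * 1000:.1f} ms")
    if measured - baseline > 0.5:
        print("~500 ms slower than baseline -- worth investigating")
```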

How long until the US bans code from developers with ties to CN/RU?

[–] [email protected] 3 points 7 months ago (1 children)

How long until the US bans code from developers with ties to CN/RU?

That won't happen, because it would effectively mean banning all FOSS, which isn't remotely practical.

[–] [email protected] 1 points 7 months ago (2 children)

How do you propose we meaningfully fix this issue? Hoping random people catch stuff doesn't count.

[–] [email protected] 1 points 7 months ago

An open source project that does nothing but security audits on other open source projects?

[–] [email protected] 1 points 7 months ago

In time it may become a trade-off between the new (with the associated features and speed) vs. the tried-and-tested/secure.
To us now this sounds perverse, but remember that NASA generally uses very old hardware because it can be more certain the various bugs & features have been found and documented. In NASA's case this is for reliability. I'll concede 'brute force' does add another dimension when applying this logic to security.

This may also become an AI arms race. Finding exploits is likely something AI could become very good at - but then what about a better AI seeking to obfuscate them?

[–] [email protected] 7 points 7 months ago* (last edited 7 months ago) (1 children)

It's possible, but perhaps still unlikely.

Overwhelmingly thorough security review is time-consuming and expensive. It's also not perfect, as evidenced by just how many security issues accidentally live long enough to land even in enterprise releases. That's even without a bad actor trying to obfuscate the changes. I think this general approach had several aspects that would have made it likely to pass scrutiny:

  • It was in xz, which was likely not perceived as a security-critical library. A security person would recognize anything as potentially security-critical, but they don't always have the resources, and so are directed to focus on obviously security-related code and historical magnets for security incidents.
  • It was carried out by someone who spent years building up an innocuous reputation. Investigation may even show previous "test samples" to be malicious but not caught, or else they were a red herring to get people used to random test samples being placed in the project.
  • The only "source code" he touched was "just build scripts". Even during a security audit, build shell scripts are likely to be ignored; they're just build scripts, and while you might run some checks on all scripts, those checks aren't going to catch this sort of misbehavior.
  • The actual runtime malicious code was delivered as portions of ostensibly throwaway test-sample xz files. The malicious code is applied by a binary patch of the build output. A security audit won't be thinking too hard about a sea of binary files that are just throwaway samples used as test fodder.

So while I see the point about the logical fallacy (the attack happened not to get far enough to show whether the enterprise release process would have caught it), I think we know the track record well enough to deem this approach likely to get through. Now that it has been caught, I could see some changes that may mitigate this in the future, like package build scripts deleting all test samples and skipping tests when building for release, as well as broader scrutiny.
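
A minimal sketch of that last idea, assuming a hypothetical project layout (the directory names are illustrative, not xz's actual structure): copy the source tree for release builds with all test samples removed, so binary test fodder can never feed the build.

```python
# Sketch of a release-build hygiene step: build from a staged copy of
# the source tree that has the test-sample directories stripped out.
import shutil
import tempfile
from pathlib import Path

# Hypothetical naming conventions for directories holding binary test samples.
SAMPLE_DIRS = ("tests", "test", "testdata")

def make_release_tree(source: Path) -> Path:
    """Copy `source` into a temp dir with test-sample directories omitted."""
    staging = Path(tempfile.mkdtemp(prefix="release-")) / source.name
    shutil.copytree(source, staging, ignore=shutil.ignore_patterns(*SAMPLE_DIRS))
    return staging

# Usage: run the release build from make_release_tree(Path("xz-src"))
# instead of the raw checkout, so ostensibly throwaway binary samples
# can never be patched into the build output.
```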

There's also the reality that a lot of critical applications deem themselves too cool to settle for "old crusty enterprise distributions". They think that approach is antiquated and that living on the edge is better. Admittedly I doubt they'd go as far as Arch, Tumbleweed, or Rawhide, but this one could easily have made it into Debian testing, a Fedora release, or an Ubuntu release.

[–] [email protected] 1 points 7 months ago (1 children)

I think we know the track record well enough to deem this approach likely to get through.

That was my concern, and why I brought up my point.

Human nature (especially where volunteer work versus paid work is concerned), combined with someone who is purposely being devious over the long term, could be a potent combination for disaster.

I still wonder if there should be an actual open source project that does nothing but security audits of all other open source projects; hence my original question, meant as an opener to a conversation that I never got to elaborate on, because I was attacked almost immediately by people who are very sensitive about any criticisms or concerns about open source being aired in the open.

[–] [email protected] 1 points 7 months ago (1 children)

The issue is that it implies that open source has a problem due to volunteers that is not found in closed source, which is not really the reality.

You can look at a closed source vendor like Cisco and see backdoors, generally left over from developer access, yet open for abuse. The nature of those is so blatantly obvious that any open review would have spotted them instantly, yet there they were.

With this, you had a much more deviously obfuscated attack that probably would have passed through even serious security audits unnoticed, yet it was caught because someone was curious about a slight performance degradation and investigated. Having been in the closed source world, I can tell you that they never would have caught something like this. Anyone even vaguely saying they wanted to spend some time investigating a session startup delay of half a second would be chastised for wasting time.

Further, open source projects are also the fodder for security researchers to build their resumes. It's hard to prove your mettle without public work, and catching vulnerabilities in OSS code is a popular feather in the cap.

It also implies that open source is strictly a volunteer affair. Most commercial applications of a Linux platform involve paid employees doing some enablement, and that differs from place to place. There's of course Red Hat paying for security research, and Google and Microsoft as well. I know at least one company that distrusts everything and repeats a whole bunch of security audits, including paying external companies to audit open source code. I would wager that folks downstream of, say, CentOS Stream or certain embedded platforms can feel pretty good about the audits. Of course, all bets are off when you go grab tarballs, npm, pip, etc.

[–] [email protected] 1 points 7 months ago

The issue is that it implies that open source has a problem due to volunteers that is not found in closed source, which is not really the reality.

I (partially) disagree. Fundamentally, my belief is that someone who gets paid to do the work is more rigorous about it than someone who does it on a volunteer basis; a human-nature thing. Granted, I'm speaking very generally, and what I stated is not always true, but still.

Also, corporations that write closed source programs are much more exposed to being sued if their product fails (there's a reason we're seeing so many corporations slapping arbitration clauses into their agreements these days; they're risk-averse).

Open source projects tend to just be more careful about their code base not being tainted, write in disclaimers ("as-is") to protect themselves legally in the product-failure scenario, and call it a day (again, very generally speaking; I use Fedora specifically for a reason).

And speaking of Fedora, I do agree with your point that some open source work is actually done by paid coders. I just believe that's more the outlier than the norm. Some of that work is done by corporate employees, but still on a volunteer basis.

I'm not dismissing that at all; I am thankful for corporations that actually spend time letting their employees do open source work, even if it's just for their own direct benefit, as it also benefits everyone else.

[–] [email protected] 1 points 7 months ago (2 children)

You’re making a logical fallacy called affirming the consequent where you’re assuming that just because the backdoor was caught under these particular conditions, these are the only conditions under which it would’ve been caught.

No, I'm actually making that comment based on a career as a software developer who has actually worked on a few open source projects.

[–] [email protected] 5 points 7 months ago (1 children)

Your credentials don't fix the logical fallacy.

[–] [email protected] 1 points 7 months ago

Experience matters.

[–] [email protected] 3 points 7 months ago (1 children)
[–] [email protected] 0 points 7 months ago* (last edited 7 months ago) (1 children)

What, experience doesn't matter?

As Groucho Marx would say, "I can believe you, or my lying eyes".

[–] [email protected] 1 points 7 months ago* (last edited 7 months ago) (1 children)

Experience doesn't matter if you don't read Wikipedia links given to you by random people :)

Edit:

I'm actually making that comment based on

has a different tone than "in my experience as"

Didn't actually want to educate you, but I feel this edit won't hurt. Literally.

[–] [email protected] 0 points 7 months ago* (last edited 7 months ago) (1 children)

Experience doesn’t matter if you don’t read Wikipedia links given to you by random people :)

You're assuming I don't already know what's being discussed in the link (or haven't read it), when in fact I disagree with how it's being applied to me.

Also, experience doesn't evaporate into the ether just because someone does not read a link. That's a fallacy for sure.

[–] [email protected] 0 points 7 months ago (1 children)

You're assuming I'm assuming.

[–] [email protected] 0 points 7 months ago (1 children)

And you're assuming that I'm assuming that you're assuming.

Any particular reason why you're getting on my case?

[–] [email protected] 2 points 7 months ago (1 children)

Because the way this conversation started was a logical fallacy you weren't aware of. I like to teach.

You're dragging it out too. I know now that you are not one to learn. But can you at least learn from this and move on?

[–] [email protected] 1 points 7 months ago (1 children)

Because the way this conversation started was a logical fallacy you weren’t aware of.

You're assuming I'm not aware of the point you're bringing up, again. I am, I'm disagreeing with you and how you're trying to apply it to me.

You’re dragging it out too. I know now that you are not one to learn. But can you at least learn from this and move on?

Defending oneself is not 'dragging it out'. I'm literally replying to say that I am aware of the point you keep insisting I'm not aware of; I just disagree with you about how you're applying that point to me.

But instead of inquiring as to why I disagree, you're just repeating back more of the same thing.

Let's just agree to disagree on whether the point you're trying to make applies to me, and both move on. It's such a trivial thing for you to keep hammering me on that it makes me wonder if you're just a conflict bot.

[–] [email protected] 0 points 7 months ago

Just the amount of text you wrote (which I'll never read) shows how you'll always try to prove your point, even if it was based on a fallacy to begin with. Just go and live a life, my friend.

[–] [email protected] 22 points 7 months ago* (last edited 7 months ago) (1 children)

Having once worked on an open source project that dealt with providing anonymity, I can say it was considered the duty of the release engineer to have an overview of all code committed (and to ask questions, publicly if needed, if they had any doubts) before compiling and signing the code.

In some months, that was a big load of work, and it seemed possible that one person might miss something. So others were encouraged to read the code and report irregularities too. I don't think anyone ever skipped it, because the implications were clear: "if one of us fails, someone somewhere can get imprisoned or killed, not to speak of milder results".

However, in the case of a utility not directly involved with functions that are critical for security, it might be easier for something to pass through the sieve.

[–] [email protected] 1 points 7 months ago

I don’t think anyone ever skipped it, because the implications were clear: “if one of us fails, someone somewhere can get imprisoned or killed, not to speak of milder results”.

However, in the case of a utility not directly involved with functions that are critical for security, it might be easier for something to pass through the sieve.

I've actually seen people check in code that didn't get reviewed properly on mission-critical apps before (like in the health industry).

My understanding is basically the same as yours, and in theory I agree with you. However, the problem is we all tend to hand-wave away any possibility of bad things happening, because it's open source, and don't take into account human nature, especially when it comes to volunteer versus paid work.

[–] [email protected] 17 points 7 months ago (1 children)

Auditing can be done only on open source code. No code = no audit. Reverse engineering doesn't count.

[–] [email protected] 1 points 7 months ago

True, but does it actually get done, or is everyone just assuming it gets done because it's open source?

[–] [email protected] 16 points 7 months ago

Bystander effect, yes.

[–] [email protected] 12 points 7 months ago (1 children)

The answer is the same as for closed source software: sometimes.

But that's beside the point: a security audit is not perfect. Plenty of audited codebases are the source of security vulnerabilities in the wild. We know, based on analysis, that the malicious actor's approach would have had a high chance of successfully hiding from a typical security audit.

[–] [email protected] 1 points 7 months ago

Oh, I know security audits aren't perfect; I'm just wondering if they actually get done, or if everyone just assumes they get done because of "Open Source" when they don't.