There simply is no established or easy way to detect backdoors done the #xz way. We give powers and trust to maintainers because that is the development model.
Anyone suggesting there is an easy fix has not understood the issues at hand.
But we are Open Source which allows everyone to dig, check, read code and investigate.
Troed Sångberg
in reply to daniel:// stenberg://:
Yeah, I think if there's anything this results in, it will be that non-developer corporate people (hey, Legal department) will realize that the fact that open source contributions are peer reviewed doesn't mean bad-faith code cannot be merged.
... and that it doesn't matter if you're a paying Red Hat customer. You'll still end up with it in your product.
I do wonder if this will create some anti-open-source backlash, though. It's of course not rational, but when have such decisions ever been?
Troed Sångberg
in reply to daniel:// stenberg://:
Absolutely. I'd even go so far as to claim it might be easier to get commit access to a closed-source product than an open source one.
As a youngling I was part of putting an easter egg into a ROM-flashed hardware product by a Very Large Company. There's no difference between being able to do that and being able to do something far more nefarious.
But from a legal point of view, a corporation can act against an employee or consultant in a way it cannot against some anonymous open source contributor. Even though that doesn't help after the fact, it's something that still feels reassuring to them.
Jono Ferguson
in reply to daniel:// stenberg://:
I have strong feels that it's not just open source that suffers from this.
At least with open source, one can look at the source.
Luis Correia
in reply to daniel:// stenberg://:
In my mind, I imagine that there are probably some others in use right now...
And it is a LOT worse in closed-source software, where no one can analyse it thoroughly.
IoT products come to mind easily.
Peter Bindels
in reply to daniel:// stenberg://:
> But we are Open Source which allows everyone to dig, check, read code and investigate.
And *this* is the main difference between closed-source companies like Microsoft and Apple on the one hand, and open source software on the other. In this case, you can see the exploit, you can find out who inserted it, and you can check the project's entire history, down to when those people first showed up.
If there's something like this in closed source, you wouldn't know. It would never be shown in public.
Troed Sångberg
in reply to daniel:// stenberg://:
I had to read the commit with the fatal '.' multiple times to spot it ...
@FobUpset
FSMaxB
in reply to daniel:// stenberg://:
I guess the only way to fix something like this is to completely flip around how permissions work, from the current denylist approach to an allowlist approach.
Something that, from my understanding, WASI is doing.
But the current model of doing things is so ingrained in our operating systems, programming languages, ABIs and, to some extent maybe even hardware, that it seems like an impossible thing to do retroactively.
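The denylist-versus-allowlist distinction above can be sketched in a few lines. This is a toy illustration of mine, not WASI's actual API (WASI grants capabilities such as pre-opened directories at startup rather than checking path strings):

```python
# Toy illustration (not WASI's real API): a denylist fails open for
# anything its author did not anticipate, while an allowlist, the
# capability-style model WASI leans on, fails closed.

DENYLIST = {"/etc/shadow", "/etc/sudoers"}       # paths we thought to forbid
ALLOWLIST = {"/var/app/data", "/tmp/app-cache"}  # paths explicitly granted

def denylist_allows(path: str) -> bool:
    # Fails open: any path nobody anticipated is permitted.
    return path not in DENYLIST

def allowlist_allows(path: str) -> bool:
    # Fails closed: only explicitly granted paths are usable.
    return path in ALLOWLIST

# An unanticipated path slips past the denylist but not the allowlist.
surprise = "/root/.ssh/id_rsa"
print(denylist_allows(surprise))   # True: the denylist never heard of it
print(allowlist_allows(surprise))  # False: it was never granted
```

The point is the failure mode, not the mechanism: a compromised dependency inherits everything a denylist forgot, but only what an allowlist granted.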
Josh Soref (w/ screen reader)
in reply to Thomas Lee ✅ :patreon::
@DoctorDNS I don't understand how this would help.
The attacker was the entire active ecosystem at the time. They delivered the final payload as a source tarball instead of a git commit because they appear to have been targeting distributions that consumed the tarballs. But that was a decision the attacking organization made based on its targets' operational practices.
Stephen Bannasch
in reply to Josh Soref (w/ screen reader):
oss-security - backdoor in upstream xz/liblzma leading to ssh server compromise (www.openwall.com)
Josh Soref (w/ screen reader)
in reply to Stephen Bannasch:
@stepheneb @DoctorDNS I'm aware. The message to which I replied didn't say "recompile from git sources"...
But even if it did. The other half of my post stands: the attackers will choose a model based on their target (here the distributors).
In this case, they assembled their attack using one git repository and one source code change in a "source archive". In a future attack, they could distribute the pieces across multiple disparate components with seemingly unrelated maintainers.
Adrian Cochrane
in reply to daniel:// stenberg://:
Even then, this particular attack was carefully designed to get glossed over by code auditors!
I personally looked at XZ somewhat recently, and while I can't tell you whether I audited a malicious version...
And yes I was studying the tarballs!
kurtseifried (he/him)
in reply to daniel:// stenberg://:
My favorite part of all of this is people going on about "resilience" and achieving it by building another version of everything (that also has to be compatible; otherwise, how will people make use of it?):
"Then build in resilience. Defense in depth, and diversity — not a monoculture. OpenSSH will always be a target because it is so widespread, and the OpenBSD developers are doing great work and the target was upstream of them because of this. But we need a diverse ecosystem with multiple strong solutions, and as an organization you need second suppliers for critical software." (https://www.docker.com/blog/openssh-and-xz-liblzma/)
Does this mean we can expect another group to write a curl compatible library in Rust now so we have some resilience and diversity? (this is sarcasm, I feel some people might take this seriously. It is not serious, it is sarcasm).
Elias Mårtenson
in reply to daniel:// stenberg://:
in reply to daniel:// stenberg:// • • •@kurtseifried yet that is exactly what is done in safety critical environments. During the Apollo program, there were two independent navigation computers, each developed by different teams (the primary one, the famous AGC was developed by a university, MIT I think? And the spare emergency computer was by IBM).
Completely different hardware and software, both of which could get the lunar lander back to the command module.
Perhaps this idea isn't so bad after all? At least for certain software.
kurtseifried (he/him)
in reply to Elias Mårtenson • • •@loke Those systems are comically simple and small compared to modern systems.
"Looking at transistor counts, the Apollo Guidance Computer had about 17,000 transistors in total in its ICs"
"The 1969 Apollo 11 mission (above) was the first to land men on the Moon. Since then, the most obvious advances have been in computing and electronics (especially in reducing size). The Apollo Guidance Computer had RAM of 4KB, a 32KB hard disk."
You can't do that with something like curl.
Elias Mårtenson
in reply to kurtseifried (he/him) • • •@kurtseifried just like the backup navigation computer couldn't do everything the AGC could, and didn't need to, no one uses all the features of curl.
You don't need to replicate all behaviour, only a small subset. These alternatives already exist, and if the interfaces were standardised, you'd be some way towards this already.
Of course, something like this could likely never be agreed upon by the community, but there are plenty of companies out there that should, and could, contribute financially to such a project.
Thomas Depierre
in reply to Elias Mårtenson • • •@loke @kurtseifried i will add that we got pretty good research on this in the 90s, i recommend to look at Leveson. Spoiler: it does not help, because they... Make the same mistakes.
There are also other dynamics here like the fact that the niche expertise necessary highly limit how many people that can work on it, which over time push toward convergence and monopoly.
federico
in reply to daniel:// stenberg://:
#xz was probably chosen due to the presence of a corrupted xz file as part of the tests, making it an ideal candidate for hiding data. In cryptography there are nothing-up-my-sleeve numbers (https://en.wikipedia.org/wiki/Nothing-up-my-sleeve_number); the same principle could be used to reject mysterious blobs from codebases. Yet many "bugdoors" can be introduced by creating subtle vulnerabilities, and those are difficult to spot.
Andrew Bartlett
in reply to daniel:// stenberg://:
Precompiled code would be the one thing that is likely to be seen again, or in other existing attacks. I would like to see a GCC that puts specific magic in all object files, and a linker or post-install check that looks for it consistently.
An advanced attacker can forge the mark in new exploits, but it gets harder to hide.
I guess this fits under dig, check, investigate.
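A toy sketch of that post-install check, under an assumed convention of my own (GCC has no such marker feature today): the trusted toolchain stamps a magic byte string into every artifact it emits, so an audit can flag anything that arrived pre-built from somewhere else.

```python
# Toy post-install audit under a hypothetical convention: every artifact
# built by the trusted toolchain contains MARKER; anything without it is
# suspect precompiled code that bypassed the expected build path.

MARKER = b"\x7fTOOLCHAIN-MARK-v1"  # hypothetical magic, not a real GCC emission

def built_by_trusted_toolchain(blob: bytes) -> bool:
    return MARKER in blob

def audit(artifacts: dict[str, bytes]) -> list[str]:
    """Return names of installed artifacts missing the toolchain marker."""
    return [name for name, blob in artifacts.items()
            if not built_by_trusted_toolchain(blob)]

installed = {
    "libfoo.so": b"\x7fELF..." + MARKER + b"...code...",  # came through our toolchain
    "smuggled.o": b"\x7fELF...no marker anywhere...",     # pre-built, origin unknown
}
print(audit(installed))  # ['smuggled.o']
```

In a real toolchain this would more plausibly be an ELF note section added at compile time and inspected by the linker or package manager, but the audit logic is the same shape.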
Colin Lee supports #BLM 🇺🇦
in reply to daniel:// stenberg://:
What we can do is at least fingerprint the methodology used by this same threat actor to discover whether there were any other very similar attempts.
It's likely there will be common elements in how they reached out to various open source projects which were very common dependencies.
This won't catch all cases, but it could catch some.