Hello, welcome to my talk, which is how to backdoor the Linux kernel. Totally a non-controversial topic, and I'm sure everybody here is looking forward to learning how to do that. First, who am I? This is a picture of me as produced by one of those "upload your photo to get a cool shot of yourself" services, and I find it a great illustration of the ChatGPT mania, because clearly it did a great job of representing my likeness there. I live in Montreal, Quebec. I've been a Linux admin since 1997, when I first installed Red Hat 5 (not Red Hat Enterprise Linux, Red Hat 5). I was formerly head of infrastructure security at the Linux Foundation; my title has changed a little bit, and I'm now head of the Core IT projects team. I've been in charge of kernel.org for the past 10-plus years, but I like to describe myself as the keeper of grounds and keys at kernel.org. Now, a common question that I get is: has anybody ever approached you and asked you to do something nefarious, since you have access to all those cool things? And the answer to that is no, they haven't, which I don't understand, because my rates are really quite reasonable. I'm kidding, of course. Please do not contact me, because it will be a very short and awkward conversation, and I will have to find out how to contact the FBI, or whatever the Canadian alternative is. But first of all, what do I mean by a backdoor? It's a hidden vulnerability that the victim installs themselves. It's not something added by an attacker after they've compromised an account or a system; that would be a rootkit. It's something that allows remote access, elevating privileges, or exfiltrating sensitive info, such as scanning RAM or the local disk looking for private keys and uploading them to some remote system, or looking for secret docs, that sort of stuff. That's what I mean by a backdoor. Now, this is not a talk about how to write backdoors for the Linux kernel.
If you came here looking for that, then you are probably in the wrong talk; this is not "How to Backdoor Linux for Dummies". This is really about: can you attack the pipeline? Can you sneak a backdoor into an actual bona fide release of the Linux kernel? Because the pipeline looks like this. There's a submission stage, where people submit patches, maintainers receive and review the patches, and maintainers accept or reject patches, in which case the patch usually goes through another revision cycle. Then there's a merge stage, where maintainers and sub-maintainers send pull requests. Linus reviews the pull requests and can either accept or reject them, and maybe he will merge them into what we call mainline. And then there's a publish stage: Linus tags a release, Greg Kroah-Hartman signs a tarball, and users download the tarball. This is the pipeline for getting a change into the Linux kernel. Now, can you sneak malicious code into a patch, or can you hack or threaten or bribe somebody like me to add malicious code to a patch? Can you trick a maintainer into applying a malicious patch? Can you attack the merge stage? Can you send a malicious pull request to Linus? Can you hack Linus and modify his repo, or can you make Linus sign a malicious tag? Can you attack the publish stage? Can you make Greg sign the wrong tarball, for example, because that's his signature on the tarball? Can you bribe me to replace the kernel with something else that's been backdoored? Can you man-in-the-middle a download from kernel.org? The answer is yes, you can. But, and here is where we get to the "but". Let's start with hacking the patch workflow. The patch workflow for the Linux kernel is fairly straightforward: the developer submits code, the maintainer reviews the code, the maintainer accepts the code, and the maintainer sends a pull request to Linus with all the changes in one nice pull request.
Now, this is what people think the patch submission workflow is like: a very rigorous process, which is illustrated here by how a bill becomes law in the Canadian Parliament, for example (illustrating with the wrong country there). But this is what it's actually like, if you think about it. It goes through multiple revisions. It gets rejected, it gets resent. Some subsystems use Gerrit, some subsystems use GitLab, some before the code even goes out for review; internally at a company it could use whatever, and we don't even know, because we don't see that part. Now, can you overtly send malicious code? Write a patch, submit it, and try to get it into the mainline kernel release? This is obviously unlikely to succeed for complex backdoors. We're not talking about stuff like exfiltrating keys or looking for secret documents, because that's just going to be several pages of code that will stick out like a sore thumb; somebody will notice it. But maybe you could do a simple vulnerability to elevate privileges locally, or maybe to get into a system even though it's firewalled off, something like that. Can you hide complex backdoors inside humongous patch sets? Huge patch sets are actually not that common, at least not for the critical paths, and they usually take ages to get reviewed. Anybody who has tried to submit a very large patch set to the kernel knows that the chances of it going through, even when there's nothing wrong with it, are almost nil; Kees can probably attest to that, because he sends those all the time. There are many eyes reviewing humongous patch sets even when they do get through, so it would be really hard to sneak in something there that's overtly malicious. Now, can you backdoor a relatively obscure device driver? You can, but you may just as well volunteer to be a maintainer for that driver, because people will say, yes, please, just do it.
But the problem, of course, is that now you have a very obscure device driver, so your chance of backdooring anybody with it would be extremely low. It would be a super, super uncommon setup. Here people will always mention the University of Minnesota and the hypocrite commits. It's really not the best poster case that we have, because they were not trying very hard, honestly. I mean, the patches were signed by James Bond, for one thing; they were not good. To people who have actually seen those patches, they stood out as obviously trying to do something strange. The whole experience kind of left us stirred, not shaken. It's not really anything we can rely on to say whether somebody could do this, whether they would succeed or fail. Now, if overtly sending malicious code is not really your best strategy, because it will almost certainly get noticed, can you covertly sneak malicious code into the patches? This is actually not as crazy as you might think. The patches get reviewed, but the patches that get applied may not be the same set of patches. You may look at your email client and see the code, and it looks sane, but then you go to apply it and you actually apply something else, because somebody snuck something else into the middle of it. This happens, say, when somebody sends you a v13 or v14 of a series and says, well, I only changed a few words here and there based on your previous feedback, and you've reviewed it so many times that you just say, okay, it's fine, I'm just going to apply it. That's an avenue of attack: you don't actually know what you're applying. Or it's a sub-maintainer whom you normally trust, and you say, yeah, I'm going to take this set of patches, but it may actually be somebody else posing as that sub-maintainer. Or a tool like b4, for example, will check the DKIM signatures, and that actually doesn't mean as much as people think.
I'm going to go into that a little bit later. So, end-to-end trust. How do you know that the patches you have received are from the same person who has previously submitted patches to you, somebody with whom you've had a relationship going for years and whom you trust? How do we make sure this is the same person? The DKIM signatures that I mentioned are actually super fragile on mailing lists. We've just now been having a long conversation about some mailing list implementations literally going out of their way to break DKIM. It requires trusting domain admins: as an admin of kernel.org, I can modify any email that passes through my mail server, and the same goes for anybody at Gmail or anybody at a company mail server. And those keys are usually not that well protected; they're just sitting there, readable by the SMTP process, because it has to run and access the keys. And the most noticeable part is that it's typosquattable: redhat.com versus redhat.net, or whatever it is. Just seeing the DKIM check mark doesn't really mean much unless you go and verify that the domain matches. b4 will try to warn you, saying the From is from this domain but the signature is from that domain, but even so, it will not actually alert you on typosquats, so be careful and look out for that. Now, something we've recently added (well, "recently" is actually about three years now) is a library, patatt, that can do cryptographic signatures for patches sent by email. It adds a separate DKIM-like header to the email messages, so the signatures are completely out of the way. It's not like a PGP signature, which is just junk all over your message body; it signs the From, the Subject, and the body of the message, kind of like DKIM does, but the key distribution is very different. It's tailored to patches.
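To make the scheme concrete, here is a minimal sketch of signing the From, Subject, and body into a DKIM-like header. It is stdlib-only and uses HMAC-SHA256 with a shared secret purely for illustration; the real patatt library signs with asymmetric keys (PGP, SSH, or ed25519), and its actual header contents differ, so treat the field names below as placeholders:

```python
import base64
import hashlib
import hmac

def sign_patch(secret: bytes, frm: str, subject: str, body: str) -> str:
    """Compute a DKIM-style signature header over From, Subject and body.

    Illustration only: patatt signs with asymmetric keys; HMAC-SHA256
    keeps this sketch dependency-free.
    """
    payload = "\n".join([frm, subject, body]).encode()
    mac = base64.b64encode(hmac.new(secret, payload, hashlib.sha256).digest())
    return "X-Developer-Signature: v=1; a=hmac-sha256; b=" + mac.decode()

def verify_patch(secret: bytes, frm: str, subject: str,
                 body: str, header: str) -> bool:
    """True if the header matches what we would sign ourselves."""
    expected = sign_patch(secret, frm, subject, body)
    return hmac.compare_digest(expected, header)
```

Because the body is covered by the signature, a series that gets altered between review and apply no longer verifies.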
It actually works around most of the mailing-list junk, like the stuff appended at the bottom of the message, which is common on Mailman 2 mailing lists. We work around that, and we can figure out what the message actually was. It can use PGP keys, it can use SSH keys, it can even use raw ed25519 keys. The problem, of course, with any end-to-end trust like this is that delegated trust is always super hard, and if you don't do delegated trust, then you have to manage your own keys, which is also super hard in every case. Now, we are trying to fix this, and there are newer features in b4 (and by new I mean they've been there for a couple of years now; people still haven't heard about them). The shazam feature of b4 lets you take a set of patches and turn them into something very similar to a pull request, which you can then apply. So for patches that don't have cryptographic signatures, what you can do is apply them first, then review them once they're in your Git tree, and then merge them. This way, at least you know exactly that the code you reviewed is the code you applied; it stayed in your repository and didn't come from different sources, so you know for a fact that you have reviewed it. b4 diff is also a feature that can show you the diff between series. If somebody says, oh, this is v13, I've only made small wording changes in comments, you can actually do a diff: show me the range-diff between v13 and v12, and it will show you that. So if you notice that it is not actually just wording changes, that there's more to it than that, you can reject it and say, please, what are you doing?
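The idea behind that check can be approximated with nothing but the standard library; b4 diff does it properly via git range-diff against the series it tracks, so this is only a sketch of the concept:

```python
import difflib

def interdiff(old_series: str, new_series: str) -> list:
    """Return a unified diff between two revisions of a patch series,
    so a reviewer can verify that a claimed "wording-only" respin
    really only touched wording."""
    return list(difflib.unified_diff(
        old_series.splitlines(), new_series.splitlines(),
        fromfile="v12", tofile="v13", lineterm=""))
```

If the submitter claimed comment-only changes but the output contains added code lines, you bounce the series.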
The newest features, of course, are b4 send and b4 prep. Those are the commands that let you manage series and patches directly in your Git repository, along with things like cover letters, who it should go out to, and v1, v2, v3 of the series; it streamlines all that management and also signs the patches automatically, so you don't even have to think about it. Try it out. I know some people have been using it, but I can count them on both hands; fewer than 10 people, as far as I know, use it routinely. And the thing I hope to work on next is keyring management directly in b4. If you're using b4 for your stuff, it could do TOFU-like (trust on first use) management of keys, so that you don't have to spend too much effort managing submitters' keys. It could say: well, this is the same key this person has been using for the past two or three years and it hasn't changed, so it's probably the same person; but if the same From suddenly sends me something signed by a different key, it can at least give me an alert that something weird is going on. Now, if a malicious series did get applied, if all of this failed, it will likely still be found out. There are eyes watching commits; we know this for a fact. The problem is that they may not be talking to us; they may be watching the commits and putting them into their own stash of zero-days to use in the future. That's the problem. As for intentional bugs: if you're trying to sneak in a backdoor and you masquerade it as something simple, like an overflow or some other kind of vulnerability, it could still be found by CI or a fuzzer or some integration bot. There's a real chance it will not get through, because we have a decent set of fuzzers and a decent set of CI tools that go out and test change sets to make sure there are no bugs in them. That will catch intentional bugs as well.
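The TOFU-like key management described above reduces to a few lines. This is a hypothetical helper sketching the idea, not anything b4 actually ships yet:

```python
def tofu_check(pins: dict, sender: str, key_fingerprint: str) -> str:
    """Trust-on-first-use key pinning.

    pins maps a sender address to the key fingerprint first seen from
    them. A never-seen sender gets pinned; a pinned sender using the
    same key passes; a pinned sender showing up with a different key
    raises the "something weird is going on" alert from the talk.
    """
    pinned = pins.get(sender)
    if pinned is None:
        pins[sender] = key_fingerprint   # first contact: remember this key
        return "new-sender"
    if pinned == key_fingerprint:
        return "ok"                      # same key as the past few years
    return "key-changed"                 # alert: possible impersonation
```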
So can I, as an admin, backdoor a repository? Technically yes, but I will almost certainly be found out almost immediately, so please don't ask me to do that. Not that anybody has. There are tricks to make this more successful. If I thought about it, I could arrange for somebody's laptop to get stolen first. Normally the problem would be found out because of the way Git works: if a commit has been changed, then the next time you try to push your own changes, Git will just say "not fast-forward" and refuse, and then you will know something really weird is going on and start looking immediately. But if your laptop is stolen, if I walk away with the laptop, then you don't have a local copy anymore. What you will do is re-clone the repository, and then that weird error will never come up. I could be less conspicuous: I could just figure out a way to make your disk stop working. You'd go, oh, my laptop broke, now I have to reinstall everything, and you'd re-clone everything from the remote. That's another way to get you to make a fresh clone of the repository. Now, how do you fix this? By signing your Git commits. And I know, I know, it's super annoying. I do it for all my repositories, and half the time my keys are somewhere in the other pants and I have to go and get them, and I get super annoyed at that too. Still, please try to remember to do this. It allows you to quickly check whether the repository is exactly the same as the last time you pushed. This is also true for shared repositories; there are quite a few of those now, used by subsystem maintainers, where multiple people can write to the same repository. There's a way to attack those too, for example if you expect that there will be new commits.
So before you push something out, you just do a git pull --rebase without even paying attention to what happened, and then you push out your own set of changes. If you check the signatures on those commits before you actually rebase, that's one way to make sure the repository has not been modified. We also publish a transparency log. There have been a couple of articles about that. People say it's not a true transparency log, because it's basically just a Git repository with a record of all the commits. But we do replicate it to multiple mirrors, including to non-disclosed locations, which I'll mention on the next slide. It is tamper-evident (obviously not tamper-resistant), but if somebody tries to tamper with the transparency log, it will also be visible to anybody who's pulling from it. It can also be used to exonerate developers. If I do backdoor somebody's repository and somebody says, well, Kees, your repository had a backdoor in it, you will at least be able to say, I never pushed that. We can look through the log and say, I never made this commit. So it exonerates you and points the finger at me, which is why I would never do it. And we always come back to the same thing: it's important to trust the developers, not the infrastructure on which all the kernel.org stuff resides. kernel.org has been rooted before. It's been 12 years since that date. There's no guarantee that it won't happen again; there's no guarantee that it hasn't happened again and we just don't know about it. The number one rule of kernel.org is: do not trust kernel.org. Please don't. We try to promote these zero-trust workflows. The things that we do try to be end-to-end: trust your fellow developer. We just send bytes, we store bytes, we receive bytes. It is your responsibility to make sure they haven't been corrupted, either unintentionally or maliciously, and we have tools that we wrote specifically for that. All right, so can you attack Linus?
Linus receives a pull request, Linus merges the pull request, Linus tags a release. This is actually probably one of the hardest targets. All pull requests to Linus must be PGP-signed, and he does check the PGP signatures on the pull requests. If something doesn't match, he immediately notices and checks it out. He has contacted me before asking about somebody's key changing, for example, so I do know that he checks this. There is, of course, the problem that we still rely on SHA-1 for Git. We still sort of trust it, because there is collision-attack detection in the Git code. There is also a continued effort to make Git use pluggable hash functions, like SHA-256 and SHA-512 and so forth, but it's still not quite ready to go, unfortunately; at some point it will be. And of course, it's very hard to get into Linus's keyring, because it is literally manually managed. He will add a key only when he's certain that certain checks have been verified. So, can you bribe a maintainer? There are plenty of maintainers here; can you bribe people in this room? Yeah, but also with limited success. They can probably only backdoor their own patches, because if they try to modify somebody else's patches while applying them, that person will almost certainly notice. If you've ever had patches accepted into the Linux kernel, you're super proud of it; you'll be looking at your code and go, wait, I didn't commit this, I didn't write this. That's an immediate red flag, and it will almost certainly be found out right away. This can also break CI in weird ways, which further highlights the problem, because the patch-id will change, and the string of weirdnesses will cause more people to look at that patch submission.
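The patch-id mechanism just mentioned can be sketched roughly like this. git patch-id itself normalizes the diff differently (and historically hashed with SHA-1), so this is only an approximation of the idea, not a reimplementation:

```python
import hashlib

def patch_id(diff_text: str) -> str:
    """Rough sketch of git patch-id: hash the diff with the unstable
    parts (hunk offsets, index lines) stripped, so the same change
    always gets the same id. If a maintainer silently alters a patch
    while applying it, the id no longer matches what CI saw on the
    mailing list, and the mismatch draws attention."""
    stable = [line for line in diff_text.splitlines()
              if not line.startswith(("@@", "index "))]
    return hashlib.sha256("\n".join(stable).encode()).hexdigest()
```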
And of course, if it's found out that something nefarious has happened, it will destroy that developer's, that maintainer's, reputation. You would probably want to target a key subsystem maintainer, not some random device driver maintainer. It would also be super expensive, because you're effectively paying for the next 25 years of their productive work for them to even agree to something like this. There are other, much cheaper ways of getting something like this done, so it's probably not something we even need to consider. Like, for example, hacking their workstation. That would be fairly effective. There's still a high chance, since we're talking about people looking at Git commits, that somebody will notice what goes out, but it may not be caught until it's too late. Like I said, people are watching Git commits; they may not be talking to us, but some of them are, thankfully. It is still only effective for planting intentional bugs. If you try to sneak in complex backdoor code that does something more than just elevate privileges or the like, it will just be too much code, it will almost certainly stick out, and even the maintainers themselves will notice that something weird is happening. So please: protect your workstations. Protect your digital identity above all, and that means your encryption keys. We do have a program where, if you're a maintainer, you can get a free hardware token for storing your PGP key. If you have an employer, ask them to get you a better one, because the one we can get you is a Nitrokey Start, which is a great little device, but it doesn't have a lot of layers of protection. You can get a much more expensive one that is better suited for doing digital signatures. And if you do a lot of work and a lot of play, it's not that expensive to have two different systems.
Just separate your work environment from the environment where you do everything else. So, can you attack downloads? Greg Kroah-Hartman signs and tags a release, kernel.org publishes a tarball, and a user or, more often, a distro downloads the tarball, builds it, and ships it. This has also been protected for a while. All the tarballs are signed with Greg's keys, and the signatures are actually part of the stable Git repository itself: they're Git notes that are shipped with the repository. We have no access to Greg's key, so we can't run this process for anybody other than Greg himself. And we do verify the signature on tarballs before we ship them out, so there are several levels of precaution there. The few times this did break were times when we changed the compression library, which actually modified the tarball itself; it changed some of the header information, literally a couple of bytes. This was noticed immediately and reported to us. So if somebody does try to replace a kernel tarball, it will almost certainly raise immediate suspicion, and we will hear about it right away. But just to reiterate: you are responsible, and by "you" I mean all the distro users and distro maintainers who are downloading kernels. Please always check the signature on the tarball when you get it, just to add an extra layer of precaution. We also have a sig-prover tool that we publish. It runs in the background as a scheduled job, downloads random tarballs from a number of kernel.org mirrors, verifies the signatures on them, and immediately sends us an alert if it finds something that isn't right. That's the link to the code that runs it; you can help run it too. I run it in a couple of locations of my own. Obviously, that protects against others; it doesn't protect against me.
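The monitoring loop reduces to: fetch the artifact from several mirrors and compare each copy against a trusted reference. The real tool verifies Greg's PGP signature with GnuPG; this sketch compares a known-good SHA-256 instead, which catches the same tarball-replacement attack without needing a keyring:

```python
import hashlib

def audit_mirrors(mirrors: dict, known_sha256: str) -> list:
    """Given mirror-name -> downloaded tarball bytes, return the mirrors
    serving content that does not match the trusted digest, so an alert
    can be raised for each of them."""
    return [name for name, data in mirrors.items()
            if hashlib.sha256(data).hexdigest() != known_sha256]
```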
If you want to help protect against me doing various nefarious things, please also run this tool somewhere, and don't tell me about it. So, here's where it gets to the interesting part. You don't really want to backdoor the kernel, and I'll tell you why. It won't be just your personal backdoor. For every powerful state-level actor, there is another equally or more powerful state-level actor who is also paying someone to watch all Linux commits. It's like building a shared arsenal of powerful weapons that anybody can launch at each other; it's shared zero-day access. If you are not a state-level actor but a powerful criminal syndicate, there's another powerful criminal syndicate, and you all bank at the same Swiss banks, and you really don't want those Swiss banks to be vulnerable to the backdoor you put into the Linux kernel. And if you wait long enough, someone will backdoor the kernel for you. This is paraphrasing the KSPP reports: these are all the critical and high-severity CVE vulnerabilities in the kernel throughout the years, and you can see that many of them have lived for many, many years. This is literally what I meant; go back to that one. In any of your kernels right now, there is a critical vulnerability that allows local root. We know about it because it was published on the Openwall list; it's already fixed in Git, and the proof of concept will be out on Monday. So every kernel in this room, unless you've applied the fix, is vulnerable to local root privilege escalation. The backdoor is already in your system right now. And paraphrasing Greg again: just wait until a critical vulnerability is fixed. It's not like manufacturers patch their kernels or anything. This is also true: that fix, and the proof of concept that's going to be released on Monday, will be applicable to billions of devices out there for months or years; we don't know how long.
That's the unfortunate truth of the state of patching affairs in the world. Are there backdoors in the Linux kernel right now? Like I said, yes. Intentional backdoors? Probably not. And I say "probably" because there's no proof; we can't offer you any proof. So despite the perception that Linux development is a bazaar and one big mess, it is actually quite rigorous. There is a lot of code review, with a lot of eyes looking at the Linux kernel code. There is transparency and oversight at many levels. Pre-commit, maintainers are obviously not interested in sneaking in anything nefarious, because it's bad for their reputation, and like I said, being a maintainer is a well-paying job with very long-term job security. Post-commit, there are people looking at Git commits and reviewing kernel code, probably because it's part of somebody's job. Bribing a maintainer is super expensive and probably ineffective; there are cheaper ways of rooting somebody. Again, having root on kernel.org infrastructure gets you almost nothing: anything weird or strange or nefarious you could do would almost certainly be found out almost immediately by members of the community. Unintentional kernel backdoors? Like I said, everybody here probably has one known backdoor on their system right now. And I wrote this slide before the news came out, so I was prescient, I will point out. Just because a vulnerability is not known to us doesn't mean that future vulnerabilities are not known to bad guys right now. And bad guys, if you're watching this: what if it's known to your worst enemy? Will it be used against you? Probably. So let us know about it; it's in everybody's interest. There it is: security@kernel.org. Send a little note; it can be completely anonymous if you want. That's it. Thank you for attending. If you have any questions, please feel free to ask.
Hi, could you describe the community issues that the free tooling working group is trying to solve, and roughly what the roadmap is to get there? The issues that we're trying to solve in the community... sorry, I don't quite... The free tooling working group? Yeah. I'm not entirely sure what that means. Two months ago you posted "my grand plan to put together a free tooling working group at the Linux Foundation" that would look at... Oh, yes, okay, I see what you mean. This is not entirely related to this talk, but I will say that this is about the work we're doing on b4 and public-inbox and the other bot integrations we've done. We did the Bugzilla bot integration very recently; it's still proving itself out. What we want to do at the Linux Foundation is make this less of a skunkworks effort. I'm a system administrator for the kernel.org infrastructure, but what I actually end up doing for 60 to 70 percent of most weeks is writing code to help out developers. Instead of it being a juggling of priorities every week, I want this to be a dedicated effort by members of the Linux Foundation who are paid to literally make developers' lives easier and to help secure what we call the pipeline of code revisions, going from the developer all the way through to the next release. So that's the tooling. I've had a number of names for it in my head, which is why it didn't quite ring a bell, but yeah, that's the goal. It still remains a goal; there's been no development on it for a couple of months, but I would like it to happen, so that we actually continue working on this as a dedicated effort, as opposed to whenever we have time. Thank you. Have you talked to any of the top-level maintainers about basically requiring the use of your tool set for submitting to them? And how do you see that happening?
You know, I recommend everybody use this, and I try to make it as out-of-your-face as possible: once you set it up, it should continue to work. The problem is, can I enforce this? I can't. So I guess the key is, can you convince Greg or someone at that level? Greg is too nice to do so. Well, Kees here is probably the best scenario, right? He does use it, and I'm not sure if he requires people sending code to him to use it, but he does recommend it. So maybe we can make an example of Kees (that sounds bad), make him the model to follow. How about that? So maybe that's the way, but I don't think that as the administrator for kernel.org I can enforce this. But yeah, I definitely recommend it to everybody. I've written a ton of docs. The problem with writing docs is that people then have to read them, and that's the hard part. Anybody else? Is everybody hungry? Three, two, one? All right, thank you very much.