Good morning, everyone. I'm not Jim Zemlin. I'm Angela Brown from the Linux Foundation, for those of you I haven't met before. I'm filling in this morning. We obviously have a bit of a theme to our talks today, which, as I think all of those working on security know, we need more people thinking about. So hopefully we'll get some more stragglers in here. Because it is our last morning, I just want to take a quick minute to, again, thank our speakers, sponsors, and our attendees for being here this week. We know it's been a crazy couple of years, so we really appreciate you coming back and helping us get back to some semblance of normal. Also a quick reminder that we have our 30th Anniversary of Linux gala tonight out on the Mansion Garden lawn. It starts at six. We are going to do a group photo for those who would like to join at 5:45, so please make your way to the Arbor at 5:45 if you'd like to do that. We'll have a cocktail hour, seated dinner, cigar bar, music, all sorts of things. So please do join us tonight. With that, I'd like to introduce our first speaker. Dan Lorenc is the founder and CEO of Chainguard. He's been working on and worrying about containers since about 2015, starting with projects like Kaniko, Minikube, and Skaffold, with the goal of making containers easy and fun. Then he became concerned about the state of the open-source software supply chain and partnered with others to found the Tekton, SLSA, and Sigstore projects. Today he's going to talk more about these and share the state of open-source supply chain security. Please welcome Dan Lorenc.

Thank you, Angela, for filling in for Jim today, and thanks everybody else for getting up so early here in Napa and not drinking too much wine last night. Yeah, so today I'm going to be talking about the state of open-source security, and open-source supply chain security in particular.
Like Angela said, it's something that I've been working on and worried about for a long time now, but if you've been paying attention to the news, and if you were at KubeCon a couple of weeks ago, and if you're looking at the agenda today, you're probably thinking that this stuff just popped out of nowhere. And it kind of did, because of all of the attacks that are happening right now. But I'm going to start out with a little personal story, I guess, of how I got worried about this, and then cover all of the different projects going on in the Linux Foundation to help out. So like Angela said, I started in open source in about 2015 or 2016, and my first open-source contributions were actually for the Minikube project. Does anybody here use Minikube? Awesome. Yes, I'm sorry for all those bugs. That was literally the first time I wrote Go code, and the first open-source code I ever wrote. I just saw some stats that Minikube is now used by something like half a million people every single day, which is awesome. I haven't been a maintainer in a while, but I was the first maintainer for the first several years of that project. And Minikube had some quirks and it was kind of hard to build, because it required all of these native extensions, kernel modules, hypervisors, all of this stuff. So we had a really hard time finding a good build system that would work for Minikube. So we did what everybody does in that case: you set up a Jenkins cluster on whatever spare hardware you can find under your desk at work. And that kind of terrified me, because the whole Kubernetes community was just taking this giant binary that I was building on this system under my desk, installing it as root on their laptops, and using that to develop and work on Kubernetes itself, which is a pretty scary thing if you think about it, right?
We have all of these secure build systems, we know how to build stuff correctly, but at the end of the day, one of the most fundamental tools for development across the industry was using all of these bad anti-patterns, and we didn't really have any options at the time. So that's what led me down this rabbit hole of working on secure build systems, secure practices, and things like this, because as we've seen, build systems are fundamental to the security of all software. That's how I got into this space, and that's what led me down this long tangent of trying to secure everything in open source. So today, I'm going to be talking about a couple of topics and then introducing a whole bunch of other awesome speakers leading efforts in this space. I'm going to start out with kind of an overview of open-source security. Everybody's probably seen all of the scary slides and trend lines and attacks that have been happening, so I'm not going to spend too much time on that. I'm actually going to flip it around a little bit and spend much more time on some of the successes and recent wins across the Linux Foundation, the CNCF, and other efforts in open-source security. Then I'm going to talk about some of the new work and initiatives that are just starting, things like the OpenSSF and other foundations in this space. And finally, I'll introduce and kind of mention all of the next speakers and how all of that work fits together. So, I said I wasn't going to spend too much time here, but we still have to show some of these scary graphs to put everything into context. This is a graph of next-generation supply chain attacks from the recent Sonatype report. Sonatype has been putting these together for the last couple of years; I think this one just came out last month. These numbers are scary: up and to the right, which is bad in this case. Sometimes that's good, but not here.
This one's focused on typosquatting, malicious code injection, and tool tampering, which are three of what they define as the next-generation supply chain attacks. Typosquatting has been around for a little while; that's roughly where somebody renames a package or switches a couple of characters and tries to trick you into installing a malicious version of code. This is pretty common, and people get Bitcoin miners injected into their CI systems or laptops all of the time. Malicious code injection happens too. We just saw a big one last week, actually, with the ua-parser-js npm library: an account takeover happened, and another coin miner got inserted onto people's personal laptops. Tool tampering happens all of the time as well. People get corrupted versions of tools that insert backdoors and scary stuff like that. This has been a problem, and we've known about it since the '80s, actually. Ken Thompson wrote a famous paper called Reflections on Trusting Trust, where he tricked a compiler into inserting backdoors into everything it compiled, including new compilers, so it was self-propagating. It proved that if you haven't compiled every line of code, and the compilers that compiled all of your code, turtles all the way down, then you can't really trust anything. You have no foundation of trust. But I think he scared everybody so much back in the '80s that we all just kind of put our hands over our ears and forgot about it until recently. I don't quite know why it's only started now. I've heard a couple of answers for why supply chain attacks have picked up. The one that resonates most with me is that we've finally gotten so good at securing all of the other doors and all of the other ways in. If you think back to five or six years ago, most websites on the internet weren't even using TLS. People had weak passwords. We didn't have things like multi-factor authentication. There were much easier ways to attack companies and attack individuals than having to do a supply chain attack.
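The typosquatting trick described above, a package name a character or two off from the real one, can be screened for mechanically. Here's a minimal sketch, with a hypothetical allowlist of known-good names and an illustrative similarity threshold; a real tool would check against registry metadata rather than a hard-coded set:

```python
from difflib import SequenceMatcher

# Hypothetical allowlist of the packages this project actually depends on.
KNOWN_GOOD = {"requests", "urllib3", "numpy"}

def looks_like_typosquat(name, threshold=0.85):
    """Return the known-good name this package suspiciously resembles, if any."""
    if name in KNOWN_GOOD:
        return None  # exact match: fine
    for good in KNOWN_GOOD:
        # High similarity to a trusted name, without being that name,
        # is the classic typosquat signature.
        if SequenceMatcher(None, name, good).ratio() >= threshold:
            return good
    return None

print(looks_like_typosquat("requests"))
print(looks_like_typosquat("reqeusts"))
```

The threshold is a judgment call: too low and ordinary short names collide, too high and a single transposed pair of letters slips through.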
Thankfully, because the security industry has been doing such a good job, we've made all of those ways in so much harder and all of those attacks so much more difficult. Because of that, attackers don't give up; they just shift to the next thing. So supply chain attacks haven't really gotten easier; they've just become relatively easier compared to all of the other options. So like I said, people are noticing here, not just the attackers. This is another recent report, from Anchore, another company that does one of these every year. In 2021, 64% of companies reported that they were subjected to a supply chain attack this year. This is pretty scary. Was anyone here affected by one or had to deal with the fallout of one? I had a few I had to deal with, like the Travis CI secrets leak and a couple of other incidents like that. So just this year, over half of companies reported that they had to deal with one. And I'm sure a bunch of the ones that said no didn't know that they were attacked, so there are definitely some false negatives in here. Finally, enough people have noticed that we're looking at new regulations. This could be a good thing, if the regulations turn out correctly, and I hope that they will. Executive Order 14028 was written and signed in May of 2021 by the Biden administration, and this order contained a bunch of provisions on how to improve our nation's cybersecurity. It covers a bunch of other things like zero-trust architectures and improved vulnerability sharing and reporting across industries, so companies don't hide these attacks when they happen; there are new policies being written to force companies to disclose them in a standard way within a standard timeline. But there's a whole section in here, Section 4, on software supply chain security. And this covers a bunch of initiatives that the Linux Foundation has been leading and that the communities here have been working on for a long time.
This is things like SBOMs, or software bills of materials, which you'll hear more about from Kate after this, and things like a new reference framework for build system security and code and artifact integrity. These are all things that agencies were directed to produce as part of this executive order. So there are all these timelines, and agencies like NIST and CISA and NTIA are working on new standards, guidelines, and regulations for industry to improve their software supply chain practices. That's how bad a problem this is: it's gone from a couple of attacks to actual national security threats. So why are we talking about this, though, at an open-source conference and in an open-source context? I think it should be pretty clear: supply chain security is open-source security. It's for a couple of reasons, and this also has a couple of implications for the solution here, too. Open-source code is unique in a lot of ways, and unfortunately, some of the benefits of open source are actually drawbacks that we need to come up with special solutions for when talking about supply chain security. So right here: 98% of code bases contain open-source software. Jim had some great stuff in his keynote the other day about this too. Open source has won; open source is everywhere. It's ubiquitous. I think the other 2% of companies are like that one dentist who didn't recommend the toothpaste: they checked the wrong box, or didn't read the question correctly, or don't know what open source is, or something like that. It's everywhere. You can't write code in isolation without using open source in some context or another. But at the same time, there's the fact on the other side of the slide here. This is a recent report from Synopsys: 92% of those code bases contain outdated or vulnerable dependencies. I think this one is also probably a lower bound. If you actually took a closer look, everybody's got some of these skeletons lurking in their dependency tree.
But yeah, these numbers put together are pretty scary: open source is everywhere, and it's out of date. It's vulnerable; people aren't taking a close enough look and paying enough attention to that. Like I said before, open-source security is supply chain security. That means that we can't solve this in isolation, right? No company can come up with a perfect supply chain security solution on their own. We need to work together, and that's fundamental to the problem. Supply chains are about the links between companies. It's about the tools, it's about the libraries, it's about the practices we use to exchange code, exchange binaries, and exchange services. If somebody were to try to do this on their own, it just wouldn't work at all, because it's about interacting with others. We need to come up with strong links; we need to strengthen these practices rather than messing up and making them weaker as we go along. We need to work on this together. So, some of the strengths of open-source software: it's open, it's free, it's transparent, anybody can contribute. These are the reasons it's grown so fast. These are the reasons it's taken over and become ubiquitous in companies' supply chains and dependency trees. But unfortunately, an attacker might flip these around and look at them a little bit differently. Some of these benefits actually become drawbacks, and we need to take them into account. We need to look at them up front, realize that they're challenges, and come up with solutions for them without ruining what's great about open source. I've got a couple of memes here to try to explain some of these in context. Open-source software is open; anyone can contribute to it. Many eyes make all bugs shallow, right? But anybody that's spent time on the internet knows that not everybody on the internet is nice. There are bad people on the internet, and there are people trying to attack you.
There was actually a pretty interesting vulnerability disclosed on Monday of this week, the Trojan Source vulnerability. I think it was a little bit overblown in context, but the idea is still pretty scary. The point was that if you use some Unicode bidirectional characters when you send patches, a code reviewer might think the code is doing one thing, but you've actually tricked them, because the characters render misleadingly, as with Unicode homoglyphs, and the code means something else. This is a perfect example of an attack that can happen when an untrusted contributor sends patches that don't get reviewed carefully enough. We need to come up with answers here that don't make it harder to contribute to open source, because that would ruin the ecosystem for everyone. If you don't trust who's writing the code, then you need to come up with better ways to review it and make sure that the code isn't malicious. Another great benefit of open-source code is that it's transparent, right? If you want to, you can crack open any one of these packages, look at the source code, and rebuild it yourself. Unfortunately, as anybody that's worked on open-source code or looked inside of a Docker container knows, if it's a gigabyte or two in size, there's just way too much stuff in there to actually look through. Sure, you could go ahead and open all these boxes, open all these packages, look at every line of source code, but just because you can do that in theory doesn't mean it's always possible in practice. The numbers are just too hard to work through and look at. Yeah, many eyes do make all bugs shallow, like Linus's law says, but that only works if all the eyes are actually looking at every line of code and are distributed correctly. We don't have ways to do that today.
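The Trojan Source trick above is one of the few review attacks you can screen for with a trivial scanner: look for the Unicode bidirectional control characters in submitted source. A minimal sketch (the character list follows the published advisory; the sample strings are made up for illustration):

```python
# Unicode bidirectional control characters abused by Trojan Source-style attacks:
# LRE, RLE, PDF, LRO, RLO, and the isolate forms LRI, RLI, FSI, PDI.
BIDI_CONTROLS = {
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",
    "\u2066", "\u2067", "\u2068", "\u2069",
}

def find_bidi_controls(source):
    """Yield (line_number, column, codepoint) for every bidi control found."""
    for lineno, line in enumerate(source.splitlines(), start=1):
        for col, ch in enumerate(line):
            if ch in BIDI_CONTROLS:
                yield (lineno, col, f"U+{ord(ch):04X}")

benign = 'access_level = "user"\n'
sneaky = 'access_level = "user\u202e"  # looks harmless when rendered\n'
print(list(find_bidi_controls(benign)))
print(list(find_bidi_controls(sneaky)))
```

A check like this is now built into some code hosts and linters; flagging the characters is cheap, and legitimate uses in source code are rare enough that a hard failure in CI is usually acceptable.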
There are certain sections of code bases that do get lots of review, and then there are certain sections of other code bases that nobody has looked at in years, and all of that code is a place for a vulnerability to sneak in. Finally, it's free. That's another great reason companies use open source all the time: you don't have to pay for it. Of course you would use something for free if you can. Unfortunately, free sometimes isn't worth the cost, even when that cost is free. Open-source software is mostly put together by volunteers. People aren't always paid for this, and that can cause problems if a maintainer gets another job, stops working on something, and doesn't put patches in. So sometimes the cost of dealing with the effects of using open-source software that isn't maintained outweighs the cost of just funding maintainers up front. It's a distribution problem: we have tons of companies making tons of money on open-source software, but it's really hard to get that into the hands of the maintainers that need it to do their jobs correctly and to support that critical infrastructure for everyone. So thankfully we have a bunch of companies working together in a group, the OpenSSF, figuring out how to solve that distribution problem and get maintainers the funding and support they need to be able to do this correctly. Well, those are kind of the challenges. Not all hope is lost, though, right? We have a bunch of people in this room, and a bunch of people worried about this problem now over the last year, and so I think that we can fix it together. It's good news time. This is a funny tweet that got some good laughs a couple of weeks ago, but there are some benefits of open source that help out here, right? Open source means that each problem only has to be solved once. Anybody that's done that knows it's not quite true.
Once per license, once per language, once per community, once per foundation, all that stuff. But it's still better than every company solving it on their own, even when you take into account those multiplication factors. And so that's what we're here for in the Linux Foundation, the OpenSSF, and all the groups working to solve this. So now we're going to talk about some recent highlights: all of the hard work that people have been doing to improve this over the last year. I'm going to start with Kubernetes. The Kubernetes project has done a bunch of awesome work to improve its supply chain security. In the last release (I forget exactly which release this was; it was in July of this year, though), Adolfo, or Puerco as you might know him on the internet, produced the first SBOM, an SPDX SBOM, for the Kubernetes release. He went through exactly how to do it and actually released a new tool that other projects can use to produce SBOMs themselves. This was a huge step forward, both for SBOMs and for Kubernetes security. In addition to that, the Kubernetes project has put a huge amount of effort into reducing the complexity of its dependency tree. On the left was a graph of the Kubernetes dependency tree at one point, zoomed way, way, way out in order to actually capture everything. I can't even tell what most of these dependencies are; it's just a hairy mess. And I actually want to call out Dims here, because he's done a bunch of awesome work to help actively reduce that. I think of these dependency trees as kind of like entropy or chaos: if you don't put in active work, it builds up over time. So they put in (this is a little screenshot of the GitHub check) a new check that runs on every PR and monitors whether the dependencies go up or down.
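To make the SBOM part concrete: an SPDX document is, at its simplest, a list of files and packages with checksums and license assertions. Here's a toy sketch of the kind of tag-value entry an SBOM tool emits for one file; the names are hypothetical, and real tools like the one Puerco released record far more (relationships, licenses, package metadata):

```python
import hashlib

def spdx_file_entry(name, content):
    """Render a minimal SPDX tag-value entry for one file (sketch only)."""
    sha1 = hashlib.sha1(content).hexdigest()
    sha256 = hashlib.sha256(content).hexdigest()
    return "\n".join([
        f"FileName: ./{name}",
        f"SPDXID: SPDXRef-File-{name}",
        f"FileChecksum: SHA1: {sha1}",
        f"FileChecksum: SHA256: {sha256}",
        "LicenseConcluded: NOASSERTION",
    ])

# Illustrative stand-in for a real release artifact.
print(spdx_file_entry("kubectl", b"fake binary contents"))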
You can't always keep things from going up, but at least you can flag these, make sure it doesn't happen by accident, and put in some work to reduce it over time as well. And that work has actually paid off: the tree has gotten much, much smaller over the last couple of releases. So thanks, Dims, and everybody else in the Kubernetes community for helping out there. Outside of Kubernetes, the CNCF has been doing an awesome job here. The TAG Security group just put together a best practices for supply chain security white paper that got published in May, and they're right now working on a reference architecture showing how to put all of these tools together and build a best-of-breed, or best-in-class, software supply chain. I think that might be done any day now. Andres is leading that effort; maybe not this week, maybe next week, some time around then, it should be published and available for anybody to get started with. There's a bunch of work outside of the CNCF too. The Linux kernel is running everywhere; the Linux kernel just landed on Mars earlier this year. And there are tons of bugs all over that code base, and a lot of these result in memory safety issues. So Josh from the ISRG is going to be talking about this in a little bit more detail later. This is an effort to actually eliminate entire classes of bugs, instead of squashing bugs one at a time, by taking advantage of new technologies like memory-safe languages and compilers. And finally, there are a bunch of new initiatives that we're also going to hear about from maintainers in a little bit today. Things like the Sigstore project, started to make a free code-signing certificate authority system to get certificates out there for open-source maintainers to sign their code with, and also binary transparency logs that anyone can use for any open-source project for free.
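The PR check Dims added can be approximated in a few lines: count the unique modules in `go mod graph` output and fail if the count grows past a committed baseline. This sketch is illustrative, not the actual Kubernetes tooling; the module names and baseline are made up:

```python
def count_dependencies(go_mod_graph):
    """Count unique modules appearing in `go mod graph` output.

    Each line of `go mod graph` is "consumer dependency", so every
    whitespace-separated token on a two-token line names a module.
    """
    modules = set()
    for line in go_mod_graph.splitlines():
        parts = line.split()
        if len(parts) == 2:
            modules.update(parts)
    return len(modules)

def check_against_baseline(current, baseline):
    """Flag growth; shrinkage or no change passes."""
    if current > baseline:
        return f"FAIL: dependency count grew from {baseline} to {current}"
    return f"OK: {current} dependencies (baseline {baseline})"

sample = (
    "k8s.io/kubernetes github.com/spf13/cobra\n"
    "k8s.io/kubernetes golang.org/x/net\n"
)
print(check_against_baseline(count_dependencies(sample), baseline=3))
```

The useful property isn't the exact number; it's that any PR that grows the tree has to do so visibly, which is exactly the "don't let entropy build by accident" point from the talk.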
The OpenSSF has some exciting announcements coming up soon around funding and making it available to open-source projects, to get that money into the hands of maintainers. The bottom left there is the OpenSSF logo, but salsa dancing; that is the logo for SLSA, which defines a bunch of best practices, in alignment with the executive order, around software supply chains, with a way to reason about them in levels so you can adopt them gradually over time. And then that's the logo for Prossimo, the memory safety initiative that we'll hear about, which aims to rewrite critical software in memory-safe languages to improve security for the entire internet, by finding the levers and choke points that everyone is running in order to eliminate giant classes of bugs. So with that, we've got some talks next that are going to go into a lot more detail about each of these topics. Up next, there are going to be some announcements about Sigstore from Luke Hinds. Kate Stewart and Gary O'Neall are going to be talking about SBOMs and the exciting work that's going on in SPDX; it just got accepted as an ISO standard earlier this year, and now they're working on the next version, SPDX 3.0. Jennifer Fernick and David Wheeler are going to cover OpenSSF funding and all the work going on in the OpenSSF working groups. We're going to hear about eBPF, a new kernel technology that's being used for runtime security, from Liz Rice. Kim Lewandowski is going to cover SLSA and the dancing goose. And then we're going to hear about memory safety from Josh. So with that, I want to introduce Luke, who couldn't join us in person. He's going to be presenting on Zoom, and I'll read his bio out here quickly. If you haven't met Luke, Luke Hinds is the security engineering lead in the office of the CTO at Red Hat. He's worked in open source for 20 years, since the early days of IP filtering in the Linux kernel.
He's the founder of the Sigstore project and developed the CNCF project Keylime alongside researchers from MIT. So today Luke's going to talk about the progress of Project Sigstore, and he has some exciting announcements there too. Please welcome Luke Hinds.

Thank you very much, Dan. Great to virtually be here. Dan set the stage perfectly there for how serious and relevant this topic is today; secure supply chain has really started to ramp up its demand for attention, and rightly so. So I'll be doing an update on Sigstore, a project that I've worked on alongside Dan, who just spoke, for a while now. We're going to do a very rough overview of what Sigstore is (I expect most of you are already familiar with the project), how our communities function, our plans for a public service, and our first round of bootstrap funding, which we just secured. So let's start and get into this. The agenda is a brief project overview, project status, bootstrap funding, and then what's coming next. So what is Sigstore? Essentially, Sigstore is a project that's currently under the Linux Foundation. We originally spoke to the Linux Foundation trying to establish what sort of home to have, and they kindly accepted us in, where we started to find our feet and develop the project. Since then, the momentum has been fantastic. The adoption has been fantastic. And so we're now starting to edge our way towards becoming a public service. There is the community, of course: the developers, our users. And as said, there is the public service as well. We plan to launch a free-to-use, nonprofit public service for people to sign software. And this can also be used as an on-prem service; various vendors are starting to look at how they can utilize Sigstore behind a firewall, so to say. So what does Sigstore provide?
Essentially, it's software supply chain transparency, integrity, and provenance. With Sigstore, you can sign something such as a container image, a software package, or a software bill of materials. There are various other open-source projects that we integrate quite tightly with, such as in-toto, TUF, Tekton, and Tekton Chains. With Sigstore, you're able to sign these various artifacts, and you're also able to store them securely in what we call a transparency log. Utilizing these technologies, we provide an easy way for users to sign, because signing adoption rates are very low at the moment, and then for the users of their software to verify, in as simple a manner as possible. I mentioned all of those technologies, but there are lots more artifacts and use cases that Sigstore can be leveraged for. There are people looking at signing machine images, documents, all sorts of things. There's been a real hive of innovation around the project that's been great to see. So here's a rough overview of our project. As I said, we provide free, short-lived code-signing certificates. One of the great features of Sigstore is that, utilizing this technology, a user no longer needs to protect and maintain a private key. We have this technology that we term keyless, and we leverage this with very short-lived signing certificates. So the private key is ephemeral: at most it's held in memory, never touches disk, and can be immediately discarded. And we leverage the transparency log to provide a sort of frozen moment in time to reflect that signing event that has occurred. We also have a cryptographically verifiable, auditable, community-operated CA. Our certificate authority was bootstrapped in the open using the TUF framework, The Update Framework.
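The "frozen moment in time" role of the transparency log can be sketched with a toy append-only hash chain. Rekor actually uses a Merkle tree (via Trillian), and this simplification skips certificates entirely, but it shows the core guarantee: every new entry commits to everything before it, so tampering with any past entry invalidates every later head hash.

```python
import hashlib

class ToyTransparencyLog:
    """Toy append-only log: a linear hash chain, not a real Merkle tree."""

    def __init__(self):
        self.entries = []
        # Fixed genesis value so an empty log still has a verifiable head.
        self.heads = [hashlib.sha256(b"empty log").hexdigest()]

    def append(self, entry):
        """Record an entry; the new head commits to all prior entries."""
        prev = self.heads[-1]
        head = hashlib.sha256(prev.encode() + entry).hexdigest()
        self.entries.append(entry)
        self.heads.append(head)
        return head

    def verify(self):
        """Recompute the chain and confirm the stored heads still match."""
        h = hashlib.sha256(b"empty log").hexdigest()
        for entry, expected in zip(self.entries, self.heads[1:]):
            h = hashlib.sha256(h.encode() + entry).hexdigest()
            if h != expected:
                return False
        return True

log = ToyTransparencyLog()
log.append(b"signed: example.com/image@sha256:...")
log.append(b"signed: example.com/sbom@sha256:...")
print(log.verify())
log.entries[0] = b"tampered"  # rewriting history breaks every later head
print(log.verify())
```

A real log adds efficient inclusion and consistency proofs (that's what the Merkle tree buys), plus the external monitors mentioned later in the talk, which watch for a log that forks or rewrites its history.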
So there's a series of key holders, and we all signed various certificates, including the certificates that we use to run our infrastructure. So, a little bit about the community. This is the exciting part for myself. We've seen really rapid adoption. The project started in July 2020 (I had to think about that; the years have been a bit of a blur recently), and from that day, when there were just a very small few of us working on the project, we've seen 2,400 commits from 382 contributors. Many different organizations are starting to come forward and take an interest in Sigstore. And interestingly, we've had 80,000-odd log entries in our transparency log. So many people are starting to store things in our public transparency log: all sorts of things, container images, provenance files, in-toto files, minisign, PGP, X.509, many different sorts of key types have been stored in there. We've become a very popular GitHub project as well, and we have a Slack channel that I think's got about a thousand people on there now. So the adoption has just been an absolute dream. It's really been great to see such a large, vibrant community come together to work on this problem. So this is a particular graph that I really like. You can see in July 2020, there were, I think, just three of us: myself, Bob Callaway, and Dan Lorenc. We were starting to work on the early ideas, the prototype of Sigstore. We then started to get some more attention; we started to come above the radar, move out of our sort of stealth mode. And you can see the adoption around April suddenly starts to really pick up, and then by October, we've gone through the roof, essentially. I think a lot of this is to do with the exposure of secure supply chain. People are starting to realise this is a problem space.
And they've started to converge on Sigstore as one component to address this problem. So yeah, we've seen a meteoric rise in community adoption and new-contributor growth, which is great to see. And we have a fantastic community; it's a very welcoming, inclusive community that I'm proud to be part of myself. Same story with our commit growth, which is continuing alongside new contributors. And interestingly, we're actually starting to see things level off to a degree, which kind of reflects the fact that we're pretty much starting to reach our feature-complete stage. There's still work to do, but a lot of the releases of our core projects are starting to get towards 1.0 for a GA release. Cosign was one of the first projects. Rekor, our transparency log, will shortly be moving towards 1.0. And Fulcio, our certificate authority service, our web PKI, is also edging its way towards 1.0. So, on to the bootstrap fund. One of the things that we found working on Sigstore is that the contribution rate, as you've seen, has been exceptional. There have been a lot of people really interested in the project. We've been running under the Linux Foundation as a non-funded organization, and a lot of communities now want to start adopting Sigstore. There are people approaching us from RubyGems; we're talking to the Rust community; there have been talks with the Maven community, many different communities; and a lot of very high-level, fantastic integration with Kubernetes as well. So we started to realize that our core community folks are not really scaling enough to cover code review, writing code, talking to people, and helping these communities onboard as well. So we came up with this idea of having an initial bootstrap fund, just some cash to give us an injection to help the adoption and the onboarding of these various communities.
So what we're looking to do with this single round of funding, before we move into more regular funding for the public service, is to hire a developer relations engineer who will be working under the Linux Foundation. They will be there to chop the wood and carry the water: help these communities onboard, herd cats effectively, get people around the table, and start to get some solid adoption around these communities that are in dire need of a signing solution. There are a lot of high-level open-source package managers that have absolutely no signing at all at the moment. Those that do have a very low uptake; it's typically around two to three percent of people that actually implement and use the signing systems that are available. We're also going to have a security audit. This is really important. When we get to where we expect to be, we will effectively be critical infrastructure. People will be relying on us as a service. So it's really important that our infrastructure components and our clients are robust, that there's been a very good threat analysis, and that people who are subject matter experts in this area can audit the code and help us with an embargoed process should anything be discovered. And then a marketing budget as well. This is a much smaller amount, just for general outreach, swag, that sort of thing. So now I'll go into our initial benefactors, the people that have helped to sponsor us for this initial stage: that's Chainguard, Cisco, HPE, Google, Red Hat, and VMware. So sincere thanks to all of these companies for putting a bit of trust in us and helping us to really take this incredible adoption to the next level and start to move towards having a nonprofit, free-to-use service for users to start signing and generating artifacts that have provenance, integrity, and non-repudiation. So I'll now go into what's coming next.
OK, so we're continuing to improve the signing and verification systems that we have within Sigstore. There's been some fantastic work in Cosign, and there are other projects that are starting to gather some interest around adoption. So we're looking to improve not just the core functionality, but the UX: how easy it is for our users to verify and to sign. There's been a lot of interest in integration with SBOMs. Rekor can now store various SBOM attestations that can be used for integrity and provenance. As I said earlier, there are lots of packaging systems that we're starting to talk to. There's been some good momentum recently around RubyGems. We're talking to the PyPI community, along with our fellows in The Update Framework community that have spoken to the Python Software Foundation before. npm, Wasm, and Rust are other communities that we've approached and are looking to hopefully onboard. And Java, as I said, Maven: we have a sort of Maven client and there have been some talks there. The public infrastructure is continuing to mature, so Rekor will be reaching its general availability soon. For Fulcio, we now have a short list of what we need to close so that we can hit a 1.0 release there. Purdue University have been working on a monitor. A monitor is effectively a system that makes sure that our transparency log is behaving, that the integrity of that log can be trusted, because the log is something that you can make publicly auditable. And we're expecting and hoping that other monitors will get involved as well and people will start to innovate on top of our platforms. And we have the trust root that I spoke about earlier, our open bootstrap CA, and this can be leveraged by other projects; we can effectively become a signing trust for other projects. Not on the level of Let's Encrypt, this is more of an intermediate for open source projects.
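The monitor idea can be illustrated with a toy append-only log. Rekor itself is backed by a Merkle tree; the stand-alone sketch below is heavily simplified and just shows why recomputing the tree root over the same entries lets an outside monitor detect a log operator rewriting history.

```python
# Toy append-only transparency log: entries hash into a Merkle tree,
# and anyone holding an old root can check it still matches the same
# prefix of entries, which is the core of what a monitor verifies.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(b"\x00" + leaf) for leaf in leaves]   # domain-separated leaves
    if not level:
        return h(b"")
    while len(level) > 1:
        if len(level) % 2:                            # odd count: carry last up
            level.append(level[-1])
        level = [h(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

log = [b"entry-1", b"entry-2", b"entry-3"]
root_before = merkle_root(log)

log.append(b"entry-4")                    # appending changes the root...
assert merkle_root(log) != root_before
# ...but the old prefix still reproduces the old root, which is what
# lets a monitor confirm the log only ever appends.
assert merkle_root(log[:3]) == root_before
```

A real monitor also verifies signed tree heads and consistency proofs rather than refetching every entry, but the append-only property it is checking is the same.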
And we also plan to run the public service. OK, so this is when we've got the adoption and we've got through our bootstrap phase. What we plan to do then is to have a consistently running, reliable service. So this will require staffing, so there'll be a need for site reliability engineers. There'll obviously be a yearly operational budget, and there'll be continued outreach and onboarding of more communities, vendors, and anybody that has an interest in this space and needs a signing system. So that brings me almost to the end of my slot. I just want to leave you with a little bit of detail around how to find out more. So probably the best place to visit is our website, sigstore.dev. From there, you can reach all of the materials, such as documentation, where our Slack channel is, what Sigstore is, what we offer, etc. There's also our GitHub organization, where you'll find the repositories for all of the projects that I've mentioned. Obviously, I'm not there in person, but two of my fellow co-founders of Sigstore are there. So maybe they could put their hands up. We've got Bob Callaway, OK? And of course Dan, who you just met, who was just introduced. So if you want to find out more, I recommend you connect with those guys as well, and they'll be able to answer any questions that you have. And I'll leave it there. Thank you for your time, and hopefully next time I'll be with you in person. Thank you. The border from the UK is opening about four or five days too late for Luke to have been here. I know he wanted to. But Bob Callaway is right here. Dan Lorenc is right here. This is important work. Please have your companies get involved. It requires a lot of effort and it requires funding. Our next two speakers are going to talk about SPDX. I am lucky to call one of them a colleague, and I know them both well from talking with them over the years. They have spent years working on supply chain transparency and specifically on SPDX.
That hard work culminated this year with the formal recognition of SPDX as an ISO standard, which is very exciting. Gary O'Neall is the CEO of Source Auditor. He's been working for over 20 years in tech, everywhere from Fortune 500 companies to startups. Kate Stewart is VP of Dependable Embedded Systems at the Linux Foundation, where she's responsible for automated compliance testing programs and projects, SPDX, FOSSology, and many, many more. I don't know where you find the hours. But again, here to talk about SPDX, Gary O'Neall and Kate Stewart. Thanks, Angela. Welcome. My name is Kate Stewart and I'm here with my fellow SPDX steering committee member, Gary O'Neall. We just finished reworking our governance for the project, and we are now a steering committee as opposed to a core team. So there are more of us out there, and hopefully you'll get to hear more from them in the upcoming year. But what we want to try to do is discuss a little bit about the need we've been seeing for us all to improve the automation around supply chain transparency and security, so that we can actually start to effectively manage it at scale. So this is my favorite cartoon, bar none. It just so beautifully summarizes our problem. Software today is built on a combination of open source and proprietary software packages. You saw the numbers from Dan, and the challenge is that dependency trees hide things. As you go down and work through your software, sometimes you don't know what's there, and that's where the vulnerabilities sometimes are hiding. And there may be issues with who maintains it, whether it's one person or multiple people; things change over time, and things become vulnerable over time. It's a different type of timescale. So this reuse is very popular, very powerful. It's creating a lot of the innovation that we're seeing, but also, because we have this whole set of things we're pulling together, we're adding risk.
So we need to have transparency in terms of all the pieces in it and what changes over time. So, SPDX started working on this problem about 10 years ago, over that actually, and it was based on the challenge of complying with open source licensing. You needed the transparency to know what you were actually shipping so that you could comply with the licensing. And I started working on it because I was shipping board support packages for Freescale, a semiconductor company, at the time. And I needed to know what was there so I could put the appropriate artifacts together. And I was finding that my colleagues in other places were having the same problem, and we had no way of communicating the same information, so we were all doing our greps. And so this was the starting point of the SPDX project. And Gary was one of those colleagues who was doing the greps with me too. So three years ago, NTIA started a multi-stakeholder group, and we started getting involved there because other market segments were trying to solve this problem too. And everyone had an idea in their head of what a software bill of materials was. But there was no written-down definition. OK, we'd been working on and considering SPDX as a software bill of materials pretty much since the start. And NTIA basically brought a whole bunch of stakeholders from industry, government, and academia together to say, OK, what's minimum viable here? What really is minimum viable? You know, it'd be nice to do other things, but what do we need? And it's effectively identifying the artifacts and then working on the relationships between them so we can trace out that dependency tree.
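As a sketch of that "identify the artifacts, then the relationships" minimum, here is a tiny, hand-rolled SPDX 2.x tag-value fragment. The package names and versions are invented, and a real SBOM carries more required fields (checksums, download locations, and so on); this only shows the shape of the idea.

```python
# Minimal illustration of the NTIA minimum-viable idea: name the
# artifacts, then record relationships so the dependency tree is explicit.
packages = {
    "SPDXRef-app":    ("my-app", "1.0.0"),     # invented example data
    "SPDXRef-libfoo": ("libfoo", "2.3.1"),
}
depends_on = [("SPDXRef-app", "SPDXRef-libfoo")]

lines = ["SPDXVersion: SPDX-2.2", "DataLicense: CC0-1.0"]
for spdx_id, (name, version) in packages.items():
    lines += [f"PackageName: {name}",
              f"SPDXID: {spdx_id}",
              f"PackageVersion: {version}"]
for parent, child in depends_on:
    lines.append(f"Relationship: {parent} DEPENDS_ON {child}")

print("\n".join(lines))
```

The `Relationship` lines are what let a consumer walk the dependency tree mechanically, which is exactly the transparency the minimum-viable definition asks for.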
So lo and behold, early this year, with supply chain attacks stepping up, the Biden administration, which Dan referenced, came out with a call for improving the software bill of materials, and it relied on the NTIA definition to define what a software bill of materials was and what minimum viable was. And so that suddenly gave us a framework that lines up with the work that's been done up till now. One of the things that the LF Research team did was a survey over the summer to figure out, you know, do people understand what SBOMs are? Are they actually gonna do anything about it? And we pretty much got an overwhelming yes. They have awareness of it, and yeah, they were gonna do something about it. About three quarters of the people surveyed said that they are planning to make changes in response to this. So it's a big, powerful force in the industry. In parallel, what's been going on is, you know, back in 2009, we all wanted to work together to start this, and there was an initiative here at the Linux Foundation called FOSS Bazaar, and so we started gathering there. I think we called it Package Facts initially, and then we actually had some marketing people that gave us Software Package Data Exchange. But we came here and we started sharing our use cases and figuring out what we wanted to do. Similarly, we also saw that companies wanted to figure out how to work with open source properly, and this is what started off OpenChain. So these grassroots efforts have evolved over time, and then in the last couple of years, through the Joint Development Foundation, we were actually able to take them to become ISO standards. And so the grassroots use cases that everyone's come up with in the community have become standards now, which has been very exciting for both of our projects here.
Now, this collaboration: like, you know, Gary and I have been here from the beginning pretty much, I think. Yeah, and we both had jobs that needed it. So we were doing this because we needed it for our jobs, and then we kept involved; I kept involved between multiple jobs, and Gary has stayed in the one job pretty much the entire time. And as new cases came in, we kept on evaluating them. Like I said, manual grep doesn't scale. But as you can see, we've had a lot of releases in the history of this specification, and it culminated this year. In August, we actually got through ISO. And the thing is, we're not done. This is a journey. There are more use cases out there. Technologies are coming up and evolving. And so, you know, we're very much interested in people reaching out to us with cases they don't think they can represent with SPDX, so we can figure out a new way to do it. So, obviously, "good company on the journey makes the way seem shorter" is appropriate, and we wouldn't have gotten this far without these companies. Some of them have been here from the start, and some are just showing up as new startups. So all of these are helping us to figure out the use cases and work our way forward. And I really like this proverb from Africa: if you wanna go fast, go alone; if you wanna go far, go together. Well, anyone who's been contributing and wanted their name up there over these years is still on the spec. And some of them have disappeared for a while and then come back, and that's always welcome. But there have been a lot of arguments, a lot of discussion. There's a lot of good, robust debate that's gone into making sure that we could represent these cases for people. And that continues. Anyone who wants to is welcome to join us on our tech calls or the general monthly call and understand what's happening with the project. It's completely open. So the specification is continuing to evolve.
We're actually going through a major refactoring exercise with it right now, and we'll make sure we have a migration path. But there are new use cases and new technologies we wanna be able to support better. So we're doing some restructuring into a base profile, and then the licensing profile will be maintained as an optional profile. There are some security use cases we wanna support, as well as usage and build information. So this is work that's ongoing. If these areas are interesting to you, please come join us. And then the next step is gonna be making them easy to adopt, and I'll turn that over to Gary. All right, good morning everybody. So, Gary O'Neall. I got into this, as Kate mentioned, very early on. I've been in this area for more than 15 years. I got involved when I was the CTO for a startup company that got acquired by Microsoft. And I gotta say, even 15 years ago, Microsoft did a pretty darn good job of due diligence, especially on open source. And I learned a lot through the process. So for example, I learned that you can't just grep and find all your open source. Your engineers come tell you a week later, oh yeah, we got this too. And it's like, oh my God, I gotta go tell Microsoft again. So after that experience, it's like, OK, I think I can help other companies with this. I think I should form a consulting group, and I've been doing it for the 15 years ever since. Just a couple of years into my business, I had a customer that brought up a really good point to me. They said, you know, we've got all these tools. They were using Black Duck, a leading software company in this space, but also other tools that deal with open source. And they all had different formats. Some were using spreadsheets. Some were using PDF format to report things. And how do you bring all this together? There's clearly a need for a standard. And we ran across SPDX; you know, Kate and team and the Linux Foundation had this little group going called FOSS Bazaar.
I joined it, and I gotta say, when I first started with SPDX, it was for business, you know. It solved one of my problems, one of my customers' problems, brought in good business contacts, but I stayed with it for the people. The community is just amazing. There are some really good people involved. And I've really enjoyed it. And that's really why I'm still here, you know, through this multi-year journey. The other thing is I get to code, my goodness. I love writing code. Even though I used to do a lot of managing, I still like to code, and they needed tooling. So here I am. Another passion I have is hiking. So you'll see some pictures of hikes as we go through this journey. I gotta credit Steve Winslow for this picture, though: the Open Source Trail. I'd heard of open source hardware, but an open source trail, my goodness. This was up in western Massachusetts. Apparently they chose to name their trail (see, I've got tools on the brain) the Open Source Trail. And you know, I really like this quote: let the joy be in your journey, not in some distant goal. So that's been my view of this whole journey. So let's talk a little bit about the journey of the tools. It started off as a debate. We couldn't decide on a format within SPDX. There was one group that really liked small, readable, greppable files that are easy to incorporate and human readable. And then there were people like me that wanted it to be machine readable, something you can reason with, highly structured, hard to read. And I was allowed to bring this complex format into the SPDX community as long as we had a tool to translate it. So, a tool: I get to write code. So I put my hand up, and that was the start of the tools, more than 10 years ago. And up until two years ago, that was the same code base we were using, by the way. It was getting a little bit rusty around the edges, but we just replaced it with a much cleaner set of tools.
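Incidentally, the two sides of that format debate can be seen with one fact rendered both ways: a flat, greppable tag-value line versus a structured document a program can reason over. The field names below follow the SPDX 2.x tag-value convention; the JSON shape is a simplified illustration, not the exact SPDX JSON schema.

```python
# One license fact, two renderings: human-greppable vs machine-structured.
import json

fact = {"PackageName": "libfoo", "PackageLicenseConcluded": "MIT"}

tag_value = "\n".join(f"{key}: {value}" for key, value in fact.items())
structured = json.dumps({"packages": [fact]}, indent=2)

print(tag_value)    # grep-friendly flat lines
print(structured)   # parse-friendly nested document
```

A converter between the two is straightforward precisely because both carry the same fields, which is why the community could keep both and bridge them with tooling.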
That evolved into a set of libraries that are used by my tools and by a lot of commercial tools out there. And that kind of helped grease the skids, you know, for using it. Because now, instead of reading the spec and writing your own code, you can just go pick up a library and get going. So that helped a lot. Now, one thing I thought might be interesting to a lot of you is how the tools interacted with the spec, 'cause I learned this along the way on this journey: tooling really does help build better specs. And what I mean by that is, when you take a spec that's new, that's never been tried, and you write a tool around it, suddenly you find out what's possible, what's feasible, what's hard, what's impossible. And I'll tell you, we almost released a couple of spec items that just weren't parsable. You know, no matter what you did, you couldn't parse them. And we discovered that along the way. So we evolved into this kind of iterative process where we developed the tools at the same time that we developed the spec. And we have this iteration that goes back and forth. And it's evolved to the point now where we have multiple tool vendors that come together in what we call docfests. It used to be called plugfests and bake-offs. But all those tool vendors get together and we compare notes, and we find out what's wrong in our tools. But more importantly, we find out what's wrong with the spec: what's not clear, what's ambiguous, what leads to different implementations. And it helps us improve the overall spec. So that was a really good learning for me and, I think, useful for other organizations out there. Now, as we were developing these tools, which are all command-line tools, we were getting a lot of feedback: you know, this is really hard. I gotta download this. What, I gotta install a JVM? Goodness sakes, this is really hard, especially for the lawyers. Lawyers want spreadsheets, you know; they don't wanna have all these command-line tools.
So we came up with the idea of an online tool. Now, when we went to embark on this, it was a little bit of work. So we enlisted the help of others, and we came up with something I'd like to call student-driven development. There's this wonderful program sponsored by Google called Google Summer of Code, where they bring together students, mentors like myself, and open source organizations, provide funding, and provide a structure for allowing these students to contribute. Now, what results out of this is not only code, which is always good, but contributors. We used this for the online tools, which are almost all developed by students, over five years, by four or five students. And the student that started this five years ago is still with us, is the maintainer of the project, has mentored three more students, and is still going strong. More recently, I've used the LFX Mentorship Program, another excellent mentoring program, to bring students into the community and to help produce more code. So it's not just about the SPDX tools themselves. It's about a broader ecosystem of tools. So over the years, we've really brought on board a lot of tools into this ecosystem of SPDX and SBOM interoperability. There are two programs that I wanted to mention that I play a part in. One is ACT, the Automated Compliance Tooling group; I always have to read that one out. And that fosters collaboration between different open source projects that produce or consume SPDX documents. And then there's the OpenChain reference tooling group, driven mostly out of Europe; they have a nice framework and a nice reference architecture that you can visit, with all the tools laid out. And it's another forum for collaboration between the groups. Now, when you think about the tooling, there's a broad set of tools out there now that deal with SBOMs.
I have to thank the NTIA SBOM working group for coming up with this taxonomy of the different types of tools. Kate's one of the co-leads for this group. And they break it down basically into three different types of tools: producers, consumers, and tools that do transformation of the SBOMs, or of the documents. So if you map that onto our ecosystem, you come up with this nice little graph here. And to go into this, there are a few of these tools I just wanted to call out. One is on the producing side. We kind of broke that into two parts, starting with producing during the build. And as mentioned in some of the previous talks, the build is a really critical point where you can assemble this information; you know a lot of information while you do the build. And ORT, the OSS Review Toolkit, sponsored by HERE Technologies, I believe that's the name of the company; Thomas Steenbergen is one of the key contributors, and Bosch is another key contributor to that. They assembled this, and it produces SPDX documents at build time and hooks into a lot of the different packaging infrastructures for doing that. I think even more interesting is what's going on in REUSE. Because the problem is, even at build time, you don't really know everything. You know, if you have dependencies, where do those dependencies come from? The REUSE group, coming out of FSFE, the Free Software Foundation Europe, is working on making it so that the maintainers and the actual contributors can tag the important information, so you can build high-quality SBOMs at the point the code is written. So now you've got really high-quality information that goes from the beginning, when the code is written. And you can track it, because we're using a standard like SPDX; you track it all the way through the build environment, out until you get to use it. So you know what's in your source code. All we have to do is get everybody who contributes code to use REUSE and we're good.
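That tagging approach boils down to one machine-readable comment per file, which makes scanning trivial. Here is a toy scanner over an in-memory file tree; the file contents are invented, and the real `reuse` lint tool checks considerably more than this sketch does.

```python
# Toy REUSE-style scan: collect SPDX-License-Identifier tags per file.
import re

files = {
    "main.c": "// SPDX-License-Identifier: MIT\nint main(void){return 0;}\n",
    "util.c": "/* SPDX-License-Identifier: Apache-2.0 */\n",
    "blob.c": "int x;\n",                       # untagged file, flagged as None
}

TAG = re.compile(r"SPDX-License-Identifier:\s*([\w.\-+]+)")

def licenses(tree):
    found = {}
    for path, text in tree.items():
        match = TAG.search(text)
        found[path] = match.group(1) if match else None
    return found

result = licenses(files)
print(result["main.c"])   # MIT
print(result["blob.c"])   # None: this file still needs tagging
```

Because the tag rides along in the source itself, any downstream build step can harvest this same information without guessing, which is the "high quality from the point the code is written" property described above.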
Everything becomes easier. But of course, that's not the case today. So we have products that actually look at what's in the code after it's been produced. The typical ones are the FOSSology and ScanCode scanners. FOSSology is one of the earliest open source scanners out there; it looks at licenses, finds them, and produces an SPDX document. More recently, ScanCode, sponsored by nexB, is a really good code scanner. It can identify copyrights and licenses, and again produces SPDX documents. I want to point out Tern, which takes on what I think is one of the hardest challenges: containers, which are incredibly complex beasts. You know, many, many layers. They peel these layers back, find out what's in them, and produce an SPDX document which can then be consumed downstream. So now, beyond even this community of tools, there are these upstream projects that are adopting it and producing SPDX documents as part of their build process. So the Kubernetes bom tool is producing SPDX documents that are incorporated into it. Yocto actually started supporting SPDX very early, and in one of the more recent releases is producing SPDX documents for their Linux distros. And then in the IoT space, Zephyr is producing SPDX documents that can provide binary-to-source traceability. So all of these taken together provide a lot of transparency for these different environments using SPDX. Within the tools group, we have a very large list; I've just touched on a few of the tools that are out there. Here's a URL that has the list. It's not complete. If you know of others, go in and add them. You can add to this particular list. It's organized in the taxonomy that we described earlier. And then, where are we going forward? OK, this is my favorite hike, down in the Grand Canyon, by the way. The biggest item is security. We've brought on board some of the NTIA experts that are in the security field.
We've brought on board some really good concepts and additional use cases. I'm writing a tool right now that takes an SPDX document and, using just the information from the document, goes and pulls from some of the public vulnerability databases and brings that information back down, using the current spec. So it's a very, very active area. There's more coordinated integration between the tooling within ACT. Improving the Python libraries is a particular focus, and bringing on more volunteers. So if you're interested, or you know somebody interested, go visit our GitHub website. Join us on the SPDX calls. We're a very welcoming organization and a fun group to work with. So please come join us. So that's what we had to talk about today. As the famous quote from Lao Tzu goes, the journey of a thousand miles begins with one step. It's been a long journey, but it's been a fun journey. And we've given you three steps that we've taken along the way, and hopefully they can help all of you with your open source journey as well. So thank you very much. And if you're interested, come join us in the SPDX community. Thank you. Thanks. Thank you both. Our next two speakers are going to be talking, as Dan said earlier, about OpenSSF. So the first speaker is Jennifer Fernick, who is actually a founding board member of OpenSSF, also serving on the governing board now and the technical advisory committee. She'll actually be talking remotely; she couldn't join us in person either. When she's finished, David Wheeler will join the stage. David is the director of open source supply chain security at the Linux Foundation. I think many of you know him; he'll follow up after her with some more information on where OpenSSF is going. Please welcome Jennifer and David. My name is Jennifer Fernick, and I'm so thrilled to be joining you here today.
I am a cryptographer and a founding member of the Open Source Security Foundation, and also the SVP and Global Head of Research at NCC Group, leading the research division of one of the largest security consulting and offensive security testing companies in the world. Basically, this means that we hack everything. So today I'm here to talk about attacking and defending open source software. Open source software is central to the core and critical infrastructure of the internet. As leaders, we show ourselves worthy of that responsibility by ensuring that the digital infrastructure of the world is secure enough to operate as intended and in a way that benefits humanity. Ultimately, we need to accept that free software isn't free. We pay for open source security vulnerabilities either in advance, through security improvement efforts, or else in countless other, more expensive ways after these vulnerabilities have been exploited by threat actors. And software supply chain attacks are increasing. We've seen a 650% year-over-year increase in these attacks in 2020. And unless we make attacking the supply chain harder, these attacks will continue to accelerate. So it's clear that software is under attack, whether open source or proprietary. Attackers do not care what licenses our software has, but they do care about impact, and the impact of open source software, especially the core critical projects underlying much of our digital world, is unquestionable. But security is really hard, and none of us has solved this fully. Technology companies worth trillions of dollars, with some of the best developers on the planet, are still shipping systems with memory corruption vulnerabilities. Security is thus very difficult and perfection is elusive, but together we can radically raise the bar. In all likelihood, the next Heartbleed, Shellshock, or Stagefright is probably already in our repos, just waiting for adversaries to find it before we do.
This graph shows us that, on average, a vulnerability on GitHub goes undetected for over four years. Even critical vulnerabilities took over two and a half years on average to get disclosed. And this shouldn't surprise us: Shellshock existed in Bash for over 20 years before it was found. Taken together, we see some worrying trends. It takes years to detect vulnerabilities after they are introduced, yet it takes attackers mere days to exploit known vulnerabilities after they are disclosed. Developers are not actually getting any better at secure coding, and this is all while applications are increasing in complexity and in dependency chains. So the great, and by which I mean terrible, thing about all of this is that even if your code is totally clean and free of vulnerabilities, you've still only remediated a minority of your overall security risk. Research shows that most of your risk actually comes not from your code but from its direct or its transitive dependencies. Which is to say, research shows that most of the vulns can totally be blamed on someone else. Unfortunately, they're still your problem. So open source projects and proprietary software both tend to have a lot of open source dependencies. For example, we know that the top 50 open source projects with the most downstream dependencies had, on average, over three and a half million projects that depended on them. And attacks on open source software are getting worse, and open source software is a part of everything. And as the internet gets closer and closer to everything that we do, the stakes of cybersecurity get higher and higher, where real-world physical harm, the fate of healthcare systems and their patient records, the adversarial control of vehicles and other autonomous machines, the evaporation of hundreds of millions of dollars, and the takedown of entire nations' utilities and other critical infrastructure are just a few software errors away.
And if this sounds dramatic, know that every single example that I have just given you is something that has already happened somewhere in the world and has been reported on by major media outlets. The number of vulnerabilities in the wild increasingly outpaces the speed at which the security community can patch or even just identify them. And every day the world contains more source code than it ever has before. And vulnerabilities seem to scale with the size of the code base. And we're seeing a growing number of reported vulnerabilities. We have this problem that security, as it is practiced today, does not scale at the rate needed even just to keep things at least as secure as they were yesterday. And what's worse is we have compelling reasons to expect that this is going to get even worse for defenders. Innovations in finding vulnerabilities at scale sound like just the revolution we need, but we need to keep in mind that they are dual use, meaning that if we're not careful, these tools could even end up as a net benefit for attackers, not for developers or for defenders. From a research perspective, there are a lot of things I'm excited about in terms of finding, and hopefully remediating, vulnerabilities at scale. These include innovations in program analysis, fuzzing (including machine learning fuzzing frameworks), vulnerability-finding query languages, and automated exploit generation. However, I'm also very nervous, because we don't have a monopoly on these technologies. They may end up readily adopted by attackers before defenders. We also have to think realistically about the imbalance in incentives between open source maintainers and threat actors. Often in security, we talk about how defenders have to be right every single time to stay secure, but attackers only have to be successful once.
The issue is compounded by the fact that developers' primary goal is, and should continue to be, to build core functionality and to innovate, not just to defend against malicious hacking; but attackers' entire goal is to attack. So putting this all together: there's more and more code developed each year, and there are no fewer vulns per unit of code developed compared to five years ago. GitHub studied five years of open source commits to see if the rate at which vulnerabilities were introduced has changed over time, and found that a line of code written in 2020 is just as likely to introduce a security vulnerability as one written in 2016. We also see exponential growth in the vulns reported every year. And we have reason to believe that the reported vulnerabilities are just the tip of the iceberg, not only because we're not using tooling consistently across all of these code bases, but also because tools can't find everything. And sometimes we really need to perform code review and pen testing and security audits by hand. And even when we find those vulns, it's hard to disclose them to open source projects. And then downstream projects need to patch. And on top of all of this, these vulnerabilities have a tremendous amount of value. Some of the most powerful and critical of these are worth hundreds of thousands, if not millions, of dollars when sold to governments or exploit brokers or other threat actors. So there are weird incentives at play here that we cannot simply ignore. Not only that, it's not just the number of vulns that's increasing: the number of active attacks is growing, and exploitation of vulnerabilities in the wild is happening faster than ever before. This is true for both open source and proprietary software. Recent research has shown that the number of days between vulnerability disclosure and exploit creation and observation in the wild has shrunk from 45 days as of a few years ago to a mere 72 hours now. Time is very much not on our side.
So how do we reduce vulnerabilities at scale? I created this diagram to chart out some of the things that can help make open source more secure and where in the SDLC they can best apply. These include data-driven identification of the world's most critical open source projects, and systemic interventions to prevent vulnerabilities in the first place, introduced at various places in the SDLC. This includes everything from developer education, to integrating security into DevOps workflows, to creating security tooling and program analysis techniques, to programming language research and standards development to help prevent or eliminate entire bug classes. It includes threat modeling and much more. But it also includes things like preventing inherited security debt by creating tools that can help developers and users quickly assess the security of a repo or package, investments in technical security reviews of critical open source projects, improving vulnerability disclosure to open source maintainers, and having emergency incident response support to respond to whatever is the next Heartbleed. We need coordinated, impact-prioritized funding for security improvements, audits, and research that is no longer a sparse matrix of individual efforts, but instead a cohesive end-to-end security program that thoughtfully seeks to prevent, detect, and fix vulnerabilities in the open source ecosystem through a mix of scalable systemic improvements for all projects and targeted attention to the security needs of the most critical ones. So my question then is: what would happen if the technologies built and maintained by the teams of people in this room were to come under adversarial control by a hostile threat actor? I ask this because if your system is insecure, you cannot presume you will continue to control it. And importantly, you cannot make claims about a system that you do not control.
Suddenly, what we knew about a system's privacy, fairness, robustness, regulatory compliance, ethics, and even safety dissolves. Fortunately, we know exactly what we need to do to make open source software more resilient and secure, but it requires all of us together. We must act together, coordinated and at scale, for the safety of the critical infrastructure of the internet. So join us in what is potentially the greatest intervention into software security of all time. Thank you so much for listening, and to the Linux Foundation for this incredible event. I now hand things over to my colleague, David Wheeler. Good morning. Can you reset the clock down here? All right, so thank you so very much, Jennifer. First of all, I don't want anybody to come away from our combined talk and say, oh, there's no hope. Okay, there is hope, okay? But it's also important to acknowledge that it's not just the people here in this room or some people writing some papers who've noticed these problems. Society in general is noticing. ENISA, which is a cybersecurity agency of the European Union, basically also identified, as did the Sonatype report shown earlier, that the number of supply chain attacks is dramatically increasing. And their opinion is that the reason this is happening, the reason that attention is shifting to suppliers, is that increasingly the operational systems are becoming harder to attack directly. And the attackers are simply looking, well, what's the easy way? The supply chain's now the easy way. So that's what they're going to do. As mentioned earlier, the White House executive order on cybersecurity has a number of sections. Section four is a really long section within that, and it's all about the software supply chain. Suddenly we're talking about US-presidential-level kinds of interest. The good news is that the Linux Foundation has a lot of projects that are working to improve, to secure, open source software.
That includes the OpenSSF, the Open Source Software Security Foundation. I'll talk more about that in a moment. But I do want to acknowledge some of the many, many others: Sigstore, mentioned earlier, in-toto, CHAOSS working on metrics, SPDX, which we heard about earlier. So there are many, many other folks working on improving the security of open source software writ large. That said, the Open Source Security Foundation, or OpenSSF, was established to collaborate to secure the open source ecosystem. And it's a fairly new foundation; it was only established in August 2020. There were some challenges starting up a new foundation in the middle of a pandemic, as you can probably imagine. And so initially it was agreed, we're not going to worry about membership dues right now; we understand that's a little challenging right now. But of course that limits the resources available. Well, COVID suddenly has not disappeared, but people have a much better understanding of the economic impacts, and we have vaccines. And so now we are moving towards models with membership dues and that sort of thing. And the great news is that the OpenSSF has raised $10 million in new commitments to work to resource some of these critical issues. I don't have time to go into detail, but currently the OpenSSF has six working groups, focusing on everything from how to identify what the critical open source projects are, to what the best practices are so we can do the right things that are going to reduce the likelihood of vulnerabilities getting released, to what we do if a vulnerability is found: how can we help get them reported, get them fixed quickly, and get those fixes distributed back out? I took this diagram from the SLSA folks and put various OpenSSF projects on it to try to show, in a bigger picture, how the current OpenSSF projects fit and work together. Now I do want to acknowledge that, of course, OpenSSF is a relatively new foundation.
I'm expecting new projects to be created, and I'm not going to talk about every single one here; my goal is to both give you a big picture and some little samples of some of the things that the OpenSSF is doing. So if you look on the top left: where does software start? It starts in developers' heads. That's where software starts. And the problem today is that the vast majority of software developers have never been taught how to develop secure software. I will argue that's the main reason why we're not seeing a decrease in the number of vulnerabilities. If we don't teach them, why would they do anything different? Just as a quick anecdote, I was driven up here by an Uber driver who turned out to be a computer science student, a senior at Purdue. I asked, what have they taught you about security and developing in your computer science department? Nothing, nothing. Clearly security is not important, because not a word is spoken about how to deal with it. And I don't think that is at all unique. I think that is still the norm across the universities and colleges, across at least the U.S., and I really think around the world. That's a serious problem. So one of the things the OpenSSF has done is develop free education courses on the fundamentals of how to develop secure software; it costs you absolutely nothing. We now have thousands of people who have registered for those courses. Let's see, we're moving on. Of course, once developers actually start writing source code, you've got to start writing that code and putting it into a project and hopefully coordinating with others. Well, there are ways that are more likely and less likely to produce secure results. So we have projects like the CII Best Practices badge, a set of criteria of things that projects should be doing that are more likely to produce secure results.
Things like Scorecards, which can automatically analyze a project's processes and identify either things they're doing well, or things that maybe they're not doing so well and need to fix. Various improvements to how we build the software: once you write the source code, we need to transform it into a package. Well, how do you do that? If those build environments are subverted, the results are going to be subverted. And of course, then you need to actually package it and release it out to distributions. I'll call out just a few of the other things mentioned earlier: SLSA, which is particularly interested in how you make sure that the supply chain for that software has integrity, and they have developed different levels: okay, these are the first things you need to work on; once you've worked those, here's the next thing that you need to work on. When people decide what software packages to bring into their larger system, we want to help them make good decisions. Some software is more secure than others; we'd like to help them figure out which ones are. We want to identify critical projects. We've got various approaches to analyzing data to try to help quantitatively identify what seems to be most important. Some of the most important projects are the kinds of projects you don't notice, because they're just like that picture there: many, many, many levels deep, but everything depends on them. And finally, we need to improve those critical projects. I just said finally, but I do need to add something else. We will probably never have the case where we have no vulnerabilities ever. I think that's an awesome goal to shoot for, but like many idealized goals, probably we won't achieve it, and that means we need to deal with that. We need to know what to do when a vulnerability is found.
How can we improve the vulnerability reporting process: fixing, releasing, getting those updates, getting us updated all the way, and doing that in a much more rapid way, because the attackers are faster now. One of the newer projects within the OpenSSF is Project Alpha-Omega. This has gotten some significant funding, and the idea here is actually pretty straightforward. There are millions and millions of open source software projects, but some are especially important. Let's identify a small set and really dive deep: do real security audits with people specifically trained to do that sort of thing, use automated tools, help them fix the problems found, help them improve their practices. And that's Project Alpha. Project Omega is: well, we'd really love to do that for everything, but there are only limited resources. So for that next batch of projects that are important, but where we don't quite have the funding yet to really go deep, what can we do with automated tooling and triage and so on to help those projects much more rapidly up their game? So what I want to end with is: please get involved. I think all of us have a stake in the security of all the different projects that we're involved in and that we're depending on. And so if you're interested, please get involved. There's a link right there: go to the openssf.org website and look at Get Involved. I'd be happy to talk to you. Brian Behlendorf would be happy to talk to you, and the governing board members would be delighted to talk to you. But please, we would love for you to get involved. And with that, thank you very much. Thank you, David, and thank you, Jennifer. Our next speaker is also going to be joining us remotely, to talk about eBPF. Previous to her current role, she was the vice president of open source engineering at security specialist Aqua Security. She's now the chief open source officer at Isovalent, which has been an eBPF pioneer and the original creator of the Cilium project.
She also serves on the Technical Oversight Committee of the CNCF. So she's been involved in security for a long time. We're happy to have her, and wish she could be here in person, but please welcome Liz Rice. Hi, everyone. Thank you for the introduction, Angela. I really wish I could be there in person, but I'm just gonna be speaking to you from rather dark and damp London today. But I'm gonna be talking about something I'm very excited about: eBPF. It's enabling some really powerful tools for networking, observability, and security in cloud native and beyond. I'm gonna talk about all three, but lean towards talking about security, in line with the theme of the keynotes today. So if you haven't come across eBPF before, you're probably wondering what it stands for. It stands for extended Berkeley Packet Filter, but to be honest, that's not terribly helpful. What you really need to know about eBPF is that it makes the kernel programmable. So we can write custom programs that run within the kernel. They're loaded there by an application in user space, and they're associated with an event. So when the event happens, the eBPF program runs. Those events, there are literally thousands of different types: it could be entry or exit from any function in the kernel or in your user space applications, it could be hitting a tracepoint, it could be the arrival of a network packet. There are just thousands of things that can be used as triggers for these eBPF programs. So I think it's traditional to have a kind of hello world, and I'm just gonna run this for you now. My hello world eBPF program literally is just going to say hello California, and I'm going to attach it to the execve system call. I've also got some user space code that's going to load this eBPF program and put it into the kernel. I'm just skipping over that for the purposes of time, but my user space code is in Go, and then this is my C kernel code.
And the execve system call that I've associated this program with gets called every time a program runs on my virtual machine. So if I run this, and I need to be privileged to do so, we immediately start seeing that tracing message being generated a lot. And that's because there's a lot going on on this particular machine. And if I run something in a different window: I ran ps, and it had the process ID 217475. So there it is. We can see I was running a bash shell, and that's what triggered that execve system call with that process ID when I ran ps. So what I want you to take away from this demo is that I was able to very simply attach my eBPF program, and it immediately could see processes happening across my machine. And the tracing information generated included information about the context of that event; in this example, there's information about the process ID number and the executable. So if we get that kind of information when an event is triggered, we can use that for really powerful observability tools, and for security tools too. Now, this ability to change the way the kernel acts dynamically is a really radical change for the world of Linux. Previously, if you wanted a change to the kernel, well, first of all, you'd have to get buy-in from across the Linux kernel community that it's a good idea and that it's general-purpose, applicable to everyone. It would take time to get the change into the kernel. And then typically it takes years before a kernel release makes it into the production distributions that people are using around the world in enterprise. Now, this is one reason why eBPF is suddenly so interesting and why you're hearing a lot about it now: distributions are starting to have eBPF capabilities available. eBPF has been in the works and under development in the kernel for years, but now it's finally running in those distributions that are commonly used.
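As a rough sketch of the demo's shape: the kernel-side program is a few lines of C compiled and loaded at runtime, and a user-space loader attaches it to the execve syscall. The talk's loader was in Go; the sketch below shows the same pattern with the bcc Python toolkit, whose calls are kept in a docstring because actually loading requires root and bcc installed. The handler name and message follow the demo; everything else is illustrative.

```python
# Kernel-side eBPF program (C source, compiled by bcc at load time).
KERNEL_PROG = r"""
int hello(void *ctx) {
    /* runs inside the kernel every time the attached event fires */
    bpf_trace_printk("hello California\n");
    return 0;
}
"""

def load_and_attach(prog_text: str, syscall: str = "execve") -> dict:
    """With root privileges and bcc installed, this would be roughly:

        from bcc import BPF
        b = BPF(text=prog_text)
        b.attach_kprobe(event=b.get_syscall_fnname(syscall), fn_name="hello")
        b.trace_print()   # streams the "hello California" trace lines

    Here we only return a description of what would be attached.
    """
    return {"attach_point": syscall, "handler": "hello"}

info = load_and_attach(KERNEL_PROG)
print(f"would attach handler '{info['handler']}' to the {info['attach_point']} syscall")
```

The key point the demo makes survives even in this sketch: the kernel program and the loader are tiny, and the attachment to a pre-existing kernel event is a single call.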
So now we can take advantage of eBPF to write eBPF programs that can be loaded into the kernel dynamically and add features, add capabilities. We can also use them for some security purposes. One of my favorite examples is using eBPF to mitigate a kernel vulnerability. This is called a packet of death: if you have a vulnerable kernel and a particular malformed network packet arrives, it could crash the kernel. With eBPF, we can hook into the arrival of a network packet, check whether it's malformed, and discard any packets of death so the kernel doesn't crash. Now, instead of having to wait for a kernel patch, this is literally something you could just load into your kernel. You wouldn't even have to stop any processes running on that machine, and you'd have a mitigation for that vulnerability. Another example of using eBPF for security is Linux security modules. You may be familiar with LSMs such as SELinux or AppArmor. These run as kernel modules: when your user space application does activities, the kernel checks whether those activities are permitted by whatever security policies you're using. But those tools tend to be pretty inflexible and pretty generic. With BPF, we can use that same LSM interface within the kernel, but we can dynamically load those security policies and those security checks, and you can have custom security policies in BPF LSM specific to your needs. So what does this mean in the cloud native world if we're running in Kubernetes? Our application code runs in containers, and those containers all share the same kernel on a virtual machine. There's one kernel per virtual machine. Our containers are grouped into pods in the Kubernetes world, but they still all share one kernel per machine. So whatever our pods are doing, whether they're accessing files or sending and receiving network traffic, and whenever Kubernetes creates containers, these are things that the kernel is involved in and aware of.
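To make the packet-of-death mitigation concrete, here is a toy model of the hook's logic in Python. A real mitigation would be an eBPF/XDP program in C that matches the actual vulnerability; the malformed-length check below is invented purely for illustration, though the verdict codes mirror the ones real XDP programs return.

```python
# Verdict codes as defined for XDP programs in the kernel headers.
XDP_DROP, XDP_PASS = 1, 2

def xdp_filter(packet: bytes) -> int:
    """Toy XDP-style hook: runs on every arriving packet, before the
    kernel networking stack sees it, and returns a verdict."""
    if len(packet) < 4:
        return XDP_DROP
    # Hypothetical rule: drop packets whose claimed length field
    # (bytes 2-3, big-endian) disagrees with the actual payload length.
    claimed = int.from_bytes(packet[2:4], "big")
    if claimed != len(packet) - 4:
        return XDP_DROP  # the "packet of death" never reaches the kernel stack
    return XDP_PASS

print(xdp_filter(b"\x00\x01\x00\x03abc"))  # consistent length field -> 2 (pass)
print(xdp_filter(b"\x00\x01\xff\xffabc"))  # malformed length field -> 1 (drop)
```

Because the verdict is returned before the packet enters the TCP/IP stack, loading a filter like this mitigates the crash without restarting anything on the machine, which is exactly the property the talk highlights.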
So we can hook BPF programs into these events and get insight into everything that's happening on the whole machine. And this is enabling some really powerful observability tools. One example is Pixie, which is in the CNCF sandbox. This is an example of a flame graph that it can generate without making any changes to the application code or the pods: you literally can install Pixie and get this kind of observability information generated dynamically. We can also use eBPF to make container networking really efficient. In a container environment, in Kubernetes, we will typically have a network namespace for every pod, and then there's also a network namespace on the host machine. And there's a whole networking stack in each of these network namespaces. So when a packet arrives on the ethernet interface of this machine, it's gonna go through all of that host networking stack to reach the virtual ethernet connection that takes it into the pod's network namespace, through the networking stack in the pod, and finally to the application. If we're using an eBPF-powered CNI like Cilium, it knows about all of the endpoints that are being created across the cluster. So when a packet arrives at that ethernet interface, it can inspect it, immediately see that it's destined for a pod, and send it directly into the pod's network namespace, bypassing the whole host TCP/IP stack. And that results in some significant, measurable performance improvements. In this graph we can see that both Cilium and Calico in eBPF mode are significantly more performant than in legacy non-eBPF mode. It's almost as fast as if we were not using containers at all and simply sending traffic between two different nodes. And this networking can be not just efficient but also Kubernetes-aware.
A CNI is aware of the pods that it's being asked to assign addresses to; it can inspect the labels on those pods and use that to build up a picture of which endpoints are associated with different Kubernetes services. And then we can use that to build up service maps like this. We can also inspect individual packets using eBPF to get the details of what's flowing around these networks. We can use a similar approach to accelerate the data plane in a service mesh. Normally in a service mesh there's a proxy inserted into every pod using the sidecar model, which I'll discuss in a moment. And that means that a packet that arrives on the node has to traverse the host networking stack; then, once it reaches the pod's network namespace, it has to go through the stack to user space, back into the kernel again, and back out to user space to reach the application. And there's typically a proxy at either end of the connection, so a packet is making all these journeys twice, once on egress and once on ingress. So that adds significant and measurable latency in a typical service mesh. But with eBPF we're able to bypass a lot of those parts of the networking stack, and traffic just goes directly from socket to socket, significantly reducing the latency in a service mesh. Plus, because we have the awareness of what the endpoints are and what services they're related to, we can use eBPF in network policy decisions. We can drop packets with eBPF programs if they don't comply with network policy. It's a very efficient way of achieving network policies. And the real beauty of all this is that you can add these eBPF programs into your kernel without having to make any changes to your applications and without having to change configuration. We saw that in the hello world example: as soon as you load the eBPF program, it can be triggered by events related to pre-existing applications. And this enables us to rethink why we're using sidecars.
Nathan LeClaire did this really great cartoon about the sidecar model and how we can use eBPF to replace the sidecar with kernel instrumentation. So let's talk about the sidecar model. If we want to instrument an application, we want to instrument a pod, perhaps for security purposes, perhaps for some kind of observability tooling. But using the sidecar model, we have to inject a sidecar container into every pod that we want to instrument. If it's inside the pod, it's inside the namespaces, and it can observe what's happening within that pod. In order to get that sidecar into the pod, there has to be a YAML definition that adds that sidecar container. Typically the YAML is updated automatically; perhaps it's done during the CI/CD process, or as part of an admission control webhook that updates the YAML to add the sidecar definition in. But what if that goes wrong? What if you fail to label an application to say that you want the sidecar injected? What if someone just makes a mistake? If something goes wrong with that injection process, or there's a misconfiguration, your application would end up not being instrumented. So if it was security tooling, it would not be protected by that security tool. Contrast that with eBPF, where we simply have to load eBPF programs into the kernel, and any pods running on that machine will be visible to those eBPF programs. No sidecar required. And this is particularly interesting if we think about the case of malicious activity: if your node is compromised and an attacker manages to run something on your node, it's very unlikely they're going to bother to instrument it with your security tooling, but it will still be visible to eBPF programs. So we've seen that eBPF programs can get information about processes, and they can get information about networking. This gives us some really great possibilities for some very powerful security forensics tools.
This is something that we've been experimenting with in Cilium. I showed you previously some network flows that were Kubernetes-aware, where we could see traffic flowing between different pods and different services. Combine that with the kind of information that we saw about processes from the hello world example, and we get something like this. We can see that this particular pod has created network connections to two destinations. They look totally reasonable: it's connected to the Twitter API, and it's connected to an Elasticsearch service. Imagine if one of those destinations looked suspicious. Perhaps it was a cryptocurrency mining pool or a command and control server. By combining the network information with the process information, you'd be able to see exactly which executable, in which container, in which pod, on which node, in which namespace; you'd have all of the details and know exactly when it happened. And using that information, it would be much easier to track down how the application was compromised, and what in fact has been compromised. So I think eBPF is a really powerful basis for building some really interesting security tools. So I've talked about eBPF making the kernel programmable, and it's enabling this really powerful and efficient networking, observability, and security. We're seeing tools join the CNCF; we've already got tools like Pixie and Falco and Cilium in the CNCF family. It's not even just restricted to Linux anymore: Microsoft is working on eBPF on Windows, and it's conceivable to think about eBPF also being introduced to other operating systems to take advantage of the same kind of principles. In order to support the collaboration of developing eBPF technology itself and the surrounding toolchains, there is now an eBPF Foundation. It's part of the Linux Foundation, and that's really enabling this kind of collaboration on the nitty-gritty of eBPF. Now, I've talked quite a lot about eBPF programs.
I think most people, particularly end users, are unlikely to be writing eBPF code themselves, but I do think there's a very high chance that they will be using eBPF-enabled technology and tools in the years to come. Well, starting from today. So I hope that's conveyed some of the reasons why I am so excited about eBPF. If you'd like to find out more, there's a website, ebpf.io. I am Liz Rice on pretty much every internet platform, so although I can't be there in person to answer any questions and hear your thoughts about eBPF, I would be delighted if you reached out to me on the internet. So enjoy California on my behalf, and thank you very much. Thank you so much to Liz. Okay, so a couple of speakers have mentioned that we're gonna be talking about SLSA today. So our next speaker, Kim Lewandowski, is gonna do just that. She's a co-founder and head of product at Chainguard, and previously was at Google, where she was actually on the governing board of the Open Source Security Foundation. Please welcome Kim Lewandowski. Thank you. Is my mic on? Yeah, cool. All right, I'm gonna try to go pretty quick, because I know I'm standing in the way of Josh catching a flight, and he's our last talk, I think, next. So today is National Candy Day. When I was researching what to talk about in my slides, I thought this was really interesting. So happy National Candy Day, everyone. But I think what would be more interesting, actually, since we're in Napa, is if we did a drinking game and every time someone says supply chain we all take a sip. But I didn't prepare for that. So now that I have children and they were able to go trick-or-treating this year, it brought me back to memories of when I was a kid, trick-or-treating, and my parents always saying, oh, I gotta check the candy before you eat it to make sure there's nothing bad in it and it hasn't been tainted.
And funny enough, I mean, it's very similar to our software supply chains: we wanna make sure there's nothing bad in there. And Dan has already showed us that lots of attacks are happening; you've seen the graph a million times. We're not making these things up, so it is something we need to pay attention to. Dan touched on this a little bit, but let's try to ask ourselves why supply chains are what attackers are going after now. And here's a few reasons. Well, I'm gonna take my mask off, sorry, so you can understand me a little bit better. A few of these: the other types of attacks have become more difficult. I think our attack surfaces are increasing. Systems are becoming more complex. Companies are relying on more and more open source software. They're moving to the cloud; I read a stat that 70% of all enterprise workloads will be deployed to the cloud by 2023. And I think this has been accelerated by the pandemic, with everyone moving to more virtual environments. And the economics are off: we're not putting as much money behind defending against these attacks as the attackers are putting into attacking us. And so while at Google, I was actually thinking of something SLSA-related several years ago. And these are the constant questions that I was getting, especially in the CDF, the Continuous Delivery Foundation, where we started the Tekton project. Security was always top of mind for everyone in the CDF, trying to understand how to make these CI/CD platforms secure. And so there were a lot of reasons that we went into this space, and I actually started a company around this. So what is SLSA? It stands for Supply-chain Levels for Software Artifacts. Is it a dip, or is it a dancing goose? We kind of still use both, but right now you can see we have the OpenSSF goose in a pretty salsa dress. And I would like to formally request OpenSSF budget to get this goose in a plushie with the dress. So David Wheeler, let's make that happen.
And so what it is, it's a framework for making our software supply chains more trustworthy. It was inspired by something Google does internally: all production workloads are gated by a very similar framework at Google. And so the real goal of this project is to establish the framework as the industry standard, where we can all talk about these things in a common language, understand the risk, strive towards getting our critical software to a more trustworthy state, and then act on any of those risks, once we understand the terminology and what we're all talking about together. And so this is a condensed table of the SLSA requirements; it was the table that I could fit on the slide, but if you hit the URL, you'll see this much more in depth and broken out a bit more. So it is a leveling system; there are four levels today, and each of these requirements is an incremental security improvement for supply chain integrity. The main pieces are the source control system itself, the build system (Dan, stop building Minikube under your desk), provenance requirements, and then just common best-practice security requirements. And so we wanna keep ourselves honest within this project, really looking at the actual attacks that have happened and mapping those back to the mitigations that we think SLSA provides. And this is just a snippet of some common attacks that we've seen and how SLSA can help mitigate them. The first one is submitting bad code without review. SLSA is really based on a principle of having two-person trusted review on all changes that go into code, and so that's a big one that would catch lots of attacks. And then stronger, more secure build platforms are another big theme of the SLSA project. And so here's just a quick timeline of some of the highlights of the project. We started the repo back in March, with lots of feedback and iterations.
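The four levels in that condensed table can be sketched as a simple lookup; the one-line summaries below are paraphrased from the SLSA v0.1 framework as I understand it, not the official requirements text, so treat slsa.dev as the authoritative source.

```python
# Paraphrased one-line summaries of the four SLSA levels (v0.1-era framing).
SLSA_LEVELS = {
    1: "Build process is scripted/automated and generates provenance",
    2: "Version control plus a hosted build service producing authenticated provenance",
    3: "Source and build platforms meet hardening standards; provenance is harder to falsify",
    4: "Two-person review of all changes and a hermetic, reproducible build",
}

for level, summary in sorted(SLSA_LEVELS.items()):
    print(f"SLSA {level}: {summary}")
```

Each level is incremental: a project claiming level N is expected to satisfy the requirements of all the levels below it as well.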
We announced in June, and we formed a seven-person steering committee. All seven people represent a different company, which is pretty cool. We launched a redesigned website, and then Operation SLSA is a pretty hilarious film that we put together, with car chases and I can't even remember what else is in that film, but you can hit the link in the README on the main repo. It's quite an entertaining watch. And then just recently we pushed through a provenance format. This is something that you could use to actually meet those provenance requirements in the framework, and it shows you the origin of where the software artifact comes from. And then a look ahead: this is just a snippet of some of the things that we're planning on doing within the project. There's already talk about a SLSA level five that covers things like credential requirements, reproducible builds (something near and dear to David's heart), and transitive dependencies. So even if my core project meets SLSA level four, what about all the dependencies underneath, and how do they come into play? And then the policy and verification piece: who cares what level of SLSA it is if you're not checking and doing anything about it? And then we have some new website improvements coming out, and adoption is another big one. I don't remember if Dan mentioned it, but the Kubernetes team had put together how you might reach SLSA levels within the Kubernetes project itself. So that's cool. And then something we launched right before I left Google was SOS.dev, which is rewarding developers for security improvements, with SLSA as a piece of that project too. So there's money; check that out. Getting involved: it's a typical OpenSSF, Linux Foundation type of community. We have Zoom meetings every other Wednesday. We have a Slack channel. And then here's just a quick list of a few ideas I have of how people can get involved with this project.
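For a sense of what the provenance format mentioned above looks like, here is a minimal sketch of a SLSA provenance statement in the in-toto attestation envelope. The `_type` and `predicateType` URIs follow the published v0.2 provenance spec; the subject name, digest, builder id, and URIs are made-up placeholders.

```python
import json

# Minimal SLSA provenance statement (in-toto attestation format, v0.2 predicate).
# All concrete values below are illustrative placeholders, not real artifacts.
provenance = {
    "_type": "https://in-toto.io/Statement/v0.1",
    "subject": [
        {"name": "myapp", "digest": {"sha256": "0" * 64}}  # artifact being described
    ],
    "predicateType": "https://slsa.dev/provenance/v0.2",
    "predicate": {
        "builder": {"id": "https://example.com/ci/builder"},   # who built it
        "buildType": "https://example.com/build-types/v1",     # how it was built
        "invocation": {
            "configSource": {"uri": "git+https://example.com/org/repo"}
        },
    },
}

print(json.dumps(provenance, indent=2))
```

In practice such a statement would be generated by the build system itself and signed (for example with Sigstore), so that a consumer can verify where an artifact came from rather than taking the publisher's word for it.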
We're always looking for feedback on the actual requirements in the framework. Case studies are super helpful: if you take a project and try to get it to meet certain SLSA levels, document it. We wanna highlight that; it's helpful for other people. Reference implementations of how you might reach the SLSA levels with different build systems, et cetera. And then help keep us honest on those threats; we wanna make sure we're building this to actually catch common threats. And we do have one issue open now with some folks who are translating the site into Japanese. I think it would be fun to have more languages. So that is it. Thank you, everyone. Thank you so much, Kim. And our final speaker today, Josh Aas, who is the founder and executive director of ISRG, is here to talk about Prossimo. Please welcome Josh Aas. Hi. Thanks for having me today. This has been a great conference; so nice to see everybody in person again. So as you've been told, I'm the executive director and one of the founders of the Internet Security Research Group, and I'm gonna talk about memory safety for critical digital infrastructure. A little bit about our organization first. We're a nonprofit founded about eight years ago, and we think of ourselves as a home for public-benefit digital infrastructure. We're probably best known for our first project, Let's Encrypt, a free, automated, and open certificate authority. We're serving about 260 million websites today, and we're really pleased with the difference we've been able to make with that project. But today I'm gonna talk about Prossimo, which is, I think it's fair to say, our latest project, about moving some security-sensitive software to memory safe code. So here's the problem. Lack of memory safety is a really serious and persistent threat to internet infrastructure. And it's an old one.
Supply chain stuff is very serious, but I feel like we've only started talking about it seriously recently, and it's great that we started. This one we've been talking about for a long time, but unfortunately I do not think we've done enough to take it seriously. So I wanna talk about that today. The problem is mainly C and C++ code; that's where these vulnerabilities are coming from. New memory safety vulnerabilities are turning up in widely used software every day. And the stacks we use are pretty unsafe from top to bottom because of this. You've got web browsers on the top, servers, proxies, business logic, crypto, image processing, kernels, all of this. It is one big pile of C and C++ from top to bottom, despite the mountains of evidence we have about how unsafe this is. I don't think it's hyperbolic; I think it's fair to say that this is out of control. Ninety percent of vulnerabilities in Android are due to lack of memory safety, 70% at Microsoft, 80% of zero-day vulnerabilities. It is rampant. And these aren't just line items in release notes, abstract problems in software. Society pays the price for this stuff every day: privacy violations, financial losses, denial of service, human rights impacts. There are real costs every day. People get hurt. The thing that I can't get over, the thing that gets me so worked up about this, is that this is not one of those problems we don't know how to solve. I don't know how to make programmers stop making logic errors in their code; that's something we're gonna be dealing with for a long time. But this one we don't have to live with. We know exactly what to do about it: you replace the code that is not memory safe with code that is memory safe. It is a lot of work, I will grant you that, but it is not as much work as I think some people have made it out to be. This is an industry full of talented people, full of software developers.
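To make the "memory safe" distinction concrete, here is a small illustrative Rust sketch (mine, not from the talk). Where an out-of-bounds read in C can silently return whatever bytes sit past the end of the buffer, safe Rust either hands you a checked `None` or fails loudly and deterministically.

```rust
fn main() {
    let buf = vec![1u8, 2, 3, 4];

    // Checked access: an out-of-bounds index yields None instead of
    // reading past the end of the allocation, which is what an
    // unchecked C read like buf[100] could silently do.
    assert_eq!(buf.get(2), Some(&3));
    assert_eq!(buf.get(100), None);

    // Even the plain index syntax is bounds-checked: buf[100] would
    // panic at a well-defined point rather than corrupt memory.
    // (Left commented out so this program runs to completion.)
    // let _oops = buf[100];

    println!("buffer contents: {:?}", buf);
}
```

That's the whole argument in miniature: the class of bug is eliminated by the language, not by programmer discipline, fuzzing, or review.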
We've got a whole bunch of companies worth billions of dollars, and a few worth trillions. If we wanna solve this problem for critical digital infrastructure, we can do it. We can do a pretty good job over the next five to ten years, and we can get some big payoffs even sooner than that. So I wanna address fuzzing and static analysis efforts. We have made a lot of progress there; it is helpful, and I think it is definitely a worthwhile investment, but that is not where ISRG's focus is. There are some downsides: it introduces a lot of overhead, it's not applied consistently, and ultimately it doesn't solve the problem. And this is what happens, these couple of slides: these are vulnerabilities from the two most popular cell phone operating systems. Every device in this room suffers from at least one of these, actually at least half of these, and these are just from the past couple of months. These are coming from teams that are well resourced. They are fuzzing and they're doing static analysis. They are the best in the business when it comes to trying to write safe C code, and they cannot do it, because it cannot be done. And it's personal, right? How much stuff is on your cell phone about yourself? You wanna believe that it's secure. It's not, really. And it's all because of a problem that we already know how to solve. We just have not done it; we have chosen not to do it, not to prioritize it. So our memory safety project has two goals: make the internet's most critical software memory safe, and change the way that people think about memory safety. Our role is to come up with strategies for how to do this. Like I said, it is not easy; it is a lot of work, and you've gotta be smart about how you do it. We wanna facilitate and coordinate engineering as needed. Our engineers don't do most of the work; we find other talented people and maintainers, and we set them up for success in doing this work.
We can do the fundraising, and we can talk about this work. So how do we identify risk and decide what we're gonna try to fix? These are the four criteria that we boiled things down to when deciding what the most important stuff to work on is, and we're not talking about hundreds of projects; we're talking about the top ten, the top dozen projects that we should focus on. So: very widely used, on nearly every server or client, and I mean nearly every one; on a network boundary; performing a critical function; and written in languages that are not memory safe. If you have a piece of software that ticks all these boxes, it is going to get attacked. It is a target; it either has been attacked many times or it's going to be. So this is how we identify what we wanna focus on. And then we take that list and we look for opportunity. Is it a library or a component that can be used in a lot of different projects? Can we efficiently replace key components with existing memory safe libraries? Are funders willing to fund? If no one's willing to pay for it, that makes it harder to do the work. Are the maintainers on board and cooperative? So we look for opportunity within the set of software that we've identified as the most at risk. I'm gonna talk a little more about modular improvement, because this is really important. For example, one of our projects is a TLS library. If we can get a great TLS library that is memory safe, we can use that over and over again, in the most critical software and in software that's not quite so high on the priority list. So we're looking for these modular improvement investments. For most projects it's not practical to just start rewriting them from scratch; you're not gonna get where you wanna be on a reasonable timeline. But you can come into a project, like we did with the curl project, where we swapped out OpenSSL for the Rustls TLS library.
We swapped out their custom HTTP code that was handling all the networking for a memory safe library called hyper. So for a relatively small investment, you can build curl today with almost entirely memory safe networking, and that's a very ubiquitous utility. So we're big fans of modular improvement, and we wanna invest in libraries that can improve security across a lot of projects. We really wanna work with maintainers; this is key. You can't always do that. For some critical software we've gotta move forward, and the maintainers, for whatever reason, are probably not gonna be that helpful. But whenever possible, we really wanna work with maintainers, who bring a lot of valuable knowledge to the table. And if you're gonna move from code that's not safe to memory safe code, the ideal pipeline for that is just a software update: someone sits at the computer, hits update, and the unsafe code gets replaced with safe code. If you have to convince people to change from one program to another program, it's more work and adoption is slower. So maintainers are great, we love to have them involved, and we love to fund them if possible. That helps them feel like we're not walking in and asking something unreasonable, if we say we'll financially support you to help make these important changes. And we need to build trust. When we approach maintainers and say, you know, we'd love to see your project, your 20-year-old project that's entirely written in C, move to memory safe code, that's a scary ask, right? It's a new language, it's a ton of work. I can certainly understand a lot of hesitancy about that. So we need to build up trust, and we're gonna do that by having plans that make sense. We're gonna build up a corpus of success stories over time, so we can say: yes, I know this seems scary; here's how we did it with another project that is very similar to yours, and we can do it piece by piece.
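The curl swap works because TLS and HTTP sit behind narrow interfaces, so a backend can be replaced without touching the callers. Here is a toy Rust sketch of that design idea; every name in it is invented for illustration, and nothing here is curl's, Rustls's, or hyper's real API.

```rust
// Toy sketch of "modular improvement": the application programs
// against a narrow trait, so an unsafe backend can be swapped for a
// memory safe one in a single update. All names are invented.
trait TlsBackend {
    fn name(&self) -> &'static str;
    fn handshake(&self, host: &str) -> String;
}

// Stand-in for a legacy C library wrapped in unsafe FFI.
struct LegacyTls;
impl TlsBackend for LegacyTls {
    fn name(&self) -> &'static str { "legacy-c-tls" }
    fn handshake(&self, host: &str) -> String {
        format!("handshake with {} via {}", host, self.name())
    }
}

// Stand-in for a memory safe replacement in the spirit of Rustls.
struct SafeTls;
impl TlsBackend for SafeTls {
    fn name(&self) -> &'static str { "memory-safe-tls" }
    fn handshake(&self, host: &str) -> String {
        format!("handshake with {} via {}", host, self.name())
    }
}

// Callers only see the trait, so swapping backends changes no caller code.
fn fetch(backend: &dyn TlsBackend, host: &str) -> String {
    backend.handshake(host)
}

fn main() {
    println!("{}", fetch(&LegacyTls, "example.com"));
    // The "software update" pipeline: same call site, safe backend.
    println!("{}", fetch(&SafeTls, "example.com"));
}
```

This is why a good modular component pays off repeatedly: every project that already codes against the interface inherits the safer implementation for free.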
We're not asking you to start over today from scratch. So building up success stories, and the way we talk about them to help build trust, is gonna be really important. Here's a list of the things we're working on right now. Rustls is a TLS library that can replace OpenSSL in many applications; I think that's one of the most important things we could do. Apache httpd, very popular. The Linux kernel: so, I'm a pretty ambitious, optimistic person. Initially, when I started this project, I left the Linux kernel off the list because I thought, that's too much, I don't know if we're gonna pull that off. But Miguel Ojeda and Alex Gaynor somehow got it going, and we're pretty close to having the ability to build kernel modules in Rust in the Linux kernel. So hats off to them, that's incredible. And we're really proud that, with support from Google, we now have Miguel Ojeda working full-time through Prossimo on the Linux kernel. DNS and NTP, all these things check the boxes: they are ubiquitous, they're on a network boundary, they're performing critical functions, and they're written in unsafe languages. If we can move even just these things, we're gonna make a huge difference in the fundamental security of the internet. Until recently this wasn't possible; we didn't have great systems languages to replace C. That was probably the biggest barrier, and now we have that option. The stakes are getting higher, and people are becoming more security conscious and recognizing the value. So I think the enthusiasm is building, the willingness to do the work is there, and we have the technical options. I include this slide to make it clear that we don't mean to fault people for having written a lot of C and C++ in the past; you really had no choice. But we have choices now, and we need to make those choices. We need to change the way that people think about this stuff. If you ask great engineers today to go set up a proxy to handle a bunch of traffic, right?
A really normal thing to do is to take Apache or nginx or something like that and spin it up, and now you can proxy a bunch of traffic. But you just put millions of lines of C code on the edge of your network, handling a bunch of unknown traffic from the internet. That's a dangerous thing to do; it is provably dangerous, we have so much evidence, but that is the norm today. That is where we are today, and that has gotta change. My hope is that five or ten years from now, doing that is seen as almost irresponsible, right? That's where we have to get. So yeah, it doesn't have to be like this. It's just work; let's do the work. So thanks for having me, and my colleagues Sarah and Daniel are here; come talk to any of us about this stuff if you're interested. Thanks again. Thank you, Josh. A couple of things: obviously there's a theme here. There are reasons to be concerned, but there are, as he said, ambitious and optimistic people doing the work. We all depend on it. Get involved, get your developers involved, your maintainers involved, your companies involved. Thank you to all of you for all of the work that you're doing. We did move sessions back 10 minutes, so sessions are not gonna start until 11:25 now, because we're running a little late today. Save the date for next year: we're back in Lake Tahoe, November 8 through 10, so we're sticking with the November timeframe. Again, enjoy the rest of the event. Hope to see you tonight, 5:45 if you wanna be part of the group photo. Thanks.