My name is Kate Stewart, and at the Linux Foundation I've been focusing on how we can make embedded systems dependable. SPDX is one aspect of that, and you'll see why, but making these systems dependable is something we're all going to have to care about going forward. With the security threats you've been hearing about emerging every week, there are implications for critical infrastructure systems. I've been studying these and wanted to share some of that information with you today. Software development today is based on open source. We've been seeing this pretty much since 2018, but we need to look at some of the trends to understand what this is going to mean for critical infrastructure systems in the years ahead. First trend: open source is increasingly part of software products. From the SCA (software composition analysis) tool reports and the studies we've been doing at the Linux Foundation, we've seen an increase in the number of code bases that contain open source components, as well as an increase in the percentage of the lines of code in those products that are open source. We've also seen an increase in the number of components per application, and most of these come in as dependencies of the software being included. The growing use of containers is a factor here as well. The next trend: open source is increasingly part of embedded systems. For embedded software, the data is hard to find; there aren't nice statistics everywhere to pull from. But from some of the studies I've managed to find that are public, about 69% of embedded systems are running Linux, and when you're running Linux, there are other open source components there too. And as you can see from here, these systems are getting into some of the critical infrastructure areas.
The other thing that's important to note is that IoT, even in 2019, was a large part of the embedded ecosystem, and there are some interesting challenges there. The next trend: according to the UN, there are about 8 billion people on this planet as of the start of this month, and by 2026 we're heading towards two IoT devices for every person. The study published by Arnholst in August 2021 forecasts that the number of Internet of Things devices will almost triple, from 8 billion in 2020 to 25.4 billion in 2030. The other interesting thing in that report was the list of major industry verticals that each already have more than 100 million IoT devices: electricity, gas, steam and air conditioning; water supply and waste management; retail and wholesale; transportation and storage; and government. Many of these are industry verticals that form part of several countries' critical infrastructures. Another trend we've seen is that critical infrastructure cybersecurity awareness is growing in the industry, and the problem is pretty much worldwide. We saw this in the US last year, but as you can see, it has shown up in other countries as well, such as Brazil and India; it's not confined to any one country. We've got vulnerabilities, and we've got people trying to exploit them because they can make money by doing so. Figuring this out is going to be a challenge for all of us. And if you look at the other organizations with safety-critical applications that also feed into these critical infrastructures, that's where vulnerabilities are being found, and that's where we're going to need to keep the supply chain solid, because people's lives are at stake.
Now, one of the things I was really excited about this summer is that Japan's NISC released its Cybersecurity Policy for Critical Infrastructure Protection, an action plan that identifies Japan's critical infrastructure areas and systems and brings focus onto this important subject. Fortunately for me, they translated it into English. The critical infrastructure industry sectors it covers are the ones listed here, and you'll see that a lot of them involve embedded systems. What I also found very insightful in this report was its emphasis on the relevant safety factors that need to be taken into account alongside the security remediations. These safety principles mean looking at the applicable standards and making sure that even after you fix something, it's still safe to use. For critical infrastructure, this is going to be key, because bugs aren't going to go away. Depending on the sector and the application, different standards and risk levels apply, but there are underlying commonalities across these standards that we can all build on: all software being used needs to be known, tested, and managed over time. You don't just do it once and you're done; there's a time dimension to take into account. Infrastructure doesn't change every day; some things are in place for 20 years or longer. This brings us to our final trend: over the last couple of years, open source has been showing up in safety-critical applications, from automotive to civil infrastructure to aerospace to energy. We see organizations forming around common code in these market verticals. This trend started several years ago with Automotive Grade Linux, and possibly even Carrier Grade Linux before that, now that I think of it.
And more recently, we're seeing a lot of focus in the Civil Infrastructure Platform, LF Networking, LF Energy, and a variety of other embedded projects supported by Yocto. So we have common software building blocks, like the Linux kernel, being used and reused in different ways pretty much throughout the ecosystem. This allows us to share efficiencies, understand security, and share security fixes and remediations, so there are positive directions here. But because of the safety-critical implications and the national regulatory oversight that's emerging, NISC's Cybersecurity Policy for Critical Infrastructure Protection points out useful factors to consider, and it's a good starting point for all of us to understand more. So what is this going to mean for the evolution of open source software development? Since critical infrastructure tends to have safety considerations associated with it, safety standards provide a framework and factors to consider in the analysis of a system, both the hardware and the software that provide a specific safety-related function. Since software is more and more composed of open source components, there are new challenges for open source software development as it moves into these areas. To be clear, as Dan pointed out, closed source has the same challenges, if not more. However, companies with proprietary offerings have in the past had the resources to fund the safety-level analysis and to follow the traditional methodologies the standards expect. For software that is being integrated from open source pieces, some of these elements have been missing, and this is what we need to focus on solving. So the challenge is: how can we evolve the open source ecosystem to meet the demands of being used in critical infrastructure, where there are regulatory requirements and where there are safety-critical implications?
There are some initial steps on this journey, though, that we can start today. The first step is to know exactly what you're using. Being able to automatically understand all the components of your system is going to be key, and this is where SBOMs come in: being able to have, automatically and at your fingertips, exactly which components are sitting in your deployed products. That is key for security and essential for doing safety analysis at scale on these modern systems. In the last year, the focal point of SBOM definition work in the US has moved from NTIA to CISA; this is about defining what an SBOM is, and the multi-stakeholder meetings are continuing. There are weekly meetings, and the JPCERT team is part of these discussions; I've been in several meetings with them as well. But for us to get to scale, we're going to need international standards, and sharing software metadata efficiently through international standards is key. SPDX is a language for conveying software component information, and it lets you scale down to the source file level, which is where we need to be for embedded. Knowing exactly which files are there or not there will tell us whether we have to go and do a remediation in the field; the component version is a good starting point, but it's not sufficient. OpenChain has also been asking for a software bill of materials, an SBOM, as part of what you share between organizations to build trust, and it provides guidance on the processes for working with and sharing this information. In November 2021, SPDX became an international standard, and I would personally like to thank the Japanese ISO reviewing team, because they gave us the most detailed feedback and really helped us improve the specification as we went through ISO.
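To make that file-level granularity concrete, here is a minimal, hypothetical SPDX 2.2 tag-value fragment; the package name, version, file path, and IDs are illustrative only, showing a package, one of its source files, and the relationship tying them together:

```text
SPDXVersion: SPDX-2.2
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: example-firmware-sbom

PackageName: zlib
SPDXID: SPDXRef-Package-zlib
PackageVersion: 1.2.13

FileName: ./zlib/inflate.c
SPDXID: SPDXRef-File-inflate-c

Relationship: SPDXRef-Package-zlib CONTAINS SPDXRef-File-inflate-c
```

With file-level entries like this, you can check whether a vulnerable source file actually shipped in a given image before deciding on a field remediation, rather than relying on the package version alone.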
There were several nights and weekends spent addressing their feedback, but it was really good, detailed feedback, and we made sure we could address it. OpenChain is another international standard, and in the last year it has started building up industry-focused special interest groups; we've had automotive and telecommunications so far. Telecommunications and transportation are areas that are considered critical infrastructure, and I've been talking to Shane and hoping to see more of the critical infrastructure verticals start to meet and share best practices. Sharing best practices within a critical infrastructure sector is going to be key to adoption, because you work with your peers, and you understand the same language and the same concerns. So I'm hoping Shane will be working on getting more of these special interest groups formed, and we can continue to collaborate that way. The next step is knowing how your software is built. From a security perspective, we've learned in the last few years that knowing your dependencies is essential, as supply chain attacks are starting to show up. The industry is also waking up to the fact that it's key to understand how software is built. Safety standards have expected you to know how your software is built for the last couple of decades; the information is there, and the best practices are there, they're just expressed in a form where we sometimes need to bridge a language gap. The concepts, though, are solid. So, LF projects are focusing on building clarity here. SPDX has had a way of tracking dependencies down to the source level for six years now; there's a rich set of dependency relationships, and we're evolving to the 3.0 version of the spec right now, adding an optional build profile to bring further clarity to how software is built. There is also a security profile being incorporated, and a usage profile.
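As a sketch of why machine-readable dependency relationships matter, the hypothetical helper below walks SPDX-style (source, relationship, target) triples to find every transitive dependency of a component; the relationship type and package names are invented for illustration:

```python
def transitive_deps(relationships, root):
    """Return the set of all transitive dependencies of `root`.

    `relationships` is a list of (source, relationship, target) triples,
    in the spirit of SPDX Relationship entries.
    """
    # Build an adjacency list from the dependency relationships only.
    graph = {}
    for src, rel, dst in relationships:
        if rel == "DEPENDS_ON":
            graph.setdefault(src, []).append(dst)

    # Depth-first walk; `seen` also guards against dependency cycles.
    seen, stack = set(), [root]
    while stack:
        node = stack.pop()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen
```

Run against an SBOM's relationship list, this tells you every component a vulnerability in one package can reach, which is exactly the question that comes up when a supply chain advisory lands.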
There is a team here in Japan that has been working on helping us define what it means for risk analysis when you're using software in your system, and on carrying some of that metadata as well. Another project working on this is the Yocto Project. As Dan was saying, we need to have this automated. Well, Yocto has automated it: with a one-line config change in your build configuration, you can automatically generate SBOMs for your build toolchain, for the libraries it builds, and for your final image. It all happens behind the scenes, and it's there today in embedded projects that use Yocto; it's a one-line change you have to turn on. That's where we need to be across the ecosystem, and that's the type of change Dan was referencing: we need this to be behind the scenes, and we need to make it easy for developers to just have the right things happen. Another project working in this area is obviously OpenSSF; Dan just talked about Sigstore for provenance, and there's also the GitBOM project for having a verifiable component dependency graph, which is also useful as a cross check. The last project I want to quickly touch on here is Zephyr. Zephyr is an RTOS to use when Linux is too big, and it is able to automatically generate software bills of materials for the sources and the built image, with the linkage between them, again as the result of one command. So if you're using Zephyr today, with one command you can have these SBOMs coming out automatically. The next step is that you need to be able to reproduce your builds. Security issues are being discovered all the time, so if a vulnerability shows up 20 years from now, can you fix it? Critical infrastructure needs to be maintainable and reliable over time, and so there is a need to be able to rebuild an executable image at a future time in order to apply a fix.
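To give a sense of how small the enablement is, here is a sketch of the Yocto config line and the Zephyr commands as documented by those projects around the time of this talk; treat the exact names as something to verify against current documentation, and note that `<board>` and `<app>` are placeholders:

```shell
# Yocto: enable SPDX SBOM generation by adding one line to conf/local.conf
INHERIT += "create-spdx"

# Zephyr: generate SBOMs for a build using west
west spdx --init -d build          # prepare the build directory before building
west build -b <board> <app> -d build
west spdx -d build                 # emit SPDX documents for sources, build, and image
```

In both cases the SBOM generation rides along with the normal build, which is the "behind the scenes" property the ecosystem as a whole needs.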
Some of the LF projects that have already been focusing on this aspect are the Civil Infrastructure Platform, which has extended the reproducible build support in Debian and extended long-term support for bug fixes on certain core components, and the Yocto Project, which is now capable of reproducible builds for all of its packages. With that one-line SBOM change, having SBOMs plus reproducible builds is probably the best known practice today for helping to secure the infrastructure. Systems today are also increasingly incorporating AI-trained models, so SPDX is working on an AI and dataset profile that will let us summarize this information effectively and share it, so we can reproduce the trained models too. Just as you have to be able to rebuild your images, you have to be able to rebuild your models by retraining them; there could be vulnerabilities and hazards coming out of those trained models, and we're going to need to be able to address them in future safety analysis as well. So we're heading there. The Civil Infrastructure Platform has been exploring for several years now how to satisfy some of these infrastructure requirements using open source systems built on Linux, and as you can see, the build environment is one of the key aspects, as are safe and secure updates, security itself, automation of testing, and a long-term support strategy. These are things projects in the Linux Foundation are focusing on now; we just need to be doing it more widely. The next step that I think we can all take today: start hardening the most important projects first. The criticality score is work done with Harvard LISH to identify the projects Jim was talking about earlier. There are 80 million-plus projects out there, I think it was said, so which ones are important?
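One way to see what "reproducible" buys you: build the same sources twice, independently, and check that the resulting artifacts match bit for bit. A minimal sketch of that comparison (the function names are mine for illustration, not from any of the projects mentioned):

```python
import hashlib

def artifact_digest(path, chunk_size=65536):
    """Return the SHA-256 hex digest of a build artifact on disk."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def is_reproducible(first_build, second_build):
    """Two independent builds from the same sources should yield
    byte-identical artifacts if the build is reproducible."""
    return artifact_digest(first_build) == artifact_digest(second_build)
```

The same digests can be recorded in the SBOM at build time, so that 20 years later a rebuilt image can be verified against what was originally shipped before a fix is applied in the field.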
And then, how can we get them hardened first? Well, the obvious one is Linux, and there's a lot of work going on there. Because it is so common in embedded systems, it is a good place to make sure we are doing the best we can. Certainly it is being used under different names: it may be called Android, Debian, Red Hat, Wind River, any of these distros, but underneath is the Linux kernel, and making sure we have the best practices for it is going to be key. Among the things we're doing to focus on the dependability of Linux, we've had a long-term support system in place for several years that's improving the security, maintainability, and reliability of Linux. Every year there's a new LTS declared, and some of them are longer-term; the Civil Infrastructure Platform is looking at how to extend that support even longer, and there are good challenges there. Part of the challenge is making sure everything is upstream, so we've been doing work in the real-time project to finish getting all the PREEMPT_RT patches upstream so there's nothing out of tree. That's almost done now, and it will help improve the maintainability of infrastructure and embedded systems that have real-time considerations. The KernelCI project is focusing on improving the testing and reliability of kernel images, and ELISA is focusing on how to bridge between the safety analysis world and the open source development world. Our challenge is that critical infrastructures need an operating system for complex algorithms and software that is suitable for use in safety-critical systems. There have been some historical gaps between the processes used in open source and what's documented in the safety standards, so ELISA's mission is to define and maintain a common set of elements, processes, and tools to help make Linux-based safety-critical systems amenable to that certification.
Because we have communities that can't quite talk to each other right now, we have to build that bridge, and building the bridge is a step-by-step process too. So Linux is certainly there in the most popular places, but sometimes it's just too big, and we've got other LF projects focusing on safety considerations as well. Zephyr, which we talked about earlier, has a safety committee focused on going through IEC 61508 certification. The seL4 project is a formally verified microkernel. Xen has a special interest group working on functional safety, making the hypervisor ready for use in mixed-criticality systems. And likewise, ACRN also has a focus on safety. Now, with the Zephyr project, since we started it we wanted to be able to go after safety-critical uses as well as security, and we've been putting the processes in place, learning from Linux, to make sure we have that. We've actually hired a neutral functional safety manager for the project to make sure we can get ourselves ready for IEC 61508, and he's helping to build that bridge. Zephyr is also focused on adopting security best practices; any time we learn about one, we try to apply it, and we were one of the earlier achievers of the Gold badge from the badging program as well. For the other most important projects, though, how do we figure out what their risk is, and how do we use that? Well, there's the Best Practices badge I just referred to, there's the Scorecard project, there's Supply-chain Levels for Software Artifacts (SLSA), and there's the Alpha-Omega project. So we've got various initiatives in OpenSSF that are going to help us understand risk, score it, and hopefully work on improving and hardening these projects as we identify them. Next step: let's establish reference systems.
Modern open source development is usually based on developers finding something similar to what they want to use, copying it, and tweaking it until it achieves their goals. By combining open source projects together, without NDAs, into reference systems with configurations for different criticality levels, and documenting them, we will be providing new starting points. This is one of the goals of AGL; as you've seen from Dan, they want to have these reference points, and it's a very good model for us to start building from. Similarly, the Civil Infrastructure Platform is also putting together reference points, LF Edge has its various blueprints, and LF Networking has blueprints. Putting these references together and making sure they follow best practices gives us things we can build on, that others can then build from, with a better chance of getting it right. So this year the ELISA Project formed a new systems working group. Our goal in this group is to work with these open source components and create mixed-criticality example systems. In June of this year, we actually had Linux, Xen, and Zephyr hooked together, interacting and configured in a mixed-criticality setup, and we're working with AGL to get AGL and Yocto integrated as part of this whole system. This is work in progress; if people are interested, please come talk to me about it later. The other step we can take today is education for open source developers, as referred to earlier. Making sure that developers understand security, safety, and systems engineering and its implications is a good start here. We've added a software engineering basics for embedded systems course, as well as the security courses that Jim and others have referred to, such as Secure Software Development Fundamentals.
These are all freely available for developers to take, and with ELISA, we're trying to make all of our learnings visible as well. There are virtual workshops, and the videos from those are all freely available on YouTube for you to catch up on if you were not able to attend, and we have periodic webinars deep-diving into special topics. If you're interested in learning more about the systems side of things, we will be having a mini summit here tomorrow, with a deep dive on the systems working group as well as the automotive working group. If this is a topic of interest, I'd encourage you to sign up, and we can take it from there. So as you can see, we're going to need a step-by-step evolution, not a revolution, to get the open source ecosystem working well with critical infrastructure systems. This presentation went over some of the initial steps we can take now, but more will be needed in the future. We don't know the full roadmap yet, but you can see there are things we can do now to make it better. Japan has tremendous expertise and leadership in effective systems engineering, and I'm hoping that as Japan's critical infrastructure systems evolve and adopt more open source components, they will share their expertise and lessons learned with the wider open source ecosystem communities as well. This will help improve software quality, reliability, maintainability, security, and safety for us all. Thank you. Arigatou gozaimasu.