My name is Kate Stewart and I'm a director here at the Linux Foundation, and I'm here to talk to you a little bit about open source in safety critical applications. This is a topic that ties together a lot of the areas I've been working on for the last few years, and hopefully it puts some of this into perspective, because open source is already being used in safety critical applications. We just don't have as good a visibility into it as we'd like, and we need to work on improving that over time.

For example, in autonomous vehicles and in-car systems, we're hearing about it more and more. There are a lot of devices being used there that have uses in other places too. Safety, as you can see here, is usually considered in the sense of avoiding accidents and avoiding hurting people, which is obviously what we want to do. But there are also safety considerations around the electrical system, and around the smart signaling and the infrastructure surrounding the vehicle. So safety is pretty much everywhere, and open source is literally in most of these systems; and as new people bring new ideas, more and more of it lands in open source, so we're going to need to figure out how to do the right level of analysis to make these systems safe going forward.

Another example is the recent SpaceX rocket launches. SpaceX is very public about already having Linux in various subsystems inside the Falcon rockets and in the Dragon capsule. So we know that open source is going up to space, and obviously we want to make sure it's getting there successfully.

And then, with everything that's been happening in the world right now: since the start of March there's been a group looking for open source ventilator solutions, low cost implementations to deal with the lack of supply. It started off that way, but it also targets low resource areas that don't have a lot of equipment. We want to make sure solutions are available, and all of these are open source ventilator systems. People want to help, and open source is a way they can help. The big challenge is making sure that what they build is used in a safe and effective fashion: not only is it open, but it's buildable, it can be supported, it's been functionally tested for safety, it's been reliably tested, and so forth. The spreadsheet I've got the link to here will give you the latest update from the group working on that. People want to be helpful, they want to be creative, and open source gives everyone a way to participate. The challenge now is how we make it safe for use.

99% of code bases contain open source; at least, that's what Black Duck found when they did their audit survey last year, and in those code bases over 70% of the audited code was open source. So it's there, it's pervasive; even if it's not advertised as such, it is present. And Sonatype's report indicated they've been seeing double and triple digit growth for these components, and these components are what make up the functionality that's getting people so excited; they're not seeing a slowdown in sight. So open source is here with us, and our challenge is going to be how we can make it effective and safe.
This is particularly challenging in that the people running most modern digital infrastructure aren't able to accurately summarize the software that's running on their systems. They've gotten a container from someone, it does the job they need, they put it in, et cetera. Well, we can't have that type of environment in safety. You need to know all the pieces, because any one of them could be vulnerable: it could have a security bug, it could have a regular bug, it could have a hazard, say a timing hazard. We need to know all the pieces to make sure a system is safe, and we need full traceability and accountability for the provenance behind them. Pulling all of that information together is not something we're used to seeing a lot of in open source, but having that detailed understanding of all these elements is key if we want to do the right level of analysis, make the safety claims, and be sure the systems we're putting out there are safe.

A lot of the safety standards have evolved over the last 30 to 40 years, and there's a wide range of them out there with different considerations, but at heart they're all trying to figure out how to minimize and mitigate systematic faults. They're looking at things from a system perspective, and that's the key. Open source comes as projects, but the products built from them are used in systems. So we need to figure out how to get an accurate summary of all the information, and then have the right level of evidence that it's known, it's tested, and it's managed. That's what these standards are looking for, which is not an unreasonable goal; the challenge now is how we map that onto working with these open source projects.

Closing that gap is going to be key, and the first step, obviously, is to improve software transparency. We need to be able to very quickly and accurately know what software is being used, in a standardized software bill of materials. If you've got a system, you need that information at your fingertips, without having to go ask someone, or even worse, doing a lot of forensic digging to check out what's in that container you're running this application on. We can't do that in a safety critical system; we need accurate information, summarized and available. We need to know what versions are running and how the components interact with each other, and a software bill of materials at least lets us start to do that.

A software bill of materials is a formal record containing the details and supply chain relationships of the various components used in building software. These components include libraries and modules; they can be open source or proprietary, free or paid for, and the data can be widely available or access restricted. It's not really prescriptive here: anything that makes up a system, we need to be able to articulate in the bill of materials. We've had this discipline in manufacturing and hardware for years; effectively, it's the ingredient list on a cereal box. This information is key, and there's been an initiative from NTIA in the U.S.,
and I think they've been working with the JPCERT folks, to clarify what is in a software bill of materials and how we can make it more effective going forward. And what's the minimum viable set? There's everything we might want to capture, but what is the minimum we need to capture? Any organization concerned about supporting their software products, internally as well as for their customers, should have an SBOM, whether or not the product is safety critical. For the safety critical space it's pretty much a prerequisite. But even if you're working on an application where you don't need to care about the safety critical aspects, you want your bugs to be fixed and you want your vulnerabilities to be addressed, so you're going to need an SBOM for those reasons too. In some cases it's contractual, part of the negotiated terms with a supplier, like between a tier one and a tier two; there are supplier relationships where you've got to provide one. In other cases you'll want an SBOM to comply with the legal obligations of the open source licenses and any regulatory obligations. And then from the technical perspective, this is where you want to know what your supply chain risk looks like: are you using or writing a component that has a potential vulnerability? And in the case we care about here, safety analysis, you need to know what's there to understand whether or not it's safe. You might be using it internally for asset management too. So an SBOM should be something you can use effectively in a lot of ways.

And there are several groups working as part of a multi-stakeholder process at NTIA. They're trying to define these things, and input from anyone in the world is welcome; it's not specific to one country or anything else. This group is literally trying to figure out what minimum viable is and make sure it's effective for the industry.

And when can you use a software bill of materials? Pretty much anywhere in the life cycle. When you're developing, you want to make sure you're getting the software as specified when it's built. After you've tested it, you may want the results and other certifications made visible and available. When you release it, you're obviously passing it down your supply chain to whoever you're releasing to, so there's a lot of information you may want to keep, and that's the typical point where a software bill of materials is kept today. But going forward, bugs can come in, and you may want to track that with the software; there's been an update, and you may want references to that. Pretty much anywhere in the life cycle of a piece of software, you may potentially want a software bill of materials, so we need to figure out how to generate them effectively and efficiently.

Why care? I guess I've said it already, but security is a prerequisite here: we need to know what's there to make sure it's secure. And we're seeing a lot of interest emerging right now in cybersecurity supply chain management, which is helping motivate awareness across the industry.
And, you know, people want to reuse open source; that's the heart of it. Fast time to market is achieved by reusing containers that make it very easy to deploy things into your system. But what's inside those containers is not always obvious. So software transparency has got to be there for us to do the analysis for safety, and it needs to be machine readable: you may know what's there from a spreadsheet here and a note there, but we need to get this stuff automated, visible, and available. We're seeing a growing awareness now with the regulatory authorities, driven by the cybersecurity threat, and this is also going to help the safety space. The FDA, and FERC, the Federal Energy Regulatory Commission, are the ones starting to say, hey, we need to have this software transparency in place. Some of what we're starting to see is directives emerging; some haven't made directives yet but have signaled they will, and these basically carry an implicit assumption that some sort of transparency, maybe a software bill of materials, is going to be in place.

So how are we going to do this? Well, at the Linux Foundation we've got some projects that are working towards helping. There's OpenChain, which is documenting the processes and norms for sharing software bill of materials information. Then we have SPDX, a common format for exchanging that information; we've taken it from the work we've been doing on open source licensing and we're extending it to handle these other use cases. And if you've got a format and you've got norms, that's great, but you kind of need tools too; making this easy for people to use is what's going to make it possible.

If we go into the OpenChain specification, which is now an ISO standard, you'll see in section three the expectation that there will be a bill of materials for each open source component from which the supplied software is comprised. So the expectation of an SBOM is right there at the heart of the processes. To facilitate accurate tracking through the supply chain, as an upstream or a supplier you're going to need to be able to say exactly what you brought in, and as you build things out and get towards releasing, you're probably going to have to create a software bill of materials to send down to your customers for products and services. You may want to make sure it's accurate, and you may want to feed it back to upstream open source projects too. So accurate tracking is going to be needed pretty much throughout, to have a full understanding of the provenance. And beyond the provenance of how this information moves through the supply chain, there's also the pedigree of how we're building things: what are we pulling together, what are the config options? Reproducibility of the builds, remaking the products: if you can do that, you really know everything that's in there. And having that pedigree information is going to be key for a lot of the safety standards as well. So this is all coming together to say, hey, we've got to start tracking this, we've got to start working on it.
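To make that concrete, here's a minimal sketch of what a software bill of materials can look like in SPDX's tag-value format. The product, component versions, and namespace are invented for illustration; a real document would also carry checksums, file-level data, and fuller license fields:

    SPDXVersion: SPDX-2.2
    DataLicense: CC0-1.0
    SPDXID: SPDXRef-DOCUMENT
    DocumentName: example-device-firmware-sbom
    DocumentNamespace: https://example.com/spdxdocs/device-firmware-1.0
    Creator: Tool: example-sbom-generator
    Created: 2020-12-01T00:00:00Z

    PackageName: device-firmware
    SPDXID: SPDXRef-Firmware
    PackageVersion: 1.0.0
    PackageSupplier: Organization: Example Corp
    PackageDownloadLocation: NOASSERTION
    PackageLicenseConcluded: Apache-2.0

    PackageName: zlib
    SPDXID: SPDXRef-zlib
    PackageVersion: 1.2.11
    PackageSupplier: Organization: zlib project
    PackageDownloadLocation: http://zlib.net
    PackageLicenseConcluded: Zlib

    Relationship: SPDXRef-DOCUMENT DESCRIBES SPDXRef-Firmware
    Relationship: SPDXRef-Firmware CONTAINS SPDXRef-zlib

The Relationship entries are what carry the supply chain structure: the document describes the firmware, and the firmware contains an exactly identified version of zlib. That combination of identity, version, and relationships is what makes the vulnerability and safety analysis described above possible.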
And so this is one of the reasons we actually updated the SPDX mission this year: to expand beyond licensing to cover provenance, licensing, security, and other related information, like the pedigree of how things were built. And then there's the usage information, which is something I know some of the OpenChain Japan working group want to start capturing. So we're extending SPDX beyond its original licensing focus to also handle security as well as the provenance and pedigree types of information, so we can actually track what is needed for safety analysis. And to make the tooling easier, an automation working group has formed as an umbrella, with a variety of open source projects agreeing to exchange information as SPDX documents. Some generate it during the build; some work by inspection after something's created, like Tern, which lets you look inside containers, and FOSSology, which lets you take in a package, helps you understand what the licensing is, and generates an SBOM document. And there are also tools from the SPDX project itself that help other tools, and let you consume and transform what's happening in this ecosystem.

So that's a little bit about the software transparency piece. What about the actual functional safety? I think the key for functional safety that's starting to emerge is that we really need to understand the interfaces, the quality levels, and the safety characteristics of these open source projects, because as people pull these building blocks together, they need to understand how they fit together and what they need to pay attention to. So right now we're looking at how we can move this forward, and there are projects at the Linux Foundation that are considering functional safety. We have the smaller footprints, like Zephyr and seL4. We have hypervisors, like ACRN and Xen. And then we have Automotive Grade Linux; all those automotive use cases are right at the heart of this. And from there, there's also the ELISA project, which is looking at Linux as a component on its own; these are the larger footprint types of things. So we have a fairly wide spectrum in the ecosystem that we're going to have to look at, and how we move our way there is going to be key.

So let's first look at ELISA; we're looking at both ends of the spectrum here, and ELISA is just Linux. What we're trying to figure out is how we can use Linux in these safety critical applications. I don't think it's any surprise to anyone in this room that Linux has grown to be one of the most important open source projects in the world. Last year, over 69% of the embedded systems market was using Linux, and the embedded systems market is where we're going to be finding these safety critical applications. So, like I say, it's already there, and what we need to do is figure out how to make Linux safe. Since it's so pervasive, a group has formed around working on this problem together. At the heart of it, to assess if a system is safe, you have to understand it; see my earlier comments about transparency. That's one aspect. But if your system's safety is going to depend on Linux running in it, you need to understand how Linux is being used in your system's context.
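As a rough illustration of what "understanding how Linux is being used" can mean in practice (this is a simplification, not the ELISA methodology itself, and the application name is hypothetical), you can start by enumerating which kernel interfaces your application actually exercises:

    # Summarize the system calls made by a hypothetical safety application,
    # following any child processes it spawns.
    strace -f -c ./my_safety_app

The resulting syscall summary is a first approximation of the subset of the kernel surface that participates in your safety argumentation; interfaces and drivers your configuration never exercises can be scoped out of it.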
Again, it's at the system level that the argumentation for safety is made; Linux is a component in it. What we need to do is help people understand the pieces they need to pay attention to, and accurately summarize them, so that everyone's not trying to reinvent the whole thing by themselves each time. As I've said, and I can't stress it strongly enough, the difference between Linux development for safety critical use and for a general application is just how you are using it. You need to understand your system, understand the Linux interactions, and then make sure your system is using Linux through selected properties where you've ensured the quality exists. If you're not using a driver, or you're not using some feature of Linux, it doesn't participate in the safety argumentation. How it's configured and what you're using is what you have to focus on.

And Linux has a lot of the things that the safety standards are looking for. The fact that it has been in continuous development for over 29 years says a lot in a lot of people's minds, and the fact that it's used so widely means there's been a lot of testing going on, a lot of debugging, a lot of bug fixing. There is a continuous improvement process in place: when issues develop over time, it's continually being improved. The maintainers get together once a year, and there are mailing lists for discussing these topics and how to improve the systemic side of things. So by that definition, the evidence is out there already for the process quality and the process improvement quality, which are things the standards are calling for. And the analysis done by some people with more experience in the safety standards than I have suggests that safety integrity level 2 has probably been met for selected parts and properties. Now the challenge is: how do we convey that coherently to the people who are doing the safety analysis?

So ELISA is trying to tackle this, and the mission statement for ELISA is to define and maintain a common set of elements, processes, and tools that can be incorporated into specific Linux-based safety critical systems and make them amenable to safety certification. That is what we're trying to do with the ELISA project: closing the gaps between the kernel development community, or any open source community for that matter, and what the safety standards are saying. They're talking two different languages, and we need to figure out how to reach each other and convey concepts in terms each other understands.

So we've got two groups inside ELISA; there are more than that, but these two in particular are worth understanding. The kernel development process working group is looking at what the safety standards are asking for, then identifying the evidence that's in Linux and creating collateral that illustrates what's there and how it addresses what the safety standards are looking for, or else identifying a gap that people can work towards filling. The safety architecture working group is also looking at this problem, but from the inside: the interfaces and the functionality within Linux.
How do the pieces of Linux work, and what is a common summary of that information that can then be used in further analysis? These two pieces together are helping to build up the bridge here, and eventually we should be able to close the gap. Now, as I've said before, Linux needs to be looked at from a safety context, which means from a system context, and we need to figure out how we can prove out the path. So there are two working groups that have already spun up in the last year in the ELISA project. One is the Medical Devices Working Group, which is busy looking at the OpenAPS reference system, and we're starting to look at the open source ventilators as well. What we're trying to do here is use STPA, System-Theoretic Process Analysis, to successively refine the system down until we get to Linux. And when we get to Linux, we know which interfaces are being called from the system around it, and which ones we have to pay attention to. At that point we'll turn this information over to the safety architecture working group, to help us understand the characteristics there so we can get a full picture. The OpenAPS project is an artificial pancreas system, and everything has been done in the open; it's a volunteer effort. So we're able to look at all the code. It's based on a Raspbian distro, so it's got Linux in it, and we can do the analysis from the system level down to the pieces we need to see for Linux.

The other working group we've got on this is the Automotive Working Group. They're looking at a telltale monitor application, and they're basically figuring out which parts of the system are important and breaking it down as well. We'd like to spin up other working groups, in particular, I think, an industrial robotics type of group, and we'd love to find people who want to collaborate on analyzing some of these projects that are open. The key, though, is they cannot be under NDA. We need everything to be open so that we can do the right level of analysis and share our results with others. So if you've got applications you're interested in that have a safety element associated with them, and the code is available in open source, feel free to reach out and see if we can find others who want to work together on the right level of analysis.

Now, with this project, it is very important to understand the limits. With ELISA, our collaboration cannot make a system safe, okay? What we do want is that, as we come up with these processes and methods, you can apply them and produce your own documentation. We're not creating an out-of-tree Linux kernel for safety critical applications. There is continuous improvement: we want you to pick up those security updates; we don't want you to freeze on something, because bugs are going to happen. There are actually about nine changes per hour right now in the Linux kernel for new features, and about one change per hour for bug fixes and backports. So it is continuously improving, and in some sense the latest is the most secure and accurate. But we want to make sure we have systems in place so that when they're deployed, they're safe.
And there's a certain amount we can do upstream, but at the heart of it, the person making the product and claiming the system is safe is going to have to own the responsibility and the legal obligations and liabilities. So this project is going to provide a path forward, and people to collaborate with on the analysis, and hopefully we'll work on getting things upstreamed and healthier for everyone, because everyone wants bugs fixed. But it is not like, oh, it's all done; it's a journey. Once we're finished, we should have various assets in place for the processes, ideas for which kernels and features to use, and some assessments for you to look at to see how others have done it. We can show it's feasible for certain reference systems, and that it's usable by the people doing the integration. Some of the challenges, once we're effectively successful, will be knowing how to deal with this over industrial-grade product lifetimes, potentially more than five years, because products get deployed and your product doesn't suddenly become obsolete in five years; you're going to want to use it for 20, well, 10 at least anyhow. So we've got to figure out how to keep these systems safe over time. And we want to make sure the standards organizations and the safety community certification authorities understand what we've done, and have helped us participate in defining it, in a way they'll recognize and agree is matching the goals. Obviously, we want to help improve Linux too: it's had 29 years of improvement and we want to see it continue to improve for the years to come. And key collateral being supported by the various hardware vendors is important to us as well.

If you'd like more information about ELISA, I'd encourage you to join our next virtual workshop at the end of January or start of February; we're still settling on the date. In those workshops we go through different topics and you can deep dive into an area. There are weekly calls going on right now, and a monthly TSC call. We have a fully open mailing list, anyone is welcome to join, and we have some projects we're working on up on GitHub. So I encourage you to go explore, and reach out if you've got questions.

Now, switching from Linux to the other end of the spectrum, we've got Zephyr. Zephyr is another OS, meant to be used in places where Linux is just going to be too big, and there are a lot of those: on the sensors, on the actuators. We're going to need the same sorts of properties we want from Linux happening in Zephyr. For those who don't know Zephyr, it's an open source real-time operating system. We started the project in 2016, and we knew we wanted to go after safety and security goals, because that's what was missing. At this point we have a vibrant community associated with the project: over 800 developers have been active in the community, and over 100 are usually active each month. In terms of participation, I think Linux is nine changes per hour; Zephyr's about a change per hour, so we're getting 800-plus commits every month. It is licensed differently than the Linux kernel, it's under an Apache 2.0 license, but it has vendor-neutral governance in place.
The technical steering committee is what makes the decisions about the technical content of the code. And it's very modular. There are a lot of things similar to Linux: it's using Kconfig, for instance, and it's using devicetree, so people who are used to embedded Linux development tend to find it a pretty comfortable environment to work in. It's also using long-term support mechanisms like the ones Linux has, where, in this case every two years rather than every year, we cut a long-term support release, and then security fixes and bug fixes are added back into it. That's a lesson we learned from Linux, and we're continuing it with Zephyr.

At this point the architecture, as you can see, is fairly complete. It's modular, so you only compile what you need, which is what we need for these very tight footprints. It has a very good Bluetooth Low Energy stack, both host and controller, as well as mesh. It's got OpenThread. It has a wide range of communication technologies now, so you can use it to link into the IoT space, which is what you'd want, as well as a variety of interfaces into the hardware. Products are out there now running Zephyr; we're seeing it recently used for things like the Sentrius from Laird Technologies for monitoring safe distancing during COVID, and we can see it in wearables right now, as well as the other applications you see here. You can see we're heading towards wearables and smaller devices with this, and this is why making sure we have effective argumentation, effective analysis, and infrastructure for safety is something the project cares about a lot.

So last year we actually started our safety committee, once the code base was deemed mature enough, and we're initially targeting the SIL 3 level. We've already established coding guidelines. The initial target standard is IEC 61508, which, as you saw from earlier slides, serves as a base understanding for a bunch of other standards, and that was deemed to be the target by the governing board. So that's what we're working towards with this project, and we've got multiple activities going on for traceability and requirements formalization, as well as test coverage and tooling. So we're tackling this one in a more traditional sense that's more familiar to the safety standards, and we're already starting to engage with the certification authorities. Quality is obviously going to be a mandatory expectation here. We're using a maintainer hierarchy, like the Linux kernel does, but we have requirements such as two people must sign off on a commit, and we have a variety of other mechanisms in place to make sure the code is at a high quality before it gets committed to the repo; and if there's a bug, it gets regression tested out immediately.

So with 800-plus commits a month, how do we actually get to a state where we can do the right level of analysis for safety? Well, this diagram was initially created in 2015, so over five years ago now, and when we launched in 2016 we used it. The idea is that the development is happening here.
And this is where those 800 commits a month, about a commit per hour, are actually happening. Then every two years, we cut the long-term stable release. Now, a subset of that long-term stable is what we're going to call auditable. We're expecting people to use mostly the LTS for products; some may want to use the newer development tree, and that's completely fine, it's just whatever you need for your product. But the products may need to be certified, and we want to make sure we've got all the analysis in place for that subset. In this way we can keep the community dynamic going, with new features and new functionality, but have the scope restricted enough that we can go after the audits. We're taking this off of our next LTS, which is slated for 2021, and for the actual processes we're working in the safety and security committees and coordinating with the technical steering committee to make sure they can be applied across the wider base.

We are going to be following a V-model analysis in this project. So we're working out what the requirements are, in some cases reverse engineering and documenting things that aren't written down, and then working through the architecture for our system, as well as making sure we have evidence and traceability in testing. And we have the committers, and information about them, documented; it's a lot easier to understand who the participants are and where they're participating with 800 developers than with the 30,000-plus in the kernel over its history. So we've got visibility into this code, and our goal is to provide the evidence that maps between what's happening in open source and what's in the requirements, and shows that it's been tested and works.

So, of that big graph you saw, these are the components in scope: our APIs are going to be there initially, as well as the interfaces to the hardware, and then some key components like logging and the file systems and so forth. For architectures in scope, we'll be starting off with x86 and Arm, and we'll be extending scope over time. There's a subset of the POSIX APIs, for instance, that may show up in the next round of scope, and some of the crypto as well. So as people are interested in working on this, we're building up this evidence base; that's how we're approaching it here. This year we implemented project-wide coding guidelines to make things easier, there's tooling and processes for traceability that will be deployed in the next release, and the functional requirements in the safety scope are starting to be documented as well. So we're working a much more traditional path, coming up with the evidence that's being looked for in some of these existing standards.
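To give a flavor of what requirement traceability can look like at the code level, here's a hypothetical sketch using Zephyr's ztest framework. The requirement ID and the tagging convention are invented for illustration, since the project's actual traceability tooling is still being rolled out:

    /* Hypothetical requirement-traced unit test; prj.conf would need
     * CONFIG_ZTEST=y.
     *
     * Traces to: REQ-KERNEL-042 (invented ID): "k_uptime_get() shall be
     * monotonically non-decreasing."
     */
    #include <zephyr.h>
    #include <ztest.h>

    static void test_uptime_monotonic(void)
    {
        int64_t t1 = k_uptime_get();
        int64_t t2 = k_uptime_get();

        /* If this fails, the requirement above is violated. */
        zassert_true(t2 >= t1, "uptime went backwards");
    }

    void test_main(void)
    {
        ztest_test_suite(requirement_traceability,
                         ztest_unit_test(test_uptime_monotonic));
        ztest_run_test_suite(requirement_traceability);
    }

The point is less the test itself and more the machine-checkable link from a documented requirement to the evidence that verifies it, which is the kind of artifact the V-model and the auditable subset depend on.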
Now, if you'd like to learn more about Zephyr, by all means please check out the Zephyr Project website, where there's a variety of orientation material: how to contribute, the guidelines, and so on. We have a very extensive collection of documentation in this project already, and it's building up further. If you want to follow along or lurk, there are mailing lists you can subscribe to, and we have a Slack channel that's very active. So that gives you a feel for what we're doing with Zephyr.

So, in summary, to get us to where we can use open source in safety, we're going to have to have accurate software transparency. These components are going to need to be tracked, and we're going to have to get that all automated, so it's just happening in the background and we're not thinking about it. We're going to move our way there bit by bit, through some of the projects I've talked about. And we're recognizing that functional safety standards and open source development can coexist, but we have to become efficient at scale; by tackling this in these projects, we're hopefully setting out paths for others. We're also working with the certification authorities as well as developers to help shape the knowledge here. The keys are: you've got to understand the system you're building and understand the components; you have to focus on the interfaces and make sure you have the right argumentation for the requirements you depend on; and, of course, quality is going to need to be there in these projects right from the start.

So with that, I'll say thank you very much for your attendance. If you're interested in these projects, I encourage you to check them out, or feel free to reach out to me; I will be available in the chat for more questions. So thank you very much, and have a good day.