Welcome, and thank you for coming and for your interest in this topic. It's a topic that's gotten me quite interested over the last couple of years, because we really don't have good solutions out there. So it's a nice hard, meaty problem, and those are the fun ones. My name is Kate Stewart. I work at the Linux Foundation; I'm one of the Directors of Strategic Programs there, and I have a group of projects that I work on and work with to see if we can make things a little bit better. I want to start this off with a quick survey: how many of you write code that makes it into a product? I want to see what the audience is. OK, most of you. Good. Keep your hands up now. Keep your hands up if you know the code you wrote is bug free. OK, no hands are up. So I'm not even sure I should ask the next question, which is: keep your hands up if you'd bet your life on it. Because that's what safety-critical software is about: we need to make sure our code is such that we can trust it in applications where it is truly safety-critical. How many people recognize her? Cool. For those who don't, this is Margaret Hamilton. She was leading the team that did the Apollo 11 software, and she's effectively the first person whose software ran off the planet. And it's a result of some of the testing that happened there that we didn't have catastrophic problems during the launch. She gave the keynote at ICSE in Gothenburg last year, and I got the pleasure of listening to her and meeting her afterwards. The story she has to tell is really quite fascinating. This is from her slides from that presentation: basically, the software had to be ultra-reliable. It had to detect errors and recover from them.
And this is still at the heart of the problem for us right now in safety-critical applications: everyone was dedicated and wanted to do the best job they could. One of my first memories was my dad bringing me downstairs to watch men land on the Moon on a black-and-white TV in the basement. It would have been quite a different story if they had not taken the care in debugging that they did to make sure this software was safe, because there was an error. She told a story about one switch: she had her kid with her at the office while debugging, the kid knocked the switch, and they got this really weird error. They didn't know what had happened, and then they realized it shouldn't occur when the system was in a certain mode. So they had the switch very carefully marked: do not toggle this switch in flight. And sure enough, someone bumped up against it. But because they'd seen it in testing in advance, they knew how to recover from it. All that level of detail, really understanding what could possibly go wrong and making sure we can fix it, is key. Earlier this year, I was also fortunate enough to attend another keynote, by Dana Lewis. She's one of the co-founders of the OpenAPS project, the open artificial pancreas system; the hashtag is #WeAreNotWaiting. What they've done is take an insulin pump and a glucose monitor, and put a Raspberry Pi running Linux between the two, to create a feedback loop that better manages glucose levels. They've been collecting data, they're sharing the data, they're doing everything out in the open; all the source is available for this. It's a safety application, but Linux is being used here, and it's good enough right now for this purpose. It's part of that control loop.
And we're seeing that people are finding software available right now that they want to use to make their lives better. I've got the links here in the talk, and I'll be posting them. I highly encourage you to watch her video if you've got a family member with diabetes. She just came out with a book on Amazon as well; I've put that link there too. By setting up that feedback loop as hobbyists, they've managed to make this information available, and she's managing her own blood glucose levels, because she's diabetic herself. So she has a very vested interest in making sure this thing is safe, and it has genuinely improved the quality of her life. Every day, in some sense, this is a risk. But she gave a really good keynote, and it helped inspire me on this whole subject too. So I started thinking about the safety side more and more. And it's pretty clear we also have a bit of a culture gap kicking in right now. App and software developers are very much in the mindset of fail fast, fail early, fail often, iterate, iterate, make it better. That's where all the innovation is coming from, no question. When we're going for functional safety, though, we have a very rigorous process of defining a system, looking at things in the context of that system, and so forth. So there's a culture clash, and each domain doesn't know much about the other: you have people who are very, very specialized in the functional safety space, and you have open source. Part of the challenge we've got right now is how to start bridging that gap and getting people to communicate effectively. So these are the people we need to get in the room together: the certification authority people who really understand safety, talking to the open source people.
If you look at the Agile Manifesto, one side of it is pretty much what the functional safety people believe in, and the Agile side is pretty much what most open source developers I know believe in. How we get both sides to work well together is the challenge we're facing right now in the ecosystem if we want to use open source in functional safety spaces, because users are going to demand accountability if their lives are potentially on the line. The reason Dana's team is doing this themselves is that no medical device manufacturer is going to spend the money at this point in time, because of the liability issues of going through the full FDA certifications. And they want to use it today; hashtag, we are not waiting. So people want to use this stuff, and the question is, no one wants to be first. How do we get there? So, is open source compatible with the safety standards? The short answer is yes. However, there's a large number of things that potentially have to get done that people are not used to doing in the open source space. There's also a large number of safety standards with different requirements, open source is showing up in all of these spaces right now, and more standards are likely on the way. At the heart of safety is the V model: requirements traceability and understanding software in the context of a system is central to making sure your system is safe. You have to understand what's being used in what context. You have to decompose the system and verify that all the components interact properly, so that the overall story is safe, which is a lot of analysis. You start with the user story, what you're trying to accomplish, and work your way down. Well, a lot of open source projects are components in there; they're not the full story.
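To make the traceability idea concrete, here is a minimal sketch of the kind of check a V-model process implies: every requirement should trace down to an implementation unit and up to a verifying test. This is my own illustration, with invented requirement IDs and links, not tooling from any of the projects discussed here.

```python
# Minimal requirements-traceability check: every requirement must trace
# down to at least one implementation unit and up to at least one test.
# The requirement IDs and links below are invented for illustration.

requirements = {
    "REQ-001": "Glucose reading shall be validated before use",
    "REQ-002": "Pump command shall be bounded by a maximum dose",
}

# Links captured during development (in a real system, parsed from
# commit messages or documentation annotations).
implemented_by = {"REQ-001": ["validate_reading"], "REQ-002": ["clamp_dose"]}
verified_by = {"REQ-001": ["test_validate_reading"]}

def trace_gaps(reqs, impl, tests):
    """Return requirement IDs missing an implementation or a test link."""
    gaps = {}
    for rid in reqs:
        missing = []
        if not impl.get(rid):
            missing.append("implementation")
        if not tests.get(rid):
            missing.append("test")
        if missing:
            gaps[rid] = missing
    return gaps

# REQ-002 has no verifying test, so it shows up as a gap.
print(trace_gaps(requirements, implemented_by, verified_by))
```

The point of the sketch is the bidirectionality: the safety argument needs the links in both directions, not just a list of tests that happen to pass.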
They're being worked on in context, and we don't really have a lot of experience looking at these components in multiple contexts with all the analysis around them. So we're working our way through this. The key for open source is that we have to understand how it's going to get used, we have to define its scope, and tracking as much information as we can to prevent regressions is going to be one of the key aspects for us. (Just give me the signal if I go silent again. I'm not the most dynamic public speaker, I'm afraid; other people are much better than me, but this is something I feel very passionate about, so I really want to make sure people understand it and can figure out how to help here too.) There's a lot we can do with the tools available to us today, and there are a lot more tools we're going to need to make this seamless and to work with open source. Most of the tools out there right now are proprietary, and if you've got an open source project and you're trying to figure out how to get these regression checks going and keep problems from coming back in, there's room for a lot of innovation. So the key for us is going to be building on top of open source strengths: public code review, availability, and the feedback to improve quality that's happening all the time in open source. People are filing bugs; people are fixing bugs. But the elements that show up in the safety standards, who's reviewing, are they trusted, are they trained, do they know what's going on, that's the piece that's missing for us right now, from what I've seen. Anyone is free to disagree with me here and educate me, but this is what I've been finding from talking to a lot of people over the last year. So at the Linux Foundation, we've got a couple of projects that are also starting to focus on safety.
And they're all taking slightly different approaches. We're trying to figure out how to get best-in-breed stories going, and like I say, we couldn't find good examples out there, so each of these projects is trying to tackle this in its own space right now. I'm going to go through a little of how they're approaching it, so you can see the different types of problems we're looking at. The first one is Zephyr, which is much closer to a traditional RTOS. There are a lot of open source RTOSes out there right now, and for a small RTOS these are roughly the criteria: you want something that can support a safety-oriented architecture. These are things that go into sensors, very, very small devices, 8K of memory and up; definitely under two megabytes, where Linux won't fit. You want security, you want POSIX support, you want these types of characteristics. Of those operating systems, the ones with visible, explicit paths to safety certification are FreeRTOS, where FreeRTOS and Amazon FreeRTOS work through a proprietary derivative called SAFERTOS, and Zephyr, which has made a public statement of working toward an auditable LTS; we're basically working on segmenting and refining down what we're doing. Who in the room is aware of what Zephyr is? OK, I'm going to go fast; anyone who didn't put their hand up and wants to hear more, I'll be talking about Zephyr later this afternoon. But this is our Zephyr project, and we started it with the view that we wanted to go after safety and security targets. This slide has been around since the start of the project, in fact before the start of the project. What we were trying to do is figure out how to keep the pace of innovation that the community would bring, but then be able to transfer the code into something that's auditable.
And we've done this by taking the same type of long-term support mechanism the Linux kernel uses, where you freeze a code base. From that, we take a subset and work on hardening it and getting it ready to go through certification audits. That's what we're doing with Zephyr. Within that subset, we're focusing on even smaller parts and then building our way up over time. Quality is obviously one of the most important pieces here; it's the foundation. So making sure we have a very sound code base is one of the key things being focused on. The certification authorities right now are very familiar with certain guidelines like MISRA and other standards, so we are looking at applying what makes sense from MISRA, documenting what doesn't for this code base, and explaining why, so we can work with the authorities on this. But we do have challenges. MISRA is controversial to use in certain spaces; the standard is proprietary and costs money, and the tooling to check that you haven't had a regression is expensive as well. And we're an open source project. These sorts of things are challenges, and it's not just us; any open source project will have this type of challenge. So what we're focusing on is the deviations. Some of our members have access to these standards and are leading a lot of the work. But there are things that are potentially controversial, and it's going to come down to: which rules are we explicitly deviating from, and why? Do we have good reasons for doing it or not? The developers understand that this is a goal, and it's been a goal since the start of the project, so there are no surprises here.
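As an illustration of what documenting the deviations can look like in practice, here is a small sketch of a machine-readable deviation registry, with a check that every recorded deviation carries a written rationale. The rule IDs and rationales are invented examples, not real MISRA content, and this is my own sketch rather than Zephyr's actual process.

```python
# Sketch of a coding-guideline deviation registry. Every deviation a
# project claims should carry a documented rationale that can be shown
# to a certification authority. Rule IDs and text are invented.

deviations = [
    {"rule": "Rule-11.4", "scope": "drivers/", "rationale":
     "Memory-mapped register access requires integer-to-pointer casts."},
    {"rule": "Rule-21.6", "scope": "tests/", "rationale": ""},
]

def undocumented(devs):
    """Return the rules whose deviation record lacks a rationale."""
    return [d["rule"] for d in devs if not d["rationale"].strip()]

# Rule-21.6 has an empty rationale, so it would be flagged in review.
print(undocumented(deviations))
```

A check like this can run in CI, so a deviation can never be silently added without the accompanying argumentation.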
So what we're doing with Zephyr is we've identified some of the components in our stack, and we're initially focusing on those, working through the reverse engineering, the documentation, and the practices associated with that. Then we'll expand the scope out in stages, based on the members who are working on certain things, and we're working with the certification authorities to keep building out our scope so that we have the argumentation ready and available to be used. So we're starting small, with limited scope, and building out as we learn; that's the approach we're taking with Zephyr. The initial case is the single-core MCU case, and there are other configurations of Zephyr out there, including ones running as guests under hypervisors, and things like that. Eventually, over time, all of these use cases are going to need to be handled. So again, it's starting small and then working the complexity upward over time, and obviously the requirements will grow with these cases. On Zephyr's roadmap, we're about here right now, working on the MISRA compliance and making sure we've got commercial compiler support, because a lot of the safety standards require that certified toolchains be used, things like that. Then there's the next level of working our way up on the compliance criteria and the documentation, with the goal of being ready by our next LTS in two years. One of our members is trying to go on an accelerated path, so maybe that will bring things in sooner. If you want more information on Zephyr, I'll be giving a talk with more detail at three o'clock, along with Martí Bolívar; the two of us will be doing that one. And that's our website for the project, plus mailing lists and Slack channels if you've got questions.
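One way an open source project can guard against guideline regressions in its CI loop, sketched here under my own assumptions rather than as Zephyr's actual pipeline, is to compare the current static-analysis findings against a committed, reviewed baseline and fail the build only on new findings:

```python
# Sketch of a CI regression gate: fail the build only when the current
# static-analysis run reports findings not present in an accepted,
# reviewed baseline. The finding strings are invented for illustration.

baseline = {
    "src/main.c:42: Rule-10.3 implicit narrowing conversion",
    "src/isr.c:7: Rule-8.9 object could have block scope",
}

current_run = {
    "src/main.c:42: Rule-10.3 implicit narrowing conversion",
    "src/new_driver.c:13: Rule-11.4 cast between pointer and integer",
}

def gate(current, accepted):
    """Return (ok, new_findings); ok is False if anything new appeared."""
    new_findings = sorted(current - accepted)
    return (not new_findings, new_findings)

ok, new = gate(current_run, baseline)
print(ok)   # one new finding: fix it or add a reviewed deviation record
print(new)
```

The design choice here is that known findings don't block development (they are tracked in the baseline with their rationale), while anything newly introduced forces an explicit decision before merge.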
The other project I had on my list is Xen, and Lars is in the front row here; he'll be giving a talk a little later today about Xen, and he'll keep me honest right now. Xen has a slightly different starting point: Xen is the hypervisor that would sit in that end use case I mentioned for Zephyr. They're working towards getting Xen into avionics and defense systems, where they've been spending their time and doing their analysis up till now, and Xilinx has a similar stack aimed at automotive, so they've got different use cases they're looking at. But again, they're going after the same goal, trying to figure out how to take this hypervisor through. They've scaled their problem space down to some extent by focusing on Dom0 for small systems, so they've got about 50K lines of code to deal with, and I think they're working on this pretty much explicitly for the ARM ecosystem. They want to make it easily certifiable, and they want to go after ASIL B and other specifications. The estimate they've got is about five to ten man-years of effort from where they are now. They're focusing initially on the left side of the V model, the one I showed you first, and then refreshing the ARM port, so you'll be seeing that. There's a lot more interesting detail in Lars's talk later today; the slides are available online and the talk is at 2:10. They're also working on the DornerWorks and NASA stories, so if that's of interest to you, you might want to check out that talk. The next project I want to chat a bit about is ELISA, Enabling Linux In Safety Applications. This is a new project, and we spun it up at the Linux Foundation because our members wanted a place to collaborate on how to deal with Linux.
Because those other two systems are fairly small, they're more amenable to the traditional types of approaches. Linux is used by everyone; it's very pervasive, but it's huge, and the rate of change in it is enormous: the tip of the upstream kernel is seeing nine changes per hour right now, based on the 5.2 release, and the rate of patches making their way into stable is about one change per hour. Those are the stats Greg was telling me recently. So there's a tremendous amount of change, and a lot of those are security fixes and a lot are just bug fixes; they don't distinguish between the two. With that rate of change, and with security fixes in the mix, you're going to want to keep your Linux up to date, and you're still going to want to use it in a safety-critical application, something like autonomous driving, for instance. People want to use it in those types of contexts. So the challenge becomes: how do we do it? First of all, we're focusing here on understanding our systems. You need to understand Linux and how it's actually being used in that system to be effective. So rather than going from the details at the bottom up, it's looking top down at the analysis, figuring out what interfaces are being used, and then how we can take that to the next step. It really depends on the way we're using it: you have to understand your system, you have to have better tooling than we have today to understand what's going on, and then who's making changes that may impact you; what's your traceability? So there are a lot of issues we're going to have to look at here, tracing and improving the infrastructure for understanding the implications of applying patches. We're also going to be working with the compliance standards.
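To sketch what "understanding which changes may impact you" could mean in practice (again, my own illustration, not ELISA tooling): if you know which kernel interfaces and subsystems your system exercises, you can automatically flag incoming patches that touch the files behind them.

```python
# Sketch: flag incoming patches that touch subsystems our system depends
# on. Paths and patch data are invented; in practice the touched-file
# lists would come from something like `git log --name-only`.

safety_scope = {"drivers/spi/", "kernel/sched/", "net/can/"}

incoming_patches = [
    {"id": "p1", "files": ["net/can/raw.c"]},
    {"id": "p2", "files": ["fs/ext4/inode.c"]},
    {"id": "p3", "files": ["kernel/sched/fair.c", "mm/slab.c"]},
]

def needs_review(patches, scope):
    """Return IDs of patches touching any path inside the safety scope."""
    flagged = []
    for p in patches:
        if any(f.startswith(prefix) for f in p["files"] for prefix in scope):
            flagged.append(p["id"])
    return flagged

# p1 and p3 touch in-scope subsystems; p2 (ext4) does not.
print(needs_review(incoming_patches, safety_scope))
```

At nine changes per hour, a filter like this is the difference between reviewing everything and reviewing only the changes that can plausibly affect the safety argument.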
We've got Underwriters Laboratories participating actively with us right now, as is TÜV Süd, because the certification people are on the same side: they're seeing these Linux systems coming at them and they don't know what to do with them, because they're not the same things they're used to seeing. So they're sitting down and working with the other participants in this project, trying to figure out what makes sense to look at and what gives you confidence that things are going to be safe. The key for this organization is: how are we going to do this effectively? It's going to be the processes and methods again. That is the angle we're going to work towards, and complex systems and the qualities of the Linux kernel are going to be a very important part of it. And then, as you saw, we have a culture clash between these two camps, so education has to happen between the two sides. They'll be reaching out to the safety community the same way I'm hoping to get more safety people here talking to the Linux community. The path forward is this: functional safety is about managing risk, and Linux-based systems can only be understood within the context of a system, so understanding this and building up these proposals is going to be key for us all. ELISA's mission is to come up with a set of elements, processes, and tools that can be incorporated to make systems amenable to certification. That's what it's focusing on. It doesn't have an endpoint; it's not going to get one version of Linux done. It's working on a process you could take a version of Linux through, so that you could work with your authorities to make the appropriate argumentation that you're going to be safe at the end of the day. We've got researchers participating with us, we've got industry, we've got people building products.
BMW and Toyota are two of the founding members, as well as KUKA and a host of others that care about real-time systems and safety systems. So you'll be seeing more of these elements showing up, but the answer isn't known right now. The question is getting the right people in the room and discussing it, and hopefully figuring out what everyone's going to get comfortable with at the end of the day. This group is open as well: people are welcome to join our mailing list and help work out what tools and processes we need. We want to come up with some use cases using Linux to really study, and the OpenAPS use case I was referring to is one of them, because everything there is open. No one's worrying about NDAs and liability; they just want to make sure it's available to people. And since it's not a commercial product, it's something we can actually analyze. We want the open source community, the safety community, the regulation authorities, and the standards bodies at the table. Some of our members participate in the standards bodies for the safety space, so they'll be taking this work back in there, and I've had some discussions with the FDA, and there's interest there too, because medical devices are an area seeing this coming at them, as well as software bills of materials, transportation, and the security side. At the end of the day, success for this project is going to look like: processes, an understanding of kernel features, and tools for how to use the processes; reference systems out there for people to learn from; and guidelines so that integrators building systems using Linux know how to get there. And we're aiming at industrial-grade products over lifetimes that could be up to 20 years.
So we're working with projects like the Civil Infrastructure Project as well as Automotive Grade Linux, because we have common goals here, and we want to get this fairly pervasive and understood in multiple hardware ecosystems. Both ARM and SiFive are members, working on different hardware approaches, so we've got interest in those hardware ecosystems and in making sure they've got things lined up for this to work. We're working with the authorities, we're working on feedback, we want to broaden the hardware participation beyond this as well, then get a lot of tooling work done, and eventually get towards incident and hazard monitoring the same way we do security monitoring today. But that's down the road. And obviously education and evangelism are going to be a key part of the goals for this project. There are limits, though: we're not going to engineer your system to be safe for you, and we're not going to ensure that you know how to apply the methods. It's about helping everyone find a path and coming to a way we can move forward together with some degree of confidence. We've now had our first workshop with this project, and we'll probably have a workshop every quarter. At that workshop we decided to go after OpenAPS as one of our reference systems, and we've started doing the STPA analysis of how Linux is being used in it; we'll continue that over the next couple of weeks, working towards a further review at our next workshop. The other use case in progress is the autonomous driving case; there's a lot of interest from some members in this one. There's some prior work from SIL2LinuxMP: an Annex QR analysis that was done, as well as a route PDF, which they're analyzing right now on the mailing list.
And I expect there will be meetings and discussions on how much they want to continue in that direction or find a different one. So that's what's happening with ELISA. If you want more information about ELISA, the next workshop is in Cambridge, UK, and anyone can join the mailing list and participate in the discussions. All are welcome. I just want to close with the thought that open source software is pretty clearly eating the world. It's in Apple phones as well as Android phones; it's in BMWs, it's in Teslas. Linux is in a lot of these places, and we need to figure out how to make sure that we have confidence, when we're using it under the covers, that we will be safe. Safety can coexist with open source projects, but we do need to get the quality levels up there, and we need to manage expectations: start small, build out, start with use cases, build them up, and work from there. That's true for all three projects: we're starting with certain points, building our way up from them, learning from different angles, and trying to share tools as much as we can. The ELISA project and the Xen project are busy with tooling discussions, and I'm trying to cross-pollinate some of that into the Zephyr project. So there's a variety of elements coming into play now, but we don't have a full story yet. Anyone who wants to participate is welcome. With that, questions? Go for it. [Audience] The Linux kernel already has coding guidelines and best practices, already documented and tried. Right. So I think it's a question of getting enough buy-in from the kernel maintainers that they want to go in this direction. We can't dictate things from the sidelines. Go for it, Darren. [Audience] If I may. Please. Continue improving what's there, so that you see the regressions come down to an acceptable level. Again, we're not looking for zero.
We're looking for an acceptable amount of risk, where we can expect bugs and, going back to my first slide, be able to recover when those things happen. Yeah. [Audience] I agree with that. In another session just yesterday, they had a slide about the amount of churn in the code. Oh, yeah. A lot of the code is churning all the time, so you're always going to be getting new submissions, and you're not actually forcing people contributing code to follow any particular coding style. I think you can get those coding styles in there. And in a project like Zephyr, where the community started with this as a goal in mind, it's more amenable: we can put tools into our CI/CD build loops and check for this sort of thing. But the Linux kernel has a process that has worked and has high quality at this point already, so a different set of argumentation is being used there. It's a question of getting our reference systems identified and then starting to look at whether we can do the analysis: what's the common ground across a bunch of systems, so we can put a framework in place? That's going to be down the road a bit for us, at least on ELISA. Go for it. Yeah, so some of our members are members of these standards organizations, and they're acting as bridges into them for us. One of our ELISA members, from ARM, is a member of the IEC 61508 committee, for instance, and they're working on the next revision of that, so he's going to act as a bridge; and there are other members who are bridges into other groups as well. The more of them we can get coming to the table to discuss this in these workshops and analysis sessions, and coming up with documentation and advocated best practices, the faster we're going to move. But this won't work unless we have them participating in the discussions. And that's why it mattered when the CTO of Underwriters Laboratories gave us a quote for our launch.
So they're pretty committed to being there, as is TÜV Süd, and these elements are going to be a core part of it. I'd like to get more of them: there's Exida and a few others that I'd like to get into these discussions as well, so that we can get everyone comfortable with what the decisions are. It won't be just the developers dictating; it'll be a negotiation. And for ELISA, that's what's going to be interesting. Yeah? [Audience] But it's giving you the goal, as opposed to a specific task to get there. So when we talk about trying to use interpretive solutions to show that something is equivalent in functionality, or equivalent in rigor, equivalent in the goal, now we're back to something like the IEC 61508 standard versus an ISO one. It makes a lot of sense, and that would converge with such a goal in the project. I don't know if it's exactly what we're following. The UL folks are going through a review of that right now, and that's going to be one of the discussions at the next workshop. The ELISA project also meets every week on a Hangouts channel, and there's discussion that advances things between the workshops; the community coordinates its work that way. But the workshops are where people divide into their areas of interest and work on getting documentation out, things like that. We're also making sure we've got all the right disclaimers in place, to make everyone comfortable with that too. So these are interesting framework questions. Any other questions? Well, thank you very much for your attention, and thank you for coming.