I welcome you to my presentation on the ELISA project. In the next half hour, we are going to talk about the activities and challenges involved in enabling Linux in safety applications, rather than going into all the details required from system engineering, safety engineering, quality management and quality assurance. I'll quickly summarize the work required to develop a safe system with the following simple statement: to assess whether your system is safe, you need to understand your system sufficiently. So when you build a system based on Linux and your system's safety depends on Linux, you will need to understand Linux sufficiently for your system's context and use. This can be broken down into two activities. First of all, you need to understand your system and the way your applications use Linux. This includes understanding which interfaces of Linux are used, which libraries are used, how your applications start up and shut down, but also which hardware you are using and how Linux interacts with that hardware. If you take all this together and understand it in detail, then you can really argue that you understand the system and how Linux is incorporated into it. With that at hand, you can then assure that the selected properties of Linux meet your expectations. For that, you look into the details of how the Linux kernel and glibc implement and ensure the selected properties that are relevant for your system. Of course, these two activities are very challenging for an individual, because you need a lot of knowledge about system engineering, about the system, about the applications running on Linux, but also about Linux itself, how it is developed and which properties it actually holds.
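To make the first activity a bit more concrete: one common starting point for learning which kernel interfaces an application actually uses is to record its system calls (for example with strace) and tally them. The sketch below is my own minimal illustration, not ELISA tooling: it parses strace-style output supplied as a plain string, and the sample trace and function name are assumptions for demonstration only.

```python
import re
from collections import Counter

def count_syscalls(trace_text):
    """Tally distinct system calls in raw strace-style output.

    Each trace line is expected to look like:
        openat(AT_FDCWD, "/etc/ld.so.cache", ...) = 3
    Lines that do not start with an identifier followed by '(' are skipped.
    """
    counts = Counter()
    for line in trace_text.splitlines():
        m = re.match(r"([a-z_][a-z0-9_]*)\(", line.strip())
        if m:
            counts[m.group(1)] += 1
    return counts

# Hypothetical excerpt of strace output for a small application.
sample = """\
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
read(3, "\\177ELF...", 832) = 832
mmap(NULL, 8192, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f0000000000
read(3, "", 832) = 0
close(3) = 0
"""

if __name__ == "__main__":
    for name, n in sorted(count_syscalls(sample).items()):
        print(f"{name}: {n}")
```

In a real assessment, such a tally would only be the first step: it identifies the kernel interface surface your application touches, which is exactly the set of properties you then need to argue about.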
And this is not only a challenge for an individual; it is actually also a challenge for single companies and the ecosystem in general. Hence, to face this challenge and to work towards this goal, a number of companies have come together and formed, in the Linux Foundation, the collaborative project called ELISA. You can see that there are hardware vendors, companies from the OEM side, companies from the automotive industry and companies with deeper Linux knowledge. These members have come together to tackle the problem of enabling Linux in safety applications, and we have formulated the following mission statement to work towards that: we want to define and maintain a common set of elements, processes and tools that can be incorporated into specific Linux-based safety-critical systems amenable to safety certification. Let me explain this in a bit more detail. When we talk about elements, processes and tools, we mean software source code that has been developed together with its documentation and its test suites. Processes means that we give you certain recommendations and methods that you can use when you want to build a Linux-based system, and the tools allow you to do so efficiently. If you take all these parts, you can incorporate them into a specific Linux-based system to build a safety-critical system, submit it to safety certification and reach the goals of that certification. So when this project is successful, we will have built up assets for safety certification of Linux-based systems. This can consist of a complete process with selected kernel features and tools, and previous process assessments that have been done in other contexts. It will be shown feasible with reference systems. It is usable by properly educated system integrators. It is maintained over an industrial-grade product lifetime.
It is well known and accepted by the safety community, by certification authorities and standardization bodies in multiple industries. It has been positively recognized by and has had a positive impact on the Linux kernel community, and it comes with hardware collateral from multiple supporting vendors. To work towards that goal, the ELISA project has formed a number of working groups. There are two working groups working on reference use cases, one in the area of medical devices and the other in the area of automotive. A second group is working on safety architecture. This means they investigate specific features and functionality of the kernel, what it does and why they believe it does what it does. A third working group is working on the kernel development process. They are investigating how the kernel is developed, what the expectations of safety standards are, and how the gap between those expectations and the actual practice can be overcome. This working group has two further subgroups. One is the tool investigation and code improvement subgroup, which is investigating tools and learning how to create kernel patches in order to interact with the kernel community. A second subgroup, currently under discussion, will be looking into development metrics and how to create evidence from these metrics that certain activities have been done with due diligence. Besides these working groups, there are some tool developments related to the project itself.
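As a toy illustration of the kind of metric such a subgroup might derive evidence from: kernel commits conventionally carry trailers such as `Reviewed-by:` and `Tested-by:`, so one simple measurable is what fraction of commits record a review. The function and the sample commit messages below are my own sketch, not an ELISA deliverable.

```python
def trailer_coverage(commit_messages, trailer):
    """Return the fraction of commit messages that carry the given
    trailer line, e.g. 'Reviewed-by:' or 'Tested-by:' (kernel
    commit-message conventions)."""
    if not commit_messages:
        return 0.0
    hits = sum(
        1
        for msg in commit_messages
        if any(line.startswith(trailer) for line in msg.splitlines())
    )
    return hits / len(commit_messages)

# Hypothetical commit messages standing in for 'git log' output.
commits = [
    "mm: fix off-by-one in page lookup\n\n"
    "Reviewed-by: A <a@example.com>\nSigned-off-by: B <b@example.com>",
    "net: refactor queue handling\n\n"
    "Signed-off-by: C <c@example.com>",
    "fs: harden path parsing\n\n"
    "Reviewed-by: A <a@example.com>\nTested-by: D <d@example.com>\n"
    "Signed-off-by: C <c@example.com>",
]

if __name__ == "__main__":
    print(f"Reviewed-by coverage: {trailer_coverage(commits, 'Reviewed-by:'):.0%}")
```

Turning such a number into actual evidence of due diligence is of course the hard part, and exactly what the subgroup would have to argue.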
There is the call graph tool, a tool to visualize and analyze call graphs in the kernel; stress-ng, a tool that stresses the hardware and the kernel's interactions with that hardware; contributions to syzkaller, a fuzzing tool for the syscall API of the kernel; some tool development in the area of the open-source tool CodeChecker, a management tool for static analysis findings; some contributions to PaStA, a tool to analyze and mine kernel mailing lists; and some contributions to GDM, a tool to visualize the flow of Git commits during kernel development. So instead of going into the details of all the activities happening in these subgroups and working groups, I'll pick out one selected question that the kernel development process group is facing. We are asking ourselves in this group: is Linux good enough for our use? We can look at the use of Linux and find that Linux-based systems are almost everywhere. Linux seems to be good enough for very expensive supercomputers, banking and trading computers, highly secure company servers, ground-based air traffic control systems, critical civil infrastructure, telecommunication systems and various end-user systems. So the question now is: why are we even concerned about answering this question? And this comes down to the problem of proving software quality. The problem we face is that we don't evidently know whether a software X is of high quality or not. And this holds in general for any software you can consider. So once you come to the conclusion that this software X is high quality, and someone has convinced you of it, you might ask yourself: but why is this software X high quality? And usually you come to the conclusion that it is high quality because the development team performs some specific activity, or a set of activities, or combines various tools and activities during development, testing and verification that leads to high quality.
But with that, you of course face the next question: why do these steps make the software X high quality? To answer it, you are facing a challenge in empirical software engineering, where there is very little evidence that specific activities or specific methods actually lead to higher quality. Despite the large number of software development projects, there is very little evidence that a specific method has contributed to higher quality compared to its absence. So when you want to show that something is high quality, you have to work with a very weak empirical basis. And this leads to a significant quality mismatch between two areas. On the one hand, there is the evident quality in business practice: critical applications have been running on Linux for many years without regressions, even across multiple major version updates. Linux is extended, refactored, cleaned up and released without causing significant regressions, and fixes are rolled out without causing significant regressions as well. On the other hand, we have quality expectations from development process norms. They expect that all requirements are described sufficiently to be implemented. They expect that products never ship functionality they don't need. They assume that very specific documentation efforts lead to software quality, and they are based on a development process where there is only one final release to be shipped, which needs to be good, and where the maintenance of that shipped software is reduced to the minimal needed changes. This mismatch between business practice and development process norms is what we want to overcome in the development process group. So to bridge the gap between these two worlds, we have the following plan: we want to show that we are evidently developing high-quality software.
To do so, you can refer to traditional software development process norms like Automotive SPICE, ISO 26262 or others. They explain how to develop software to be high quality and how to provide evidence for that. On the other side, we have the kernel community that develops and releases the kernel. Product teams integrate and test this kernel and hence create products based on Linux. Our goal now is to show that what these traditional software development process norms describe is met by the activities of the kernel community. The first step in this process is to understand what development process norms expect. The second step is to understand what the kernel community development and Linux-based product development actually do. Once you have these two things straightened out and well understood, you can understand the gap. And once you have understood the gap fully and honestly, you probably have to acknowledge that this gap cannot be overcome. So is this a dead end for bridging these two worlds? By no means. You can now consider the history of both these communities: why do development process norms have these expectations, and why does the kernel community development do what it does? Again, we can understand the gap between the two and see that development process norms have very different expectations, for various reasons, compared to the kernel community. And once we have done that, we again have to acknowledge that this gap cannot be overcome. So are we again at a dead end? By no means. We ask: why does the kernel community development not intend to meet the norms' expectations, and what are the intents and goals that it intends to meet instead? And once we have concluded on that and understand the gap, we will finally have to acknowledge that this gap cannot be overcome.
When all of this is understood, it becomes clear how we can show that the norm is met. You need to develop a kernel community development norm that describes the intent, the goals and the criteria of the kernel community development. With that at hand, we can explain why this kernel community development norm leads to high-quality software, and we can provide evidence that the activities that are executed actually lead to high-quality software, through empirical studies and evidence from the past that these activities had a positive impact on software quality. And with that at hand, we can do the final step: show that this new norm is met by the kernel community that develops and releases the kernel, and by specific product teams that integrate and test the kernel following best practices that have been developed by the community and the product teams over the years. Of course, to do that, we need to reach out to the safety community. This means that we need to work with the safety certification authorities and standardization bodies in multiple industries to establish how Linux can be used as a component in a safety-critical system. We have to evaluate the potential system architectures that are available, document the methodology and its limitations, define the state of the art for working with open source upstream projects, and provide access to the technical review meetings with certification authorities. Of course, we need to include not only the certification authorities, but the larger safety community, to gain acceptance of the project results. We have to present and critically discuss open source in the safety community, present the methods and tools that we develop for peer review, establish acceptance for the use of open source in the safety community, and submit amendments and full standards to the relevant committees. But this outreach is not limited to the safety community.
This outreach also extends to the open source community, to which we want to provide continuous feedback and from which we want to receive continuous feedback on our activities. And we want to contribute to the open source community as well: build awareness of safety and its relation to reliability and availability, build consensus around the activities within a safety expert community related to the kernel, improve the development workflows and traceability, and contribute to quality assurance and quality measurement together with the open source community. This also includes outreach by education within the companies. We provide workshops and opportunities for knowledge sharing. We want to create courses on safety engineering and best practices for building Linux-based systems, teach the safety concepts and the safety case documented in the pre-existing elements, and train safety engineers in the use of the analysis tools and in handling their limitations, so that companies can incorporate our results into their own system engineering. Of course, this project will not resolve all the engineering efforts required to build a safety-critical system. This collaboration cannot engineer your system to be safe; we simply don't know what your system looks like. We cannot ensure that you know how to apply the described processes and methods. We will try our best to educate companies and active individuals in our project, but there is no guarantee that you will fully understand the required concepts within a limited time. The collaboration will also not create an out-of-tree Linux kernel for safety-critical applications. The kernel and its development are moving so fast that any attempt at an out-of-tree kernel for safety-critical applications would quickly turn out to be infeasible. And lastly, we cannot relieve you of your responsibilities, legal obligations and liabilities.
Safety engineering is about risk management, and we can provide you with all the lists of risks and problems we identified in our engineering efforts. But in the end, it is your responsibility and your legal obligation to critically review them and to ensure that you take on no liabilities beyond the risk you are willing to take. Despite these limitations, and acknowledging them, this project provides a path forward for you and your peers to collaborate on the challenging questions we are facing. In case you want more information about this project: we have quarterly workshops, and the next workshop will take place in January 2021. It is a virtual three-day event. You just need to register for the event; there are no further fees or registration costs. In case you cannot wait until this three-day event in January, you can visit our website and join the mailing lists for the general development discussions or for the subgroups that I have mentioned. You will find more information in the Google Drive, where we have collected our meeting notes and are drafting our argumentation and investigations, and you will find further information on our GitHub space, where you will especially find the tools and code that we are developing. So with that, thank you for your attention, and I open the stage for questions.