So I hope you can hear me, right? No, you cannot hear me. Okay, so that's better. I can go away. Have you ever imagined how to use Linux in mission-critical systems like air traffic control? Imagine you're sitting in a plane, looking outside, and probably you're wondering: are these guys doing their job? And maybe my code is used somewhere in the plane, in the core systems for air traffic control. My name is Gerolf Ziegenhain. I'm responsible at DFS for operating system development and hardware standardization. And in the next 50 minutes, I will try to introduce you to the world of air traffic control and Linux software development in air traffic control. The second microphone? Okay, so that's bad. You have another one? So I'll try to improvise while they find a technical solution, whatever. So let's go ahead. It doesn't matter. We can improvise. It actually fits the topic: the integration of hardware and software is very important to us. This talk is in memory of my former boss and mentor, who passed away just two weeks ago. So the title of my talk: using Linux in air traffic control, hardware and software platforms. The agenda for today is quite packed, so I'll try to keep up. At first, we start off with air traffic control for dummies. Of course, we need to know: what's the world I'm living in, what are my perspectives on Linux? Then the history of Linux at DFS: how long have we used it, what did we learn on the way? And then I want to show you the current challenges and the dos and don'ts we learned in the past 10 years using Linux for our core systems. Then I show you how the structure of our modular platforms looks today. And finally, I try to show you what will come next, the view for the future. So, ATC for dummies. The domain I'm living in is quite different from anything else you probably know.
I'm living in a world where radar systems are 30 years old, where we still have floppy disks running the software for radar systems, and they still run fine and are in good shape. We have departments taking care of these parts, and they can actually desolder chips and exchange them with FPGAs and things like that. So this is the world I'm living in. What is air traffic control, by the way? What is our core business? Probably you think of these guys sitting around making gestures, showing the plane where to go. Yes, this is air traffic control too, but our systems look more like this. We have one person sitting in front of a radar display. He's seeing lots of planes, he's seeing information on these planes, and he's talking to the pilots, giving them instructions where to go. Sounds easy. These people always work in pairs: one is the controller who actually talks to the pilot, and the other organizes all the interfaces to other airspace blocks, to other air traffic centers, and so on. And as you can see, there are lots of systems behind them, and our job is of course to provide these systems, to build these systems, and nowadays most of them run under Linux. If you take a look at the sky over Germany for one day, you can see roughly 10,000 flights per day going up and down, flying around, and in order to understand this big mess, a short introduction to airspace. We have roughly two different kinds of airspace. We have the upper airspace: this is everything flying across countries. And we have the lower airspace: this is everything starting and descending. Sounds easy. Why should you separate these two kinds of systems? Because the speeds and the movements the planes make are quite different. As you can imagine, a plane flying over a country will not change its direction very often, but will go at a very high speed, in contrast to one that's starting or landing. Sounds easy. So what are the systems behind this?
The systems behind it are core infrastructure, safety-critical infrastructure, and one important aspect in advance: we are not into security. Safety is the topic. Safety is more important. Safety means our systems have a more or less direct impact on the lives of others. So we have to take care that the safety of the systems is guaranteed. In case of doubt, we have no problem if the system fails, but it should be a determined failure, in contrast to security topics. So we have special requirements. For example, one of our integration requirements: we could not take a microphone like this one; this is bad. Imagine you want to talk to a pilot and your microphone stops working. This is bad. So we need application support for the whole lifecycle of the system. Okay, sounds easy. What's the lifecycle of a system? The lifecycle of a system is easily eight years between updates. An example: the current system for Europe, which is developed in cooperation with DFS, has been in development for 15 years, and it will go live at the end of this year. This is what I mean by application support for the whole lifecycle. Of course, we have high quality standards; we will jump into them later on. And we need proof that the systems actually work. Paperwork alone, a certificate or a document saying "yes, yes, we can fulfill this," is not enough. We actually do tests, and even where there is enough paperwork and certificates in existence, reality shows that we are right: we discover failures. What these failures are, I will show you in some small examples. Of course, you can bet we don't want any single point of failure, and we need fallback systems. Our system architectures are redundant, of course, and these redundant systems have fallback systems. So if the primary system goes down, we have a fallback system which takes over. This has implications for the architecture of the systems.
If you want to have a fallback system, you should have something which is completely dissimilar to the primary system. That means we use different CPU architectures, different operating systems, different applications, and even different requirements and different engineers developing these applications, in order to avoid any systematic failures. What kind of environment are we operating in? Maybe this is also interesting. Of course, air traffic control is something controlled by governments, because if you control the airspace over a country, this is clearly a governmental task. But the European Union has regulated that private companies can take over these tasks, and DFS nowadays is a private company. And we have a lot of abbreviations here. Welcome to my world at DFS; we have a lot of abbreviations, and I will try to explain them to you. FABEC: one aspect of cost saving, of increasing the efficiency in airspace, is organizing the airspace blocks in a more efficient way. In the old times, air traffic control was governmental, and if a plane flies from here to the United States, it will cross different countries, and every time a different center takes over. This is not efficient. It is better to create the airspace blocks in a manner that they follow the routes of the actual airplanes. This is roughly what FABEC means. We have other programs and regulations like SESAR, stating: please, guys, sit together, and no, it's more like a command, sit together and find a way to create homogeneous systems, develop them together, create system-wide information management, don't create your own solutions. We have a free market nowadays: we can take over airspace in different countries, and other air navigation service providers can also take over airspace blocks in Germany. And last but not least, we have cost regulation by the European Union, and this has a strong impact on our daily work.
In the last one and a half years, we at DFS were able to reduce the cost for our customers by 23%, and Linux, the Linux platforms, are one important part of how we achieved these savings. So, what about the history of Linux at DFS? We started using Linux at a time when the systems then in development were still being built for Alpha and Tru64. When I started at DFS in 2010, we had the last big go-live of a big primary system running on Alpha with Tru64. At that time, you couldn't even buy the machines anymore; we had stocks of machines. This is the time we are living in. But even five years before that, DFS was already thinking: how do we create the systems after Alpha, after this architecture doesn't exist anymore? What can we do? At that time, the Linux Competence Center was formed, and the idea was to provide centralized knowledge for Linux and Linux-related topics, hardware, and so on, centralized in DFS. At that time, we also established partnerships with both big enterprise providers, Red Hat and SUSE. Why enterprise Linux? It's quite simple; I will explain later in the talk what exactly we need from these enterprise distributors. Because, okay, at first you think: yeah, it's a big company, safety is something, enterprise is great. But what can we really get? In reality, considering these development lifecycles, you can already imagine that by the time we go live with a system, the release is out of support. So we need something else from these distributors; what that is, I will explain later. In 2008, the first systems started growing, and we thought we should create a unified solution, the so-called DFS Linux. The idea was to implement the requirements of our customers, for the various products we have in use, only once.
Of course, the operational processes for all of the products are the same, so we should implement the requirements only once. In 2010, the idea was to extend this first solution to a platform, with the main goal of harmonizing the configurations. At that time, we had roughly 1,500 systems in our company running on DFS Linux, and it was already foreseeable that the number would increase tremendously in the upcoming years, and the question was how to react to that. And at that point, it was, I think, a very farsighted decision to bet on Puppet, to say we are going to use this kind of concept and enable the users to write their own configurations. Besides this standard we created, we thought about how to get our hardware. The systems before were bought as appliances. Appliance means we buy the Alpha hardware together with the operating system. This is not possible anymore, so we had to provide something here, and the solution was: create a hardware standard and make sure that the systems in the standard can run DFS Linux. Sounds easy. In 2015, the number of systems increased, the complexity increased, and the cost pressure increased, and this forced us to think in different ways. So we extended this hardware standard to the hardware platform; what that means, I will show you by the end of the talk. Finally, last year we cracked the mark of 10,000 operative systems running under the Linux platform at DFS, and all of them are using our modular hardware standard. And this is quite something, by the way: for every 100 operational systems, we buy a factor of 10 more systems for testing, development and so on. This is why for one platform instance we easily buy 1,000 systems. So what did we learn on our way? Dos and don'ts.
In the following slides, I present some examples, and I will try to explain them in as much detail as possible, but it will not be possible to dive into every detail. If you have questions, please write them down and come back to me later; we can talk about that. CT1, what does that mean? Welcome to the world of DFS: we introduce abbreviations for everything. Challenge, technical, number one; so you get the real DFS feeling. The first technical challenge: a Wacom driver. Well, okay, that's easy, a Wacom tablet. Most of you probably know what a Wacom tablet is. It's used everywhere, and so we also use it at DFS. Easy. But the use case at DFS is quite different. This is where I have a problem with the microphone, but it doesn't matter, I will show it to you. An artist makes a movement like this, and at DFS we make a movement like this. So this is a problem, when you consider that both the driver and the firmware are, of course, optimized for general-purpose usage. What happened is, we had a major observation three years ago, no, four years ago already, where some of the air traffic controllers found a bug, and the bug was: if you place the pen at a specific angle at a specific point, a press event was released in the middle of the screen. Nobody had found this before; why would you do that? And as you know, the people working as air traffic controllers are under high stress, and once you detect one of these failures, they show up everywhere. So this is highly critical; we have to fix it. Don't be surprised if the standard driver and the standard firmware don't work. Luckily, we weren't surprised. We have customer tests for this, and of course this passed the customer tests; the bug arose later. But we had planned ahead for support. In a case like that, if you don't have established connections to your providers and can't ask them, "What's the problem with this firmware? What's the problem with this driver? Can you please come over?", you're in deep trouble.
The solution here was actually fixing something both in the driver and in the firmware, and we had to do a firmware update on roughly 240 of these Wacom displays overnight. What makes it complicated to update something like that in a mission-critical system? You cannot just take down the screens. There are fixed intervals, harmonized worldwide, in which it is allowed to play in updates, and you have to synchronize with that, and this increases the cost even more. Second example of a technical challenge: Intel AMT, maybe better known under the name vPro. This is a technology probably present in all of your clients, and it provides remote access to your client systems. A very nice feature, and there's also Linux support. And the Linux support is a part of a Windows tool, written in a bad programming language, ported over and rotting away. So we bought the hardware in 2014. We want to go live; this is actually the system we built for the future, and we want to go live by the end of this year. And in 2016, Intel told us: okay, support is dropped, there's nobody left, the people were let go. Oh, that's bad. So don't rely, and maybe this is obvious, don't rely on binary drivers, even if they are provided for free. Just don't do it. You end up at a door that's closed, with nothing behind it anymore. What you want to do is use open source tools. You want to have established connections, again, with your providers, so they have a real chance of providing you a solution. You want to secure support for the complete lifecycle of the application; of course, that failed in this case. And you want to have alternatives. Luckily, we had alternatives, so we could figure out a workaround using a different technology and go live. A small blocker, but this is a very serious topic. Imagine we have everything ready for go-live and we cannot access the machines remotely. This is bad.
Third technical challenge: the deployment solution. Obviously, if we have 10,000 systems, we need some kind of deployment. Okay, deployment, that's easy. Until you consider that we have two very different cultures in air traffic control. One of them is the people living in the tower systems; this is what you see at all the airports. And the others are sitting in the center, doing the control of the lower and upper airspace. The systems, obviously, are completely different. Tower systems are decentralized, and center systems, it's already in the name, are centralized. Should the deployment solution look the same? It doesn't work. So the challenge here is: we need a deployment solution, and we have a lot of legacy code. Where does this legacy code come from? Very easy. All of you know DevOps. DFS does exactly this. Not. We do it completely differently. We do the development, we test and test and test, and when it's finished, it goes live. The people working in first and second level support have a kind of driver's license for all the actions they take on the systems. For each system, there's a driver's license. And obviously, in the end, they are personally responsible if something goes wrong. So this is a source of tools, of course: if these people are responsible, over the years they start writing their own tools. Now these tools exist, and you come along and provide them a new solution. The challenge is: we know the better way, we know state-of-the-art technologies, we know how it's really done, and the customer has their own solution. So don't use a Puppet master. Don't use live reconfiguration, even though it's possible and very nice. Don't use it. Don't use all the nice features that exist. It doesn't make sense. If the customer wants a chessboard, you can write an application for an iPad; it's nice and expensive, but it doesn't work. If the user wants to stick with his old Lada, don't offer him a Porsche. Very easy.
So what you really want to do is go to the people who are responsible for first and second level support, who have a completely different mindset from any developer, because they are personally responsible and they feel that way. You want to go to them, ask them what they really need, and implement just that. This means: focus on the real customer value. The real customer value, in our case, lies of course with the people who are doing the operations. And understand that even though you have all these nice ideas, which are very nice and of course better, they only create costs; they create no value. We did take a wrong path in the beginning. We tried to force the system management to use a different way of deployment, and of course it was better, but it didn't work out, because of cultural problems. Interestingly, as soon as we started providing them the solutions they needed, they asked: can you please provide us a better solution? Okay, it was two years of work before we could pick up again at that old point. Now let's start diving into the real challenges. The first real challenge, you can see, is C1: no "technical" challenge anymore, a real challenge. There's a gap in the delivery. What kind of gap could this be? As I mentioned earlier, our job is to do the integration of the operating system and the hardware. The challenge? The hardware is delivered by somebody other than the operating system vendor, so we have to take care of the integration somehow. Okay, that's fine, easy. What's in the gap? What do we have to consider for a specific Linux version? A Linux version, in my terms, is something like RHEL 7, not a kernel version: an enterprise Linux version. For a specific Linux version, we have a hardware and driver version and a hardware and firmware version for every hardware component. Can these work together? Obviously this is difficult, because the operating system was released at an earlier date and the firmware later. Who will test it? Drivers and modules.
You've seen an example, the Wacom driver. And obviously also tools you need for the hardware functionality, like AMT. These tools and these problems with hardware revisions fall into a kind of gap. How does this gap feel for DFS? And this is where we jump into the development processes, a short introduction to my world. Structured software development obviously uses some kind of model, like the V-model. You start off with a high-level requirement and you say: there's a hardware unit, it's named X, and it shall boot. It's provided by some independent hardware vendor; somebody has produced it, fine. And it shall boot with Linux Y. Okay, that's very easy to understand, and then it should work, shouldn't it? And we know some more details; we know the low-level requirements. A low-level requirement would be: okay, we have a driver Z for this module, included somehow in this hardware unit, and this shall boot with Linux Y. Easy. This is the world of DFS: we write the requirements, we say what we want. And then the distributor will say: okay, fine, the driver is included. And it is included in Linux X, because it's newer. Nice, but we need it for the old version. Okay, we can make a backport, fine. We take the backport, we try it out, and maybe there's a bug. We ask them: by the way, how did you test it? "Oh, we only compiled it." What do you mean by test? "That is the test. We did do it." Well, okay, we mean something like: you use the actual hardware and find out whether the driver really works. "Ah, no, this is not our responsibility. Of course, we don't have every piece of hardware. This is where the independent hardware vendors come into the game; they should do it." Okay, fine. We have support contracts. We ask them. "Okay, yes, okay, we test it." And our specific problem is not tested. "Yeah, because there's no business case. You have to do it yourself." So you end up with something like this. This is a visualization of a Lorenz attractor.
The people laughing are probably also physicists. It's a chaotic system, and everybody moves around in a chaotic way. Everybody points in all directions, and no solution emerges. So this is where the gap is. How can we fix something like this? What you don't want to do is blindly, naively rely on what the contractors say. You're talking different languages. It's not bad will, it's just different languages, and you have to somehow make the languages compatible. First, you want to create specific customer tests and provide these test cases to your manufacturers, so they have a good chance of understanding what you really need. And the best place to put them is obviously in the frame contracts, so they can even see in advance: do I want to take this bid, or am I maybe the wrong partner? Maybe I won't do it. The second thing you want to do is create an internal document for the company, an ICD, an interface control document, writing down all the hardware components you have and all the operating systems you have, test them using your acceptance tests, and make a matrix: what works, what doesn't work. And if you do that in a modular way, you can save a tremendous amount of cost. The third thing you want to do is, obviously, steer and control the support. If you want to escape this Lorenz attractor, at some point you have to manage to talk to your providers and help them understand what you really want. And the last point: obviously, you need developers, in-house and external ones, who do the actual work. What's really the point here? The point is that results only exist if you can understand them. Even though a technical solution may exist and may work perfectly, if there is no understanding on the customer side of what it means, the solution doesn't exist. And in order to make the solution visible to a company like DFS, we need these test documents.
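To make the idea of such an ICD matrix concrete, here is a minimal sketch in Python. All component and release names are invented for illustration; the point is only that a hardware/OS combination counts as approved solely because its acceptance tests passed, and that untested is treated the same as failed.

```python
# Sketch of an interface control document (ICD) as a compatibility
# matrix: hardware components x enterprise Linux releases, filled in
# only by acceptance test results. All names are illustrative.

def build_matrix(components, releases, test_results):
    """test_results maps (component, release) -> True/False/None,
    where None means not yet tested. Returns the full matrix."""
    matrix = {}
    for comp in components:
        for rel in releases:
            matrix[(comp, rel)] = test_results.get((comp, rel))
    return matrix

def approved_pairs(matrix):
    """Only combinations that passed the acceptance tests may be
    deployed; an untested combination counts the same as a failed one."""
    return sorted(pair for pair, ok in matrix.items() if ok is True)

components = ["server-gen3", "display-27in", "nic-10g"]
releases = ["EL7", "EL8"]
results = {
    ("server-gen3", "EL7"): True,
    ("server-gen3", "EL8"): True,
    ("display-27in", "EL7"): True,
    ("display-27in", "EL8"): False,  # driver regression found in test
    ("nic-10g", "EL7"): True,        # ("nic-10g", "EL8") not tested yet
}

matrix = build_matrix(components, releases, results)
print(approved_pairs(matrix))
```

The modular saving described in the talk comes from exactly this shape: each component is tested once per release, not once per product that uses it.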
We need these requirements; anywhere else, it's simply not there. Second challenge: lifecycle. Okay, I told you about that before. We have eight years of application lifecycle, we buy hardware, we buy operating systems, and we need support. And by the time we go live with the systems, the operating systems are out of support. Okay, that's bad. And we have lots of safety requirements hindering us: even if there's a patch and even if all the documentation is there, we have to safety-approve the system, and this will take at least half a year. So that's a problem. We have first and second level support with these driver's licenses. How can we deal with something like this? Obviously, you cannot rely on manufacturer support. Even if they understand what you want to do, they will put a price tag on it that's incredibly ridiculous. It doesn't work. So you have to find another way to make sure that the lifecycle works. So what do we do at DFS to get a grip on that? At first, we need revision control down to the firmware level. We need it in order to reduce the complexity. If we don't have control over what hardware is in use, and this means hardware down to the revision number on the PCB and the exact firmware version, we are in deep trouble. A real example: we bought 90 network cards, we do a test for switchover, and two of them don't work properly; they have a different time interval for takeover. So what was the problem here? We found out that those two network cards were manufactured in a different country, Taiwan versus China. This was the only difference. And obviously, you want to know that in advance. The second thing you really want to do is create a modular standard. What that means, I will show you in the next part of the talk. And this also includes repairs. What do repairs mean? If you know that you buy hardware, for example, but it could be anything else, NVIDIA cards, and you know that the diodes of the NVIDIA cards at the graphics port break regularly,
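The network card story above can be sketched as a small inventory check. This is a hypothetical illustration, not DFS tooling: every deployed unit is recorded with its PCB revision, firmware version and manufacturing origin, and any unit whose configuration differs from the majority is flagged for extra testing before it can misbehave in operation.

```python
# Sketch: revision control down to the firmware level. All serial
# numbers, models and counts below are invented for illustration.

from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class HardwareUnit:
    serial: str
    model: str
    pcb_revision: str
    firmware: str
    origin: str  # manufacturing site

def find_outliers(units):
    """Group units by (pcb_revision, firmware, origin) and return the
    units with a rare configuration: candidates for extra testing."""
    key = lambda u: (u.pcb_revision, u.firmware, u.origin)
    counts = Counter(key(u) for u in units)
    majority = counts.most_common(1)[0][0]
    return [u for u in units if key(u) != majority]

# 88 cards from one site, two from another: the only difference.
fleet = [HardwareUnit(f"NIC{i:03d}", "nic-10g", "rev2", "1.4.0", "TW")
         for i in range(88)]
fleet += [HardwareUnit("NIC088", "nic-10g", "rev2", "1.4.0", "CN"),
          HardwareUnit("NIC089", "nic-10g", "rev2", "1.4.0", "CN")]

print([u.serial for u in find_outliers(fleet)])
```

With a record like this, the two deviant cards are visible at delivery time instead of during a failed switchover test.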
it's better to take all of the graphics cards, desolder the diodes and replace them with better ones. So this is the level of knowledge we have about the systems and what we do. And last but not least, in order not to build up a big stock of hardware which you never use, you have to know your failure rate, and you have to know how it evolves over time. Third challenge: the regulations. And this is really a big one. Easy to solve, but it's a big one. At DFS we have lots of regulations concerning software. For example, we have ESARR 6 and ED-whatever and ISO and so on. And of course, we also use state-of-the-art processes. What does it really boil down to? Please make your software structured, write the requirements documents and such, write good acceptance tests, and so on and so on. Okay, that's fine. And finally, we have the DFS core processes. Nobody can know them; only we can know them. And somehow we have to make all this compatible with the outside world. We look like this. So, a quick introduction for dummies, and if you can take one thing home, this would be the thing: how does a software development process look for safety-critical systems? You remember the V-model? On the left-hand side, you have the requirements, down to the code. On the right-hand side, you have the test cases. If you start linking the requirements to the test cases, high-level tests to high-level requirements, you have software assurance level 4. If you can link down to the lower-level requirements and link them to the tests, you have software assurance level 3, and if you link it down to the code, then you have assurance level 1. Down to the code means: every code part has its requirement, and every requirement has its code. And this is how the open source community looks from our perspective. What to do? There's no common development process. How can we accept the results? The technical solutions are there, but we cannot take them. What to do?
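The traceability rule just described can be written down as a few lines of code. This is a simplified sketch of the idea as stated in the talk, not of any official assurance scheme: the deeper the requirement-to-test links reach (high-level requirements, low-level requirements, code), the higher the assurance level you can claim. All identifiers are invented.

```python
# Sketch of the V-model traceability rule from the talk: every artifact
# on a level must be linked to at least one test to claim that level.
# Level names follow the talk; requirement and test IDs are invented.

def assurance_level(hl_reqs, ll_reqs, code_units, links):
    """links is a set of (artifact_id, test_id) pairs."""
    covered = {artifact for artifact, _test in links}
    if not all(r in covered for r in hl_reqs):
        return None       # not even the high-level requirements traced
    if not all(r in covered for r in ll_reqs):
        return "SWAL4"    # only high-level requirements traced to tests
    if not all(c in covered for c in code_units):
        return "SWAL3"    # traced down to the low-level requirements
    return "SWAL1"        # every code part has its requirement

hl = ["HLR-1"]
ll = ["LLR-1", "LLR-2"]
code = ["boot.c"]
links = {("HLR-1", "T-1"), ("LLR-1", "T-2"), ("LLR-2", "T-3")}

print(assurance_level(hl, ll, code, links))  # code itself not traced
```

Seen through this lens, the open source problem is immediate: without a common development process there are no such links at all, so no level can be claimed, however good the code is.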
Don't underestimate, and this is very important, don't underestimate the impact that differences in cultures and development processes have on your results. The solution is quite easy: we just agree on the deliverables. That's it. And once we agree on the deliverables, we can use our distributors, Red Hat and SUSE, and ask them to shield us from the open source community by telling us how they develop and how they test. We write down the interfaces, how we interact with them, and we are fine. This is the trick. The fourth challenge is customer acceptance. When you think of a platform, obviously you think of something like this at first, of course. The problem here is, we don't own the production line, because we have separated first and second level support from development. So in the end, the people responsible have to like our solution. If we build a standard in a manner where we say "we know the better way," we end up with something like this. Obviously not what we want. If we want the customers to build something like this, we need to provide them the right modules. We need to provide them the results so that they can do their work. How do we do it? We built a flexible modular standard, and I will now show you how it looks. We have to document it, we have to keep it simple, and we have to empower our users and our customers to use the standard. Last but not least: the challenge of testing. We talked about this at the beginning of the talk. Yes, you can automate, and if you automate everything, you have great tests, but never forget what your test coverage should be. If you don't test what the customer wants to have, that's a problem. And if you know your customer tests too well, you end up with something like this; that is maybe not so good. So how do the modular platforms look today? On the one side, we have a hardware platform which includes all the server systems, the client systems, displays, KVM monitors, et cetera.
We have shared common requirements we agree upon in a board, and we provide service for the frame contracts. We have different persons, not only us, but different persons in product management, who take over responsibility for these frame contracts and help if there's a support case. On the other side, we have the Linux platform, of course. It uses this enterprise Linux and implements specific ATC requirements. I won't go into details here; if you have questions, ask me later. The point here is that we have to provide the same modularity on that level as we have for the hardware, because the hardware drives the operating system. And obviously, our job in the Linux platform is to make sure the integration works. All of these standards are agreed upon in an architecture board, and I think this is unique for a big company like DFS. We have an architecture board where we sit together every month and discuss what's coming up in the architectures in the near future, and we agree upon our standards, and we talk about exceptions and why these exceptions should be accepted. So how does the system architecture actually look? What does modular mean? Modular means: if we keep in mind that in the end everything should work together, including firmware and so on, then obviously a hardware system includes the hardware unit and its concrete adaptation, like firmware, drivers, BIOS settings, and so on. And in order to reduce the cost and make it a platform, not isolated silos which don't interact with other people at DFS, we have components inside the modules. A component could be a monitor; a module could be a graphics card or network cards. Once we have approved it, once we have certified it, you can plug it into any solution. The same holds true, obviously, for the Linux platform. It should not be something like legacy code; it should be very modular, and if there are two different kinds of deployment solutions, okay, that's fine.
We provide the modules; the customer can use them or leave them. Okay, maybe I'll skip that, considering the time. So what is actually inside? The core components of our Linux platform. The first thing is, in the core components, we have to provide the basic functionality. The basic functionality would be: we have to have a stage 1. We want to boot and install the system, obviously. Then we want to configure the system. Okay, that's fine, we can use Puppet, but not everywhere. Some customers in some tower systems don't like to use Puppet, and they have a real reason for that: they have non-Unix systems, Windows systems, running, and they cannot use our Puppet deployment and our Puppet configuration for doing their job. Okay, we provide them something else. And two years later, they see: oh, there's another tower, he started using Puppet and he's doing better; I want to have that too. So we provide this flexibility on that level. And last but not least, we have an operational mode where we can do things like reconfiguration of the systems, and where we can do monitoring and so on. These are the core components which come with the platform every time. And then we have some optional stuff which you can use, but are not forced to use. It is one thing to create a platform and say: okay, if I want to have something like booting SUSE and Red Hat, I write my own boot loader, I write my own code. Yes, you can do it. But in the end, with limited manpower, you cannot do a better job than the distributors. It's better to rely on the distributors' solutions and write your own code on top of that. And then we can provide these modules. So what are the technologies we have inside? This would be a completely different talk. We use Puppet. And over the long term, maybe two words on that, over the long term there are big pitfalls if you use Puppet.
If you start using Puppet, consider that the idea of Puppet is that you write a nice configuration and afterwards it's very portable. Okay, nice. In our world, a life cycle of eight years is not unusually long, and you might expect that everything works the same with a new distribution: you just make an update, and the configuration stays the same. It does not. So what we did is we reduced the allowed command set to the basic resources, the things we can really control and which will not change. For things like templates and so on, we create our own solutions, because over the long term they are more stable when it comes to the interfaces. So finally, one different view on these platforms: how can you make a platform that is safe on the one hand and flexible on the other? You have to use the language of your customer, and the language of the customer looks like this. We use requirements documents and test documents, but here I show the requirements documents. If you want to make it modular, you describe the hardware standard in one set of requirements, the blue ones here. You describe the Linux platform part in another set of requirements, high-level and low-level design documents and so on, the green ones. And the customer brings his own configuration, his own requirements, and says, okay, I'm compatible with this platform and I use these interfaces. What's the big difference here? With our roughly 70 customers, 70 different products, we only have to agree upon these interfaces, and we can keep all requirements very stable; they are not touched even if a customer comes up with some fancy requirements. So what's next? The next step for us, when it comes to the platform, is to continue to shrink the core platform even more.
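The restricted command set mentioned a moment ago can be sketched as a Puppet manifest that sticks to the basic, long-stable resource types (package, file, service) and avoids template functions. This is a minimal illustrative sketch, not DFS's actual configuration; the class name, parameter, and service are assumptions.

```puppet
# Illustrative sketch: only basic resource types whose interfaces
# have stayed stable across Puppet and distribution releases.
class atc_base (
  String $ntp_server = 'ntp.example.invalid',  # assumed parameter
) {
  package { 'chrony':
    ensure => installed,
  }

  # A plain file resource with inline content instead of
  # template()/epp(): a distribution or Puppet update cannot
  # silently change the templating behaviour.
  file { '/etc/chrony.conf':
    ensure  => file,
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
    content => "server ${ntp_server} iburst\n",
    require => Package['chrony'],
  }

  service { 'chronyd':
    ensure    => running,
    enable    => true,
    subscribe => File['/etc/chrony.conf'],
  }
}
```

The design choice here is that anything beyond these three resource types is generated by in-house tooling, so the interface the platform depends on is one the team controls.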
We have reached the number of 10,000 systems, and we have such good user acceptance of our Puppet configurations that we were able to hand the configurations over to product management and to first- and second-level support. So we can take a step back and redesign our platform and focus it on the core concepts. When it comes to the hardware platform, what is the next step? Obviously we want to include more modules. There are a lot of legacy systems, and they continue to be replaced, and obviously we want to include all the new hardware in our standard. When it comes to technologies, we are currently developing the first ATC application using container technology, in this case Docker, and it is the core component for networking. So it's quite an important technology, and of course we will also include it in our platform. And when it really comes to software development, the biggest challenge for us currently is security requirements. So far, all the problems I told you we solved were safety requirements. In the world we are living in now, security becomes more important, and we are rated as critical infrastructure. This means we face completely different requirements demanding security, and the problem here is that security and safety are completely different things. It is one thing to say: I want to create a safe system, I lower the failure rate, and it is somehow encapsulated in a box and nobody can access it. It is a completely different thing to say: I want to secure my system, I want to have secure software, I want to do, for example, live patching, I want to include software updates. An update every half year is not enough. So, hold on, solve that. This is really a big challenge for us. So to sum it up, what did we achieve in the last ten years? We successfully built this platform, and it is widely accepted in our business. And not only within DFS: we are currently providing the operating systems for the European system iCAS, and this is relevant for the whole European Union.
We have shown successfully that it is possible to create mission-critical systems and fulfil these safety requirements with Linux. It works, and in the end we achieved the results, and we had a lot of fun on our way. So what is my main point? My main point is this: if I really understand the core processes and the core results I want to achieve, both internally and for my customer, and if I understand the community, that means you, who are actually contributing to our stuff, then it is possible to use Linux in a mission-critical system. Thank you.