Yeah, welcome back to the second part of the topic: do we need an industrial-grade Linux? My name is Lars Geyer-Blaumeiser, I'm from the Bosch Group, and in this part I will discuss this topic together with four experts from the field, whom I will introduce now. First of all, Kate Stewart from the Linux Foundation, Guy Lunardi from Collabora, Jan Kiszka from Siemens, and last but not least, Andrej Bakovsky, also from the Bosch Group, like myself.

So, my first question. I did some introduction on the topic, but I would like to hear from you: what do you think is an industrial-grade Linux? What are the parts, what are the characteristics of something like that? I would like to start with the industrial guys. Jan, perhaps you can start with your opinion on that.

Yeah, well, if you think of an industrial-grade Linux, you obviously think of something which is very mature, in order to serve the use cases we have here. That affects the whole system. It also affects how long this system has already been available and how long it will remain available. As our devices live so long in the field, we are not eager to switch every other year to a completely different architecture. So this is one of the key elements, I would say. Besides that, there are of course also aspects like real-time, which is important not for all of the use cases, but for a good share of them. Security is an important aspect, obviously, and it covers several things: the hardening of the system itself, but also where the packages, where the components are coming from, how they are being maintained, and also how they can be updated. If they can be updated easily, that's obviously a very important thing. Safety is of increasing importance, though I would say for many of our scenarios it's not yet critical in the sense that we are massively using Linux-based systems there, but for other systems it obviously is. And last but not least, I would say it's also important that this is an ecosystem-friendly solution. There are many suppliers in our chain. We get software from the communities, we get software from our suppliers, and whatever comes out of this, we have to integrate it; in the end, we are delivering it to our customers. So what we get has to be in the right form, easily combinable and integrable, and also in a form that can be upstreamed, or is upstreamed to the communities already, so that there are fewer steps to take in order to get something running.

Thanks, Jan. Andrej, what are you thinking?

Yeah, I can only agree with that. For us, it's for sure the flexibility for our custom products. Industrial-grade Linux for me is always addressing customization and custom product flavors, and we need a lot of flexibility for that, especially since we do not only create a single product type. If you have a wide range of product lines and setups, you need versatile product boundaries. And then, last but not least, there is the target side: we have a wide range of flavors of hardware variants, so very different silicon, SoC vendors, system sizes and resources. Due to this lineup of industrial products, we typically have different technology partners per product type, and these are again fading, so there are no clear boundaries. What we need one day in one product, we need another day in another one; there are no more clear per-SoC boundaries like in the past. And for sure, with all that, we have long product life cycles, which is a challenge.
And then we have a lot of demanding needs, like shorter time to market. As already said, product boundaries are merging more and more, so what was formerly two devices becomes one single device; things converge and merge into single ones. We have growing feature richness and partner networks, and even more than in the past, we have an after-SOP life (SOP: start of production). Formerly it was more about targeting a specific, well-defined feature set for SOP. Meanwhile, the features continuously evolve after SOP, over the whole lifetime, and in combination with our long product life cycles, this is a demanding challenge. And for sure, as said, the same goes for security, especially for the product types which are connected, over these long cycles. So this also leads to a challenge in the collaboration and reliability of ecosystem approaches, and it means designing your system already in the initial phase in such a way that you can master the challenges after SOP, during the product life cycle. That's, for me, what industrial-grade Linux is all about.

Thank you, Andrej. Guy, you're from a company supporting bigger companies in using Linux. What's your point of view?

I think both of them have covered a lot of the use cases quite well. The thing that resonated with me the most in what was just said is this notion that in the past, industrial Linux was sort of affording itself a little bit of a release-and-forget mentality, where you could harden a Linux-based operating system once, put your application workload on top of it, and never really worry about updating it, because it wasn't online, or it wasn't exposed, or it was fairly well secured. When we were preparing this, Kate was joking about devices being welded onto the side of something, so that you would have to go through a significant amount of trouble to undo that. That's changing. Everything is now connected in one form or another. Everything is in many ways more vulnerable, and that's a dimension of the complexity unrelated to feature functions, use cases, and requirements from the users that we all have to face. So I believe the landscape is changing because of that. In many ways, the SoC vendors and others have achieved that convergence Andrej was just talking about, where you can get a single SoC that will address 80% or 90% of your needs; you put that on a module and get a carrier board that will do that, or you create a product around it. But that doesn't address the complexity on the software side. Our life as the software people has just become harder, because you have one, two, three, four MCUs next to your ARM cores, and you need to make everybody live together in harmony. So while the wiring harnesses might become easier and smaller, and we're reducing costs there, the complexity of the software driving all these hardware-assisted technologies on the CPU side and the various SoCs has definitely grown over the past few years. It's important that an industrial-grade Linux understands the bigger picture: having to build something that supposedly fits in 128-kilobyte-class hardware, which obviously Linux isn't going to go down to, but even just 32 or 64 megabytes. In some cases, we have vendors that come up with SoCs that have 64 gigabytes attached to them. So it's all over the place, from very small to very large, and that software complexity that helps reduce the total cost of the hardware is something we've seen increasing over the past few years.

Hey, thanks. Kate, what do you have to add to that?
I certainly agree with everything that's been said up till now; I think a lot of it's been covered. To me, when I hear the phrase industrial-grade Linux, I'm thinking there's actually been a scrubbed set of configurations that make Linux dependable and predictable, so it can be used in environments where we have to concern ourselves with security and safety. We've got Linux kernel hardening projects, we've got a variety of other pieces in the ecosystem, the real-time Linux project and so forth. Industrial-grade Linux is probably bringing a lot of those things together, and making it so that we have a community that's basically focused on keeping those configurations dependable, so they can be used from iteration to iteration and knowledge can grow as part of that. An industrial system is going to have to be modular; as you see, there's a huge range of use cases out there. So you want to have components that you trust and that you can build on, and ideally you'd like to see these components in well-supported projects. One of the things I'd like to see is for any project being used in an industrial-grade Linux to actually have a CII Best Practices badge, so that their practices are transparent and there's a chance of sustainability. Then you don't have this little dependency all the way down that no one cares about, that no one knows about, until it bites everyone with a security vulnerability. Those are the sorts of things I'd like to see, because I think we're going to need them in the ecosystem, especially once we start going after safety certifications with open source.

Okay, thanks to you. While preparing this session, I learned that industrial-grade Linux is not my, our, invention; surprisingly, the term has been used in the past. And of course for us, it's really the question whether we identified a gap that really exists, or whether there are already projects running. So I ask basically all of you: do you know of initiatives that have similar or slightly different targets, addressing such topics?

Well, what I was aware of last year was the OpenIL project from NXP; they're trying to work in that area. I know Jan was looking at it a little bit, maybe he wants to chat about what he was finding.

I found an interesting integration approach. Integration is one of the key elements for an industrial-grade Linux, as we just said: you have to integrate a lot of parts, and that's what this project is doing. It is centered around the NXP devices, obviously, and pulls in a lot of open source elements. So it's an interesting point to start with. Obviously, you need a critical mass for these projects; that's what we also see in the Civil Infrastructure Platform project. It's not that easy to start in this domain, and I've been in this industrial domain for quite a while now, even before I joined Siemens. It's not yet a domain as agile and collaborative as the server and cloud space you may know, but it's coming. So the challenge is definitely to get the critical mass for these projects and initiatives, and that's probably also why you see a lot of them, besides obviously a lot of commercial offerings. Those get their critical mass from their customers, but for collaboration, the challenge is really to get the buy-in from this industry, to have enough people at the table working together towards common goals.
This is actually also the interesting point where we started the Civil Infrastructure Platform (CIP) project, which was around long-living devices from the civil infrastructure domain: we first tried to get some members, get some companies together in this domain to work on common goals, which was new for some of them. And this is definitely something you have to consider if you start a new project in this area.

Yeah, from our side, maybe I can contribute in this direction. What we found were several approaches which were, how to say, focused on and dependent on specific points of interest, sometimes built around specific silicon, or around the kind of functionality being provided. We also started with several commercial vendors, who again coupled these solutions around their own technology. So we found a set of silos, I would say, and this is one thing. In different projects we even utilized different starting points, and this led to a lot of fragmentation, with inconsistency in between, which is really one of the major pain points that motivated us to change something. This unneeded inconsistency and the local focus on a certain scope were pain points. The second thing is that the collaboration model utilized in these approaches was simple and not powerful enough for a real ecosystem approach. It was more focused on customization, so that you can tweak and achieve what you need for your particular individual product, even if you break compatibility with others along the way. Meanwhile, you need more. These upcoming needs, over the long life cycle, really require a different approach, especially in the collaboration model and in flexibility on a wider scale. And this is something we haven't found so far in the available solutions and starting points.

Since we're talking about communities while answering your question about what existed before, let me contribute a slightly different angle. I'm not going to name names, but look at some specific use case areas where communities have come together to create something; robotics comes to mind. A lot of those projects, academia and other people doing industrial robotics, are using community projects that they collaborate on, but the platform itself, the Linux kernel, the bootloaders, the base layer of the middleware, was completely absent from their considerations. Some of them are stuck on six-year-old versions of a standard off-the-shelf Linux distribution, and they're using that, and some people are transposing that into production. You have the same thing with some of the current edge computing and machine learning use cases where, I mean, Andrej said the evil word, right? They use the vendor's SoC BSP platform, which again takes us five versions back on the Linux kernel, and not minor versions, I'm talking major versions back, and that's what they're told: use that in production. And that's what scares me a lot. You see people focusing on their use cases, and I really commend them for that, again using machine learning or robotics as good examples, but they've somewhat ignored the complexity, the security, the safety aspects of what they're running on, and we can't really let that happen.
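To make the trailing-BSP-kernel problem concrete, here is a minimal Python sketch of the kind of check a team could run against a vendor kernel version. The branch status table is a hand-written, illustrative snapshot, not authoritative data; a real tool would consult the release information published on kernel.org.

```python
#!/usr/bin/env python3
"""Flag a vendor BSP kernel whose base branch has fallen out of support."""

# Hypothetical status snapshot of some long-term kernel branches
# (illustrative only; real data lives on kernel.org).
LTS_BRANCHES = {
    "4.4": "EOL",
    "4.9": "EOL",
    "4.14": "maintained",
    "5.4": "maintained",
    "5.10": "maintained",
}

def check_bsp_kernel(version: str) -> str:
    """Give a rough verdict for a BSP kernel version string like '4.9.88'."""
    branch = ".".join(version.split(".")[:2])
    status = LTS_BRANCHES.get(branch)
    if status is None:
        return f"{version}: branch {branch} unknown, no upstream security flow"
    if status == "EOL":
        return f"{version}: branch {branch} is end-of-life, unpatched CVEs likely"
    return f"{version}: branch {branch} is still maintained upstream"

if __name__ == "__main__":
    for v in ("4.9.88", "5.10.41", "3.18.140"):
        print(check_bsp_kernel(v))
```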
I think there is a gap here that needs to be filled: a robust, trusted, mature, validated Linux-distribution-type software solution, capable of over-the-air updates where applicable, that people can build on top of. Because right now it's a free-for-all out there. If you're in academia, or if you're doing, again, just robotics or machine learning as examples, it's scary. It's scary today.

And we have done the same things in every team, again and again. So what we finally have is lots of embedded devices and product teams with their own missions, and due to the different starting points, and especially the hardware-related or product-specific pieces, which are their major focus at the starting point, one team started to move forward in their direction, the other team in another direction, and then tons of teams in different directions. Everyone repeats somehow similar things, and later on they all have to master the long-term challenge. So what we are looking for is a collaboration model which passes these product boundaries and team setups, so that the teams can really collaborate across the boundaries while still fulfilling their mission of creating their custom products.

So what you're saying, and I absolutely agree with this, is a sound infrastructure to build on top of. Sorry. We need to have a set of infrastructure that we all agree is dependable, so that people can differentiate on top of it. Is that a fairly good summary? Is that the idea?

Okay. Yeah, the idea is, how to say, finally it's modularity. You really need a reality check here, too. In this embedded and industrial industry, there are the silicons we utilize, where you also get a certain kernel from your silicon vendor. It's simply sometimes needed that you exchange something, even down to the kernel, or you have setups without any kernel of your own, where we deploy something together with other software in a container or whatever. So it's not the case that you build up a system bottom-up, with the lower levels fixed and more freedom in the upper levels. You really have freedom and modularity throughout.

I think Andrej was bringing up a really good point here, which is that when we think of industrial Linux, we can't just think of bare metal with a kernel slapped on top. It's never that way in reality. If you're lucky enough to have it that way, more power to you, but what we see are type-1 and type-2 hypervisors whose kernel environment is controlled by the vendor, and then everything is containerized. We have been doing that for some time in various forms of products. So I think that's the modularity point Andrej was trying to make: it's not always just bare metal, hardware, bootloader, Linux, middleware, applications on top. Because if that were the case, anybody could do it. But you have complex SoCs, with one, two, three, four MCUs next to them. You need the real-time micro-OSes to run on those, with one, two, three, four, five, ten virtual networks, Ethernet or else shared memory between the different nodes, sharing the resources, one GPU, two GPUs. All of that needs to work together in a cohesive fashion, and again, safely and securely. And I think an industrial Linux needs to be aware of that. You can't just say: this is my system image for x86-64, it's been tested on a QEMU virt emulation, go about it. Because that is so far from reality.
So do we also potentially need to look at an industrial Linux having a test suite, for basically making sure we don't have regressions beyond a certain level on some key core functionality? Do you consider that part of it, too?

Yeah, I guess this is definitely an important point. I mean, testing was not invented for industrial Linux, and I hope it will be used by industrial Linux, definitely. Activities like the KernelCI project exist. They also scratch only part of the problem, obviously, because the problem is large. And it's also a collaboration topic here: a nice test infrastructure is good, but you also need to have the right tests for it. If the users are not really communicating their requirements in the form of tests, or at least in the form of specifications, then you as a platform provider are not testing the right things. So it is also very important to get everyone on board on these things, to integrate the different components, not only to run them, but also to test them. So yeah, absolutely agree.

Yeah, definitely. Andrej, we tried to finish your point for you about modularity, and about the complexity coming in more forms than just a single Linux distribution. And Kate brought up the point of testing being a critical part of such an infrastructure: that you need to provide visibility and test reports, and automate some of that. I think CIP saw the importance of testing from the very beginning; it's one of the areas where that project contributed the most, and KernelCI does that for the whole of the Linux kernel. I'm personally one of the founding members of that project, and I feel like we test too much. We've had these arguments internally: you have too many branches, too many tags, too many vendors. We're already sending sort of the wrong message, because we're allowing people to be so distributed and fragmented. So maybe there is a need for consolidation there as well. The SoC, the silicon, is becoming more tight and compact; maybe we should do the same thing with our kernel configurations, so we have something a little cleaner, the way Kate was suggesting. We're not there today, but that's a step I think the industry should take.

Quality assurance is a really important thing for our embedded devices. On our side, since we have this wide range of product setups and system resources, we also reach a complexity and feature set where we really run into a huge system integration challenge, and the challenge of qualifying it. So this is one thing which forces us, with regard to quality assurance, to take a step forward and qualify the building blocks before we create the composition. In the past, to be honest, we composed the system and then qualified it. We had a lot of test cases on the final image. But if you have many, many partners in powerful systems, and if building the overall image is the precondition for testing at all, then at a certain point in time you are no longer able to create the system, because you run from one issue to the next and never get to the point where you can really do your quality assurance. So you need to introduce an additional step to qualify your building blocks independently of the final composition. And then you go into the diversity of the different setups and test the resulting images again.
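As a sketch of the two-stage quality assurance Andrej describes, qualify every building block in isolation first, and only then compose and test the image. The component names, test commands, and build scripts below are hypothetical placeholders, not an existing toolchain:

```python
#!/usr/bin/env python3
"""Two-stage QA sketch: qualify building blocks, then the composed image."""

import subprocess
import sys

# Hypothetical building blocks, each with its own standalone test suite.
COMPONENTS = {
    "bootloader": ["./tests/run-bootloader-tests.sh"],
    "kernel-config": ["./tests/check-kernel-fragments.sh"],
    "update-agent": ["pytest", "tests/update_agent"],
}

def qualify_components() -> bool:
    """Stage 1: every block must pass in isolation, before any composition."""
    all_ok = True
    for name, cmd in COMPONENTS.items():
        try:
            passed = subprocess.run(cmd).returncode == 0
        except FileNotFoundError:
            passed = False  # a missing test suite fails qualification too
        print(f"[{name}] {'PASS' if passed else 'FAIL'}")
        all_ok = all_ok and passed
    return all_ok

def test_composed_image() -> bool:
    """Stage 2: only now build the image and run the integration suite,
    ideally repeated per real hardware variant, not just on an emulator."""
    if subprocess.run(["./build-image.sh"]).returncode != 0:  # placeholder
        return False
    return subprocess.run(["./run-image-tests.sh"]).returncode == 0

if __name__ == "__main__":
    if not qualify_components():
        sys.exit("blocked: fix the building blocks before composing the image")
    sys.exit(0 if test_composed_image() else 1)
```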
And so quality assurance, from my perspective, is even more important with this diversity of system sizes and flexibility. So we have two things, I would say. First, it's not only about testing the image, it's also about testing the building blocks. And second, it's about testing on real target hardware, so that everything is really tested in all variants and on all target hardware. That is an important thing in an industrial scenario.

There are other scenarios out there as well where it's important to get that level of modularity and testing, and Linux is a common piece in a lot of them. Projects like ELISA, Enabling Linux in Safety Applications, are starting to look at helping that part of the story come together. But for the industrial space, I'm not seeing anyone really taking the lead and bringing a community together to pull that full story together yet.

I think something like that would potentially benefit, if I may stipulate some of the aspects I think would be valuable, from a really concrete set of reference hardware that academia and some of those use cases I mentioned earlier could just pick up and see working; something that has all these headers and interfaces that can be used: complex GPIOs, I2C, whatever it is you're using, CAN for automotive and other industrial aspects, and so on. People need to know they can readily get it, because some projects try to do that, but then you can't buy their reference system. If you're at a university, or you're an industrial vendor and you don't have that relationship with the SoC vendor, you can't get the reference hardware. So it needs to be quite broadly available to have broad appeal, and to really demonstrate: these test results you can see in our CI infrastructure, you can achieve that for $200, mailed to your door, by using this image. And that's something I just haven't seen anybody achieving in industrial Linux of late.

One aspect I mentioned in my talk was also the support, or the ease of use, for the pieces or modules you want to add to your product. That means, for example, supporting aspects like open source license compliance, and also life cycle management, vulnerability management and things like that. Could you give some insights on that topic?

Well, I'm going to jump in here, as this is near and dear to my heart. You're doomed, it can't be done, it's impossible... no, seriously: there's actually a lot of focus now starting to happen on software bills of materials. We need to know what is there, what your dependencies are, so that in addition to understanding what your licensing is, you can understand: do I have the right version here, is it potentially vulnerable to this exploit that's out there, and do I need to remediate or not? We need to be able to answer those questions quickly, especially since, as was said earlier, everything is connected these days and the attack surface is pretty wide in some cases. So we need to be able to accurately identify: I'm using this bootloader, which is potentially vulnerable to this; oh, I'm using this piece of this hypervisor with this configuration component; oh, I used this compiler to build my system.
All of these are elements of attack vectors, and having a much better grasp on software transparency is something I think we're going to need to focus on. Baking it in, such that it's automatically generated right from the start, would probably do the industry a big service: clear transparency into the software components that make up an image. I know I've been talking with the Yocto folks; they've got an SPDX meta layer going in to start generating the bill-of-materials information, things like that. So I'm sort of keeping my fingers crossed that we'll have that showing up in Yocto, and then the rest of the ecosystem will benefit from it.

Yeah, we've been doing that for our customers for a long time as well. We really embrace SPDX as a way to generate that side of it, more on the copyright and code side. And, not being afraid of repeating myself here: for industrial Linux, having the infrastructure that provides that for people will make a big impact. Telling people "here is a bunch of Git repositories and a bunch of recipes and some tooling, go figure it out and make something robust out of that" is a bit different. For a real industrial solution you want to be able to say: check, check, check, check, output, verification report, deliverable. We're not afraid to share artifacts with you; here is a build you can use. I think that's very important, and it's a different message. Industrial users need to be able to see that. You can't sell your manager something that doesn't have that literal pipeline of validation, from the vulnerability aspect to the copyrights, to the licenses that are claimed versus the ones actually being used, all the way down to an artifact.

Yeah, what I'd like to add is that, also with regard to the security topics and vulnerability handling, we really need an overall solution, because it's not sufficient to have a starting point and create your custom solution; you finally have to maintain it, and you need to connect to some upstream. You're not alone in the world. You have a starting point, and you have downstreams and upstreams, and you need a way to be connected so that it's a seamless approach. It is very important from our point of view that the process chain is not interrupted, and that you really have it in a streamlined way: an upstream you can rely on, which has well-defined security processes, which you connect to, and then we add our pieces for the industrial portion. One thing, for sure, is the really easy processing of security patches, again with the quality assurance and so on, as we said, so that you can deploy them in a way which does not harm your system in the end, in a controlled and mastered way. The other aspect is what you said, the licenses. For sure, in the industrial scope we also need license-aware processing, to take care about GPLv3: not every one of our products is allowed to ship GPLv3 software. As you said already, with the safety aspects and so on, if everyone is able to change the software, that's maybe not a good idea for other reasons. So there are conditions depending on the product type and environment. What we do is keep a good focus on that. It's not that you cannot utilize GPLv3 in any product, but you have to process it according to your needs.
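A minimal sketch of such license-aware processing, assuming an SPDX 2.x JSON SBOM (for instance, one produced by the Yocto SPDX tooling Kate mentions) with the standard top-level "packages" list. The blocked-license list is an example policy knob, since, as said above, whether GPLv3 software may ship depends on the product:

```python
#!/usr/bin/env python3
"""Scan an SPDX 2.x JSON SBOM for GPLv3-family licenses.

The output is a review list, not a verdict: some products may ship
these licenses, others may not.
"""

import json
import sys

# Coarse, deliberate substring match: flag the whole GPL-3.0 family.
BLOCKED_LICENSE_TAGS = ("GPL-3.0", "LGPL-3.0", "AGPL-3.0")

def flagged_packages(sbom_path):
    with open(sbom_path) as f:
        doc = json.load(f)
    for pkg in doc.get("packages", []):
        # SPDX license expressions live in licenseConcluded/licenseDeclared.
        lic = pkg.get("licenseConcluded") or pkg.get("licenseDeclared") or ""
        if any(tag in lic for tag in BLOCKED_LICENSE_TAGS):
            yield pkg.get("name", "?"), lic

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: sbom-license-check.py <sbom.spdx.json>")
    hits = list(flagged_packages(sys.argv[1]))
    for name, lic in hits:
        print(f"review needed: {name} ({lic})")
    sys.exit(1 if hits else 0)
```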
So that you don't get it in if you're not allowed to ship it, but if you have a product which may ship it, then you are able to use it; again, in a controlled way. On the other hand, for the long run: if you rely on a GPLv3-free solution, then over time things may evolve and become GPLv3, and therefore we also have to cover this and add countermeasures, to continuously provide a GPLv3-free solution for these devices. So the license processing is one thing, the security processing another, and it all has to be an integrative approach, with upstream and downstream, and the quality assurance and so on in the middle.

What we have furthermore, with regard to these long life cycles: in the past, our approach was to extend the support time by somehow back-porting important bug fixes to a former release, and if the product needed a longer support time, we tried to address this with more money, doing the same approach, back-porting to even older revisions. What we would like is to add an additional phase to the life cycle: a period of time where we can follow the mainstream by continuously rebasing to newer revisions. This again influences the system design, so that it eases this procedure of rebasing and you master the quality assurance and complexity in every rebase cycle. The product team can then decide, for their life cycle, which phase to utilize: a certain phase to follow upstream, and then later to follow the long-term support period until end of maintenance. So this is also one aspect, to introduce these phases into the overall product life cycle.

Jan, I have a quick question for you, because we talked about the community, outward-facing, but something Andrej just stressed really well is the fact that you're serving downstreams inside your organization as well. You have product teams that adopt what you do, and with the super-long-term products that you have, how much resistance do you face inside Siemens when you go and tell them: you should really be updating this, or we need to fix that? Sometimes they're probably even more rigid than you'd like to see. So how do you cope with some of those challenges today, or how would you like to see it improve?

Yeah, well, this is a two-sided aspect. On the one side, you have to provide a certain offer to them, and on the other side, as it's a large corporation, we also have the required processes to remind people in product development to do the updates. By now it's pretty well understood across all products and product lines that updates have to be applied. Obviously, not all updates have to be applied to all products all the time, so this decision process is individual, but there are update processes happening, and there is enforcement to establish those processes prior to release at the latest, not only after the thing is in the field and we notice that some device is vulnerable. That is the story from the past. The key point is really that you have to provide an offer, and that is one of the stories the CIP project basically evolved from: we saw so many product lines inside the companies, but also across companies, doing this ad hoc maintenance, which per product maybe seems easy, because it only affects a subset of the features, a subset of the components.
But if you sum it all up, it's highly inefficient, and also not of the quality you want to achieve. So if you have to support a long-term life cycle with certain components, it would definitely be helpful to do this in a collaborative manner, a more collaborative manner than has been done so far. You may know the activities around the kernel regarding LTS support, with companies joining there on the LTS; we started to extend this with the SLTS (super-long-term support) activities, and we are also trying to work together with Debian on enabling a certain set of packages to have a longer life cycle. That will benefit all of us who want to use these kinds of components over a longer period, while not solving the problem for everyone: ten years may sound like a lot to some people in IT, while for other people in industry it may sound like, okay, that's the first quarter of my product life cycle. So you always have to find a compromise. And, as Andrej was addressing, the ideal world would of course be to be able to rebase continuously: do your product maintenance by rebasing all your changes onto the latest version, even in the field, and that involves testing again. So this is, again, to stress the importance of testing. In the ideal world, we have full control over our assets and we can test continuously, so we can rebase continuously, and we don't need to support things infinitely. That would of course be the ideal vision, and this is also what we are working on.

And if you go forward in this direction, it's a matter of the maintenance effort, and of distributing these efforts across all organizations and teams, to share them. And then you come to the question, to the thinking, about the overall approach: you have to create your system in a way that is friendly to this rebasing. To give you an example: in the past we had setups where we provided the individual teams with a tool and said to them, here's a tool, here's your starting set, go forward to your target specification and start working. And then they created software on their way to their product, and another team at a certain point in time forked from that and said, okay, I need something similar, I will use this as a starting point and go my way, and another team again and again, so you end up with a huge set of branches and forks, where each team is following their own mission and has installed locally, in their silo, their tool, their software, their changes, and they do it all according to their target specification. They do not think much about how to structure the code in a way that they can rebase it easily. They are thinking more about hitting the performance goals, the memory footprint, optimizing in this way or that way, to meet their mission. And finally you have a very, very optimized, fragmented landscape, with more or less the same setup copied multiple times, and if you then try to push a rebase through such a setup, all the teams have to rebase again. They get a new version of the tool, they get a new revision of the kernel together with new versions of the OSS libraries, and they need to rebase all of their additions, each team again and again. If you sum up all the efforts, this is really expensive on the one hand, and on the other hand the people, from their perspective, when you discuss this rebasing, are all on their way to their next feature set, their next target specification. So this rebasing cycle is disturbing their work.
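What such a rebase cycle looks like in practice can be sketched with plain Git: keep the team's additions as a patch stack on a branch and replay it onto each new upstream revision, instead of back-porting forever. A small Python wrapper, with hypothetical branch and tag names:

```python
#!/usr/bin/env python3
"""Rebase a product patch stack onto a new upstream revision.

Branch and tag names are hypothetical; run inside the product's Git tree.
"""

import subprocess
import sys

def run(*cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd).returncode

def rebase_product_branch(old_base, new_base, branch):
    # Replay every commit the team added on top of old_base onto the new
    # upstream tag. Conflicts show exactly which local changes need
    # restructuring so that the next rebase cycle gets cheaper.
    if run("git", "rebase", "--onto", new_base, old_base, branch) != 0:
        sys.exit("conflicts: resolve them, then 'git rebase --continue'")
    # Compare the old and new patch stacks; branch@{1} is the branch's
    # position before the rebase, taken from the reflog.
    run("git", "range-diff", f"{old_base}..{branch}@{{1}}", f"{new_base}..{branch}")

if __name__ == "__main__":
    # Example: move the product patches from a v5.10.41 base to v5.10.78.
    rebase_product_branch("v5.10.41", "v5.10.78", "product/foo")
```

The range-diff at the end shows how the patch stack itself changed across the rebase, which is exactly the kind of review artifact the qualification step described above needs.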
So they do not make progress on the next features, because they have to rebase the tools and re-port or reapply all their changes, and on the other hand, for sure, all of that brings in a lot of risks and issues, so they need to do this qualification, as you said. So you have to think at the beginning about what the right setup is to make this much easier. This led, for example, to the central setup: one toolchain and infrastructure provided to all, so that not every team needs to install a new revision of a new tool version again and again, and so that you can maintain the common portion centrally. Maybe everyone has the same glibc in their image, but you have it multiple times in each and every team, and everyone has applied a different set of patches, so why do this? Add it to a central instance, maintain the tool centrally, distribute only the pieces which really changed to the teams, and so on. You need a different setup, and, how to say, if you go the first time to such a team, they only know what they have experienced, so for a new way of working they need some level of confidence; they really need to see the change and experience the advantage.

I think this is actually a very important aspect of this team collaboration project. We are talking about technical aspects in many regards, but actually, and this is also our experience internally as well as in collaboration with others, it's very important that you also live the example that you envision, and that you make sure people are following your example all the time. Even if you put the right tool there and they could use it in the perfect way, I'm pretty sure they will find a way to not use it perfectly, so you have to track them; you basically have to keep flattening the structure that folds up again after a while. It's continuous architecture tracking, so to say, and that is simply important. And I think a collaboration project has a chance to live that by example. No one listens to me if I talk about something internally, but if I can point to an external reference doing the same thing I was just preaching, that tips the scales. It's a cross-reference for us in industry: look, that company, our direct competitor, is doing it this way, we are doing it the other way, don't you think we are doing it the wrong way? So you have a reference, a chance to streamline your activities for the better. That is also, I think, an important vision for a collaboration project.

Okay, we're close to the end. I would like to close the round with one statement from everyone: one priority you would say industrial-grade Linux should have under all circumstances. Ladies first.

This is tricky to say, I mean, it has to have everything. If I have to pick one statement, I think it is dependability: it needs to be able to handle security, modularity, it needs to be dependable, and all the components in it need to be dependable, they need to be trusted.

Over to you, Jan. I don't disagree, I don't disagree. If I have to add something else, my personal flavor would be to have it done in a very upstream-friendly way.

We might have lost Andrej, so I'll go next. I would say something I've seen missing in a lot of the activities is the assortment of flavors: the fact that it needs to be available in the many different ways that an industrial Linux potentially needs to be consumed.
So, this awareness of the complexity of the various systems it is going to be running on, and providing that as a sort of stepping guide; providing something that's more than just the theoretical "here's the Linux kernel and a bunch of middleware, you can do anything you want with that." For me that would be important: something that can be morphed out of the box, shaped by the project, by the community, to help people understand what they have to go through.

Okay, finally, Andrej just made it back. So the question was asked, and Kate answered dependability, Jan answered upstream-friendliness, and I mentioned it needs to be multi-form and take into account all the factors we need to be able to support. What would be your number one request for an industrial Linux, Andrej?

Oh, my number one request? Yeah, for me it's really modularity.

Okay, good. Then thank you very much. It was really a nice time with all of you, and an interesting discussion, hopefully also for the audience. Thanks again, and see you later.