I'm here with Thomas Gleixner, Jan Kiszka, Martin Koening, Stefan Evers, and Gila Nardi, and we're here to have a good discussion today about what's happening with industrial Linux and what we're seeing emerging for beyond 2022. Industrial Linux has become more and more popular, and we're seeing more and more applications using it these days. There are a lot of special concerns associated with it, since in a large number of cases it interacts with time domains and interacts with people, and there are various ways we want to make sure that we have a sound ecosystem, and various concerns about working with Linux in this space. So the purpose for today is just to talk about that a bit. My name is Kate Stewart, I'm with the Linux Foundation, and I've been involved with the Real-Time Linux project as well as other embedded projects here at the Foundation. With me is Jan. Do you want to take it away, Jan, and introduce yourself?

Yeah, thank you, Kate. My name is Jan Kiszka. I'm working for Siemens Technology in what we call our competence center for embedded Linux, which has existed for quite a while. In this long-running industrial business it is also a long-running group, which is used to support our business units in getting Linux into products. If you are riding a train which has some equipment inside it, for example, or if you are in the unlucky situation of having to go to a hospital for an MRI scan, you will find Linux in these products, and there are many, many more cases. I'm always thrilled to see how many cases there are and how Linux is growing in this field. That's what we try to enable, and obviously we also try to enable the communities to make this more sustainable and even better in the future.

Martin, do you want to introduce yourself?

Hi, my name is Martin Koening, I work in the Wind River Technology Office on edge device software.
I've worked as a software developer for the last 30 years at the firmware and OS level. I started my career in telecom, working on OSes for rack-mount networking equipment, things like frame and cell switches. Then a colleague and I founded a company writing OSes for DSPs; we were acquired by Wind River, and I've been there for the last 20 years. My primary development environment has been Linux since 1997, and I've watched Linux become increasingly relevant in embedded systems since then. Within embedded, Linux started being used in networking and consumer devices, and it has continued its journey into aerospace and defense. The point now is that pretty much any device that can run Linux will run Linux, for some if not all use cases. In fact, given the complexity of modern hardware, Linux is now more than ever not just one OS choice; it's increasingly the only choice. And so I believe that open source, and Linux, will always win in the end, and that includes most real-time and many safety-related use cases across all markets. On that note, I am happy to see that the real-time patches for Linux have now reached the 5.15 kernel, and I want to congratulate Thomas and the others who have worked on this for many, many years.

Well, I guess that segues us over to Thomas quite nicely. Thomas, do you want to introduce yourself?

Yeah, sure. I'm Thomas Gleixner. I have a background in electrical engineering and was doing automation, hardware, and firmware development for many years before I actually jumped on the Linux train in 1999. I had been following Linux since the very beginning as a hobby, just out of curiosity, because I wanted to know how an operating system works; in the classic embedded space you had an operating system which was a timer interrupt and the main program loop. That's all, and it's great, it works.
But in 1999 I decided to go into the Linux consulting business, because I was envisioning that Linux would be a big player in the embedded and industrial space. A lot of people called me crazy back then, but it seems my prediction was pretty much on point. One of the things I noticed, because I came out of the automation and motion control area, was the lack of real-time. I started looking into that, and I've been involved in real-time development for Linux for more than 20 years now. And yes, we made a major step forward with the current merge window: we basically cut the remaining patches exactly in half, which is nice to see. Of course, I've seen a lot of technologies emerging over the last couple of years, and I'm involved in many of these interesting endeavors, like running PREEMPT_RT in virtual machines, and now the new upcoming technologies like time-sensitive networking, which is going to hit the industry sooner rather than later. All of that is getting more complex than before, and that's one of the reasons why really collaborating and fully leveraging the open source potential is, in my opinion, the only real choice for the industry.

And then I guess, Stefan, do you want to introduce yourself?

Yes, thank you. My name is Stefan Evers, I'm from Bosch.IO, a subsidiary of Bosch GmbH. I'm working most of my time as a Bosch-internal consultant in one project or another. One of the projects I'm working on right now, a really exciting one, is trying to find out where Bosch has to go in the future with regard to Linux. One part of that was actually trying to figure out how much we are going to use Linux in the near future, and it turned out that in 2025 we will produce roughly 50 million devices per year that are based on Linux. That's a rough estimation; it might be more, it might be a little less. But it already shows that Linux is an important topic for us.
And so we have many, many different projects inside Bosch around Linux; it's an interesting landscape, to put it that way. So we are looking for ways to make it better, and also to do it more, let's say, together with the open source community. Me personally, I've been working in the open source space one way or another for the last 20 years, and it's an interesting and prospering field, and very interesting for Bosch too. So we are trying to learn as Bosch, and trying to find out what we can do. We work together with many, many suppliers that are much, let's say, deeper into these communities and into the community work, so to a large extent we have so far been doing that more indirectly, and we are now also looking for opportunities where we can maybe get engaged more ourselves. That is something we are looking for. On the other hand, we are also really interested in seeing many of the achievements of the last years get much closer to, let's say, upstream, because that's a problem we are seeing: sometimes things stay local. Besides that, it has been an interesting time for us, because more and more devices are getting connected, and when devices get connected, that changes the game. Things that we did a certain way in the past need to change, and I think what I'm describing here is true for nearly the entire industry; many other companies would say the same thing, one way or another. So it's interesting times, and it is, I think, time to improve many things. I'm looking forward to the discussion. Thank you.

Thank you. Gila, what's your perspective here?

Hi, Kate. Like everybody else, I think I've been working on open source a little longer than I'd like to admit.
The transition that was really interesting for me personally was about 10 to 15 years ago, roughly 12, moving from the enterprise Linux space into more of the embedded one, and then seeing the differences in how, for many reasons, the enterprise vendors (I'm thinking in particular of Red Hat and SUSE, where I spent many years) were operating with Linux and their deliverables and their services to their customers, and how it seemed that embedded adopters of Linux were underserved by either their internal teams or their suppliers. The journey for me has been to try to take, as modestly as I could, some of the learnings from the enterprise vendors, the people that have been providing long-term support and a different experience around their deliverables, and some of their other ways of doing things, and bring that in. Very recent developments have shown that even those vendors are now expressing very strong interest in working directly with embedded devices, IoT, the edge, and things like that. So it will be a very interesting next few years, seeing how the commercial availability of supported, long-term-available software deliverables around Linux shapes up. The complexity is always growing; like Stefan said, everything is now connected, no matter how small or fragile it might be. So that's what I'm really looking forward to. There have been some great developments, and there are some amazing milestones coming up in the near future as well, so I'm looking forward to discussing what next year and the next few years will bring here.

Well, I think with that, let's start digging into one of those points you raised: long-term support. In the industrial space, when devices get deployed, they may stay in production use for 5, 10, 15, 20 years or more. So one of the questions is, how long do we keep doing the backports and supporting the kernel?
What's actually really practical? What's efficient? What needs to change? I guess I'll turn that one right over to Jan, since I know he cares about this area a lot, and we can start the discussion there and keep going from here.

Oops, can't hear you, sorry. There he is. Now it works.

Yeah, thank you, Kate. Actually, since the beginning of this month I'm now also dealing with CIP directly. The Civil Infrastructure Platform project was founded five years ago to, well, not only but also enable long-term support for Linux-based systems in the industrial space. One of our cornerstones is the CIP kernel, which is basically an LTS kernel maintained a little bit longer. By the time we started, the promise was eight years longer than the promise of the LTS kernel back then, which was two years; by now the difference is only four years. In addition, we also accept backports of upstream-merged patches for enabling newer boards in existing LTS, that is CIP, kernels. So why are we doing this? Well, the reason is simply that, as Kate already said, the products live very long, and they also take a while to be developed. A product development time of one, two, or three years is not unusual, so you are already in a range where, well, you should ideally but normally don't update the kernel on a daily basis until you release; you usually freeze earlier, so you lose a little bit of maintenance time that way. And then the product has to live in the field for decades. The longest-lived products we are shipping live for 60 years, for example in the railway domain but also in the power plant domain. Obviously the devices, at least those which are built today... I mean, the old electrical relays actually lived 50 years unmodified; the new electronic devices don't live that long. They have to be replaced over that time with newer, functionally equivalent redesigns, and the same applies to software.
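One piece of bookkeeping behind accepting such backports is checking whether an upstream fix is already contained in a long-lived branch. Git answers this with `git merge-base --is-ancestor`; as a hedged illustration of the underlying idea, here is the same reachability test over a toy commit graph in Python. All commit names and the graph shape are invented for the example:

```python
# Toy model of a commit history: each commit maps to its parent commits.
# Commit IDs and the graph shape are made up for illustration; real
# tooling would ask git itself (e.g. `git merge-base --is-ancestor`).
PARENTS = {
    "v4.4-base":    [],
    "fix-upstream": ["v4.4-base"],      # the upstream fix commit
    "mainline":     ["fix-upstream"],
    "cip-branch":   ["v4.4-base"],      # CIP branch forked before the fix
    "cip+backport": ["cip-branch", "fix-upstream"],  # after the backport
}

def contains(head: str, commit: str) -> bool:
    """Return True if `commit` is reachable from `head` (already included)."""
    stack, seen = [head], set()
    while stack:
        c = stack.pop()
        if c == commit:
            return True
        if c not in seen:
            seen.add(c)
            stack.extend(PARENTS[c])
    return False

print(contains("cip-branch", "fix-upstream"))    # False: branch lacks the fix
print(contains("cip+backport", "fix-upstream"))  # True: fix is included
```

In real history a cherry-picked backport gets a new commit ID rather than a graph edge back to the original, which is why backports are usually done with `git cherry-pick -x`, so the originating upstream commit stays traceable in the commit message.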
Then the challenge is obviously how to get feature-equivalent and functionally equivalent software into the field, ideally by always testing the latest version and being ready to ship the latest version without the customer seeing any difference. In practice that is not working very well yet; sometimes it works, but not that often. And that's basically where the demand, or the desire, for longer support comes from. It's not unusual; we mentioned the enterprise domain, and we have similar patterns there: theoretically people could update every other year, practically they don't, sometimes for the better, sometimes for the worse. That's simply the situation we have in industry. There are good reasons, like certification processes, which involve a lot of paperwork, take a long time, and cost a lot, and which you don't redo unnecessarily or repeatedly if you can argue that the delta is small, provided you can make an argument about what the delta is. But anyway, that's the situation. It's important that interfaces in the kernel stay stable, because in the ideal world everything is upstream, but in the real world not everything is upstream. I could also talk about that, probably, if we should. And that's basically a driver for us to have longer support in the kernel, and the same actually applies to user-space components as well; that's the other story, because the kernel alone doesn't make a system.

What's the actual support time you're aiming for right now?

We are aiming for 10 years at the current stage. That caters to the current demand. Some users are happy with 5 years, actually; some would love to see 20 years, because that's how long the product actually, physically, has to stay in the field. It's a long time. They don't want to touch their own software stacks on top. The compromise is the 10 years, and, well, we say it's a promise for the existing kernels.
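To put numbers on those maintenance windows, here is a small sketch. Kernel 4.4 (the first CIP kernel) is used as the example series; the terms of 2, 6, and 10 years follow the figures in the discussion, but the resulting dates are illustrative assumptions, not authoritative end-of-life data:

```python
from datetime import date

# Illustrative support windows: release date plus promised maintenance term.
# The terms (2 vs. 6 vs. 10 years) follow the discussion; the exact dates
# are assumptions for the example, not official EOL announcements.
def eol(release: date, years: int) -> date:
    return date(release.year + years, release.month, release.day)

v4_4 = date(2016, 1, 10)   # kernel 4.4 release (approx.)

lts_old = eol(v4_4, 2)     # old upstream LTS promise: 2 years
lts_new = eol(v4_4, 6)     # current-style LTS term: 6 years
cip     = eol(v4_4, 10)    # CIP SLTS aim: 10 years

print(lts_old, lts_new, cip)
print("CIP extends the old LTS promise by", cip.year - lts_old.year, "years")  # 8
print("CIP extends the new LTS promise by", cip.year - lts_new.year, "years")  # 4
```

The two printed deltas (8 and 4 years) match the earlier remark that the CIP promise started out eight years longer than upstream LTS and is now only four years longer.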
I wouldn't predict that this is exactly the same number we will be shipping maybe 5 years from now, because maybe the demand is shifting, and if only very few users remain who need 10 years and the rest are happy with 5 or 6 years, then possibly we will focus on other things. The project is agile enough to do other things as well if this is no longer the demand, but this is the current situation.

The question is how we get to a point where we make the industry agree on something useful, like 4 or 5 years, which is reasonable. Right. My main example of why this model can't work is the whole hardware vulnerability problem we had a couple of years ago with Meltdown and Spectre. We were able to backport it for three long-term stables easily, and then all hell broke loose. Aside from the fact that the handling of the whole problem was more than suboptimal, which is a different issue, on the technology side, if you really run into a situation where you have to backport stuff that complex, it's going to be a train wreck no matter what; it doesn't work. But the other downside is that this whole backporting business is binding so much talented engineering capacity which we really would have better available for other things. So the goal has to be to shorten these LTS time frames and make people aware that moving ahead is smarter, because the delta you carry with your vendor patches, or your particular driver which isn't yet upstream, is usually less critical and less dangerous than backporting the whole big thing. There's another issue we've had a couple of times.
Someone, some researchers, find a bug in a recent LTS kernel, so we fix it, and then we look at the backports, and then we figure out: oh, three years ago, no, this doesn't apply; the code was rewritten, exactly that problem didn't exist back then, but five lines down it existed in a different way. Exactly the same thing. Those issues happen over and over. One problem here is that if you look at where the security researchers and penetration testers focus, they focus on the relatively new kernels; they do not go back and test your 10- or 20-year-old kernel. They won't do that, because they don't have enough capacity either. So we have a shortage of engineering capacity on all ends, but then we divert it into "oh yeah, let's backport it forever," creating total horror kernels which are undebuggable and very, very special, very narrow, which means they do not get the wide exposure in testing and all these other things we have around. That's my main concern. So even if we can't convince people right now, we really have to work on pushing them in the direction of moving along, and if you do that collaboratively, then even the certification will be slightly less problematic, because if people do it together, it becomes a shared problem.

Actually, it's pretty complicated, because sometimes when you push somebody to a newer version of the software, especially when we talk about vulnerabilities, you may be updating them to a software payload that has more vulnerabilities than the one they were on, because the latest versions don't always get better with respect to vulnerabilities. So you really have to understand what you're moving to and whether it's actually better, because you may have validated and secured within a really tight context, so sometimes putting a box around things is also an interesting option. Although I totally agree that you need to be able to update and stay current to close certain problems, like the ones you just discussed in hardware, sometimes for software there's also tension not to do that, right?

I recommend you read Kees Cook's blog post about that topic, which he published recently. It's a really interesting read, and he points out exactly why this "oh, the new kernel is going to be more vulnerable than the old kernel" is just a bullshit argument, really, because we focus more on security issues right now than we ever did. The newer the kernel, the more mitigations for classes of attacks it has, and the more testing and research effort goes into it; the older kernels will never get that attention again.

Unfortunately, that argument comes up time and time again, and in these really big industrial companies you have layers of people: security officers, then a centralized group, then the open source expertise. They have internal suppliers that work with external suppliers that are the interface to the BSP vendors for the SoCs, and by the time you do all that, you get into months' worth of timelines for procurement and purchasing and validations. Just this week I had a customer ecstatic about the fact that they are going to be able to move to 4.19. I'm like, this is ridiculous; this is not sustainable. This is exactly the point that Thomas is making, with these trickle effects of these little fears, or these little commercial arguments being raised by vendors here and there, sometimes not through their own fault, sometimes because their own internal validations are huge. And we're thinking a little bit of containerization over here, a virtual machine over there, a hypervisor that hides some complexity over here, and things will be fine. Things are not fine when, in September 2021, an industrial vendor that has thousands of developers and people and infrastructure and lawyers tells you that they're moving to a kernel that was released three and a half years ago and is going to reach end of life upstream in a year or so. That's the problem, and that's what we need to change.

It's all about trust, right? It comes down to understanding the provenance of the software, not just in the kernel but in all of the code in the system, and being able to track that really helps allay those fears, so people will be able to get an update through the software chain.

You have to change the mindsets in the companies. I was talking to quality assurance people recently in a larger company that basically told me, no, you can't change the version number, but you can apply a gazillion patches to that kernel. So I said, okay, problem solved: I move you to the latest kernel and patch the version number back to that old kernel.

Yep, I had that conversation on the enterprise Linux side; they said they didn't have a problem with that. It's ridiculous, because they say if you don't change the version, you don't have to re-certify or retest everything. But if you change massive piles of code at the center of the OS, you definitely want to retest everything.

Actually, here we come to an interesting topic that I think is also really important to understand. Coming from the different hardware providers that are bringing in pieces, for example for a certain device, you get a long list of patches that you're supposed to apply, and many of these things, for example, are not upstream. Building the demand inside different companies, and the understanding that these things should actually be going upstream, is another big problem. At the end of the day, so many things are heavily patched that I agree this is actually introducing a lot of problems, in particular for the whole maintenance-over-time issue, because when all these patches and all these things come in (and it's not only, for example, that a vendor gives you something like a board support package with their specific drivers in it; there's a lot of other stuff in there that is not necessarily needed for running this), then getting it upgraded somehow is typically a nightmare, and maybe the corresponding vendor is no longer interested in providing it, for example, for the next version of something. So this is another argument for why we would love to see more and more things mainlined upstream.

Yeah, there's a huge challenge in understanding what is in your system. If you don't have a complete software bill of materials and understand the versions of all the code that's feeding into your system, then you can have a trust issue as well, right? With doing an open update to the latest versions, you don't really know how many levels you're jumping. So the partitioning of software and the understandability of the bill of materials, I think those are really important aspects of this conversation, because if you can separate things out and manage risk with a perimeter (Thomas mentioned the kernel, but you might have multiple kernels in your complex heterogeneous multi-core SoC), then you can make these decisions somewhat independently, depending on how mission-critical a particular OS instance is, whether it's in a container, in a virtual machine, or on a compute island. The partitioning can help us there.

There's a topic that I think Jan can speak to much better than I can.
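The bill-of-materials idea can be made concrete with a short sketch. The following walks a directory tree and emits a minimal, loosely SPDX-flavoured file list with per-file checksums; the field names are only inspired by SPDX tag:value style, and this is an illustration, not a conforming SPDX producer (real builds would use dedicated SBOM tooling):

```python
import hashlib
from pathlib import Path

def sbom_entries(root: str):
    """Yield (relative path, sha256 hex digest) for every file under `root`.

    A version string alone doesn't identify the code that actually ships;
    per-file checksums like these do, which is the point of an SBOM."""
    base = Path(root)
    for p in sorted(base.rglob("*")):
        if p.is_file():
            digest = hashlib.sha256(p.read_bytes()).hexdigest()
            yield p.relative_to(base).as_posix(), digest

def write_sbom(root: str, out: str) -> None:
    # Loosely SPDX tag:value flavoured output, illustrative only.
    with open(out, "w") as f:
        for rel, digest in sbom_entries(root):
            f.write(f"FileName: ./{rel}\n")
            f.write(f"FileChecksum: SHA256: {digest}\n\n")
```

Two source trees that report the same version string but differ by even one backported patch produce different checksum lists, which is exactly the transparency point made above: the version number alone doesn't tell you what code is in the product.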
Some of our North America-based healthcare hardware providers have industry-mandated requirements that say, for example, any software update requires 250,000 hours of testing; that's just something that's written in their QA standards, and so on. So there's a reticence about making some changes. You can parallelize that, of course, but there's a limit to how many things you can run overnight for months on end. So there's a challenge around some of the regulatory compliance, around quality assurance requirements and some of those very long, extended processes you go through. I think healthcare is a good example of that, where they sometimes have pretty rigorous test cycles.

Yeah, it depends on the market. Actually, I'm not an expert on the exact certification rules for the healthcare market; we can touch a little bit on the railway domain and the industrial domain. Sometimes, if you look at this today, the regulation is in conflict with itself. If you look at the railway domain, for example, they obviously demand a safety function for your device; that doesn't mean that the kernel itself and all the stuff has to be certified, but the function of the device does. At the same time, by now, and for good reasons, they demand a software update concept. So you are already in a conflict: you need to do updates on a regular basis while you have to certify the safety function at the same time. That is so far still solvable with certain approaches and certain complexity management, but this is growing over the top, I think, and I think there is also some movement in this domain to realize, okay, we have to find different ways to achieve one goal or the other. Definitely, it all boils down, as mentioned before, not only but as a key element, to having better confidence in the test results that we produce for the version we ship, and not only that, but also for the version we could ship the next day. That is a key element in driving this forward. And whenever I talk about long-term support, I would also like to talk about testing: by investing in long-term support we also invest in testing. We are not the only ones, and not the ones who invented this, but we contribute to it; we have to, it's a key element. And if we improve this far enough, then in the ideal world we can get rid of the LTS entirely, by just having perfect test coverage and quick confidence in the results of a newer version. On the other hand, and this I also have to emphasize, we are looking ahead to the perfect world, but look at the reality today: how are we mitigating the situation today? As mentioned before: vendor kernels, maybe enriched by some integrator kernels or integrator extensions, plus your own extensions on top, and all that maintained, to exaggerate, per product by a developer in their spare time. I mean, obviously there are maintenance programs, but the reality is that too many of these activities happen in parallel, not with optimal quality and not with optimal results. So the first logical step is to consolidate. Wherever there is a demand, for good reasons or bad ones, let's at least provide an option which is as consolidated as possible, also for vendors who want to consolidate these activities. By now, I think everyone in the silicon world is providing LTS-based, long-term-supported board support packages; at least in the past that was random. And the next logical step would be to have them on board and getting their changes in via the upstream path, as we demand via CIP by now.

You haven't mentioned the fact that in that chain a lot of those components violate the spirit of the kernel's GPLv2 license. Some of those components are only available in proprietary form from the vendors and are never available to the final customer, and all of that affects the quality and the security of those kernels in the final product. The fact that we are now starting to see a strong push for software transparency, and having the SBOMs available and visible, will make some of that much more explicit, I think, which will make it easier to start tackling some of these types of issues. I think that's one of the changes emerging now that is probably going to sway some things. In fact, OpenEmbedded, for instance, has just managed to get automatic generation of software bills of materials happening. I'm hoping that other distros will be doing that as well, so that we can start to get this type of transparency about when there are patches and things like that included. It's more than just a version; I think we also potentially need to get down to the source level of knowing which source files are making things up, but we've got to at least get versions first before we go further. I still want to see the day when we can build a Linux kernel and have a bill of materials generated for each build that captures every file that's actually in there; I think that will help on the safety side. A wishlist item for now, but these things are being worked on and they will show up in the foreseeable future.

The other thing is getting the vendors, especially the system-on-chip vendors and the IP vendors, to bring their patches upstream. That's something only the customers, the larger customers, can demand. Google was very successful on the Android side, at least to some extent, because they pushed it. No, it's not perfect, but it's way better than it used to be.

I think the Chrome OS team might have done even better than Android from that perspective.

Chrome OS is probably the prime example for it; they did a very, very good job. And I think if the larger part of the industry, especially those who ship gazillions of devices per year, tell their vendors to get their act together and ship that stuff upstream, then this will have an impact.

Just to share some perspective on this, Thomas, because I've recently had
a discussion with one of our suppliers on that topic complaining about while actually I was trying to make it positive looking for a better way to collaborate on the upstream work that they are doing and that we have to do in lockstep and kind of lockstep and the feedback was great Jan let's work together on this and it turned out when we discussed the exact boat we had to admit actually we would love to have a forum for our customers to discuss this but the result is in the end Jan you are as Siemens far ahead of many of our customers in this regard they don't demand this but actually the problem we could do more there and the vendors would do more if more users ask for that and that is actually our duty as obtainers of these chips of these IPs to always remind that and insist on that and this is what we are trying hard but obviously it's volume matter so you have to have the right volume in your backhand which is not always the case in the industrial space I mean we are shipping products sometimes in hundreds of units per year that is an aspect you are also shipping large as well then we do have leverage also in some areas and another silicon vendor just recently told us that thanked us because they rethought their upstreaming strategy they are already quite good on this and they said okay with your feedback we thought about it again and we would like to make it really available everything is upstream and that's why we are retiring our upstream well vendor branch until that happened and we make it a mainline tracking branch until that happened and that was actually what we are asking for as well because during the product development that is what should happen and though they are shifting this I mean it's still a way to go and it will take maybe another product generation to get there but it's another step forward and another step also to use against or in the favor of the argumentation with other suppliers another aspect to this one is that we are all platform 
developers, and so we like the control of being able to select packages and pull things in and create platforms. From a user's perspective, if they are building a product, they might just want our platform, a ready-made system, and trust that someone else is worrying about the problems of the ingredients. So instead of us rolling all the complexity of creating platforms up to users all the time, if we can get to a ready-made platform offering that can move and just work for a certain vertical and certain hardware, and then have a trusted entity responsible for it, I think it's a good paradigm for trying to shield users from this problem. Because all the complexity just always rolls right up when we have a component model rather than a platform model, and I think it's because, as computer scientists, we're building components and building platforms, while users doing applications, looking down on the platform, have a different perspective of what they actually want to see.

I think that is actually what we have been observing. There are two things happening. On one side, we have a change because of the maintenance topic, which is becoming more and more important; everyone is getting more and more aware that there's something that's really hard to solve, and so something needs to change. No matter who you're talking to right now in the industrial manufacturing area who is producing devices, they realize something needs to change. The problem is just how to do it, and there's no clear path yet. So we were really happy when we saw the industrial infrastructure project coming up, because I think it needs more space, community space, where these things are discussed and solved in a good way. So that, let's say, the people that are doing it on a daily basis, that are not there yet, not so familiar with this community work, with this way of working, know, "Oh, okay, this is becoming more and more the way things should go," and have an easy path, let's say, to follow. Because we should be clear about this: the people producing the devices are very, very busy getting their start-of-production project finished, and that is what takes all their energy. The closer the start-of-production point comes, the more they get into this sweating position where they say, "Oh, we just need to get it done so that it's working properly." So if these things have not been prepared long beforehand, so that at the point where they potentially have a choice to go another path, that path has already, let's say, somehow proven to be a good industrial choice, then they will not take it, because it just introduces too much risk for the start of production. And that is the main issue: we need more space where these things somehow become industrial de facto standards for how to do it, where everyone is saying, "Yeah, this is a good way." I think every company in this area now has the feeling that something needs to change, that we need to improve things, but they just don't know how. And the smaller the company is, or the more conservative they are, let's say, the less it can be expected to come from them.

Gila, you mentioned enterprise and IT OS vendors moving into the embedded space. I think there actually are things to learn from the way they do things that are relevant to this conversation around the platform model and trust and provenance. We need to bring in somebody worrying about creating a platform and pushing that out, and that hasn't traditionally been the way we built embedded systems; it's been sort of a sea of components, build your own, compose your own platform. So maybe there are some key learnings there that we can take.

I agree, Kate. I just want to add one quick thing on that topic that Thomas started, and somehow we all managed to walk around the soapbox, but it should be acknowledged that the Arm licensee ecosystem has, in a large way, been getting away
with murder for years. If we ignore the immediate supply chain issues, which will potentially resolve themselves, there still are a lot of complexities out there. We hear Jan and others telling us about 15, 20 years of hardware support and all that; you can't even get a reliable end-of-life date for an SoC from any vendor to save your life. They give you one, and then it comes 18 months earlier, or 25 months earlier. And then everybody does things differently: sometimes you have different bootloaders, sometimes different boot chains and how they're stacked up. Bare-metal Linux is an illusion; it hasn't existed for 5 or 10 years. We all work with hypervisors, with virtualization solutions, with multiple cores. Like Martin told us, it's not just about the Cortex-As that are out there; there are other things that run on those SoCs that the products need to be compatible with and work with. So all of that has, I think, made it clear that a platform is necessary; we need to acknowledge that there's all that complexity around it.

So there are all these components out there: there's the Linux kernel itself, but then there are all the components that make up whatever you're running on top of it, and all those components each have their own different life cycles right now, their different support paths, and different levels of quality associated with them. So one of the things I'm wondering is, the same way that the kernel has made its cycle very transparent, should all these other upstream components in our open source ecosystem be explicit about their end of life and make it visible that they are out of service at a certain point in time? On the other hand, there's the other side: there are things like the time utility; it's sitting there, it's been working forever, and no one's been touching it, or only very rarely, every 10 years, and you don't need to touch it that often. But do we need to get more transparency on that aspect, about the elements of the software?

I was going to say, some of those vendors require five NDAs and a month and a half before they even give you the time of day, so I think we're very far from getting the number-one offenders here to be able to have a constructive conversation around that. But I'll let the industrialists speak to that.

Yeah, I think we're talking about a general software management problem that's not specific to the Linux kernel, right? It applies everywhere.

One thing I would like to add here: we were talking more or less about how it's fine that we are on the component level, and maybe it's time to move up to a platform level. But actually, when you look at it, in many, many cases we are at the sub-component level, because in the embedded space the components get patched a lot, and we also have technologies like recipes and things like that that are driving this, to optimize it. So even the components are not a consistent element where we know, okay, this element is in this device, and it would, for example, have this vulnerability or whatever. That is actually something we are trying to drive: that we at least have this understanding, okay, this is this component. Then it's also easier, for example, to look at the maintenance issues, when at least we know, okay, this is distribution-like handling, this is the component, and it's not getting patched 50 times over for different reasons, just like in a regular distribution, I would say. By now I think we are very much at the point where distributions play a major role in embedded. That's also what we do ourselves, but also what we hear from others. I mean, just recently someone said in a private conversation, well, we are seeing less and less of these purpose-built embedded Linux devices, the classic way where you fiddle with every component and choose the right version you
would like to have, but rather distributions, to a certain degree customized but generally taken as they are, because the complexity of the integration is already high enough and the maintenance effort is already high enough. Why do it ourselves?

Exactly, and that is what I was trying to say: in many ways this is what we hope for, that this is the way forward whenever it is possible. In some cases, of course, it's not, because you really need to get the last byte out of it. But whenever it's possible, you also plan this way, just to reduce the complexity, because it actually also lowers the software maintenance cost dramatically. So you can invest a little bit more in the hardware cost if you can lower the software maintenance cost.

I mean, every byte you no longer need to squeeze out of it helps, I guess. There are still some cases, I guess more on your side than on ours, but yes, I think at the same time we see a trend where this squeezing, optimizing the last bit, is decreasing. You don't have the time in your product development cycle anyway. It's more about fixing something or adding something. I mean, yesterday I had to patch systemd, which is a nightmare, obviously. If you're building distribution-based embedded systems, you have to rebuild it and you have to integrate it. And at the same time I was patching it, I was also trying to get the fix upstream. So even if it's not going to happen tomorrow, maybe it will happen in the stable releases, if someone acknowledges it as a stable bug, or at least it happens for the next release. This is something we have to do in our daily work, and we have to drive our engineers and our suppliers: if there's a need to tune something for the better, make sure that you do it and also upstream it at the same time, otherwise it becomes unmaintainable. So, distribution-based, and then also upstreaming as far as possible: at least to the corresponding distribution when you have to change something, but even better, trying to push it up to the corresponding upstream project, for example the kernel itself, the mainline kernel, ideally there first if you can. I mean, sometimes upstream says, "No, you're too old, ask your distribution." But if you can, and the problem persists in the latest version anyway, then do it this way. Yeah, but to be honest, changing this kind of behavior inside big companies, across the mass of the corresponding embedded developers, takes a lot of time. It's a cultural change, and for everyone who is trying to drive change in a company, cultural change is the most difficult and the most expensive one, because it takes a lot of time and convincing people and things like that. So it's difficult; I think we have to be a little bit patient, but I think it's something we need to do.

Okay, I think we're just about out of time. Thomas, last word.

I'm not expecting that to change tomorrow, but yes, there is a lot of historic burden there, and there's a lot of the culture of "yes, we are special and we need to hack it," because I see a lot of industrial kernel patches, and also patches to other packages, every other day, and most of them are done for the very wrong reasons. That's a culture thing, I know, but we have to address that culture problem over time; otherwise we run into a situation where this gets out of control and becomes unmaintainable. We just had a project with a customer; it took us two years to consolidate their kernel and root file system. They actually had a total of 50 different kernel versions, really, 50 kernel versions, plus 25 different root file system creations out there. This is unmaintainable. We have one customer that has more kernel versions than employees.

Okay, overall we need to focus on upstreaming first, we need to find better ways to simplify and make things more transparent, work with the distributions, and get all the pieces pulling together
effectively, and share some of that load with the testing, so that we have a better system going forward as a whole ecosystem.

There's also another aspect to it, about the emerging technologies. I have been involved, my employees have been involved, in TSN-related activities for the last couple of years, and we see a lot of the industrial players, including the system and chip vendors; everyone hacks their own specialties and magic solutions together. At the very end, we need something which is upstream, and we need something which is actually maintainable forever and solves the problem for everyone. But it's so hard to bring these people together and say, "Hey, let's do it. Let's put down the requirements we have, let's look at it from a technical perspective, let's look at it from an open source maintainer perspective, what the upstream people will say about this and what expectations they have, and then let's work on it together." No, it's not happening; we're just seeing tons of horrible hacks being done for no good reason.

Thomas, some good news: my internal users are starting with TSN, and they're saying the very same as you, so there is at least hope from the consumer perspective that we also do some pushing in that direction.

Culture change is still needed for industrial Linux to be effective. And I guess on that note, I will say thank you very much to all of the panelists. It's been a fun discussion, and I think we could easily have gone on for another hour. Thank you. Bye-bye.