Yeah, thanks for joining us for our talk on the Civil Infrastructure Platform. We will tell you something about a project that was recently started and is currently building up, and we are promoting it; I think it is interesting to see where Linux is heavily used today and where it will be used in the future.

Just to briefly introduce myself: I'm Urs Gleim, I'm with Siemens. We are a central unit providing Linux for various Siemens products internally. And this is Yoshitake Kobayashi from Toshiba.

Hi, I'm Yoshi Kobayashi, and I'm doing very similar work: I'm leading a Linux project to provide a Linux environment for all products inside Toshiba. That's why we have joined together to launch this project. Thank you very much.

So let's start with a brief introduction. You might ask yourself: what is civil infrastructure? What is meant by this? Generally, we are speaking about systems which supervise and control our daily lives — mainly systems we do not see, which are invisible: systems built into trains, doing traffic control, built into health-care devices, or — a current hype topic — distributing power, if you talk about renewable energies, for example, or power plants. And of course smart cities: all the devices you have in houses and in the infrastructure. That's what we call civil infrastructure systems. In Japan it's sometimes called social infrastructure systems, which means the same; in Germany we just say infrastructure. The companies who are involved here — Toshiba, Hitachi, Siemens, Codethink and Plat'Home — are active in building these systems. And the good news is that even today, and you might be surprised where, Linux is already used.
We have it in transportation: train control systems, all kinds of rail automation, up to the things you can see, such as the automatic ticket gates, and the controllers in the trains controlling everything you can and cannot see. In energy automation we have power plant control and turbine control, for example. In industrial automation you see just some examples of the controllers we use — controllers for switches or CNC controls; these are the devices which, if you have a smartphone with an aluminium case, make these things at Foxconn, for example. We have a booth on the first floor where you can see some of these examples; you can look at them, you can touch them. Health care: okay, we didn't bring an MRI device, it was too big, but there too these devices have been running on Linux for more than 10 years already. Building automation means controllers controlling lights, HVAC, blinds and whatever else you have in big buildings.

So you see there is a great variety, but these systems have some things in common, and these are requirements which are different from other systems where Linux is currently used — if you think of current IoT architectures, the cloud and so on. They have requirements we call industrial-grade. This is basically reliability: these systems have to work 24/7. We do not even want to notice them; they just have to work. Functional safety is a topic for many of these systems: you do not want trains to crash, for example. Functional safety is about everything which can harm people.
If the system does not work correctly, people can be injured or even die, and this is a big thing, because a lot of effort goes into ensuring functional safety, certifying it and so on. Security, of course, is an important topic, because you do not want anybody to control your trains, your houses, your power plants, so there is a high emphasis on security topics these days. And last but not least, we are controlling things — we have a lot of control loops — so in most cases we need real-time capabilities.

Another big difference to other products is the product cycles and the lifetime. If you think of your smartphone, maybe you throw it away after two to four years. In these kinds of systems we talk roughly about a factor of 10, so we are thinking in decades. Another thing that is different from other systems is that they are not updated every week or every month. This has a reason: all these industrial-grade requirements shall not be jeopardized, so the update strategy is very conservative, and they are only updated if really necessary — if there are security problems, for example. It is also sometimes not so easy to upgrade them; this is changing, but often you need physical access to get to the device. This is an important topic and we will come back to it. On the other hand, we also have the same problems as every other company: we have to reduce our maintenance costs, our development costs and development time. This is similar to other businesses.

Looking at the systems today and comparing how they evolve, you see things are changing.
If you look at the systems which were built 10 or 15 years ago, they were very proprietary, very closed; homegrown operating systems were used. Now these systems get more and more complex, and you cannot really afford to do everything on your own — it doesn't make sense at all. So of course Linux is used, as I mentioned, and other open source projects are used, and systems become more and more open, because people want to extend functionality even when the systems are already in the field. A good example is analytics: people started to collect data, they put analytics functionality on the device to react to the data, but in many cases we are still learning, so people want to exchange and upgrade the analytics functionality over time, and this is something which was not done before. This means openness. On the other hand, there are systems with customer-specific extensions, and the customers want to build them on their own.

Another change is that we are moving away from standalone systems which were not connected — which was easier for security, of course, but, as I said, you need physical access to update, configure and provision these devices. This is changing; devices get more and more connected. We all know what happened in the area of consumer electronics and what is happening in IoT, and all these architectures and ideas go into these systems as well. This is the reason why it's really important to build on standards and not reinvent the wheel again and again. That's the reason why we said it's really time — that's in the title — for creating a joint project: not only inside one company, but joining the forces of different companies. We've seen this in other areas.
A lot of things are going on in the IoT area. We know, for example, of several efforts in the automotive area where OEMs which did not talk to each other before now work together in one project, and this is basically the same approach for these industrial-grade infrastructure devices.

One goal is to put together, from the ground up, starting from the operating system, a platform which can be used by everybody. These are things which are not differentiating our products; they are commodity, everybody has to do them. There is also room for improvement, for example bringing things upstream: we currently have a lot of out-of-tree code, so this is the chance to move that forward. The point in the middle is maintenance cost, which I already mentioned: we are currently maintaining several versions of the Linux kernel internally, several distributions, several configurations. Even inside the companies it is now necessary to harmonize this, because we have more and more systems on Linux, and it also makes sense to share this with other companies. Last but not least, the architectures are changing: we have influences from IoT, we get analytics, everybody is now talking about artificial intelligence, however that ends up being used. The thing is that we really need more complex software stacks to run today's applications, so we have to build a platform to run them on. So the statement is: civil infrastructure systems require super-long-term maintenance — I'll come to the lifetimes again — an industrial-grade embedded Linux platform for the smart digital future. Let me go back.
No — just to summarize: what we are building here is a base layer, as I said, a platform starting from the Linux kernel and evolving from there, really filling the gaps between what is currently there in the open source projects and what is additionally needed for industry. It's not a specification project: we provide the implementation. And we want to get more and more companies in, providing building blocks and tools. We'll come to how this will grow in the future and what the roadmap is. The initial focus is, of course, agreeing on the versions and agreeing on the tools — the build tools, the integration and test tools and so on — but the initial benefit we will get out of this is a long-term-maintained base system. I've said this several times now: super-long-term-maintained, and I would like to explain a little why we do this.

Let's look at some numbers. If you look at railway technology, for example — this has been around for a while — this shows a little how long these systems stay in the field. They have quite a few years of development time, mostly between three and five years, ensuring all the functional safety measures we have to take there. The development is very slow because of the additional safety requirements. Then, in this case, the different railway companies have their own international and country-specific rules, and the systems have to be adapted to them, so this takes again two years or more. And then we really have to deal with the certification authorities to ensure that the systems are allowed to be used in this context.
In Germany that is, for example, the TÜV or the Eisenbahn-Bundesamt and others, and this takes again a year: to run all the tests, to do all the safety argumentation, to provide all the documents and so on. And this is an important point: you have to do this again and again whenever you change something in the system. So we need three months and more for re-certifying an existing system. If we just want to do a firmware update and exchange the kernel, we have to do this certification again. Of course this depends highly on the amount of changes, but in the end we have to re-certify the whole system. And, last but not least, as I already said, the lifespan is 25 to 50 years.

The next example is even older — this is from the pre-computer era, so don't take it too seriously — but the numbers are about the same. We have three to five years of development time, we have customer-specific extensions again, and we have heard different numbers for how long such a product has to be supplied: six to eight years, 15 years plus maintenance. Again we have lifetimes of up to 50 or 60 years, in power plants for example. It depends, of course, on the system we provide, on the product; there are a lot of systems in one power plant, some stay for long, some don't. You can see an example at our booth — a power plant control. I'm not going into details here; come to our booth and we'll explain it.

Yes — the question was: given these long certification times for updates, do we have a big window for exploits?
Yes, yes, we have. Actually, I think this has to change for certain security issues, but we'll come to the point of how we do these updates. If it's about security, the strategy is really to do a small security fix, and to do it very quickly, and we are currently in the process of learning, together with the certification authorities, how to bring in these updates very quickly. But the thing is: we only have a chance if it's really a small patch. We don't have a chance to shorten this if we just switch to a new kernel version, for example — and now we are back to this topic. Don't get me wrong: I'm not saying we are maintaining one kernel for 50 years; that will not work. But we need maintenance for longer than the current long-term efforts provide, and we are dealing with the costs — we will come to this.

Getting back to the kernels: why do we do this? First, the clear fear of regressions — performance regressions; you all know this if you have a smartphone or, depending on the operating system, a computer: every new system is slower than the old one, or at least it happens from time to time. Stability — okay, sure. We talked about the recertification costs. Another thing is that we mostly start with Linux BSPs provided by silicon vendors, so what we do is take those and, depending on the quality, work on them; we try to bring things mainline as well, but we have a lot of different versions. It has harmonized a little because of the LSK and LTSI efforts, so silicon vendors very often start there, but even using those it's quite a variety of versions, and we would like to have fewer versions in the field. Fourth, it's not nice, but we still have vendor-specific code; we have forks.
We have out-of-tree code. We are internally working on bringing upstream whatever makes sense, but in reality I would expect that we will still have some out-of-tree code in the future, and there's a reason: it is always a lot of effort to bring this code to a new version and to maintain it across many versions.

So the scope is, as I said: we start from the kernel and build up, and we add the things we need. At the beginning it will be very basic, like the kernel and real-time support, and then we add some more libraries on top. An important point I would like to mention is that the tools go along with the software stack on the device. It's not enough to just provide and maintain the device stack; we also need a consistent tool stack which can be used over the years and enables us to maintain the whole thing. The right pillar is concepts: yes, I said it's not a specification project, but we also try to provide documents which help us to do the safety certification later on — that's a bit later on the roadmap.

What kind of systems are we focusing on? If we look at the processors and memory sizes, it's basically the systems where you can run, let's call it, a normal Linux without pain. So it starts at a device class of Cortex-M4 and up — if you compare this to other boards, it's about the order of magnitude of a Raspberry Pi — and it goes up to really big systems which you might not consider to be embedded systems at all: really big servers, but still special-purpose computers built into devices like MRI scanners, for example. So this is the whole range. We will start with some reference platforms — we started with PCs at the moment — but we will come to the infrastructure.
We are currently building up the test infrastructure in a way that we can connect a lot of different products and hardware in the different companies, as a distributed integration tool chain. With this I would like to hand over — oh, another question.

Yes, the question was: do we have different platforms, and do we test different kernels on different processor architectures and platforms? Yes, definitely. We are building this up in a distributed way. We are not putting everything in one room, because that is a lot of overhead, so we keep the infrastructure distributed, currently across all the participating companies, where we have some targets set up which are used for the integration and regression tests. Okay, thanks — maybe we can talk about this later.

I'll take over from this slide. We just set up our new project, and there are some new members: it includes Hitachi, Siemens and Toshiba, working with Codethink and Plat'Home. Each company is funding some budget to hire CIP developers for maintenance, and each company's developers will also join this project, because we should work together to solve our issues, and this kind of development work has to be done with upstream projects, such as distribution projects and others. Our first focus is to establish super-long-term support, so when we decide on something, we should collect the source code from upstream and maintain it for a long time.
This is what we would like to do in the CIP project, and we have one important policy: the upstream-first policy. Even though we maintain for super long terms, such as more than 10 years, we would like any effort to go upstream first and then be backported. If we don't take this policy, we will have serious issues: for example, if we carry local changes just for CIP, and the same fixes later land upstream in a different form, that will cause serious conflicts. We don't want that kind of conflict, so we would like to work with upstream.

Again, the development process: this slide shows a power plant example and also a railway example. The first line is a timeline. Civil-infrastructure-related systems usually take long to develop, such as three to five years, and the BSP development is usually done at the very beginning, to support the specific hardware. The typical LTS support period is two years. That means that by the time we release the product, the LTS support period has already expired — and we sometimes need to provide such products for 20 to 50 years. This is a huge gap between the current LTS and our requirements, and sometimes hardware replacements happen as well.

What we currently have: the community has LTS (long-term support) kernels, supported for two years, and the LTSI (Long Term Support Initiative) also supports kernels for around two years, same as LTS. But our requirement is 10 or more years of support, and currently we are doing this by ourselves — even inside one company, each department is doing it by themselves. That means we have many similar efforts, and that causes quite serious issues, because we cannot even manage all of this inside one company.
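To make that gap concrete, here is a small back-of-the-envelope calculation. The numbers are the ones quoted in the talk (two-year LTS support, three-to-five-year development, decades in the field); the function itself is just illustrative arithmetic, not anything CIP publishes:

```python
# Back-of-the-envelope: how long after an upstream LTS kernel's EOL does a
# civil-infrastructure product still need kernel maintenance?
# Numbers are the rough figures from the talk.

LTS_SUPPORT_YEARS = 2        # typical upstream LTS / LTSI support period
DEV_TIME_YEARS = 4           # BSP kernel is picked at the start of a 3-5 year development
PRODUCT_LIFETIME_YEARS = 30  # field lifetime: decades (20-50 years quoted)

def unsupported_years(dev_time, lifetime, lts_support):
    """Years the product is in the field after the chosen LTS kernel's EOL."""
    # The kernel is picked at year 0; its LTS support ends at year `lts_support`,
    # but the product only ships at year `dev_time` and then lives `lifetime` years.
    end_of_product_life = dev_time + lifetime
    return max(0, end_of_product_life - lts_support)

gap = unsupported_years(DEV_TIME_YEARS, PRODUCT_LIFETIME_YEARS, LTS_SUPPORT_YEARS)
print(f"Maintenance gap: {gap} years")  # -> Maintenance gap: 32 years
```

With these inputs the kernel is already out of upstream support two years before the product even ships, and the product then lives another three decades — the gap the talk calls "huge".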
So what we would like to do is to jointly develop one common base layer which can be supported for more than 10 years. At the beginning we will also need to add some features, but it has to stay supported for more than 10 years. We are currently planning to allow some backporting effort inside the CIP kernel, but current kernel development is quite fast, so directly backporting a newer feature — say, from five years later — cannot really be done. To avoid this kind of issue, we will have something like a two- or three-year release period for a newer kernel version, and before each new release we would like to have multiple merge windows to allow backporting. After the backporting period there is a super-long-term support period for security fixes and critical bug fixes.

Today we have one important announcement: kernel version 4.4 will become the CIP super-long-term support kernel version. There were some selection criteria for deciding on this kernel version.
We had a lot of discussion inside the CIP project, but to decide on versions we of course have to consider our requirements, because this is a collaborative project funded by the companies. At the very least we should pick an LTS version, and I think it should be synchronized with LTSI. Unfortunately there is no LTSI release this year, so we considered the latest LTS version. It can be used for upcoming products: as you saw on the previous slide, our BSP development is usually done at the very beginning, so we had to consider that upcoming projects can use 4.4, and that it needs to be supported for a long time. The next SLTS kernel version is not concretely fixed yet, but it will be announced in two to three years, because supporting only one kernel version for more than 10 years is not realistic — it cannot be done. At that point we would like to synchronize with the LTSI kernel version, because we are doing a similar effort; that's why we would like to do that.

We have a super-long-term stable team inside CIP. We already announced this at LinuxCon North America: Ben Hutchings from Codethink is the first super-long-term kernel maintainer for the CIP project. He is very well known as a Debian contributor, and he currently maintains two kernel versions for the Debian kernel. He will be supported by one additional developer, because only one person doing super-long-term support is not safe enough.
So we would like to have two developers maintaining the kernels. This work already started in September, last month, and we are currently setting up the super-long-term support development and validation processes. We also need a repository and some infrastructure for the SLTS kernel.

This slide shows our plan. The development process will be similar to LTSI, because we would like to take backports from the upstream kernel, but one important point is that we have to consider backporting restrictions: if a backport changes the kernel API or ABI, it will not be acceptable for the CIP kernel. Then there will be a validation period. Currently we are establishing a kernel test infrastructure — I can show you the latest state later. To validate the kernel we are also setting up a testing infrastructure, and the goal is to perform the testing on real hardware, not on VMs, because we use that kind of real hardware in our products. I am focusing on the CIP reference platforms here, but the reference platforms are not decided yet, so this is quite open for discussion, and if someone wants to bring their hardware for testing, that would be very welcome.

The important thing is that critical fixes need to go out as soon as possible, so functional testing that takes a long time to run is a bit difficult. Also, this kind of effort is already being done by other projects, such as OSADL for real-time issues, and we don't want to duplicate such efforts.
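The "no kernel API/ABI changes" rule can be checked mechanically. As one illustration — this is not CIP's actual validation tooling, which the talk does not name — a simple heuristic on a `CONFIG_MODVERSIONS` build is to diff the exported-symbol CRCs in `Module.symvers` before and after applying a backport; a changed or vanished CRC signals a changed interface:

```python
# Sketch: flag kernel ABI changes by diffing two Module.symvers files.
# Module.symvers lines are tab-separated: CRC, symbol, module path, export type.
# Illustrative only; not the CIP project's actual tooling.

def parse_symvers(text):
    """Map symbol name -> CRC from Module.symvers content."""
    symbols = {}
    for line in text.splitlines():
        fields = line.split("\t")
        if len(fields) >= 2:
            crc, name = fields[0], fields[1]
            symbols[name] = crc
    return symbols

def abi_breaks(old_text, new_text):
    """Return (symbols whose CRC changed, symbols that disappeared)."""
    old, new = parse_symvers(old_text), parse_symvers(new_text)
    changed = [s for s in old if s in new and old[s] != new[s]]
    removed = [s for s in old if s not in new]
    return changed, removed

# Example with two fabricated Module.symvers snippets:
before = ("0x12345678\tkmalloc\tvmlinux\tEXPORT_SYMBOL\n"
          "0xdeadbeef\tregister_netdev\tvmlinux\tEXPORT_SYMBOL\n")
after  = ("0x12345678\tkmalloc\tvmlinux\tEXPORT_SYMBOL\n"
          "0xcafecafe\tregister_netdev\tvmlinux\tEXPORT_SYMBOL\n")
changed, removed = abi_breaks(before, after)
print(changed, removed)  # -> ['register_netdev'] []
```

A backport whose diff reports any changed or removed symbols would then be rejected or reworked before entering the SLTS tree.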
Maybe we should work together on those. The last important point is that we would like to share the testing results — not only inside CIP; we would like to publish them. We have already set up a kernelci instance, currently just working locally; it is Vagrant-based and easy to bring up — I think you can pick up the environment from GitHub and start testing with kernelci.

That was the kernel; the other consideration is that we have to pick some user-space packages. The selection is currently ongoing, but we would like to start as minimal as we can, because our initial focus is mainly controlling systems, and those kinds of systems do not require such a big set of packages. This is one example of a small set: the Linux kernel with the PREEMPT_RT patch, and in user space, for example, a bootloader, Busybox, the C library — something like glibc — and, for security reasons, OpenSSL or something like that. So this is quite a small set. But even though it is small, we also have to consider the build tools. The build tools are only used for development; they do not run on the device. That means the device part has to be supported for a long time, while the build part only has to be kept for the product. So we are starting with a very small package set and will extend it based on our requirements.

This is what we are currently discussing: CIP should collaborate. We would like to collaborate with other projects, on maintenance efforts and also development efforts.
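The maintenance-scope split implied by that package list — the device stack needs super-long-term maintenance, the build tooling only needs to last through the product's development — can be made explicit. The package names here are just the examples from the talk plus hypothetical build-host entries:

```python
# Split an example package set by maintenance scope, as described in the
# talk: what runs on the device needs super-long-term maintenance; build
# tooling only needs to work during the product's development phase.

PACKAGES = {
    # name: runs_on_device
    "linux (with PREEMPT_RT patch)": True,
    "bootloader": True,
    "busybox": True,
    "glibc": True,
    "openssl": True,
    "cross-toolchain": False,   # build host only (hypothetical entry)
    "build system": False,      # build host only (hypothetical entry)
}

slts_scope = sorted(p for p, on_device in PACKAGES.items() if on_device)
build_only = sorted(p for p, on_device in PACKAGES.items() if not on_device)

print("super-long-term maintained:", slts_scope)
print("development-time only:     ", build_only)
```

Keeping the on-device list this short is what makes a 10-plus-year maintenance commitment tractable at all.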
Looking at kernel projects: LSK also does some backports from upstream, and other distributions, such as Ubuntu, and SoC vendors are also using 4.4 currently; we would like to collaborate with such projects. The next point is the selection of features for backporting. PREEMPT_RT is, as you know, not upstream yet, but most of our industrial companies strongly rely on the PREEMPT_RT feature. Since it is not upstream, PREEMPT_RT might be merged into a separate branch for testing, so CIP would probably have two branches — one including PREEMPT_RT and one without — and both branches would be tested. This is currently under discussion. Security is also one of the most important features, so we would probably like to backport the KSPP (Kernel Self Protection Project) effort. And then there is everything else: testing, maintenance policy and user-space package selection.

This is the milestone slide — I don't have enough time, so just quickly: this year we are deciding a lot of things, and we would like to finish this establishment phase this year; next year much more development will happen.

And this is a kind of commercial: please join the Civil Infrastructure Platform. The current members are Hitachi, Siemens and Toshiba, working with Codethink and Plat'Home. Why join? We will provide an industrial-grade Linux base layer, so if you are using this kind of base layer, you can say: this is the CIP base layer, it is stable, and it has a long support period backed by CIP. Maybe that is how you can convince your boss.

Here is the contact information.
The important thing is that we have a website, www.cip-project.org, and from the CIP website you can get to the CIP wiki. The development effort will be open on the CIP mailing list and the CIP wiki, so please join the CIP mailing list. Now that we have decided on the kernel, we are open for development, and much of the development will happen in the open on this mailing list and wiki. Please come and join. So, thank you very much. Any questions? Yes, please.

Yes — I know that Greg Kroah-Hartman maintains the LTS versions, but LTS does not allow feature backporting. So CIP would like to collaborate with Greg Kroah-Hartman: during his maintenance period we will also make some backporting effort. Security fixes should go to Greg Kroah-Hartman, but the other things — for example the KSPP effort, which needs to be backported from upstream — have to be managed by Ben Hutchings. Next question, please.

I'm smiling, because I heard this a lot in industry back when they all had homegrown proprietary operating systems, and I said: if we go to Linux, we have a common platform already. As I said, we will improve security because we have shorter reaction times; we can do small fixes even on old systems, without changing the whole thing, and I think this increases security much more than hiding behind some proprietary solutions which are not secure at all. The systems are really isolated from the rest of the world; they are not connected to corporate networks or to the internet, and in practice the bigger risk is things like fire safety — a system not reacting to a fire.
That is a higher risk, and what people do is develop small fixes, and the customer and the producer have to agree that the risk of not doing anything is higher than the risk of installing the fix. And in the certification of the system you don't only certify the system, you certify the process of producing the system, so you have a minimum level of assurance that this small fix is going to be safe, and it will then be certified in the next window.

One of the things I've seen is that systems you don't think are connected can be attacked. A friend of mine showed that it's actually not difficult to go in and tap phone systems, or what were considered to be secure links, which were carrying unencrypted data. So I think people need to be very careful about these things, because they may assume that they are on a safe system, but you can't guarantee that somebody hasn't physically compromised the security, or just messed up a configuration and left the system open to the world.

Thank you. And there was another question.

In your slides you mentioned that you wanted to have a period of five years where backports from upstream would be integrated, like SoC support. Just to confirm: the intention is to get all the required stuff mainline first, validate it there, and only then backport? Because in the current environment I don't see the required SoC support happening mainline, in the time frame relevant for your systems, without additional effort.

It's getting better. The SoC vendors also feel the pressure, and all the companies mentioned here have this pain point, so I think things will change. Maybe it takes a few more years, but it's changing. One more? Sorry.

So you're saying that you're not only doing security-fix backports.
You're also doing feature backports, for a long time. I've been on projects like this where we started with a kernel — say it was 3.8 — and we backported security fixes, and those were easy. Then the customer wanted: oh, by the way, we want this latest USB fix. And by the time you're nearly a thousand patches down the road, it becomes increasingly difficult to take stuff from upstream, because upstream has moved on. Are there plans to increase the effort as the difficulty of the task increases? Are the people committing to this five-to-ten-year plan also going to commit, down the road, to more engineering effort when it is needed?

It's hard to give a general answer on that. I think we have to decide case by case, aligned with the requirements of the companies who are participating and where this is used. In the end we'll start with case-by-case decisions, and maybe after two or three years we can give you a general statement on what the best practice was and what we learned. Do you want to comment from the LTSI view?

Maybe another comment on the feature-backporting discussion: yesterday, at the LTSI presentation, I took a look at the current state of LTSI per se, and it's basically three dozen patches from Altera, support for their SoC, and some 150 or so patches from Renesas, and those are not just backports. Those are, from my point of view, patches that have not been reviewed by the community. Obviously Greg does what he can, but he's not an ARM SoC maintainer or a subsystem maintainer, so things will get through which would not have been accepted by the respective subsystem maintainers.
So you will be basing on a kernel that has drifted from being pure mainline plus feature backports.

I think that for the feature-backporting effort, LSK is a good example — at least it is the example that we are looking at. I'm not saying it's perfect; I'm saying that it's working for the Linaro members. And the fact that we are taking a look at LTSI doesn't mean that we're going to take the code; the consumer industry has different requirements than CIP. The fact is that this hasn't been done before, so we have to base our first work on, and learn from, the projects that are doing this, and basically there are two: LSK and LTSI. So we are taking a close look, and we will have to learn our own lessons. The fact is that these companies are already doing this behind the firewall, so it's about putting it together and bringing in more people. It's not completely new to them; they are already doing it, so there are tons of lessons they have learned inside that we are trying to bring outside the firewall. I'm not saying — and nobody's saying — it's going to be easy, but it's possible, and the proof that it's possible is that they are already doing it and the economics work. We just need to learn how they are doing it, learn from other projects, and come to a conclusion on how this can be done in a way that is sustainable. It's the same case as some years ago with the mobile industry: we were not convinced, right? And it's working. Not perfectly, but way better than when we started.

Agreed, agreed — the challenge is high; everybody agrees on that. Thank you very much for attending.