Okay, thanks for coming to our presentation. I'm Yoshitake Kobayashi from Toshiba, and I'm leading a team in my company that provides many products related to civil infrastructure systems. Thank you. Welcome also from my side. My name is Urs Gleim, I'm with Siemens. Oops, we seem to have a problem with the connection. Now it works. I have a similar role to Yoshi's, so we also have a team providing Linux for different industrial devices inside the company, and both of us were founding members of the Civil Infrastructure Platform project, which is a Linux Foundation project. Today we want to give you an update on what has happened. We called the talk "Industrial-Grade Open Source Base Layer Development", and I will start with a short introduction setting the context. So what is CIP? Compared to many other projects you see here at the conference, it does not have the latest and greatest features. I think it's the most conservative project you will find at the conference, but nevertheless it's one of the most important projects of our civilization, so we will save the world in the end. But seriously, what we are doing is providing an open source base layer for embedded systems used in civil infrastructure systems. We work closely with the communities, and we will see examples of this. What we are not: we are not a new Linux distribution; it's much less than that, and after the talk you will know what we are talking about. We are really starting bottom-up here. I will briefly explain what kind of products and systems we target, and you can see all of these systems on this slide. The special thing about these systems is that they are mostly hidden.
You don't see them, but everybody expects them to just work, providing our electricity, our water, our transportation. We have some typical product examples which our companies provide. Starting from left to right, we have the transportation systems doing track control and controlling the vehicles, up to ticket gates. We have energy systems: energy distribution, power generation, turbine control. We have a lot of systems in the area of industrial automation, controlling the production lines at car manufacturers, for example. Here you see the CNC control machines which are used, for example, at Foxconn for making the housings for iPhones and other mobile phones, and communication devices. We have healthcare products, building automation, and also broadcasting devices. If you look at all these products, and we will see the members later, like Hitachi, Toshiba, and Siemens, all these companies have had products in this area since the early 2000s which run on Linux. You will find examples everywhere, and the oldest systems are more than 15 years old now, so we have gained a lot of experience using Linux in these kinds of systems, and there are also some issues. The issues in these systems are mainly the long lifetime, the reliability, and more and more the security questions that come up. To show you the difference from all the other developments: if you look at the automotive industry or other industries, we have different requirements. These systems are in the field long-term, and we also have a completely different development cycle. Let me show you one example of a typical lifetime.
This is a railway system which is still in use and which is more than 40 years old. Just imagine: look around at all the embedded boards here at the conference, and let's maybe meet in 40 years, which is 2057, maybe ELCE is in Prague again, and then maybe we have a picture of a current board in the presentation. These are the lifetimes we have to think of. If you look at the current product development cycles, it really takes a long time: we have three to five years of development time, we have another two to four years for customer-specific and country-specific extensions, especially in the railway area, then we have all the safety certifications and authorizations on top, and each and every change takes a lot of effort to bring in, for example to redo all the certifications. Combined with the fact that these systems will run for a really long time, from 25 years up to 50 or 60 years in power plants, you can imagine that it's not an option to switch to the latest Linux kernel every year or every two years. So we need different solutions for this, and actually we already work in a different way inside the companies. There is an additional factor which also pushes in this direction. Everybody is talking about IoT, and industry is talking about industrial IoT, but the reason we have to look at this now is that we have many more devices close to what we call the field, the sensors and actuators, and here in the middle we have IoT gateways and edge devices. Most of them run on Linux, and the number of these devices is massively increasing. Especially security-wise, it's not manageable if you have 100 different products with 100 different Linux configurations running on them, so this really is a huge pull in the direction of harmonization and towards finding a solution which is more sustainable.
So we have all these devices, and especially in industry people move functionality down from the cloud, compared to traditional IoT, to these devices, so it is very important to set these devices up in a sustainable way; otherwise the effort of maintaining these systems will kill us. To summarize the problems: we have to survive for a very long time, and we have industrial requirements: robustness, security, reliability. What we do in this project, and we will go into the details shortly, is nothing new, because it was already done for years, but we did it in several companies, each company on its own, and we even did it multiple times for different products. That's why we said it's time to change something. We looked for a way to organize this, we talked to our competitors and all the people having the same problems, and we agreed that nobody buys these systems because we have a special Linux version inside; it's just a requirement that we ensure the long-term maintenance, and that is a perfect setup for a collaborative project. We decided to do this under the umbrella of the Linux Foundation. You might know there are other projects; if you go to the website you will recognize the names and the logos. Most of these projects focus on IoT, enterprise, and cloud technologies, and they have in common that a lot of companies back them in terms of people and money. This was the reason to say we need the whole industry collaborating in the same way, focusing on long-term maintenance, an industrial-grade Linux stack, close cooperation with all the other projects, and a strong focus on upstream work. So in April 2016, which is one and a half years ago, we founded the Civil Infrastructure Platform, and the next slide gives a brief overview of who is currently participating: this is Hitachi, Toshiba, and Siemens, who have roughly the same product portfolio in many areas.
Then we have Renesas as the first silicon vendor, who uses the CIP platform as a reference platform; we have Codethink, with a lot of open source and system software development expertise; and we have Plat'Home, coming from the industrial IoT side. All these companies first of all provide people participating in all the activities, but besides this they also provide some money, which gives us the freedom to fund related projects and maintainers and to really get this up to speed; this is how these kinds of projects work. But what is maybe more interesting for you is what exactly is being done, what actual work is going on. We started really bottom-up: we started with the kernel and set up what we call the super-long-term supported kernel. We agreed on a kernel version and built up the infrastructure around it; Yoshi will go into the details. Now we are building this up, bottom-up: we first add the packages which are the least common denominator that everybody needs, but we will see it's much less than a distribution, and the idea is to evolve this over the years, to add additional packages step by step and to really have a common base layer which can be used by everyone. With this I would like to hand over to Yoshi to go into the details of the current activities. Thank you. Okay, let me describe the current status of the CIP base layer development.
First I would like to make three announcements. The first one is that CIP has just released Board at Desk, which is CIP's kernel testing environment, made by the CIP project together with Codethink, and it makes heavy use of LAVA and KernelCI, so it is a kind of collaborative project. The next one is that CIP Core has just launched: we need to create a base layer for industrial-grade systems, and CIP Core is the first step towards creating our base layer; I will describe the details later. The last one is that CIP has just decided to adopt Debian as the CIP primary distribution; I will also describe later what "primary distribution" means. Our scope of activity covers quite a variety of things, but we need to prioritize some technical topics. When we launched the CIP project, we first started to create a long-term support strategy based on stable kernels, because long-term support is, as Urs mentioned before, quite important for our systems. The second topic is real-time: real-time is one of the most important features, because most controllers need to support real-time behavior. The third one is testing automation, which is related to Board at Desk, and finally we just announced a build environment as the CIP Core project. So I will describe the details: the first topic is kernel maintenance, where we picked one kernel to maintain for more than 10 years; the second is PREEMPT_RT; the third is testing; and then CIP Core. Now for the details of the CIP SLTS kernel development, which this slide shows. We picked Linux 4.4 as the CIP super-long-term support (SLTS) kernel. This kernel is based on the Linux stable tree, and as you probably know, the maintenance of Linux 4.4 was just extended to six years, as announced by Greg Kroah-Hartman, but CIP needs longer-term support, more than 10 years.
So we decided on the maintainer: it is Ben Hutchings. He is also well known as a Debian kernel maintainer, so he has experience supporting a kernel for a long time, and the latest CIP kernel was just released last week. This is the latest status of the kernels. When we work on the kernel, we have some quite important policies. The first one is an upstream-first policy: without this policy we believe we cannot maintain the kernel for a long time, because if we carry local patches they can cause serious regression issues in the future. This is why upstream-first is our first policy. Then, we currently backport some features from upstream kernels, but we only focus on a limited set of features, because backporting a lot of things from the upstream kernel also causes serious issues. Fortunately, our use cases don't need many new features, because we are a rather conservative project. Right now we have backported some security features and board support packages from the upstream kernel. For how we maintain the CIP kernel, we have a good example here. We picked up a patch from the stable kernel review queue; the patch is for 4.4. 4.4 is currently maintained by Greg Kroah-Hartman, but our CIP maintainer Ben Hutchings also helps by reviewing the patches. Recently this patch was submitted for the 4.4 stable review, and Ben reviewed it: this one line caused a memory leak. This kind of work is done by CIP to make the stable kernel even more stable. Probably some of you are also building super-long-term supported systems and may want to know the next CIP SLTS kernel version. Currently we are focusing on 4.4 to maintain it for a super long time, but for the next version we can only say approximately every two to three years, so it may happen next year or the year after. At that time we also plan to pick the kernel version from the stable kernel tree.
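To give an idea of what such a one-line review catch can look like, here is a hypothetical userspace sketch (the actual patch is not shown in the talk, and the function name is made up): an error path returns without freeing a buffer allocated earlier in the function, and the fix is a single `free()` line.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical example of the kind of one-line memory leak a stable
 * review can catch: an early-return error path that forgets to release
 * a buffer allocated at the top of the function. */
char *duplicate_if_short(const char *src, size_t max_len)
{
    char *buf = malloc(max_len + 1);
    if (!buf)
        return NULL;

    if (strlen(src) > max_len) {
        free(buf);   /* the one-line fix: without it, buf leaks here */
        return NULL;
    }

    strcpy(buf, src);
    return buf;
}
```

The pattern matters for long-term maintenance because a leak on a rarely-taken error path can survive for years on a device that runs for decades.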
Because otherwise we would just be maintaining one kernel without any help, and we would like to collaborate with the stable kernel development. Also, we know some chip vendors want to have out-of-tree drivers for their board support packages. In general, out-of-tree drivers are not supported by CIP, but even then the CIP kernel is useful, because most of the common part is maintained by CIP, and the SoC vendors may only need to support their device-specific parts for the super long term. Okay, the next topic is the PREEMPT_RT part. Most members in the industrial area want real-time support. The real-time patches are developed by the Real-Time Linux project; we pick up the real-time patches from the stable RT tree and merge them into the CIP tree. Currently CIP RT is under development and not part of the official CIP project yet, but it will soon move into the official CIP repository. If you want to see the current development status, you can go to GitHub. Most importantly, CIP has just joined the Real-Time Linux project as a Gold member, so we directly help support it, and we are discussing the stable maintenance together with the Real-Time Linux project. If you want more information, you can go to the Real-Time Linux project page. The next topic is CIP testing. This slide shows the milestones for CIP testing. At the beginning we needed to create a testing environment which can be used not only by CIP members but also by community developers, so we created Board at Desk for a single developer, to make the testing environment usable on their own desk. The next milestone is CIP kernel testing: we need to start CIP kernel testing as soon as possible, because testing is quite important for us.
I think we are currently at this stage, trying to define the kernel testing more concretely. As announced, CIP testing environment Board at Desk version 1 has been released. If you go to this URL you can find the details of the Board at Desk environment. There is a lot of documentation available, including how to set up Board at Desk for your use case and what kinds of things can be done with it. It is quite easy to use and is based on upstream projects like KernelCI and LAVA v2, which are also commonly used by kernel developers; that's why we chose them for Board at Desk. As a next step, we also want to collaborate with other testing efforts. For example, Automotive Grade Linux also uses KernelCI and LAVA v2 for their testing environment, so we would like to collaborate and share this kind of effort with other projects. Next, we are also trying to define what tests should look like and how results should be shared, because sharing test results is important for kernel developers to recognize regressions. This kind of feature will be added to Board at Desk soon. Now I'd like to talk about CIP deciding on Debian as the CIP primary distribution. The meaning of "primary distribution" is that CIP would like to work with Debian to achieve longer-term support, and CIP will select the CIP Core packages from Debian packages. Currently Debian has a long-term support (LTS) project inside the Debian project with a five-year support term, but the CIP requirement is, for example, 10 years or more, so we would like to fill this gap by collaborating with the distribution. CIP members are also interested in Yocto Project-based build systems, as the Yocto Project is quite a flexible framework for us to build the CIP base layer.
The shape of the CIP base layer is quite small at the beginning. As you can see in this list, there are fewer than 10 packages here, including the kernel itself. We of course need the kernel, very basic utilities and libraries, security features, and so on. As we said, we start as minimal as possible, and these are our initial candidates for the component set. For defining the component set we have a concrete idea, keep it as minimal as possible, but defining the component versions is quite difficult; that's why we would like to collaborate with a distribution such as Debian. So the CIP Core project has actually started, and CIP Core aims to provide a way to create and test installable images. This figure shows what will be done by CIP Core: CIP Core uses the CIP kernel plus Debian source code, or pre-built Debian binary packages, to create a minimal base system. That is the concept of CIP Core. To create the minimal system, we use BitBake or the Debian packaging system. We have already started this project internally and already support some boards, including a Renesas board and the BeagleBone Black. The current status of the CIP Core development is based on the meta-debian layer, which is called Deby. It creates the target system from Debian source code and kernel source code. But we would also like to use binaries, because pre-built binaries can accelerate our development time, and there are some approaches already available: one is Isar and the other one is ELBE. These efforts are also being considered inside CIP for building CIP Core. This slide shows the differences and the common parts of the currently available options: for example, Isar and ELBE use Debian binaries while Deby uses source code, and Deby and Isar use BitBake, and so on.
Currently these three projects are also talking to each other about how to join or merge their efforts, because all three projects are based on Debian. This slide shows the gaps and common goals between Debian and CIP. Currently Debian LTS provides a five-year support term, but we would like to extend that to more than 10 years. Another possible area is open source license compliance: Debian has quite a nice review process for license compliance, and they have also decided to use the DEP-5 format, which is very similar to SPDX, defined by the Linux Foundation. If they provide DEP-5 metadata, it is easy to generate the license information for each customized project, so we would like to exchange this kind of license review result, which we have already produced internally. This kind of effort can be done. The other topics we are discussing are functional safety, the year 2038 issue, security, and so on. The discussion on these has just started, but we know this kind of effort is also important for other projects, so we would like to collaborate with them. So that's my part, thank you. Let me take just two minutes to summarize what we heard; thanks, Yoshi. What should stick in your mind is that CIP is the open source base layer for industry, or at least it will be in the future. We currently focus on kernel maintenance, including real-time support; on testing, where it is very important for us to build up a common test infrastructure to share the test results and the tests inside the project and also outside; and the third point was the CIP Core packages, really starting bottom-up with a minimal set which can be maintained long-term, which is much, much more complicated than just the kernel, and the kernel is complicated enough. We will hear more in another talk; I will come to this in a minute. Regarding the feedback we currently get: we hit the right time to start this project.
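As an aside, the year 2038 issue mentioned above is easy to demonstrate: a signed 32-bit time_t counts seconds since 1970 and runs out on 19 January 2038, which is well within the 25-to-60-year lifetimes discussed earlier. A minimal sketch (the helper names are illustrative, not from any CIP code):

```c
#include <stdint.h>

/* A signed 32-bit time_t tops out at 2^31 - 1 = 2147483647, which is
 * 2038-01-19 03:14:07 UTC. One second later it wraps to a negative
 * value, i.e. a date back in 1901. Long-lived devices need a 64-bit
 * time_t throughout the stack. */

#define TIME32_MAX 2147483647

/* Advance a 32-bit timestamp by one second; the addition is done via
 * uint32_t so the wraparound is well-defined in C. */
int32_t tick_time32(int32_t t)
{
    return (int32_t)((uint32_t)t + 1u);
}

/* Does this (64-bit) timestamp still fit in a signed 32-bit time_t? */
int fits_in_time32(int64_t t)
{
    return t >= -TIME32_MAX - 1 && t <= TIME32_MAX;
}
```

A device shipped today with a 32-bit time_t would hit this wraparound roughly in the middle of the product lifetimes the talk describes.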
We get very good feedback inside our companies and also outside. We provide this base layer based on Linux, that is clear. We have big companies backing this, and with a semiconductor company joining us in Renesas, this really pushed it; hopefully others will join. We have close cooperation with other projects: Debian, the Real-Time Linux (PREEMPT_RT) project, and we use KernelCI and LAVA, so we don't reinvent the wheel; we try to bring together what is already there and adapt it to our needs. We have a strong emphasis on tooling, because we believe this is important to make everything manageable in the end, and we talked a lot about the tests. And last but not least, inside our companies at Toshiba, Siemens, and Hitachi we get strong traction. We have a lot of products and a lot of business units; these companies are really big, so people are calling us and want to join these efforts company-internally as well. It's getting up to speed as quickly as is possible in these domains and these big companies; everything takes a while, but we really see traction now. I would also give you some hints on other talks and meetings. After this talk there will be a CIP developers meeting which is open to everybody, so everybody who is interested, please join. Later this afternoon we have a talk going deeper into the maintenance topic by Agustin and Ben, who are also here in the room. And last but not least, we have a keynote on Wednesday morning by Jan, a colleague of mine who is also here. So please join us for the next talks. We also have a booth; you will find us on the upper level, and it looks like this, so you will recognize the logo. You can see some demos and some examples; we have a Board at Desk demo between 3:00 and 4:30, where somebody is giving demos, so have a look at that. And with this, we both thank you very much for attending, and now we are open for questions. Are you thinking about NXP platforms as possible testing platforms, or something like that?
Because they definitely have wider temperature ranges and this kind of thing, and they definitely have really nice BSPs. Of course, in our industries we also use NXP platforms, but NXP is currently not yet a member, so for now we are focusing on the reference boards of the members. Maybe NXP is interested in joining us. Okay, there was another question. I have two questions actually, one related to tools and one related to cybersecurity. I work on AGL, so we don't have to keep our stuff that long, but for us 10 years is plenty. The first question, about tools, is: what is your strategy for being able to keep building all this stuff on new PCs? This is a problem I had in telecom 20 years ago, where you have to rebuild software which is 20 years old, but obviously the machine which was used to build it doesn't exist anymore, so you have a new one with new tools. Especially if you build using Debian tools, which are not really cross-compilation based, it's very difficult to rebuild the old tools, and then quite often you cannot rebuild the old software. I've not seen anything on that, and I would be interested to know your vision. The second one is related to cybersecurity: I have not seen anything about the management of connectivity and protocol stacks and communications, which is likely going to be fairly difficult to keep static for 20 or 30 years, because attackers are going to hack them and you will likely have to change them. What is your vision, and how are you going to manage that? Maybe start with the tools. Yes, for the tools: we know hardware development is quite fast, but we would like to keep the development tools as long as possible by using, for example, Docker or virtualization environments to be able to reproduce the build environments. This is actually what we currently do inside the company, and it has already been running for more than five years. It works, but we don't have experience with more than 10 years yet.
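As a sketch of the container approach just mentioned (the exact setup is not shown in the talk; the image name, digest placeholder, and paths are all illustrative), one way to freeze a build environment is to pin the image by content digest and archive it next to the sources:

```shell
#!/bin/sh
# Hypothetical sketch of freezing a build environment with Docker.
# Names and paths are illustrative, not CIP's actual infrastructure.

# 1. Pin the build image by content digest, not by a mutable tag,
#    so the base image cannot silently change underneath you.
IMAGE="debian@sha256:<digest-recorded-at-release-time>"

# 2. Archive the image itself alongside the source tree, so the build
#    environment can be restored even if the registry disappears.
docker pull "$IMAGE"
docker save "$IMAGE" -o build-env-v1.tar

# 3. Years later: restore the archived environment and rebuild.
docker load -i build-env-v1.tar
docker run --rm -v "$PWD/src:/src" -w /src "$IMAGE" make all
```

As the questioner and the speakers both note, this only preserves the userspace toolchain; host-kernel and hardware drift over decades remain open problems.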
So, does that answer your question? Yes, we actually do the same as everybody does: using VM images and archiving them, which at least ensures that we can reproduce the builds that we have. So we have to put in all the dependencies and can archive those. Regarding the connectivity question about the network stacks and protocols: it's not addressed in this project yet. We cannot solve all the problems at once, so we decided to go step by step and start with the kernel, which is difficult enough, and then, as I said, add additional packages. Regarding the network stacks, what is currently being done is work on the update mechanisms, to make them robust and secure and to be able to exchange the upper layers in a robust way. It's on the topic list in the project, but it's not addressed yet, because we have enough work with the existing topics already, so we go step by step. Stefan, also working on AGL: it's a good idea to try to build things for a long time, but the question that comes immediately after is, how do you upgrade your devices, or whatever will use your distribution? Because what we hit on AGL is that we may have multiple ways to upgrade what's in the car, not only the IVI system but also the telematics system, gateways, or whatever, and we see that there is no real full solution, on the client or device side and the server side, to handle millions of small devices or big cars; in fact, it's the same for us. So do you envision something that could be done at the CIP level and could be reused by other projects as well, or do you plan first, as you said for the kernel, just to try to stabilize the kernel and use what exists?
Yes, the answer is pretty much the same as before, because we also face this problem. We haven't put it in this project yet; there are solutions in the different companies already, and there are discussions and candidates to be moved into this software stack too, but as I said, it's not on the list, or it's not decided today. Maybe next year we can tell you more about this, but we really want to set up a sustainable software stack and not create a big software stack at the beginning, just keep it manageable, and that's hard enough at the beginning, starting with the kernel and some packages. Anything to add? Yes, as I also said, our concept is that we provide the base layer for use in any product, so teams can extend the base layer to fit their use case. That is our basic concept. Maybe one thing so you can imagine how we use this in the company: there is this base layer provided by CIP; then in many cases there is a central unit providing additional functionality or additional packages on top of it, maintained company-internally; and then there are the different product units who also put something on top. So you basically have a three-layer approach, if you look at it from the 10,000-foot view. Last question. I guess your answer might be the same, but have you done any tests in the future? Because I've been doing builds with the clock set in the future for reproducible builds, and we noticed that some keys expire; if you build in 10 years, many of the software's keys will have expired when you validate it. And have you tried building beyond 2038? Yoshi, have we? Jan, do you know? Okay, let's just say it's a good point. So thank you very much for attending. See you.