Thanks very much for joining this session. This talk is about the CIP Kernel and Testing Teams' activities. We'd like to talk about how we are interacting with upstream and other projects in order to achieve our goal. First of all, let me introduce ourselves. One of the speakers is myself, Masashi Kudo from Cybertrust Japan. I'm currently acting as the CIP Kernel Team chair. Chris-san? I'm Chris Paterson. I'm working for Renesas Electronics in Europe, and I'm the working group lead for the testing side of things at CIP. Thank you, Chris-san. So, today, we first explain what CIP is. In CIP, "upstream first" is a development principle. In the next section, I'd like to explain what it means. The CIP Kernel and Testing Teams follow this principle in their work. In the following sections, we'll share what we are doing and what we have accomplished so far, team by team. CIP automated testing is covered by Chris-san; the other sections are covered by myself, Kudo. Then, let's get started with what CIP is. CIP stands for Civil Infrastructure Platform. It was founded almost four years ago under the Linux Foundation. When you hear "civil infrastructure," you may imagine heavyweight systems like power plants. That is true, but there is a lot more around us. Even industrial IoT devices can be categorized as civil infrastructure. The way all those systems and devices are developed has been changing. Before 2000, they were developed with proprietary components. But after the turn of the millennium, software was clearly divided into a competitive layer and a non-competitive layer, and people focused on proprietary applications in the competitive layer to differentiate their functionality. Recently, the situation has become even more complicated, because mobile and cloud technologies have become commodities. Systems and devices now consist of many more components, but development resources have not changed very much.
Therefore, people are focusing only on proprietary applications with their limited resources. Now, there are millions or trillions of civil infrastructure devices running worldwide. They share the same industrial requirements, that is, security, sustainability, and industrial grade. They should keep satisfying those requirements throughout their life cycles, which are usually very long. Therefore, super-long-term maintenance becomes key. However, there were no common solutions for the base building blocks of civil infrastructure, so similar development and maintenance efforts had to be spent separately, even within the same companies. Motivated to solve these issues, the Civil Infrastructure Platform, known as CIP, aims to develop building blocks that satisfy industrial requirements with open source. We call such building blocks the Open Source Base Layer, or OSBL for short. The OSBL consists of the CIP SLTS kernel and the CIP Core packages. SLTS stands for super long-term support, and we aim to maintain the SLTS kernel for 10-plus years. The CIP Core packages contain only dozens of packages. They are carefully selected and will likewise be maintained for the long term. You will notice that more packages, say hundreds of packages, are needed to develop real systems or devices. While CIP provides the OSBL as commonly used building blocks, those additional packages should be added by users or provided by Linux distributors. The CIP governance structure is explained on this slide. The Governing Board is organized by the Platinum members. It decides CIP's overall direction: more specifically, whom CIP should collaborate with, what CIP should invest in, how the budget should be allocated, and so on. All technical issues and directions are discussed at the Technical Steering Committee, the TSC. All member companies can join TSC meetings. The meetings are usually held once every two weeks via web conferencing.
Under the Technical Steering Committee, six activities proceed as teams or working groups. We formed the CIP Kernel Team to work on the SLTS kernel as well as real-time Linux. The Testing Team was formed to work on automated testing. In addition, there are three other activities: the CIP Core team, the Security Working Group, and the Software Update Working Group. The CIP Core team works on the CIP Core packages. The Security Working Group is working on IEC 62443 conformance with CIP. For your note, IEC 62443 is a security standard for industrial automation and control systems. The Software Update Working Group is building a prototype of software update for CIP. Currently, there are eight member companies in CIP, actively working on those activities. Our annual membership fees are pooled as a budget and used to support maintainers and developers in CIP. The budget is also used to invest in projects other than CIP. One of the member companies reported that up to a 70% effort reduction can be achieved by applying CIP across the entire organization. That is because activities like OSS license clearing, vulnerability monitoring, and kernel and package maintenance can be done in common instead of separately. That is how the cost saving was achieved. "Upstream first" is a development principle; I will explain what it means. Here, two development models are pictured. The model on the left-hand side is the "own community" model. A project following this model branches from upstream and then evolves on its own. This model enables the project to ramp up quickly, but in the long run, it becomes difficult to incorporate upstream patches due to conflicts. The model on the right-hand side is the "upstream first" model. The project allows patch commits only if those patches are already in the upstream. It may take time to introduce a desired patch, because if the target patch is not yet in the upstream, it has to be accepted by the upstream first.
But this model eliminates the risk of conflicts. At the same time, the project can share its output with the upstream. As a side note, please take a look at this graph. It displays the growth trend of commit counts for each stable release. As you can see, a few hundred patches are committed to each stable release per month. This trend makes cherry-picking quite difficult. Because CIP is aiming at long-term maintenance, the upstream-first model is the desired approach. As explained so far, the upstream-first principle is essential to achieving the industrial requirements, especially in terms of long-term maintenance. We collaborate with upstream projects. Before using their output, we upstream what we have rather than keeping it locally. By continuously rotating between upstreaming and using, we are moving toward our goal. This slide explains how CIP artifacts can be used by CIP users. CIP refers to source or binary packages in Debian. If you would like to use Debian source packages, you can use Yocto/Poky as a build system. The CIP Core packages contain tens of packages, which may not be sufficient for the development of end products, so users can add the necessary packages from Debian by writing recipes. Debian provides LTS maintenance, and even Extended LTS is available, so super-long-term support, including user-added packages, can take advantage of these maintenance frameworks. Then let's move on to the CIP Kernel Team activities. The primary goal of the CIP Kernel Team is to provide CIP SLTS kernels for 10-plus years by fixing the versions, to fulfill the required level of reliability, sustainability, and security. There are two kernel maintainers, one kernel mentor, and one kernel developer on the team. While we are highly motivated to work on the project, we don't think we can achieve the goal by ourselves alone. We definitely rely on upstream project activities. The question is how to use the upstream output and how to work with the upstream projects.
So what does upstream first mean for the CIP Kernel Team? Our upstream is Linus's mainline and the stable releases. By the upstream-first principle, only patches which are already in the mainline or stable kernels are allowed to be incorporated into CIP kernel releases. So CIP members proceed to upstream their preferred code, and once the code is incorporated into the mainline or stable kernels, it is allowed to be backported into the CIP kernels. On the other hand, the CIP Kernel Team takes actions from a different perspective. One of the CIP Kernel Team's objectives is to keep the CIP kernels safe and sound. For this objective, the team monitors stable releases carefully and contributes to the stable releases where needed. In general, patches are committed to the mainline first and then backported to each stable kernel. However, for some reason, such backporting might not happen on some specific stable kernels. It may be because such patches are irrelevant to them, or because backporting is not trivial for those stable kernels due to implementation changes. The CIP Kernel Team reviews the status of those patches. If the team identifies patches that should be backported to some stable kernels, the team contributes them. We are concerned about security patches as well. We check the status of security patches using open source tools, and if some patches are missing in stable releases, the team contributes to those stable releases as well. By incorporating the necessary patches, the team releases CIP kernels based on upstream artifacts. This is the big picture of the Kernel Team activities. Patch review, CVE check, contribution, and kernel release are the four major tasks of the CIP Kernel Team. Now I'm going to elaborate on those four tasks one by one in the following slides. The first task is patch review. The CIP kernel maintainers review patches which are included in the stable release candidates for 4.4 and 4.19, because the CIP kernels are based on those releases.
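The gap-identification step just described, finding a fix that is in mainline and relevant to a stable branch but absent from it, can be sketched as a simple set comparison. This is only an illustration of the idea; the real review tooling works on git history and Fixes: tags, and all commit names here are made up:

```python
# Simplified sketch of the "find missing backports" step.
# Each branch is modeled as a set of upstream fix identifiers.

def missing_backports(mainline, stable, relevant):
    """Fixes that are in mainline and relevant to this stable
    branch, but not yet backported to it."""
    return (mainline & relevant) - stable

mainline = {"fix-a", "fix-b", "fix-c"}
stable_4_19 = {"fix-a"}                  # already backported
relevant_to_4_19 = {"fix-a", "fix-b"}    # fix-c touches code 4.19 lacks

print(sorted(missing_backports(mainline, stable_4_19, relevant_to_4_19)))
# prints: ['fix-b']  -> candidate for contribution to stable
```

The set difference captures the review outcome: fix-b is relevant but missing, so it would enter the contribution process.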
As a result of the review, if the team finds any issues, patch review comments are sent to the kernel mailing list directly. The black window on the right-hand side is an example of such review comments. Review results are saved in GitLab, and if the team identifies patches that should be backported, the team initiates the contribution process. The second task is the CVE check. For security fixes, the team follows a separate process using an open source tool it develops called cip-kernel-sec. cip-kernel-sec gathers CVE information from multiple sources, such as the stable kernels, the Debian kernel, and the Ubuntu kernel. The kernel team focuses on maintaining the CVE-affected kernels 4.4 and 4.19, and may backport the specific CVE-fix commits to the stable kernels where appropriate. cip-kernel-sec provides simple graphical layouts as well as CLI interfaces, and users can get detailed information via those interfaces. It provides various information regarding kernel CVEs. As I mentioned on the previous slide, the purpose of cip-kernel-sec is to track the status of security issues, identified by CVE ID, in mainline, stable, and other configured branches. This tool is public and can be found on GitLab under the CIP project. You can also reach it on the website via the QR code. cip-kernel-config collects kernel configurations from CIP members to define the maintenance scope in CIP kernels 4.4 and 4.19, respectively. This is also the maintenance baseline used with cip-kernel-sec. When the CIP kernel maintainers review CVE fixes, they consult cip-kernel-config to see whether those fixes are relevant to the CIP-supported boards or not. If the fixes are relevant to the CIP-supported boards, then the team goes ahead and contributes the associated fixes to the stable kernels. The third task is contributions. There are two objectives for the contributions. The first objective is to fill the gaps. As a result of the patch reviews and the CVE check, we identify missing patches.
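Conceptually, the per-branch tracking that cip-kernel-sec performs boils down to mapping each configured branch to the CVE IDs not yet fixed there. Here is a minimal sketch of that idea only; this is not the tool's actual code, and the CVE IDs and branch data are invented:

```python
# Toy model of per-branch CVE fix status, in the spirit of
# cip-kernel-sec (illustration only, invented data).

CVES = {
    "CVE-2020-0001": {"fixed_in": {"mainline", "4.19"}},
    "CVE-2020-0002": {"fixed_in": {"mainline", "4.19", "4.4"}},
}
BRANCHES = ["mainline", "4.19", "4.4"]

def open_issues(cves, branches):
    """Map each branch to the CVE IDs still lacking a fix there."""
    return {b: sorted(cve for cve, info in cves.items()
                      if b not in info["fixed_in"])
            for b in branches}

for branch, pending in open_issues(CVES, BRANCHES).items():
    print(branch, pending)
# here, 4.4 still lacks the fix for CVE-2020-0001
```

A branch with a non-empty pending list is exactly where the team would consider contributing a backport.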
Some patches are needed for the CIP kernels to fulfill industrial-grade requirements, so the team contributes them upstream so that the CIP kernels can be based on stable kernels which include those patches. The second objective is giving back. Because we take advantage of upstream output, we are grateful for the upstream activities. Therefore, the CIP Kernel Team contributes bug fixes and security patches to all stable kernels, not limited to 4.4 or 4.19. These statistics show the counts of contributions by the CIP Kernel Team to stable releases. I reported these statistics at ELC North America 2020 in June. Since then, the team has kept contributing to all stable releases, as you can see here. Compared with the June timeframe, the team has added nearly 100 contributions in total. How we contribute is recorded in the commit logs of the stable releases. Counting Reported-by, Signed-off-by, and Acked-by tags, in addition to authors and CCs, the total is around 1,700. The last but not least task is the CIP kernel release. Again, one of the CIP Kernel Team's objectives is to keep the CIP kernels safe and sound. Through stable patch review, the team identifies missing patches and contributes them to the stable kernels. Also, CIP members may want to backport their preferred patches; they send those patches to the CIP mailing list for the CIP kernel maintainers to review. After the acknowledged patches are incorporated, the Testing Team starts testing. Once everything goes well, the maintainer in charge tags it as a release candidate. Another maintainer checks and acknowledges it, and then the CIP kernel is released. The release announcement is sent out to the cip-dev mailing list, so by subscribing to cip-dev, you are notified of CIP releases. As I mentioned already, the CIP SLTS kernels are based on the 4.4 and 4.19 stable releases. The first releases of 4.4 and 4.4-rt were made in 2017, and we plan to maintain them until 2027, for 10 years. The first releases of 4.19 and 4.19-rt were made in 2019.
And likewise, we will support them for 10 years, until 2029. Currently, 4.4 is released once a month, and 4.4-rt once every two months, because the commit counts for 4.4 are decreasing. 4.19 is released twice a month and 4.19-rt once every two months, respectively. So far, we have steadily released kernels, thanks to our maintainers, following the release frequencies I just explained. I also reported these counts at ELC North America 2020. Compared with the counts in June, the team has made 20 additional releases, and the year's total so far is 46. Toward the end of this year, several releases will surely be added. The upstreams of the current CIP releases are active, and we are taking advantage of their output while contributing to them, as I explained. However, we intend to maintain the CIP SLTS kernels for 10 years, and the upstream lifespans are both six years, so the gap periods must be maintained by CIP. Because maintenance of the 4.4 stable release will finish in January 2022, CIP will start to maintain CIP 4.4 by ourselves. The CIP Kernel Team is discussing how to work on this. We have been relying on upstream developers and other stable contributors for their output. CIP cannot rely on this after the end of stable release maintenance, so the CIP kernel maintainers would review each other's work. The details are still being discussed, and I hope we can share the plan with you at some event next year. For the CIP Kernel Team to work effectively on these tasks, testing plays a very important role. Chris-san, chair of the CIP Testing Team, will share what the team is doing. So Chris-san, I'm handing over control to you. Okay, thank you, Kudo-san. So to start with, I'd like to go over the testing goals of this working group. We're aiming towards having centralized and distributed testing. CIP is a global project with developers spread around the world. Not everyone has access to all of the different CIP reference platforms.
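As a rough cross-check, the cadence just described implies on the order of 48 releases in a full year, which lines up with the 46 reported so far (actual counts vary with upstream activity):

```python
# Releases per year implied by the stated cadence.
releases_per_year = {
    "4.4":     12,  # once a month
    "4.4-rt":   6,  # once every two months
    "4.19":    24,  # twice a month
    "4.19-rt":  6,  # once every two months
}
print(sum(releases_per_year.values()))  # prints: 48
```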
So we want to host them all in a centralized way, so anyone can access them whenever they need them. We aim to support our software over a long period, over 10 years. If we were to do that testing manually, the effort would be huge, and it would probably bankrupt us. So, in combination with our centralized testing farm, we need the ability to run our tests automatically. We're also very interested in open source collaboration. We want to work with other projects to use and improve their code, rather than introducing yet another test framework into the ecosystem; we want to avoid reinventing the wheel. So, more on our open source approach. This diagram is similar to what you've probably seen before in other CIP presentations. The first step is, as always, to upstream first to the relevant projects. We then take the output of these projects and integrate it into our own setup. Specifically for testing, we're currently funding KernelCI. As a premium member, we're also helping to manage and steer the direction of the project. We contribute code upstream and do code reviews on projects like LAVA, KernelCI, and Linaro's automated test definitions. We then take the output of these projects, such as lava-docker, which we use to manage our board farms and run automated testing. We then use that setup to test the CIP software, such as the super-long-term support kernels and our reference file system. We also use this setup to run testing on the Linux stable release candidates coming out from Greg and Sasha. So, an overview of our architecture. We split it into three different sections. The source code is all stored on GitLab, publicly available. Our CI builds are done on a Kubernetes cluster running in Amazon. We bring pods up using AWS on-demand instances when we need them; when we don't need them, we kill them to save money. This spinning up and tearing down is all managed by our GitLab CI tooling, which again is all published.
We use different-size VMs depending on what the job is. For testing, we might only need a very small pod, because the pod just sits there waiting for tests to run on our board farm, so we only need a micro instance; there's no point in paying for a massive VM for that. Our artifact storage is again on Amazon, using S3 buckets, and the LAVA masters run in AWS EC2 as well. We then have four LAVA workers, which are the actual labs that host the boards we test on. These are dotted around the various member companies: we have one at Cybertrust in Japan, one at DENX in Germany, one at Mentor in India, and the fourth at Renesas in the UK. At the moment, we're testing all of the CIP kernels. Every push gets built in around 34 different configurations, which have been provided by the project members, and then they get downloaded and tested on the boards where appropriate. So, on to the boards. CIP has a number of reference platforms. These are platforms we support in our kernels and test on. So if you have one of these boards, you can be reasonably sure that the code running on them should be fine. Most of them are supported on the 4.19 kernel, which is our latest one; some of the boards are only supported on the older 4.4 kernel. They cover a range of architectures: ARMv7, ARMv8, and x86. On some boards we also test real-time configurations using our real-time kernels. In terms of tests, there are a number of test suites we currently run. We use a simple boot test, which runs on all the boards. We use the spectre-meltdown-checker tool, which highlights any of the speculative-execution CVEs the system may be vulnerable to, as reported by the system. This is just a checker script; it's not running any tests to verify that the system is not vulnerable. At the moment, it's just using the kernel's own reporting to say, "yes, I think I'm protected against this."
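The "as reported by the system" caveat refers to status strings the kernel itself exposes (on a live system they appear under /sys/devices/system/cpu/vulnerabilities/). Below is a minimal sketch of reading that kind of status, with sample contents inlined so it runs anywhere; the sample values are invented, and this is not how spectre-meltdown-checker itself is implemented:

```python
# Sketch: classify kernel-reported vulnerability status strings.
# On a real system these lines come from files under
# /sys/devices/system/cpu/vulnerabilities/; inlined here.

SAMPLE = {
    "meltdown":   "Mitigation: PTI",
    "spectre_v2": "Mitigation: Retpolines",
    "l1tf":       "Vulnerable",
}

def unmitigated(status):
    """Names whose status line starts with 'Vulnerable'."""
    return sorted(name for name, line in status.items()
                  if line.startswith("Vulnerable"))

print(unmitigated(SAMPLE))  # prints: ['l1tf']
```

This illustrates the point made in the talk: the check trusts what the kernel claims about itself rather than attempting an actual exploit.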
We run a large swathe of the LTP (Linux Test Project) tests. Some of them take a long time to run, but we're doing our best to run them all. And for the real-time configurations, we also run cyclictest, with hackbench running in the background to add some load. At the moment, GitLab manages most of this; we use GitLab CI/CD. This is an example of a pipeline. On the left, we've got a list of all the recently run pipelines. These run on every push made to the CIP kernels: every tag, every branch, every release we make. In the middle is a breakdown of one of those, where you can see all the different kernel configurations that have been built and tested. A couple of them have failed there, so that's what's blown up on the right, where you can see the actual build log, and you can see where the failure is: there's a network driver that's got an issue. So now a developer can pick this up and investigate further; they'd probably end up building it locally to debug further. At the moment, all this happens automatically, which saves a lot of time for the maintenance team. They can have lots of different builds and configurations done automatically whenever they need to. To view test results, we currently use LAVA. LAVA has an output where you can see whether a particular test job has passed or not. You can break it down in further detail, see the individual test cases and whether they've passed or not, and click through to see the actual log relevant to that test case. One downside of LAVA is that from the top level, it's quite hard to see what's working, what's not working, and what's got worse. You can see whether the whole job ran successfully, so that the whole platform didn't crash, for example. But you have to go into much finer detail to see which test cases passed or failed, and how many.
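The drill-down problem just described amounts to aggregating per-test-case results into one top-level summary. A small sketch over a hypothetical result shape (LAVA's real result records differ; the suite and case names are invented):

```python
# Aggregate per-test-case results into the at-a-glance summary
# that is hard to get from LAVA's top-level job view.
from collections import Counter

results = [
    {"suite": "ltp",  "case": "abort01",  "result": "pass"},
    {"suite": "ltp",  "case": "accept01", "result": "pass"},
    {"suite": "ltp",  "case": "acct01",   "result": "fail"},
    {"suite": "boot", "case": "login",    "result": "pass"},
]

summary = Counter(r["result"] for r in results)
print(dict(summary))  # prints: {'pass': 3, 'fail': 1}
```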
So this is something we're hoping to improve by integrating with the KernelCI project, which I'll speak about now. The kernelci.org website has been around for a number of years, but last year it actually became a Linux Foundation open source project. CIP joined as a founding member alongside BayLibre, Collabora, Foundries.io, Google, Microsoft, and Red Hat. We're working together to improve the project and the software coming out of it. CIP contributes code and code reviews, and we also help manage the project as much as we can. The next major step for CIP is to actually integrate KernelCI's front end into our testing pipeline. The benefit here is that KernelCI provides a much better view for seeing all the test results and how many have passed and failed, and it also adds functionality such as automatically detecting regressions: whether a failure is new or whether it's always been there. And I think they also support automatic git bisecting, which is something I'd love to integrate into our setup. There's a BoF session tonight, and there's also a talk called "Let's Test with KernelCI" on Wednesday evening. So if you want to learn more about the project, consider attending those sessions. The project is actively seeking contributors, so if you're interested in helping develop it, or even if you've just got some ideas on how to improve it, they're open to collaborators and keen to hear from you. And that's about it from me, so I'll hand back to Kudo-san to sum up our session. Thank you. Thank you very much, Chris-san. So let me conclude today's talk. The CIP Kernel and Testing Teams follow the upstream-first principle and contribute to upstream. By taking advantage of the kernel LTS, the team steadily releases CIP SLTS kernels and aims to maintain them for 10 years or more. To reduce the CIP SLTS kernel release cost, the Kernel Team is working closely with the CIP Testing Team to build automated testing systems.
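Regression detection of the kind mentioned for KernelCI, telling a new failure apart from a known one, can be sketched by comparing each test case's current result with its previous one (hypothetical data and case names, for illustration only):

```python
# Sketch: a regression is a case that passed before but fails now;
# a case that failed both times is a known failure, not a regression.

def regressions(previous, current):
    return sorted(case for case, result in current.items()
                  if result == "fail" and previous.get(case) == "pass")

prev = {"boot": "pass", "ltp-abort01": "pass", "ltp-acct01": "fail"}
curr = {"boot": "pass", "ltp-abort01": "fail", "ltp-acct01": "fail"}

print(regressions(prev, curr))  # prints: ['ltp-abort01']
```

Here ltp-acct01 fails in both runs, so only ltp-abort01 is flagged as new.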
We are eager to recruit new members. If more people work with us, we can expand our activities, and we can contribute more to upstreams and other projects, so that the whole ecosystem will grow. So please join us if you are interested in CIP. If you'd like to know more, here are links to related information. This page describes our weekly IRC meetings. They are open to everyone, so come talk to us on the CIP channel or at the meetings. This page lists our repositories on GitLab. Links to the open source tools explained in this session are here, and the Testing Team's links are here; please check them out. Other information is listed here. At this event, we have the CIP main site and two other CIP-related talks; those talks are about security. Please sign up to get more information about CIP. That's all from us. Thank you very much for joining us. Are there any questions?