Thank you for joining our session. This session is about merging your kernel testing code into KernelCI: how to test your own kernel project with KernelCI. My name is Hirotaka Motai. I'm a software engineer working on embedded Linux, and also a CIP representative at Cybertrust Japan. If you have any questions or suggestions, I would be happy if you could chat on Slack with Alice, or come and ask me; if I'm not infected with corona, I'll be there in person. The technical content about KernelCI will be given by Alice after my talk. I would like to explain our motivation for KernelCI development for a couple of minutes. We are doing collaborative development with OSS projects. CIP, the Civil Infrastructure Platform project, aims to establish a base layer of industrial-grade software using the Linux kernel and other open source projects. This base layer will be available for use by developers creating software building blocks that meet safety, security, reliability, and other requirements that are critical to industrial and civil infrastructure. CIP is an open and collaborative project. We have already adopted the outcomes of CIP for use in our Linux products, and we also contribute by developing kernel testing infrastructure and providing CI reports. CIP has a testing working group that aims to design and implement a centralized testing infrastructure that can be used to test CIP software on CIP reference hardware. The CIP testing working group has already integrated with the KernelCI project. In this presentation, we would like to show you technical knowledge about KernelCI with an actual use case: the integration of CIP testing into KernelCI. Alice, please continue the presentation.

Hello everyone, I'm Alice Ferrazzi. I'm a Gentoo developer and the current Gentoo kernel project leader. I'm the creator of Gentoo Kernel CI, a Gentoo automated kernel testing tool. I'm also a member of the KernelCI technical steering committee and of the CIP (Civil Infrastructure Platform) testing working group.
I'm a software engineer for Miracle Linux, provided by Cybertrust Japan Co., Ltd., and the lead of the continuous integration system for EMLinux, which is an embedded Linux distribution. This is today's agenda. In summary, I will talk about KernelCI, then about the implementation of CIP into KernelCI, then about KCIDB and the implementation of Gentoo Kernel CI using KCIDB, and at the end I will talk about future work and give a conclusion.

So, KernelCI is a community-based, open source, distributed test automation system focused on upstream kernel development. It is governed by the technical steering committee, of which I am a member, formed by KernelCI core developers and maintainers; it focuses on the technical part of KernelCI. Then there is an advisory board, formed by representatives of the member organizations involved in KernelCI; this is the part that manages the budget and helps coordinate tasks, particularly the financial part of KernelCI. And then we have the KernelCI website, which is linked in this slide. KernelCI is useful for anyone who is developing the kernel and needs tools for automating kernel testing and getting results from many boards. KernelCI is open source, so everyone can use it, but the system is also online, so the results can be checked by everyone, and currently it is also reporting to the stable mailing list. KernelCI is composed of the core tool, which contains the main configuration of KernelCI; the backend, currently being reworked, which provides the KernelCI API; a frontend; and the test definitions, which are mostly LAVA jobs. Then we have lava-docker, which is an orchestration system for easily setting up your own LAVA laboratory. Currently all the scheduling is done by Jenkins. And then we have KCIDB, which is the tool that collects all KernelCI data.
But not only KernelCI data: it also collects data from each testing framework that wants to contribute to KernelCI. So currently we have the KernelCI native system, which is the main KernelCI part, created and maintained by KernelCI itself, and then we have all the other continuous integration systems that contribute to KernelCI by using the KCIDB tool to send data to the KernelCI database.

KernelCI testing laboratories can be distributed: anyone can start their own LAVA laboratory by using lava-docker and connect it to KernelCI. By doing so, they give KernelCI the possibility to send tests to their own boards and to check those boards against mainline or other kernel trees. Any LAVA laboratory with a publicly available API can be added to KernelCI, and in the future, non-LAVA laboratory farms could be able to contribute as well. As I said, a LAVA laboratory can be easily installed by leveraging the lava-docker orchestration system. Currently many laboratories are contributing boards to KernelCI, so if you want your board tested with KernelCI, feel free to send a request for adding your laboratory. These are some of the kernel trees that are currently automatically tested by KernelCI; I probably forgot to mention some, but these are some of the main ones we are currently testing. In this talk I sometimes say "framework" or "testing framework", and by that I mean kernel building, booting, and testing code; for example, the CIP (Civil Infrastructure Platform) project has its own testing framework for testing the CIP super-long-term stable (SLTS) kernel trees. KernelCI native will be the main topic of these slides, and I will explain how we managed to merge the CIP repository into KernelCI. After that I will talk about the KCIDB client and how Gentoo Kernel CI sends data to KCIDB.
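To give a rough idea of the lava-docker setup mentioned above: the laboratory is described in a YAML file from which the Docker containers are generated. This is only an assumed sketch; the field names and values below are illustrative, so check the lava-docker repository's README for the real schema before using it.

```yaml
# Hypothetical boards.yaml sketch for lava-docker.
# All names and the token are placeholders, not real values.
masters:
  - name: master1            # the LAVA master (scheduler + web UI)
    host: local
    users:
      - name: admin
        token: CHANGE-ME     # placeholder API token
slaves:
  - name: lab-slave-0        # a worker that drives the boards
    host: local
    master: master1
boards:
  - name: qemu-01            # a virtual device is the easiest way to start
    type: qemu
```

From a description like this, lava-docker generates a Docker Compose setup that runs the whole laboratory, which can then be connected to KernelCI.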
There is another way of using KernelCI, which is to implement KernelCI locally, but I will not talk about that in this presentation; I will leave a link on this slide if you are interested. So we have the KernelCI native implementation, which is composed of KernelCI itself, and of course you can do automated building, booting, and testing of kernel trees. KernelCI native is managed by the KernelCI team, and it has native test jobs that are in LAVA format and generalized to run on the different KernelCI LAVA laboratories. The good point about using KernelCI is that it is possible to collaborate on the same code, so you reuse the same code and the same tests and get regression checks from the KernelCI system. Also, by merging your code directly into KernelCI upstream, you can leverage all the boards that the LAVA labs have already connected to KernelCI. CIP was previously using its own GitLab pipeline for doing some tests, but we wanted to de-duplicate the work and get more tests from KernelCI, so we decided to merge the CIP testing framework into KernelCI native; in the next slides I will explain what CIP is and how CIP managed to merge. So CIP, the Civil Infrastructure Platform, is a Linux Foundation project that aims to establish a base layer of industrial-grade software using the Linux kernel and other open source projects. There is a CIP testing framework, as I was saying, which was using a GitLab pipeline with LAVA and a CIP laboratory for building, booting, and testing the SLTS and SLTS-RT kernels. It runs some tests, for example Spectre/Meltdown testing, it has a root filesystem, the CIP Core user space, and it has its own kernel configurations.
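To give an idea of what such a LAVA-format test job looks like, here is a heavily simplified sketch. The overall shape (deploy, boot, test actions) follows the LAVA v2 job format, but the device type, URLs, and test-definition path are placeholders, and the real KernelCI jobs are generated from templates rather than written by hand.

```yaml
# Illustrative LAVA v2 job definition -- URLs and names are placeholders.
device_type: qemu
job_name: example-boot-and-test
timeouts:
  job:
    minutes: 30
actions:
  - deploy:                          # fetch the build artifacts
      to: tmpfs
      images:
        kernel:
          url: https://example.org/builds/bzImage
        ramdisk:
          url: https://example.org/rootfs/rootfs.cpio.gz
  - boot:                            # boot the kernel, wait for a shell prompt
      method: qemu
      prompts:
        - "/ #"
  - test:                            # run a test definition from a git repo
      definitions:
        - repository: https://example.org/test-definitions.git
          from: git
          path: automated/linux/example/example.yaml
          name: example-test
```

The same job structure is reused across laboratories: only the deploy and boot sections change per device type, which is what makes the native test jobs "generalized" across labs.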
So by getting CIP merged into KernelCI, we could get regression test mails and release testing mails, and we could test the kernel with some of the configurations from the CIP kernel configurations. We also implemented the CIP Core user space for testing the CIP kernel against, but we could additionally leverage all the other tests that KernelCI has already implemented, like some of the kselftests, LTP tests, and many other tests not listed here, all of which are updated constantly by the KernelCI team. In some cases KernelCI has automatic bisection of regressions; it is still an experimental feature, and we used it in a few cases, but it is not yet enabled by default. And by using KernelCI we could also leverage all the boards that the LAVA labs have linked to KernelCI.

Adding a tree into KernelCI is mostly just adding a link: you send a request asking for your tree to be added. So adding it is simple, and if you are working with the upstream kernel, I think anyone can request to be added. For CIP we also selected which branches of the Linux tree to monitor, as CIP is working on different kernel versions; each one is monitored and automatically built and tested by KernelCI, and as you can see, we are also adding the real-time branches. Just by doing that, we get a KernelCI result mail describing how many builds worked and how many failed. After the building, we also get the boot report mail, which explains how many boards worked and which ones didn't; we can see the log file for each board and each build, and if there is a regression error, we can check what the problem was. As I explained previously, we also did some work on adding the CIP rootfs, since CIP wanted to test the kernel against the CIP Core as well, so we enabled testing the kernel with the CIP Core rootfs instead of the usual Debian one that is used for each build.

Then we decided to add the Spectre/Meltdown test. For that, we link to the KernelCI test-definitions repository, which keeps all the LAVA testing files, and we call that file from the KernelCI core LAVA configuration. In the end this generates the YAML file that is sent to the LAVA laboratory, producing the LAVA test job that is sent together with the boot part and the testing part of the file; in this case we run the Spectre/Meltdown check against the CIP rootfs and the CIP kernel files. We were also able to make a CIP kernel dashboard, where each kernel tree can be checked from the dashboard and not only from the mails, for those who prefer the dashboard; from there we can still see all the log files and problems for each kernel version.

There is still some work left. We have a CIP testing organization board, a kanban board that we use for managing the CIP issues in KernelCI. One of the things we are working on now is cleaning up the results: since KernelCI has a big number of boards, some boards have problems and sometimes produce false results, so we are working on fixing such boards, or rather on filtering such results. There are some architectures that CIP is currently not targeting, so we filter out from CIP testing the architectures that have problems, and also some configurations that are currently not working on KernelCI; we filter out some of the current KernelCI problems so as not to create too many false results in the report mails sent to the CIP mailing list. We are also working on implementing tests for the IEC standard; some work has been done on this, but it is still in progress.

Then there is KCIDB. KCIDB is the KernelCI database service and tooling: a packaged tool for submitting and querying the KernelCI reports coming from
independent CI systems, and the service behind it. The good point is that it can be easily added to your current workflow: if you already have a workflow for testing and building your packages, KCIDB is something you can add on top to contribute to KernelCI. By contributing your results to KernelCI, you can use the KernelCI regression tooling and the KernelCI analysis of all the results coming from all the independent systems. It also gives a kind of standard for the reports sent to kernel upstream on the stable mailing list, so that not too many mails are sent there, but mostly a unified report of what is going wrong and what is working. KCIDB is currently being used by these organizations, and implementing the KCIDB tool is practically just one command. As I said, Gentoo Kernel CI is for example working with Buildbot, so at the last step we simply call the KCIDB command to send the results we collect from Gentoo kernel testing to the KernelCI database. The command is called kcidb-submit; it takes a token for getting access, and it sends a JSON file with the description of the results. This is what the JSON file looks like. I think this JSON file schema is a bit old, and they have since moved to a different version, but it is still a good representation of how it looks. It can carry not only, for example, which patches we applied for each kernel tree, as we do on Gentoo, but also, for example, the logs of the build and the logs of the results. For viewing, KCIDB is a bit detached from KernelCI and uses its own frontend, Grafana, which shows, for example, the Gentoo Kernel CI reports.

As for the future, KernelCI core is currently moving to a new KernelCI API and a new KernelCI pipeline, but this is still in an early phase and there is much work to be done. Another consideration was about moving away from Jenkins and maybe using Buildbot, but that is still in a decision phase. I personally think that, as Gentoo is also using Buildbot for much of its kernel testing, it is a good and flexible tool for managing tasks. There is also an idea about creating a KCIDB plugin for Buildbot, but that too is still in a development phase.

In conclusion, KernelCI is a great tool that can be part of your kernel development workflow: a way of getting multiple test results for each kernel change without carrying boards around for testing on one specific board, but instead just pushing your tests to a system that dispatches them to different laboratories. It can be a good way of doing tests easily, and that was also one of the reasons I started making Gentoo Kernel CI: so that we could automatically test kernel sources and patches before releasing them, without me having to go around with a machine powerful enough for building or testing kernels; I could do it even with, for example, a Raspberry Pi or a similarly resource-constrained environment. I think that's all from my talk. The presentation slides will be available on the schedule website under this speaker's talk, so if you are interested in KernelCI, CIP, or Gentoo Kernel CI, you can check the slides and the links I added there, for example the CIP project's technical channel on IRC; most of these projects have a technical channel on IRC. So that's all from my talk.
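As a concrete sketch of the kcidb-submit flow described above: the report is a JSON file which, in recent schema versions, groups results into checkouts, builds, and tests with origin-prefixed IDs. The field names below are illustrative and should be verified against the current KCIDB schema; the project and topic names in the commented command are placeholders, and real submissions need credentials granted by the KCIDB maintainers.

```shell
# Minimal sketch of a KCIDB report -- schema fields are illustrative
# (checkouts/builds/tests layout); check the current KCIDB schema.
cat > report.json <<'EOF'
{
  "version": {"major": 4, "minor": 0},
  "checkouts": [
    {"id": "_:example:1", "origin": "example",
     "git_repository_url": "https://example.org/linux.git"}
  ],
  "builds": [
    {"id": "example:build-1", "checkout_id": "_:example:1",
     "origin": "example", "architecture": "x86_64", "valid": true}
  ],
  "tests": [
    {"id": "example:test-1", "build_id": "example:build-1",
     "origin": "example", "path": "boot", "status": "PASS"}
  ]
}
EOF
# The actual submission requires access credentials; project and
# topic below are placeholder values, not real KCIDB endpoints:
#   kcidb-submit -p my-kcidb-project -t my-submission-topic < report.json
python3 -m json.tool report.json > /dev/null && echo "report.json: valid JSON"
```

The same JSON can also carry build and test log references, which is how a CI system like Gentoo Kernel CI attaches its artifacts to the unified KernelCI reports.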