Hello everyone and welcome to this presentation. Today I will talk about how to merge an existing testing framework into KernelCI, and specifically about how you can test your own kernel project with KernelCI. My name is Alice Ferrazzi, and I'm a Gentoo developer. I'm the Gentoo kernel project leader, and I'm the creator of Gentoo Kernel CI, a continuous integration system that automates Gentoo Linux kernel building and testing. I'm also part of KernelCI as a Technical Steering Committee member, and I'm part of CIP, the Civil Infrastructure Platform, as a member of the CIP testing working group. I work for Cybertrust Japan as a software engineer on EMLinux, an embedded Linux distribution, where I lead the CI system development.

In this presentation I will first talk about KernelCI: what it is, how the organization is structured, and why it is needed. I will also give a summary of how KernelCI is composed. Then I will talk about two ways of implementing your framework's testing in KernelCI. One is the KernelCI native implementation: I will first explain what the native implementation is, and then give an example of the work I did for CIP, the Civil Infrastructure Platform, to merge CIP testing into the KernelCI native implementation. After that I will talk about KCIDB: I will explain what KCIDB is, and give an example of how Gentoo Kernel CI, which tests the Gentoo Linux kernel, uses KCIDB to send results to the KernelCI common database. At the end of this presentation I will give a short conclusion.

So, what is KernelCI? KernelCI is a community-based, open source, distributed test automation system focused on upstream kernel development. It is an automated testing system whose main scope is testing the upstream kernel.
We are currently testing the upstream kernel on 155 physical and virtual boards. In this slide I will talk about the KernelCI organization. The KernelCI organization is divided into the Technical Steering Committee and the Advisory Board. I am part of the Technical Steering Committee, which is formed by the KernelCI core developers and maintainers; they keep the KernelCI repositories and infrastructure in good shape and take care of KernelCI maintenance. The Advisory Board is made up of representatives from the KernelCI member organizations shown on the slide; these representatives manage the financial budget and help coordinate KernelCI tasks.

KernelCI sends kernel testing reports upstream to the kernel community. These reports are collected on the kernelci-results mailing list, which is linked on this slide, and also on the KernelCI dashboard, the website shown on this slide. Such tests can be useful for anyone involved in kernel testing and kernel development. KernelCI also gives you a tool suite that is ready to test kernel trees on a variety of different boards.

As for the composition of KernelCI, the main component is kernelci-core, which keeps the core configuration of KernelCI; most of the configuration you will need is done in kernelci-core. It also contains the core tools of KernelCI, the tools that send the jobs for building and testing the kernel. There is also the backend, which is currently being reworked, and inside the backend we have the old KernelCI API, which can be seen at api.kernelci.org. We are currently building a new KernelCI API, and we will support the old API until the new one replaces it.
We have a frontend, the KernelCI web dashboard, which shows all the data available from the KernelCI backend. We have a test-definitions repository that keeps all the LAVA test jobs used by KernelCI when sending each test job; if you want to add a new test to KernelCI, the place to add the code is the KernelCI test-definitions. We also have the lava-docker repository, which is useful if you want to run your own KernelCI LAVA testing laboratory. That can be useful for collaborating with KernelCI using your own boards: if you want your kernel tested on boards that you own, or if you want to share your testing resources with KernelCI, lava-docker makes this easy to set up with Docker. We also have Jenkins, the part that orchestrates building and testing using the kernelci-core tools. Finally, we have KCIDB, the tool for submitting kernel test data to the KernelCI common database.

The next slide shows the bigger picture of KernelCI. The upper part is the KernelCI native implementation: the LAVA labs and build labs used by KernelCI for building and testing the kernel. In the lower part we have the independent testing frameworks, which send their own results to the KernelCI common database using KCIDB, and so collaborate with KernelCI through KCIDB. KernelCI tries to be as distributed as possible: any KernelCI test lab can be started by anyone, and anyone can share their own LAVA lab with KernelCI. Any LAVA lab with a public API can be added to KernelCI. We currently have many LAVA test labs connected to KernelCI, and we hope to see more testing labs connected in the future.

In this presentation we talk about frameworks. By framework we mean a testing framework that includes kernel building, booting and testing code.
For example, CIP, the Civil Infrastructure Platform project, has its own framework for testing the CIP SLTS kernel trees. As I explained in the summary, in this presentation we talk about how to merge your testing framework into KernelCI, and from what I see there are mostly two ways of doing that. One is using KernelCI native, and the other is using KCIDB. KernelCI native means merging directly into the KernelCI testing code; in the next slides I will explain how we managed to get the CIP test framework code merged into KernelCI native. KCIDB, on the other hand, is the tool for sharing your testing results alongside KernelCI's own results in the KernelCI common database; in this presentation I will explain how Gentoo Kernel CI, which automates Gentoo Linux kernel testing, sends its Gentoo Linux kernel test results to the KernelCI common database to collaborate with KernelCI.

First, the KernelCI native implementation. The native implementation is KernelCI's own CI, automating the building, booting and testing of kernels; that is its main scope. It is managed and developed by the KernelCI TSC and the KernelCI community. The KernelCI native test jobs are currently in LAVA job format, and they need to be generalized so they can run in the different KernelCI LAVA lab environments. Because KernelCI native tests are generalized, by merging your own kernel testing code into KernelCI you can take advantage of all the generic testing features KernelCI already has. It is also possible to use the KernelCI API support, for example for sending tests, since KernelCI offers tools for interacting with it. Also, by using KernelCI native you already have a production-ready tool for testing kernel trees.
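Since the native test jobs are in LAVA format, here is a rough sketch of what such a test definition looks like. This follows the Lava-Test Test Definition 1.0 format, but the test name, maintainer address and script are placeholders of mine, not an actual KernelCI test:

```yaml
# Hypothetical minimal LAVA test definition (Lava-Test Test Definition 1.0).
# All names and the script below are placeholders, not real KernelCI content.
metadata:
  name: example-smoke-test
  format: "Lava-Test Test Definition 1.0"
  description: "Example boot-time smoke test"
  maintainer:
    - you@example.com
  environment:
    - lava-test-shell

run:
  steps:
    # Run the test script, then report a single pass/fail case to LAVA.
    - ./example-smoke-test.sh && lava-test-case example-smoke --result pass || lava-test-case example-smoke --result fail
```

Generalizing jobs of this shape, so the same definition runs unchanged in any connected LAVA lab, is what makes the native tests reusable across boards.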
KernelCI native is maintained by the KernelCI community, and it can use the KernelCI test labs. As for the cons of KernelCI native: it is hard to implement tasks or steps that are not in the scope of KernelCI, that is, not about testing the upstream kernel. For example, in Gentoo we have task steps for testing kernel packages; such things currently cannot be upstreamed to KernelCI because they are out of scope, and they are also complicated to implement in KernelCI, so in such cases you would need to maintain a separate fork of KernelCI.

In the CIP testing working group we decided to start merging some tests into KernelCI, to begin doing CIP kernel testing with KernelCI, and this decision was taken because of the KernelCI native pros. In the next slides I will explain what CIP is and how CIP managed to merge its testing framework into KernelCI. CIP stands for Civil Infrastructure Platform; it is a Linux Foundation project that aims to establish a base layer of industrial-grade tooling using the Linux kernel and other open source projects, and you can see the project page linked on this slide. The CIP testing framework uses GitLab pipelines together with the CIP LAVA lab for building, booting and testing the Super Long Term Support (SLTS) and SLTS real-time kernels. In the CIP testing framework we have tests like spectre-meltdown-checker and some user-space tests, such as security standard tests. These run on a root filesystem built with isar-cip-core, which is used for user-space testing: the CIP rootfs. The CIP kernel is tested using the CIP kernel configurations from the cip-kernel-config repository.

By having merged CIP into KernelCI, we started to get regression test mails and release test mails, so for each CIP release we get the current status of the CIP kernel.
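The GitLab-pipeline-plus-LAVA setup mentioned above can be sketched roughly as follows. This is a minimal illustration with hypothetical job names, container image and helper script, not the actual CIP pipeline:

```yaml
# Hypothetical .gitlab-ci.yml sketch: build an SLTS kernel, then
# submit a LAVA test job with the resulting artifacts.
stages:
  - build
  - test

build-kernel:
  stage: build
  image: debian:bullseye            # assumed build container
  script:
    - make defconfig                # or a config from cip-kernel-config
    - make -j"$(nproc)" Image modules
  artifacts:
    paths:
      - arch/arm64/boot/Image

submit-lava-job:
  stage: test
  image: debian:bullseye
  script:
    # submit_lava_job.py is a placeholder for whatever script generates
    # a LAVA job definition and posts it to the lab's API.
    - ./scripts/submit_lava_job.py --job lava-job.yaml
```

The point is only the shape: a build stage produces kernel artifacts, and a test stage hands them to the LAVA lab for booting and testing.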
We could also start testing some configurations from the cip-kernel-config repository and use those configurations with the CIP kernel on KernelCI, and we implemented some tests on KernelCI using the CIP core rootfs built with isar-cip-core. We could run existing KernelCI tests on the CIP kernel, for example kselftest and LTP, and we also merged spectre-meltdown-checker upstream into KernelCI. KernelCI also has a tool for automatic bisection of regressions: when KernelCI finds a regression, it starts an automatic bisection and sends a mail with the bisection result. We implemented this only recently, so we have yet to see bisection mails for CIP, and we hope to see more mails in the future. Also, by using KernelCI we can take advantage of KernelCI resources: the connected KernelCI laboratories and all the boards in those laboratories.

Adding a kernel tree to KernelCI is very simple. As I said when describing the KernelCI structure, the main repository of KernelCI is kernelci-core, and in kernelci-core we have a configuration directory that keeps the main core configuration of KernelCI.
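For example, adding a new tree and a monitored branch in those build configs looks roughly like this. This is a simplified sketch of the kind of entries involved, not the exact CIP entries; the real build-configs file has more fields per variant:

```yaml
# Simplified sketch of build-configs entries for a new tree (illustrative only).
trees:
  cip:
    url: "https://git.kernel.org/pub/scm/linux/kernel/git/cip/linux-cip.git"

build_configs:
  cip_4.4:
    tree: cip                   # refers to the tree defined above
    branch: 'linux-4.4.y-cip'   # the branch KernelCI will monitor
    variants:
      gcc-8:                    # which toolchain / build settings to use
        build_environment: gcc-8
        architectures:
          arm64:
            base_defconfig: 'defconfig'
```

Once entries like these are merged, KernelCI monitors the branch, builds it, and starts sending result emails.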
There we have the build configs, the test configs, the lab configs and the rootfs configuration. To add a kernel tree, we add it to the build configs: in this case we added the cip tree, with the CIP tree URL. To have it monitored by KernelCI, we then add a build config that makes the tree branch monitored and tested by KernelCI; in this case we added the branch linux-4.4.y-cip, with a variant that specifies which configurations will be tested and which build settings will be used, and with the tree set to what we defined before, the cip tree. Just by doing this, we start getting result emails.

This is one example of a CIP result email: in this example we had 184 builds, of which three failed and 101 passed, and we get such a report for each CIP release. If something breaks, we also get a regression email, which tells us that something that was previously working now fails. It contains the log file, where we can see the board log showing in more detail what happened, and the last success, that is, the last kernel revision where these tests passed, followed by a snippet of the error.

After that, we implemented the rootfs. The CIP rootfs is created in a GitLab pipeline using ISAR, the Integration System for Automated Root filesystem generation, through isar-cip-core. All the generated rootfs images are pushed with the KernelCI API to the KernelCI storage server; we are still using the old KernelCI upload API, and we will move to the new API when it is ready. In the lower part of this slide we can see an example of the KernelCI storage: these CIP rootfs images are used by KernelCI for CIP testing, and we can see here how KernelCI uses the rootfs images that are in the KernelCI storage, and this setting is
still done in kernelci-core, the main part of KernelCI: in the kernelci-core repository, under the test configuration settings, we have the rootfs configuration.

Next, we started implementing a test that was in the CIP testing framework but not yet implemented in KernelCI upstream: spectre-meltdown-checker was not yet in KernelCI upstream, so we decided to upstream the test. The test was actually already in the KernelCI test-definitions, it just was not enabled. To enable it, we only had to add the configuration script to the kernelci-core repository, in the LAVA configuration directory, with the configuration needed to make spectre-meltdown-checker work. By enabling spectre-meltdown-checker we could enable this check not only for CIP but also for all the other trees and kernels.

Here is an example of the LAVA test job, in this case the one used by spectre-meltdown-checker. It is a YAML file, and a LAVA test job is made of metadata: inside the metadata we have the name of the test, in this case spectre-meltdown-check; the format, which is Lava-Test Test Definition 1.0; the description of the test job; optionally the maintainer; and the environment, in this case the lava-test-shell environment. Then we have the run steps needed to run the test job, in this case a shell script: a LAVA test job can also run a shell script or other code that can be handled on the LAVA platform. Finally we have lava-test-case, which checks the output of the shell script for any fail or pass and sends that result to LAVA.

We also have the CIP web dashboard, under the kernelci.org domain, which is a good way to see a summary of all the CIP test results and the build status. Next I will talk about the other way of collaborating with KernelCI: KCIDB. KCIDB is the KernelCI database service and tooling,
and its tools are used for submitting results to, and querying results from, the KernelCI common database. KCIDB can be easily integrated into the workflow of any independent continuous integration framework that tests the kernel: it is a tool for unifying test results from different continuous integration systems and sending them to the KernelCI common database. Because of that, KCIDB tries to standardize the reports of these kernel-testing CI systems. To use KCIDB you need KCIDB credentials.

The good part of KCIDB is that it can be easily implemented in your current workflow: if you have an independent continuous integration system with parts that are outside the KernelCI scope, but you still want to send your results to KernelCI, with KCIDB you can collaborate with KernelCI from your independent CI system. It is useful if you already have a kernel testing framework whose scope differs from KernelCI's, and of course, because it is your own independent testing framework, you can run your system your own way, not limited to the current scope of KernelCI. On the other hand, because KCIDB is mainly about sending test results to the KernelCI common database, you cannot benefit from KernelCI native features like bisection or regression detection, and you need to maintain your own independent CI framework. If your tests use LAVA jobs, they can still be compatible with some of the KernelCI tests, but KernelCI native already has that enabled for its own test framework. Also, you cannot use the KernelCI connected laboratories, or the machines and boards connected to them, unless you ask the laboratory owners for access.

KCIDB is currently used by Gentoo Linux, Red Hat CKI, ARM, Google syzbot, KernelCI and Linaro TuxSuite. As I said, the implementation is really simple: the Gentoo Linux kernel testing system implemented it with a step that collects
the Gentoo Linux kernel test results, and a step that sends those results to the KernelCI common database using KCIDB. Because it is implemented inside your own testing framework, you don't have to send changes to the KernelCI native implementation or have your changes approved by the KernelCI team in order to send results. Sending results uses kcidb-submit, one of the tools inside KCIDB: we aggregate the Gentoo Linux test results into a data file and send this data file to the KernelCI common database. We can see an example of this data file on the slide; this example still uses KCIDB I/O schema version 3, but version 4 recently came out, and we have started working on implementing version 4 as well. KernelCI also has a KCIDB dashboard, reachable from the KernelCI dashboard by clicking on "view statistics about all the data"; there we can see the KernelCI common database through a dashboard made with Grafana.

Concluding: in this presentation we have seen two ways of collaborating with KernelCI. One is using KernelCI native, and the other is using KCIDB to send results to the KernelCI common database. We think that collaborating on and reusing test code, especially when doing CI testing, is really helpful to the KernelCI community. If you are interested in KernelCI, you can access the KernelCI documentation linked on this slide. KernelCI also has its own channel, so if you have any questions beyond the Q&A of this presentation, or if you are interested in helping out with KernelCI, you can ask anything on that channel; any question is welcome. Thank you so much, and thank you for listening.
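For illustration, a KCIDB data file in the newer I/O schema version 4 might look roughly like this. This is a sketch with made-up ids and a "gentoo" origin chosen as an example, not an actual Gentoo report; check the KCIDB I/O schema for the authoritative field list:

```json
{
  "version": {"major": 4, "minor": 0},
  "checkouts": [
    {"id": "gentoo:checkout-1", "origin": "gentoo",
     "git_repository_url": "https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git"}
  ],
  "builds": [
    {"id": "gentoo:build-1", "origin": "gentoo",
     "checkout_id": "gentoo:checkout-1", "valid": true}
  ],
  "tests": [
    {"id": "gentoo:test-1", "origin": "gentoo",
     "build_id": "gentoo:build-1", "path": "boot", "status": "PASS"}
  ]
}
```

A file like this is then piped to kcidb-submit together with the project and topic values that come with your KCIDB credentials, along the lines of `kcidb-submit -p <project> -t <topic> < report.json`.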