Okay, so I think it's about time we should start. So hello everybody, welcome to this presentation on bootstrapping a local kernel CI. First, let me be a bit more specific about the time frame of the bootstrapping: I'll try to tell you how to bootstrap a local kernel CI in less than a day. So what's this presentation all about? It's about creating your own KernelCI environment which you can use to run your builds and tests, which can be useful if you are a kernel developer, a KernelCI developer, or you just want to test your patches. So who am I? My name is Michał Gałka. I'm a software engineer working for Collabora, which is an open source consulting company, and my daily work is about KernelCI development. So, who is actually aware of KernelCI? Who knows what KernelCI is? Okay. Who has actually used KernelCI in their job? Okay. So KernelCI is basically an application, a system, that's meant to test the upstream Linux kernel. It's meant to build the kernel — build the branch that you've selected — upload the artifacts, then use the artifacts to run your tests and get the test results back together. So it's meant to be a single place in which you can see the stored test results and boot test results, view them, compare them and track them. What's more, it's meant to work in a distributed way: whenever you build the kernel, the builds are distributed, then the artifacts are gathered in a single place, and all the test jobs are also distributed to the test backends, which then feed the results back to KernelCI. In these terms it becomes a bit difficult to set up a local instance, which you may need for testing or development. So, just to show a bit of KernelCI to those who are not using it.
When you go with your browser to the kernelci.org page, you can actually see the KernelCI frontend. You can see the recent builds done by KernelCI, you can see the boot tests, so you can know precisely which kernel, which branch, which tree was built, and you can dig a bit deeper to see the build logs, the test results, and so on. So let me start with my first days in KernelCI. What is the KernelCI beginner's checklist? The first thing is to understand how KernelCI works: what the mechanisms are, what is triggered by what, what actions you need to take to actually get the results that you need. When you know the mechanisms, the logic, you need to understand what's under the hood: what applications are involved in making KernelCI up and running, what the software components are, what the tools are, what is interacting with what, and what needs to be done to actually establish this interaction. Later you can build your own local development environment, and the last of my goals was to start development once I had a fairly working local environment. So first, let me tell you roughly how KernelCI works. Well, this is some kind of a flow diagram. The main part of KernelCI is Jenkins: it's the orchestrator that actually triggers most of the jobs in KernelCI. There is a monitor job which monitors the changes in the kernel repositories hourly. How does it know which repositories it should monitor? There is a YAML file — if you go to the kernelci GitHub organization, you will see that there is a build-configs.yaml file. It defines which builds will be done, for which configurations and for which architectures, and which kernel trees will actually be monitored. So every hour it checks if there are new commits. How does it know that a new commit has actually come?
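To give a flavour of that file, a monitored tree with one build configuration might look roughly like this. This is a hand-written sketch: the tree name, URL, branch and variant labels here are illustrative, and the authoritative schema is the build-configs.yaml shipped in the kernelci-core repository.

```yaml
# Illustrative excerpt in the style of KernelCI's build-configs.yaml.
trees:
  next:
    url: "https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git"

build_configs:
  next_master:
    tree: next
    branch: "master"
    variants:
      gcc-7:
        build_environment: gcc-7
        architectures:
          x86_64:
            base_defconfig: "x86_64_defconfig"
```

The monitor job only needs the tree URL and branch to detect new commits; the rest drives which builds get generated.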
It goes to the storage, which is another part of KernelCI, and it checks whether the last commit ID it stored is the latest one in the repository. If it's not, it just starts the build trigger job. The build trigger, based on the configuration, runs the build jobs, so it tries to distribute and build the kernels that you need, and when it's done with a build it uploads it back to the storage. It uses the API endpoint — the backend REST API — to put it into the storage, and the storage makes it available over HTTP for the next stages. When the builds are done, you can run the tests. For the tests you need a test backend; usually it's LAVA, so I just put the LAVA part here. Based on the builds done, the test jobs are generated and submitted to LAVA. Together with the test jobs, LAVA gets a callback, which is used when the test jobs are done: when a test job finishes, it calls the callback and submits the test results back to the KernelCI backend. After that, the test results are available in the frontend and the test result emails are sent. There's also a separate part that tries to detect regressions and do bisections, to detect which commit actually caused a boot error, but that's not part of this presentation and won't be covered here. So basically there are a few work phases: we probe the repository, then we build the kernel, then we upload all the artifacts and the metadata back to the KernelCI backend; then we're ready to generate and run the tests, collect the results and report the results. That's quite a lot to have done on a single PC. So what are the software parts that are involved in it?
Basically, KernelCI consists of two parts: the KernelCI backend and the KernelCI frontend. Both of them are fully fledged web applications. The KernelCI backend exposes two endpoints: one is storage, which is just a simple HTTP endpoint that allows you to download the artifacts, and the other is the API endpoint, which is used to communicate with the API, to store the results, to store the artifacts and do all the work that's needed. The backend also uses Celery, which is a distributed task queue; it also uses Redis for some session management, and all the data is stored in MongoDB. Mongo actually stores the collections of documents responsible for storing data about builds, tests, test groups, test cases and so on. The API endpoint and the storage endpoint are used by the external applications, like Jenkins and LAVA, to interact with KernelCI. LAVA uses the storage to download the artifacts — all the files that it needs for testing. Jenkins uses the storage and the API to push the results and interact with the backend. The frontend also uses the API to get the results: as you've seen on the kernelci.org web page, the frontend is just an application that's calling the backend API to retrieve all the data. So there are quite a lot of things that you need to run on your machine to have a local environment. My first take on the setup of a local development environment was to go to the kernelci GitHub organization and use the kernelci-frontend-config and kernelci-backend-config repositories.
They store the Ansible playbooks that have the recipes to set up the frontend and the backend, and they are meant to set up a fully fledged production environment. The next part of my plan was to install LAVA. I needed a Debian machine — LAVA is already packaged for Debian, so you can just do apt-get install. Then I needed Jenkins, which I could easily install from the Debian package, and then I needed to configure the system. I needed to create the necessary API tokens so all the parts could actually talk properly to each other. I needed to configure LAVA: I needed to set up a virtual device that would run my tests, so I needed to define the QEMU device type, define the QEMU device itself, and then I needed to recreate the Jenkins jobs, which is quite a big piece of work if you judge by the diagram that I've shown before. So what were the results? Well, I set up the environment, and this method is proven to work: it's used in the KernelCI production and in the KernelCI staging environment. If you're brave enough to go through all the necessary configuration, it will give you a nice environment, quite similar to the one that runs in production. What are the cons? Well, it takes quite a lot of time to set it all up, because you need to set up the virtual machines or the physical machines, you set up the network in between them, and especially you need to set up the Jenkins jobs. As you will probably see while doing it, not all of them are pipeline jobs — some of them are the good old matrix jobs. So recreation of the whole environment takes quite a lot of time. Apart from that, you need to make some changes in the Ansible playbooks so they suit your local environment, and, well, it's enough to say that the install file for the Ansible configuration itself is about 300 lines.
So it's not simple — it's a bit of a time-consuming task. After some time, well, I got the environment running, and I figured out that there might be a simpler way to set up a local development environment: one that would give me maybe not a fully fledged environment, but something a bit more minimal that would still be able to do the build, upload the artifacts, run the tests and get the results — maybe without all these unnecessary steps, maybe without the whole orchestration. So the second take on the setup of the local environment was that I would install KernelCI and LAVA in a Dockerized version, then just configure my local environment and possibly get rid of Jenkins. In the meantime, Guillaume Tucker, who is one of the KernelCI developers, developed a tool called kci_build, and it nicely wraps all the kernel build phases so you can run them separately and script them the way you actually need. So first things first, let's start with the containers. There is a Docker image maintained in the kernelci GitHub organization — it's called kernelci-docker. However, it's still a work in progress and it's a bit outdated; it's not fully functional, and it doesn't include, for example, the storage. So I kept digging, and it turned out that the Automotive Grade Linux project forked this repository. Unfortunately, the fork is not easy to upstream, but it is more functional than the Docker image that's available in the official kernelci organization. It provides you with all the necessary components: it sets up the backend, the frontend, the proxy, Celery, Redis, Mongo, what have you.
It generates the API token during startup, and it makes sure that the frontend and backend have already exchanged the token, so they are ready to talk to each other right after the start. It's also meant to be used for development purposes: if you're a KernelCI developer, or you want for some reason to modify the KernelCI source code, you can easily plug it in. You just need to put it in the source directory, which is then mounted as a volume in Docker. So it already simplifies quite a few steps that then don't need to be done manually. There is also a set of LAVA containers. They are not official LAVA containers, but they are based on them, and Dan Rue, who is also one of the kernel CI developers, created these docker-compose files that take the LAVA master, the LAVA slave and all the necessary proxy and database packages, wrap it all in one thing, and let you install the LAVA environment with a single command. In the meantime, it pre-configures the LAVA master for you: it creates the admin account, it creates the virtual device type — the QEMU device type — and it creates the QEMU device instance. So you start with LAVA running and with the QEMU device already configured for you. If you're a developer, that's almost everything you need to run your tests, so it facilitates a lot. Running the KernelCI and LAVA containers is fairly simple. However, there are still some manual steps that need to be done: you need to configure these two sets of Docker containers to interact with each other. So, starting from the beginning: KernelCI can be started with a helper script which will provide you with some useful information. It will tell you what your master API token is — you will probably need this to set up a few things later — and it will tell you on which ports each part of KernelCI is actually listening.
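The container bring-up described here might look like the following sketch. The repository locations and the helper script name are placeholders, since the exact fork locations can change; the real steps are in each repository's README.

```shell
# Dockerized KernelCI (the more complete AGL fork of kernelci-docker).
# <agl-fork> is a placeholder -- locate the actual fork on GitHub.
git clone https://github.com/<agl-fork>/kernelci-docker.git
cd kernelci-docker
docker-compose up -d    # backend, frontend, storage, Celery, Redis, Mongo
./<helper-script>       # placeholder: prints the master API token and the ports

# LAVA master + slave wrapped in docker-compose.
cd ..
git clone https://github.com/<danrue>/lava-docker-compose.git
cd lava-docker-compose
make                    # brings up the LAVA web UI and REST API on port 80
```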
So you already have running access to the frontend, the backend and the storage, and you have the API token. The LAVA containers are not as talkative, but the only thing you need is the Makefile that's included. You just run it, and it runs LAVA: the LAVA web panel and the REST API on port 80. So we have two running sets of containers. We can access them all, but they cannot access each other. This is the first thing you need to fix after the setup. There is an easy workaround: just connect the Docker network of the KernelCI containers with the network of LAVA, so we only need to call a few 'docker network connect' commands to connect the LAVA containers to the KernelCI Docker network. Then you need to configure the lab. A laboratory in a KernelCI instance is meant to represent the place where you store your devices and run the tests. It boils down to generating another API token. You can do it with the REST API, but that's not always very handy. However, there is a tool — a whole repository — called kernelci-admin, and it contains the kci tool, which is the admin tool for the KernelCI backend. With it you can easily, with a single command, do all the create/read/update/delete work connected with tokens, labs, etc. Behind the scenes it uses the KernelCI API, but you just don't need to remember which endpoints you have to call and what arguments you have to pass. It can also serve you to administer several KernelCI instances. All you need to do is fill out the settings.py file. It's a simple file which contains just a Python dictionary, and the dictionary needs to contain the access token (the one you were given when starting the container), the URL (that was also visible there), and an arbitrary name for your KernelCI instance. As long as it is a Python dictionary, the only limitation is that you cannot have two labs of the same
name. So you issue the kci command: 'lab add' is the sub-command that lets you add the laboratory. You need to specify the lab name and the administrator's name and email, and you're basically done. If you run it from the command line, it will do all the necessary changes in the backend and will display the laboratory name and the token that's supposed to be used by this lab. The manual part is that you need to access the LAVA admin panel: you go to the admin URL of your LAVA instance, and you need to paste the token there. What is quite important, and not mentioned in too many sources, is that you need to put the callback name that's going to be used by your job in the description field. If you don't do it, it tends to end up with an authorization failure when trying to push the test results back to KernelCI. Later you need to generate the token for your LAVA instance: as you're going to submit the tests to the LAVA instance, you will need this token, and basically you're done. You have KernelCI and LAVA running. The next thing is that you probably don't really need a fully fledged Jenkins instance in your environment, so you can use kci_build. As I mentioned before, it's a quite new thing — it has been developed, I guess, a month or maybe two months ago. The full documentation can be found on the KernelCI wiki, which is hosted on GitHub. kci_build is basically the tool that was meant to give you the opportunity to get rid of Jenkins and replace it with a different orchestrator. It wraps up all the phases that you need to build your kernel, starting from configuring the repository and ending with uploading the build artifacts, and you can run it from anything you need. Specifically, you can run it from the command line, which is the way I use it quite frequently to just test the things that I need. So this is how you prepare the build.
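Putting the manual wiring from the last few minutes together — bridging the Docker networks, describing the instance in settings.py, and registering the lab — might be scripted like this. The network, container and lab names are assumptions for a local setup, the settings.py key names are assumptions too, and the kci sub-command and flag spellings are from memory of the talk, so check `kci --help` and the kernelci-admin README.

```shell
# 1) Let the LAVA containers reach the KernelCI containers.
#    List your actual network and container names with
#    "docker network ls" and "docker ps" first -- these are assumed names.
docker network connect kernelci-docker_default lava-server
docker network connect kernelci-docker_default lava-dispatcher

# 2) Describe the local instance for the kci admin tool: settings.py is
#    just a Python dictionary keyed by an arbitrary instance name
#    (the "token"/"api" key names are assumptions).
cat > settings.py <<'EOF'
settings = {
    "local": {
        "token": "your-master-api-token",  # printed by the KernelCI helper script
        "api": "http://localhost:8888",    # backend API URL, also printed there
    },
}
EOF

# 3) Register the lab in the backend; it prints the lab name and a new lab token.
kci lab add --lab-name lab-local \
    --first-name Jane --last-name Doe --email jane@example.com
```

Remember the two LAVA-side steps afterwards: paste the lab token into the LAVA admin panel, and put the callback name your jobs will use into the token's description field, or the result callback will fail authorization.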
So, to prepare the build, you need to clone the repository; then you can call the kci_build update_repo command, which will prepare the repository for you, so it will be ready to do the build — it will set it up according to the configuration that you have in your build-configs.yaml file. When you're done with that, you can basically generate all the config fragments. This is also based on the configuration of your build, so the config parameter takes the config name from the build-configs.yaml file. Then build_kernel is the command that runs the compilation process. What's worth mentioning is that you can specify the defconfig that you need, and you may specify the architecture and the build environment. The build environment is not the compiler that's going to be used — it is the build environment label that's used inside the build-configs.yaml file, so this is the label that will be stored in the build metadata in the KernelCI backend. Then you can install the kernel, and you're ready to publish the artifacts, which is as easy as that: you already have the appropriate API tokens generated for you during the container startup, so you just provide the API URL, you provide the token, and you run the push_kernel command, which will upload all the binaries to the storage. Then you run the publish kernel metadata command, which will provide all the metadata and store it in the KernelCI backend. After that, you will be able to access the web frontend, and you should see the builds appearing there. The next part is to run the tests, and again there are a couple of tools that facilitate that. In the kernelci-core repository you can find the LAVA v2 scripts; one of them is lava-v2-jobs-from-api.
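Before moving on to the test tooling, the build phases just described can be sketched end to end. The sub-command and option names below follow the talk and may have drifted since; `kci_build --help` in kernelci-core is the reference.

```shell
# Prepare the source tree according to build-configs.yaml.
./kci_build update_repo --config next_master --kdir linux

# Generate the config fragments defined for that build config.
./kci_build generate_fragments --config next_master --kdir linux

# Compile. Note --build-env is the label from build-configs.yaml,
# not a compiler binary -- it ends up in the build metadata.
./kci_build build_kernel --defconfig x86_64_defconfig --arch x86_64 \
    --build-env gcc-7 --kdir linux

# Install the build artifacts locally, then publish binaries and metadata.
./kci_build install_kernel --config next_master --kdir linux
./kci_build push_kernel --kdir linux --api "$API_URL" --token "$API_TOKEN"
./kci_build publish_kernel --kdir linux --api "$API_URL" --token "$API_TOKEN"
```

Because the phases are separate commands, you can rerun just the push or publish step without rebuilding.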
That script, lava-v2-jobs-from-api, is basically used to generate the LAVA jobs based on the available builds, the devices and the test plans, so it will come up with a set of YAML files ready to be fed to LAVA. The other one is lava-v2-submit-jobs, which can take the files that you've generated with the previous script and feed them to LAVA. LAVA will start the jobs based on the data provided, run them, take the results, run the callback, and the results get stored back in the KernelCI backend. So this is basically all you need to run your local environment. You don't really need a full setup with all the components — you can run a few Docker containers and use the tools that facilitate that. So let me summarize. Using the Dockerized KernelCI and the LAVA lab can be a good choice in case you don't really need a fully fledged production environment, and it takes a lot less time to set up. It's always a good idea to use the kci tool from kernelci-admin instead of calling the KernelCI backend directly: it wraps up the REST API and provides a nice way to do all the work connected with token management and all the administrative tasks connected with the KernelCI backend. And kci_build is always a nice tool to run the build phases and store the artifacts after the build without running the whole process in Jenkins: if you have the build done, you can push the artifacts as many times as you need, and you can easily rerun the phases that you need. As you may see, it's quite easy to set up a local environment nowadays. However, there are still things to do, so the future plans are to develop the kci_test tool, which will be a tool similar to kci_build, but this time for tests, so you could have one tool to generate the test jobs, feed the test jobs, probably trigger the email sending, etc.
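The job-generation and submission pair described earlier might be driven like this. The script names are the ones in kernelci-core's LAVA v2 tooling; the option names and values here are illustrative assumptions, so check each script's --help.

```shell
# Generate LAVA job definitions from the builds known to the backend.
python lava-v2-jobs-from-api.py --lab lab-local \
    --api "$API_URL" --token "$LAB_TOKEN" \
    --storage "$STORAGE_URL" --plans boot --jobs jobs/

# Submit the generated YAML files to the local LAVA master.
python lava-v2-submit-jobs.py --jobs jobs/ --lab lab-local \
    --username admin --token "$LAVA_TOKEN" --lava-api "$LAVA_URL/RPC2"
```

LAVA then boots the QEMU device, runs the job, and its callback posts the results to the KernelCI backend.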
There is also some work to do with the KernelCI and LAVA containers, as at the moment they are two separate sets of instances, so there is a job to be done to provide something like a fully integrated test environment, to get rid of the manual steps of token exchange between LAVA and KernelCI. And this is basically it. Thank you for your attention. I think we have some time for questions.

Yes, sure. Well, it really depends on the developers. Basically, the vast majority of the tests are boot tests, so we just check if the kernel boots on a certain hardware configuration, but there are also tests connected with Video4Linux, and probably some more that I don't remember.

Yes, yeah — the labs are distributed in several places, where they have their sets of real boards. QEMU is just the one that you may need in your local environment to do some dry runs, but if you need tests on real hardware, there are several LAVA dispatchers with several devices of different types connected, so they can run the tests, even tests specific to a particular platform.

Yeah, I'm not a [inaudible] guy, so I probably don't know it too well, but yeah — if it can automate the whole process, I'm absolutely up for that.

Sorry, I didn't get it. Alright, well, basically it's meant to test the kernel, so we have just a few root filesystems packaged for specific purposes, for specific types of tests. Basically it's either a Buildroot-based or a Debian-based root filesystem, but as far as I'm concerned, I think you can replace it with whatever you need.

There are no more questions, so thank you very much for your attention.