I don't know how many people have joined already, but it's time to start, so let's get started anyway. First, I'll explain how this is going to work. I have a few slides presenting some content to get started, about what KernelCI has been doing in the past year, and then we plan to have time to discuss things. I'm not sure what the best way to interact with people will be; we'll see what the platform can do. From my point of view it's just a Zoom call, so I don't know how interactive it will be from your end, but hopefully the chat will work as well. Can everybody see the chat messages? Yeah, I see some yeses here. OK, so that seems to be working. I see 17 participants now; yes, there's a bit of inflation going on here. So if someone has a question and for some reason can't show up on video, feel free to ask it in the chat and we'll take it from there. OK, so now I'll share my screen. I have a few slides here. Can you see this? Yeah, I think it's working.

Hello, everyone. If you remember, about one year ago at ELC Europe 2019 in Lyon, the KernelCI project was announced as a Linux Foundation project. A lot of things have happened since then, so first I'll go through a quick review of all those things. A lot has happened in KernelCI in general, but this slide is more about the things related to the Linux Foundation project, what we've done thanks to the members that have joined through it.

To start with, we had to set a few things up. We defined the role of the project, came up with a mission statement, and elected members for the board and the technical steering committee. That's why there were no fully visible results during the first three months: we had to go through all of that to set up the project. By then we already had quite a good understanding of what we were doing, so we could talk about the mission statement and what we were trying to achieve, especially common reporting. That's a new thing we started doing with Red Hat: KCIDB is the name of the project that does common reporting, gathering results from various test systems. Red Hat is a member of the project, so this is one of the things that may not have happened if we were not in this Linux Foundation project.

Then we started improving our infrastructure. Microsoft, who is also a member, has given us access to some more servers, and that has made hosting websites and those kinds of things a lot easier for us. We also started moving towards functional testing, and the front end was redone. That wasn't really a Linux Foundation thing as such, but I think it happened thanks to the momentum the project gained, really opening it up to run a lot more tests.

Then we did a community survey, in coordination with several people on the governing board, and we got some good, interesting feedback from the general community; I'll go through that on another slide. We also started using Kubernetes: we have clusters running in Microsoft Azure and in Google Cloud, and we're using them to build kernels now, so we have much bigger capacity. We'll probably use them to run tests as well, at least the things that don't need to run on a particular hardware platform.
Then at Linux Plumbers there were a lot of discussions around KernelCI, and you could really see the impact of the project taking off. We still have a lot of things to do, but if we look at all the things we've achieved, I think we're doing quite well from that point of view. One of the things the accent was put on was common reporting and trying to get more people to send results. So we got Google's syzbot on board: Dmitry worked with us to get the first syzbot results sent to common reporting. That's another great step towards having one email report that contains results from the native KernelCI tests we're running, from Red Hat's CKI, and from Google's syzbot, and maybe a few more down the road as well.

On this slide I've put a very quick summary of what the community survey told us, which is already a first set of lessons learned. We need to do more to test patches; in fact, we need to actually start testing patches sent to mailing lists, because the native KernelCI tests right now only apply to Git branches. We should also be doing deeper testing, with longer-running tests that look for more things and give bigger coverage. Testing patches is good for quick feedback, but, especially for stable releases, we can run much longer tests, and that's real extra value we don't have right now. We also need to improve the web dashboard. Not everybody relies on the web dashboard, but almost everybody will use it a little at one point or another, and a good UI can simplify people's work a lot.

Here I have a quick summary of what happened at Linux Plumbers and what we learned from the experience. First, some links to the recorded talks, which you can watch again on YouTube: the two talks that were directly about KernelCI. You'll also find other talks that touch on KernelCI, like in the real-time microconference and the Clang one, and maybe a few more. KernelCI was brought up in many different areas of the conference, which was really positive.

What we realized is that there is interest in KernelCI coming directly from maintainers who want specific tests to be run and don't have a CI system at hand; they want KernelCI to do it. That's what we now call the KernelCI native tests, the tests KernelCI runs directly, by contrast with common reporting, where we gather reports from other CI systems. syzbot and Arm have test systems, we talked with Sony, who have Fuego, and also Gentoo, and some people from Intel who work on Yocto; they all do a lot of testing, and we can gather all of those results. But if a maintainer says, "I have a special branch I want to test, I want to run these tests", the other projects might not be interested, or it might not be easy for them to integrate that into their systems, because they're focused on testing their products, or on testing one particular thing, and they don't want to diverge. That's the kind of thing native KernelCI tests are for: if it's entirely for the upstream kernel, a need coming directly from the community, it makes sense to have it there. So they all complement each other, really.

Now, I've gathered a few ideas on this slide as things I think we can discuss here, but of course people might already have some ideas of their own.
One of the first items is what I've just talked about: how we can have native tests alongside common reporting. The native tests' results will go into the common database alongside everybody else's results.

Then I think we can talk about the purpose of having a Linux Foundation project, as opposed to not having one. KernelCI started in 2014, maybe even a bit before that, so for quite a few years it was a bit of an ad hoc project, and now we have the Linux Foundation framework around us. For the project itself, it means we have many more resources: we have a budget, we have access to clusters, and we have a lot of expertise from the members. For the members themselves, it's a great way to invest in upstream kernel quality, because all the CI systems that run for products or distros can feed fixes back, but they don't work directly on upstream. Common reporting is a way of bridging that gap, and native tests are a way to work on upstream directly, first-hand. So if you want to invest in improving upstream kernel quality, KernelCI is basically there for that.

Of course, we have some issues around scalability as we keep growing. Some bits are growing faster than others: we can build a lot of kernels now and we can start running more tests, but we don't necessarily have the capacity, the expertise, or the people available to deal with all of that as we keep growing. This is maybe where we need more help.

After that, I've put a summary here of how to get in touch: the mailing list, all the code on GitHub, and an IRC channel. You can also see all the members: we have BayLibre, the Civil Infrastructure Platform (which is another Linux Foundation project), Collabora, where I work, Foundries.io, Google, Microsoft, and Red Hat. If you download the PDF, you'll see I've put a few more things after that, but I won't go into them right now unless someone has specific questions.

So, does anybody have any questions right now? Let me see if I can stop sharing. Like I said at the beginning of the talk, I'm not sure whether you can just show up on video from your end, so if in doubt, type your question in the chat. Ah, it seems to be working, I can hear someone.

"Bonjour. I have nothing to contribute; I just wanted to make sure this worked in case somebody else wanted to chime in. And thank you very much for all the work you've been doing for the project; it's been wonderful to watch. There's infinitely more work to be done, but the summary was really encouraging. It looks like more when you present it this way than when we struggle with it on a weekly basis amongst ourselves. So thank you for the overview, it was much appreciated. We already knew it, but we needed to see it summarized again. There must be questions; I see the list of 28 participants, and there are some people here I've not met before, so I'd love to get some feedback: questions, comments, concerns, input that can be contributed. A lot of the members of the project are here, but it would be great to hear from the others, the people who have contributed as well."

If not, I have a question for you: what would you like KernelCI to be doing for you, if there was one thing to pick? That's the question I asked at Plumbers.
People said, "I want everything", but if there's one most important thing in your mind, from whatever role you have, whether you're a developer or a maintainer or working on a product, what would you expect to be the main thing coming from KernelCI?

Here's a question; I'll read it out loud in case it's not recorded: "I wonder if you plan to add any subsystem-specific CI? Are there any plans or ideas, for example for SCSI drivers?"

We already have some subsystem-specific things, not especially because we insisted on it, but because we started by doing things for Video4Linux and also for DRM/KMS. We're also testing a few things in USB a little bit. For Video4Linux we run v4l2-compliance on a handful of drivers, like UVC on many devices. We're testing the main branch from the media tree, but we also run the same tests on linux-next, mainline, and stable, and the results are sent to the main mailing lists and also to the subsystem mailing list. I worked with Hans initially to get that working; there are still a few things to improve. We didn't really have to change anything in the infrastructure to work with the Video4Linux subsystem, and normally that shouldn't be needed: basically, what we need to do is add specific tests.

Recently we started adding tests for real-time Linux. We just add some test definitions, or reuse test definitions from other projects, like Linaro's LKFT, which has some definitions already; and we configure which kernel branches to test on which platforms and what to do with the results. So for SCSI: if you have some tests for that, whether you want to test block devices or something more low-level that talks directly to the SCSI drives, we can build a suitable user-space file system, or whatever works best, and have some devices with SCSI drives and run things on them. When I say "we": KernelCI as a project doesn't directly manage the hardware. Some members have test labs; Collabora has a test lab, BayLibre has a test lab, and so do CIP and many others, including some that are not members. So wherever you're coming from, you can have your own test lab, run tests, and submit results. You can use the KernelCI builds, run them on your platform, and send the results to KernelCI. And if someone else wants to run those tests too, they can do that in their own test lab.

OK, now I have another question, from Chris: "Some time ago there was a way to search for test results from a specific lab on the dashboard, the web UI. It seems this feature is gone now."

There was a change in the web front end to move the focus from boot testing to functional testing, so we could have a lot more results. However, the way the web front end was designed was very idiosyncratic; everything was very special-cased. So when we made that move, we simplified things as well. The plan now is to build a better web dashboard which will be more flexible, so we don't have to handcraft every single possible search view. In the short term, if something like that is really needed, we can probably add a feature for it, but the number of ways you can search the data is basically infinite, and we don't want to spend endless time trying to implement everything. So right now it's a bit more simplified.
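To make the v4l2-compliance part concrete, here's a minimal sketch of how a lab script might wrap the tool and turn its output into a pass/fail result. v4l2-compliance and its -d option are real (part of v4l-utils), but the device path, the naive summary-line parsing, and the standalone-script approach are illustrative assumptions; in KernelCI this is driven by test definitions rather than a hand-rolled script.

```python
#!/usr/bin/env python3
"""Sketch: wrap v4l2-compliance as a simple pass/fail lab test.

Assumptions: v4l2-compliance (from v4l-utils) is installed, and the
summary line contains a "Failed: N" count, which is parsed naively.
"""
import re
import subprocess
import sys

def run_compliance(device="/dev/video0"):
    # v4l2-compliance exercises the V4L2 ioctl API of one video device.
    proc = subprocess.run(
        ["v4l2-compliance", "-d", device],
        capture_output=True, text=True,
    )
    print(proc.stdout)
    # Look for a failure count in the summary output, e.g. "Failed: 0".
    match = re.search(r"Failed: (\d+)", proc.stdout)
    failed = int(match.group(1)) if match else -1
    return failed == 0

if __name__ == "__main__":
    sys.exit(0 if run_compliance() else 1)
```

The exit code is what a test harness would record as the case result; a real integration would also capture the per-test details rather than just the summary.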
But maybe you can send an email to the mailing list, kernelci@groups.io, and explain why that's important to you. We're trying to gather user stories for the new web dashboard, so if you can explain how it fits in your workflow, say, that you want to see just the results from the platforms in your own lab, it would be great if you could put that in a small email. I hope that helps.

Tim asks: "What is the relationship between the KernelCI project and the LAVA project? Does KernelCI have non-upstream changes to LAVA? Do LAVA people participate in KernelCI?"

KernelCI is independent from LAVA. LAVA is a project run by Linaro to provision platforms and run tests on them. It has an API, so you can remotely send a definition of what you want to run: here's the URL to the kernel, the URL to the file system, and a YAML definition of the tests you want to run. It tries to run it and sends you the results, so you can use it as a service, and that's exactly what KernelCI does. There are other labs that are not LAVA. All the labs are managed by people outside of the KernelCI team; the KernelCI team only deals with the core pipeline that triggers builds, triggers tests to run in labs, puts the results on the web front end and in a database, and sends email notifications. When a lab goes down or a device goes offline, that's not KernelCI's responsibility; that's the responsibility of the people running the labs. If you go to the kernelci.org web front end, you can see there are eight or nine, maybe ten labs (on stable there are more people testing), and most of them are LAVA, but some are not, and you can submit your own results if you want to. Of course, some people contribute to both; I'm a co-maintainer of LAVA as well, although I don't contribute much right now. And there are people who do only LAVA and people who do only KernelCI. You don't have to do both; they really are separate things. LAVA could disappear one day and we could still have KernelCI, just doing things differently.

Now, a comment from the chat: "Other projects beyond Linux are also doing more CI and automated testing themselves, which really helps exercise this tech more completely. Mesa CI comes to mind." Yeah, okay.

"Is there any documentation on how to write those custom tests?" Yes, actually; we've almost finished writing a guide for that, and part of it has already been merged. Let me put the link here. It's maybe easiest to do this in LAVA, but again, if you already have your own test system, you can use that instead and we'll work out the plumbing to make it work. So I'll put the link to the documentation; there's an example there. It's based on LAVA and Debos, which is a tool to create Debian-based file systems. That's for running on small hardware, well, not necessarily small, but on development boards, where you create a file system image and the tests run there. If you run on a virtual platform, you could use a VM image or Docker; it depends on what test you want to run.

"There are open hardware projects for SD card multiplexers." Yeah, that's a good idea. Sounds like we have a new kind of test being designed here, because it involves drives. Well, there have been some talks about MTD and memory devices, flash memory basically.
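As an illustration of that LAVA API, here's a minimal sketch of submitting a job definition over XML-RPC, one of the interfaces LAVA exposes. The server URL, user name, and token are placeholders, and the job definition is heavily trimmed; a real job needs complete deploy, boot, and test actions pointing at artifacts that actually exist on your server.

```python
#!/usr/bin/env python3
"""Sketch: submit a trimmed-down job to a LAVA instance via XML-RPC.

Assumptions: lava.example.com, myuser, and my-api-token are all
placeholders, and the YAML below is only a skeleton of a real job.
"""
import xmlrpc.client

JOB_DEFINITION = """
device_type: qemu
job_name: kernelci-style boot test
visibility: public
priority: medium
timeouts:
  job: {minutes: 15}
actions:
- deploy:
    to: tmpfs
    images:
      rootfs:
        url: "https://example.com/rootfs.ext4"
        image_arg: "-drive file={rootfs}"
- boot:
    method: qemu
    media: tmpfs
"""

# Basic-auth credentials embedded in the URL; LAVA accepts user:token.
server = xmlrpc.client.ServerProxy(
    "https://myuser:my-api-token@lava.example.com/RPC2")
job_id = server.scheduler.submit_job(JOB_DEFINITION)
print("submitted LAVA job", job_id)
```

In the KernelCI pipeline this submission step is generated automatically from the test configurations; the point here is just that a lab is driven through an ordinary remote API.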
Maybe some tests will be relevant to both; I'm not sure, but the slightly higher-level ones will probably be block layer tests.

"Considering that KCIDB is now storing a lot of data, is there a plan to explore it?" The quick answer is yes. "What are KernelCI's plans for this volume of data, machine learning, data analysis? Do you already have ideas about what kind of information could be useful to extract?"

In the short term, we want to use it to generate a common email reporting all the testing being done on the upstream kernel. On kernel mailing lists, for one kernel revision you might get four or five different emails, each testing things in a slightly different way. If it keeps growing like that, it's going to be counterproductive, a bit of a disservice really, if you have to compare all these reports and try to assemble them to get the bigger picture. A single database and a single email makes everybody's lives a lot easier. That's the short-term goal.

Longer term, we can do things exactly like you've described. It's really big data analysis: we could use machine learning to detect, say, when a patch is more likely to cause a problem. There are a lot of things that can be done; we're just not quite there yet. We can also do things in the middle, things that are not too complicated but slightly more computation-intensive, such as looking for trends. If we collect data over a long time, say two or three kernel releases, we can see how things evolve, whether there are more bugs in rc1 than rc2 (the kind of thing you would expect), and see whether KernelCI has an impact on it. That's what we want to find out. But understanding what we can do with the data is part of dealing with the data, so we don't have all the answers yet. If you have some ideas, it would be great to share them.

Kevin says there's another talk. Yeah, exactly, thanks Kevin for mentioning that: there's a talk on Wednesday about how to test things with KernelCI.

"Is there an alternative to LAVA to bring your own hardware?" Yes. In principle, you can use any test system you want. Normally, the first thing to do is detect when a new KernelCI build binary is available. If you have an x86 board, or an arm64 board, or something with a specific defconfig you're looking for, you need to monitor for when KernelCI produces one. We're working on a way to notify labs so they don't have to keep polling, but right now, what non-LAVA labs do is check every now and again whether there's a new build; if there is, they download it, run some tests, do whatever they want with it, and produce some results. Then there's an API, and a tool called kci_data, which you can use to submit your results in a simple format. Yes, exactly: there is a KernelCI REST API that can be used to submit results.

Also, we have a shared Google doc where we can put some notes. I've put the link in the schedule site's description for this talk, and I'll copy it here again. I haven't put anything there yet, but I'm planning to copy some of the things we've discussed here, and feel free to add things there as well. Yeah, we have the link there. Any more questions?
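To give a rough idea of what that result submission might look like, here's a sketch that POSTs a single test result to a KernelCI-style backend. The endpoint URL, token, and field names are all placeholder assumptions; in practice you would use the kci_data tool from kernelci-core, which knows the real backend schema.

```python
#!/usr/bin/env python3
"""Sketch: push one test result to a KernelCI-style REST API.

Assumptions: api.example.com, the token, and every field name below
are placeholders standing in for the real backend schema.
Requires the third-party requests package (pip install requests).
"""
import requests

RESULT = {
    "name": "baseline.login",      # hypothetical test case name
    "status": "PASS",
    "kernel": "next-20201019",     # build the test ran against
    "arch": "arm64",
    "lab_name": "lab-example",     # hypothetical lab identifier
}

resp = requests.post(
    "https://api.example.com/test",             # placeholder URL
    json=RESULT,
    headers={"Authorization": "my-secret-token"},
    timeout=30,
)
resp.raise_for_status()
print("result submitted:", resp.status_code)
```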
I wonder how many people here are already familiar with KernelCI, whether you read the email reports regularly? Ah, we have a question: "Is it also possible to use labgrid for hardware testing?"

Yes, it's possible. I don't think anybody is using labgrid with KernelCI right now, but it's a discussion I've had with a few people before. What I explained a few minutes ago about detecting when a new KernelCI build is available, downloading it, and running your tests: you could be doing that with labgrid. As long as you know when the build is available, you feed it into your labgrid system, produce some results, and then you need some way of forwarding those results to KernelCI with the kci_data tool. You can also set up your own complete CI system, make your own builds from the kernel versions you care about, and submit the results to the common database if you want to be more autonomous. That's another way of doing it; it depends on how integrated you want to be.

So yeah, the glue is basically this. Like I said, there are two ways. The more integrated way, if you want to be like a LAVA lab, is to detect when a new build binary is available, feed it into your system, and have a way to send the results; the tool and the API to send a result already exist, so as long as you have a handler for when your test finishes, you can forward it. If you want to be less integrated, if you don't want to wait for the KernelCI builds and would rather make your own builds at your own pace, then you can do that and submit the results to the common reporting database, KCIDB.

The kernel builds are not specific to a lab. Of course, with ARM you have a lot of defconfigs tailored to one family, like the Exynos defconfig for Exynos platforms, but apart from those special cases the kernels are really generic. If you build the arm64 defconfig, it should work on any arm64 platform supported in mainline Linux; the same goes for x86, and for 32-bit ARM you have multi_v7. These are the main ones we build. If you look on the KernelCI website, you can easily see that a single build, say an arm64 one, can have literally hundreds of tests. I'm picking one here, from Amlogic; so I've just picked an arm64 build. You can see the details of the build, and there's a table at the bottom with all the tests that were run. There's a bunch of QEMU, but it was also run on a Raspberry Pi, a couple of Chromebooks (there are a lot of Chromebooks), a Pine A64 platform, and a Rockchip platform. These are all completely different kinds of hardware, but they're all arm64, all supported in mainline, and they all work with the same defconfig. They each have a different device tree, but that comes from the same build.

Oh, I see, sorry, I was typing. OK. Actually, I think that's one particularity of KernelCI: it's really building the upstream kernel, trying to be as generic as possible. Whereas a CI system that tests products will build a very specific defconfig tailored to the product, with just the things that are needed, optimized for size and everything.
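Here's a minimal sketch of that glue for a non-LAVA lab such as one driven by labgrid: poll for a new build, then hand it to your own test system. The storage URL layout and the run_in_my_lab() hook are hypothetical placeholders; check how builds are actually laid out on storage.kernelci.org for your tree of interest.

```python
#!/usr/bin/env python3
"""Sketch: poll for a new kernel build and feed it to a private lab.

Assumptions: the URL below is a placeholder for wherever your builds
land, and run_in_my_lab() stands in for whatever labgrid (or other)
hook actually boots the board and collects results.
"""
import time
import urllib.request

LATEST_URL = ("https://storage.example.org/mainline/master/"
              "latest/arm64/defconfig/Image")  # placeholder layout

def latest_build_id():
    # Use the Last-Modified header as a cheap "has anything changed" marker.
    req = urllib.request.Request(LATEST_URL, method="HEAD")
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.headers.get("Last-Modified")

def run_in_my_lab(kernel_url):
    # Hypothetical hook: boot the board via labgrid, run tests, then
    # forward the results with kci_data (see the earlier sketch).
    print("would now boot and test", kernel_url)

seen = None
while True:
    build = latest_build_id()
    if build != seen:
        seen = build
        run_in_my_lab(LATEST_URL)
    time.sleep(15 * 60)  # poll every 15 minutes
```

The notification mechanism mentioned above would replace the polling loop once it exists; the handler side stays the same.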
And it's useful to gather those results as well, because if you combine all of that, all the products being tested, all the distros being tested, every use case, every slightly more vertical or even completely vertical integration, you get a huge test coverage. The arm64 defconfig is not allnoconfig and it's not allyesconfig; it's just the default one, which boots on some platforms and gives you a login prompt, but maybe the GPU driver won't be turned on, or some other feature won't be enabled, and then you don't exercise all the code paths. Trying to enable all those things is difficult, and there are other projects trying to do that which we're working with as well.

The Linux kernel is a big project, and testing it is also a big project. We need to come up with a way of doing it that matches the complexity of the kernel, and that can only be done by having about as many people testing the kernel as there are working on it. So for the mainline kernel, we have the native tests in KernelCI; and if you're an OEM making your own products, you have your test system for your product, and you can contribute the results to KernelCI. That's the way it scales, basically. OK, thank you. I don't know how much time I have left; about 10 minutes, I think. Yeah.

"How long are the KernelCI test image artifacts kept for?" Ah yes, sorry, I missed another question earlier. The build artifacts are currently kept for, I think, four weeks. We could keep them longer if we had more storage, but in practice it hasn't been very useful to keep them beyond that: normally, if there's a bug people want to fix, having binaries older than four weeks would mean the bug has been there for more than four weeks, and you can always rebuild the kernels if you need something more ancient. However, the test results, the metadata, everything that's in the database, we're not erasing at all right now. We might have to archive some of it or do something else for performance reasons, but we're not planning to delete it. The build logs might get discarded, but which builds failed, the warnings that were there, and all the test results (which tests passed and failed, the regressions, everything like that) are kept in the database, normally forever, as long as we can.

"What are the benefits for project members?" Yeah, I've put this on one slide. I think the main benefit of being a Linux Foundation project member is that it's a way to concentrate the efforts and get the best chance of having a central system to improve the quality of the upstream Linux kernel. Rather than every member having their own test system and trying to work together ad hoc, KernelCI provides a framework around that: we provide infrastructure and we provide coordination. I think that's the best incentive. If you're not a member, you can still contribute: you can send your own test results, take part in discussions, contribute to the code, and do a lot of other things. But of course we need resources like servers, and we need some budget as well. To be fair, we haven't spent too much money so far, but that's because we've just started.
But some things do come with a cost. For example, if servers were not provided to us directly by members, we would need to pay for them, so having a budget for that is also a solution; that's just a very basic example. Having members benefits the project, and by benefiting the project you get improved quality of the upstream kernel. If you rely on the upstream kernel for your own products, it means you have fewer things to worry about downstream, and that's the big win. All the tests you're doing on your downstream kernel, if you have them running in KernelCI, or if you test stable or mainline and submit your results, the issues will be reported to the community and get fixed, so you don't have to deal with so many of them downstream. I hope that was clear enough.

OK, I'll copy a few more things into the document. If you think of a question after this session, you can always add it there; it's more of a brainstorming document, and maybe we'll make a blog post on the KernelCI website out of it at the end, or at least a few highlights.

Yeah, thanks. I'll read this comment out in case it's not recorded: "The project's ability to deal with all the data being collected, and to make sure it's enabling and supporting subsystem maintainers, really depends on more member companies joining." There's a slide in my deck with the members we have now. We don't have that many; we still only have the founding members, which is great, but of course there are a lot more people, companies, and organizations in the kernel ecosystem. The project would love to be able to invest in its big data. And the comment continues: "Improving the web UI is a project that is being explored; someone mentioned some ways to search having gone away." Yeah, so that's improving search on the web front end, and also improving the email reports and the users' experience generally.

"Is there any RISC-V hardware in KernelCI?" Yes; let's see, BayLibre have one board, and we've been building for RISC-V for a while. Let me get results from linux-next. The basic test in KernelCI is called "baseline"; it's a bit like a boot test, but it runs some checks to see whether the kernel had any errors, that kind of thing. I've put a link to some baseline results, and, oh, the RISC-V board is not there; maybe the build was broken in that linux-next revision. But you can search for it via the SoC tab: we have SiFive, OK, and it was tested on mainline. Yeah, we have the HiFive Unleashed here, and I think it's not booting; there's some hardware issue. So if you know how to fix RISC-V, maybe you can take a look. With that last link you get the full log in HTML. It's not booting at all, but it used to boot, I'm pretty sure of that; I'm not following what's going on there every day.

The quick answer is yes, there is RISC-V hardware. And if you have some RISC-V hardware, whether you have a test lab or you create one, you can connect it to KernelCI, and then all these kernel builds could be run on your board and we could run all the tests on it, like suspend and resume, and maybe LTP; kselftest is being added right now. Kevin says it's offline right now and should be back shortly. OK.
Thanks, Kevin. Looks like he has found something here; yeah, he's found a passing job. Of course, stable is more likely to work; I wasn't sure whether support for RISC-V was all merged in stable. Oh, I thought there was a link on the website; OK, we need to improve that. Let me show you the slide.

So the quick answer is: you have kernelci@groups.io, and I've put a slide together with some information like that. Why is it slow now? OK. Yeah, there's an IRC channel, #kernelci in one word, on Freenode. There's a more general mailing list as well; I don't remember the address, Tim Bird might know. It's another list that's used for all the testing systems around the Linux kernel. kernelci@groups.io is really just about this project; if you're interested in testing the upstream kernel in a wider way, there's the other one, the automated-testing list. Yeah, it's hosted as a Yocto Project mailing list, but it's really not specific to Yocto; it's more of an all-encompassing upstream kernel testing list.

Thanks; I'll read this out as well: "If you work for a company that should be a KernelCI member, please reach out internally. The project could really benefit in 2021 from more member companies, to achieve its mission and objectives." And that's something you can see on the main website.

OK, I think we're done now. I don't know when this is going to disconnect, but again, if you have any questions, you can reach out through all the channels we've mentioned. Yeah, OK. So thank you very much, everybody, for being here. It's been a really good discussion.