So we're here at Linaro Connect, and who are you? — I'm Jenschen from Khaiselikan.

Hi, and who are you? — I'm Khoopyao, also from Khaiselikan.

And who are you? — I'm Pyao from Huawei.

And who are you? — I'm Naresh Kamboju, Linux kernel validation engineer. I'm demonstrating this setup here.

This is the LKFT stuff? — Yes, Linux kernel functional validation. We are validating Linux kernels; that's how we are able to find bugs and report them upstream. This is a complete framework, and I'll explain it to you part by part. The problem is this: how do you validate your Linux kernel every day, when a lot of patches are coming in every day, every hour, every minute? You need a framework in place to validate all of that.

And a lot of people are using this now? — Not only Linaro. We took the initiative to create a framework ensuring that the long-term stable kernels are validated every time. Greg K-H pushes stable patches once or twice a week, and whenever there is a commit on the stable release candidates — currently we maintain 4.4, 4.9, 4.14, and 4.18 — the build system kicks in. We have Jenkins in place, and Jenkins runs the build test. Currently we are building for x86 and x86_64, ARM 32-bit and ARM 64-bit; these are the main architectures we are focusing on. Once the build test has passed, we do functional validation. Functional validation includes a boot test and functional test suites: LTP, kselftest, and libhugetlbfs.

So what do you do with this? — Come down here and I'll explain the architecture. Currently we are managing the kernel branches 4.4, 4.9, 4.14, and 4.18.
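The build stage described above is essentially a branch-by-architecture matrix, one build job per combination, similar to what a Jenkins matrix job would expand. A minimal sketch (the branch and architecture names come from the interview; the job-naming scheme is a hypothetical illustration, not Linaro's actual Jenkins configuration):

```python
from itertools import product

# Stable branches and target architectures mentioned in the interview.
BRANCHES = ["4.4", "4.9", "4.14", "4.18"]
ARCHES = ["x86", "x86_64", "arm32", "arm64"]

def build_jobs(branches=BRANCHES, arches=ARCHES):
    """Expand the branch x architecture build matrix, one job per pair."""
    return [
        {"branch": b, "arch": a, "job": f"linux-stable-{b}-{a}"}
        for b, a in product(branches, arches)
    ]

jobs = build_jobs()
print(len(jobs))  # 4 branches x 4 architectures = 16 build jobs
```

Every stable commit then triggers all sixteen builds, and only the combinations that build successfully move on to boot and functional testing.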
These are the long-term stable branches, followed by mainline and linux-next. This is Jenkins; Jenkins takes care of the builds. We are currently building ARM 32-bit, ARM 64-bit, x86_64, and x86. This is our build test: we make sure the build is not breaking, and if there is a breakage, we find it and report back to the maintainers of that particular kernel and subsystem. Once the build is done, we use OpenEmbedded as the user space, with prebuilt LTP, kselftest, and libhugetlbfs. We have 24 test jobs per device, I could say.

LAVA is a framework with a pool of devices. I can show you how many devices we have — here is the list: HiKey, Juno, BeagleBoard, x86_64 and 32-bit, and one QEMU configuration. This is a Dragonboard, and the BeagleBoard is the black one. And these are the numbers of test cases: around 20,000 test cases per kernel commit, you could call it.

So what is a test? How do you run a test? — You can see here we have kselftest with around 100 test cases, LTP with 3,000 test cases, and libhugetlbfs with 90. These are proper functional test cases, I could say. We have about ten device pools that we test on; that's how we validate every Linux kernel.

Going back to the slide: there is SQUAD in place. SQUAD pulls the results from LAVA. Once we have the results from LAVA, we compare the previous and the current results. If we find new passes or new fails, we go through triage of the bug and find out which kernel commit is causing the problem.
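The comparison step described here — pull the previous and current results and flag new passes and new fails — can be sketched like this. The result format and test names are made up for illustration; SQUAD's actual data model differs:

```python
def compare_runs(previous, current):
    """Compare two {test_name: 'pass'|'fail'} result dicts and
    return the tests that changed state, plus tests new in this run."""
    new_fails, new_passes, added = [], [], []
    for test, result in current.items():
        prev = previous.get(test)
        if prev is None:
            added.append(test)  # test did not exist in the previous run
        elif prev == "pass" and result == "fail":
            new_fails.append(test)  # regression: candidate for triage/bisect
        elif prev == "fail" and result == "pass":
            new_passes.append(test)
    return {"new_fails": new_fails, "new_passes": new_passes, "added": added}

# Hypothetical results from two consecutive kernel commits.
prev = {"ltp.mmap01": "pass", "kselftest.net": "pass", "hugetlb.shm": "fail"}
curr = {"ltp.mmap01": "fail", "kselftest.net": "pass", "hugetlb.shm": "pass",
        "ltp.clone02": "pass"}
print(compare_runs(prev, curr))
```

Only the `new_fails` bucket needs human attention; everything that stayed green is handled automatically.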
We have a setup to do the triage there. We make sure the functional test runs, and if there is a new failure, we go through the stack dump, the kernel trace logs, and the test outputs. From that we find out which kernel commit ID is causing the problem. Then we report to that commit's author, or the maintainer of that particular subsystem, saying this commit is causing the problem: could you revert the patch, or could you fix it? And we send emails to the open kernel mailing list or the stable mailing list. The maintainers then come and fix those issues. So we have a short turnaround time for all of this, with 20,000 test cases and ten devices in place. That's how we have been validating the Linux kernel functionally. We hope to keep optimizing the turnaround time, and recently we were also able to find many bugs, for example around memory and slabs. That's how we find bugs and record them, and the maintainers of the particular subsystems are happy to learn what's happening there; they can catch those bugs with a shorter turnaround time.

So what do you do? What is your job? — My job is ensuring that the system is in place and everything runs smoothly. I validate the reports from SQUAD, check which new failures are happening, go through them, and do a git bisect. Whenever we find a new failure, we git bisect to search for the bad commit, then report that bad commit to the maintainer and the mailing list.

It's very busy, right? It's not easy. — Yes, it's very busy.

So the whole world is interested in this? — Yes.
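The git bisect step mentioned here is a binary search over the commits between the last known-good and the first known-bad kernel. Its core logic can be sketched like this (the commit history and the test predicate are hypothetical; in practice `git bisect` runs this search for you, building and testing the kernel at each step):

```python
def bisect(commits, is_bad):
    """Binary search a linear commit history (oldest first) for the
    first bad commit, assuming the history flips from good to bad
    exactly once -- the same invariant `git bisect` relies on."""
    lo, hi = 0, len(commits) - 1  # lo side known good, hi side known bad
    first_bad = commits[hi]
    while lo <= hi:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            first_bad = commits[mid]
            hi = mid - 1  # bad here: first bad commit is here or earlier
        else:
            lo = mid + 1  # good here: first bad commit is later
    return first_bad

# Hypothetical history c0..c9 with a regression introduced at c6.
history = [f"c{i}" for i in range(10)]
print(bisect(history, lambda c: int(c[1:]) >= 6))  # -> c6
```

With roughly log2(N) build-and-boot cycles per regression, this is what keeps the triage turnaround short even across thousands of commits.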
We have a team of six people who are responsible for builds, boots, maintaining LAVA, triaging, and reporting. I'm involved in the reporting — marking commits good or bad and sending out the reports.

And every company in the world that works on Linux is using this? — Not yet, but most people are adopting this framework nowadays, and they are following it as well. We see new people coming up, asking questions, and trying to incorporate their member boards into our systems.

Is this the successor of LAVA, or is this built on top of LAVA? — LAVA is a subset of this framework, I could say, because LAVA is the device pool; it just runs the tests on a given board. Testing is one part, but validating and comparing results and getting a report out of it is pretty time-consuming work, I could say.

Let me take you here. Say you have kselftest, and one of its network test scripts. I'd like to look at the history of this test case: how it behaves on multiple trees — Linux mainline, linux-next, and stable 4.14, 4.4, and 4.9. We have a dashboard for this. The dashboard shows that on Linux mainline it's green, meaning it's passing. Moving on to the next tree, there are still gaps because not all results are in yet, but a few jobs have finished already and you can see they pass.
If you go down, this test case is failing on 4.14, and we know the reason: it's a new test case that is only supposed to work on 4.18 and later. That's why it works on 4.18 but not on 4.14 — the test case is intended for the latest and greatest trees, so either the test case or the feature would have to be backported.

And is it all automatic — you can just test whether it's working? — Most of the cases are automatic, but when you have to work out why something doesn't pass on the old kernels, or why it's failing on the new kernels, you have to do a comparison: go through the kernel stack dumps and the kernel crash logs and do a git bisect. So it's automation plus manual effort. We cannot do everything with automation, because it might send out false positive reports. We make sure we are not sending out false reports: only once we understand what's happening and sign off do we send the reports to the mailing lists.