All right, well, why don't we go ahead and get started? So thanks, everybody, for coming today. This is Keeping Up with LTS: Linux Kernel Functional Testing on Devices. And just to introduce myself quickly, my name is Tom Gall. I'm the director of the Linaro Mobile Group. And I want to start by touching very quickly on what Linaro is and who we are. We're a consortium of companies that get together and do collaborative engineering. The idea with Linaro is that you have these member companies, they decide, hey, this is a priority in the open source world, they throw in some of their employees, we have some of our own Linaro employees, and we work on various projects and solve problems, if you will. We're primarily involved with the ARM ecosystem. That's really our big area of focus. But we work across a wide variety of open source projects, everything from the Linux kernel, as you'd expect, and Android, down to smaller-name things that are strategic in various ways. And this particular project, what I'm going to talk about, is LKFT and something called Project Sharp. Starting out with a little bit of background: when it comes to the Linux kernel on devices, and specifically the Android space, of course we've got the Android Common Kernel. And the Android Common Kernel these days largely tracks the LTS kernel releases. So that's 4.4, 4.9, and now 4.14. It also tracks mainline as well. Upstream of that, of course, you've got the LTS community, which is working on everything from the current 4.15 stable to the 4.4 long term, and then mainline itself and linux-next. So you end up having this almost waterfall of past kernels that are still getting fixes applied to them in their LTS versions, as well as mainline, which of course is everybody's main focus. All that said, if you go and take a look at Android devices today: I've got a couple of screenshots here from a couple of devices, and I'm not going to name names or shame anybody, that's really not the purpose of the talk. But one of these, and it's a bit of an eye test, is running a 4.4.13 kernel. And if you look really, really closely, they built that kernel on January 25th of this year. Now 4.4.13, if you don't remember, came out on June 8th of 2016. That was a long time ago. Now certainly, this is a responsible company. They've been cherry-picking security fixes and pulling down some things which are relevant. But they've left a lot of the LTS fixes behind. That's not great news. Now the other one here is better. It's a 4.4.78, but still, that goes all the way back to July 21st of 2017. So that's almost nine months ago; that device is still missing nine months of LTS fixes. And the thing about LTS in particular is that the LTS project, the long-term support project, doesn't do anything to label security fixes. So if there's a CVE or something like that, sure, that's an obvious thing. But there can be fixes that go into LTS where a fix for a plain bug is also a fix for a security issue, and it may not be labeled as such. So unless you're really, really smart in picking up some of that stuff, you can be missing things.
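To put a number on that kind of lag, here's a minimal, illustrative sketch (not part of LKFT) that compares a device's kernel version, as you'd get from uname -r, against the newest release on the same LTS branch. It uses kernel.org's public releases.json feed; the field names ("releases", "version") are assumptions about that feed's current layout.

```python
# Minimal sketch: how far behind the current LTS release is a device kernel?
# Uses kernel.org's public releases.json feed; field names are assumptions.
import json
import urllib.request

def latest_on_branch(branch):
    """Return the newest kernel.org release whose version starts with branch."""
    with urllib.request.urlopen("https://www.kernel.org/releases.json") as resp:
        feed = json.load(resp)
    for rel in feed["releases"]:
        if rel["version"].startswith(branch + "."):
            return rel["version"]
    return None

device_version = "4.4.78"  # e.g. parsed from `uname -r` on the device
branch = ".".join(device_version.split(".")[:2])  # -> "4.4"
latest = latest_on_branch(branch)
if latest and latest != device_version:
    print(f"device runs {device_version}, but {latest} is current on {branch}.y")
```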
The other thing that's very important to keep in mind is that when LTS as a project releases, it tests all of the included patches together, as one tarball, as of one release. It doesn't test anything having to do with cherry-picking. So when you're taking a patch out of the context of LTS, you're really losing out on all of the testing and kernel engineering that's happened in and around that particular patch. So that's an area of risk. OK, so all that said, let's get into the point of the project and what I really want to talk about today, and that's something called Project Sharp and something called LKFT. The goal of these two activities together is to catch kernel regressions. What we want to do, across the architectures that are using LTS kernels and their direct downstreams like the Android Common Kernel, is essentially make a promise that we're not breaking the kernel: we're not introducing any regressions, and that kernel is just as good as the prior LTS release. This goes across 4.4, 4.9, 4.14, current stable, and mainline; we want to be vigilant in watching for regressions. It also spans architectures. So while Linaro is primarily interested in the ARM architecture, we knew when we took on this project that we couldn't just test on ARM and be taken seriously in what we were trying to do. We also had to include x86, and we also have to be open to other architectures as well. And this project was essentially slideware one year ago, so what I'm going to be talking about is what's been accomplished in that period of time. The other thing that we know is very important is the choice of compilers. Now, for the most part, across the Linux universe, GCC is the reigning king of compilers. But in the Android space, as it turns out, the switchover to Clang began quite a while ago for applications, and now it's happening for kernel development as well. So when it comes to finding kernel regressions, we need to take into account multiple compiler versions as well as multiple compilers themselves, between Clang and GCC. The other thing that we set as a goal is that we have to keep up with the LTS community as a project. So when an RC gets introduced, that is, a set of candidate patches that will constitute the next LTS release, we basically have about a 48-hour window to pick up all of those patches, build them for all the architecture and OS combinations that we're going to test, submit those tests, get the results, and then do some level of initial triage on the data that comes back (I'll sketch the watch-and-trigger step in a moment). And if there are issues, we need a pretty bright line to report what the regression is to be effective in the LTS community. So that was a very key thing that we needed to do. Now, by taking on this kind of activity, this helps make LTS kernels more viable, as I like to say, but it also makes them more realistic for long-term projects and products and those kinds of things. There have been a few trade press stories about the 4.4 LTS kernel lasting for six years. We don't want to see that just for the 4.4 kernel. We also want to see it for 4.9, and we also want to see it for 4.14, so that when a company is putting out a phone or something else in this space, they know they can rely on a fix stream for any one of those LTS kernel releases to be around and seeing fixes for some time.
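Here's that watch-and-trigger sketch: a minimal picture of where the 48-hour clock starts, polling the linux-stable-rc mirror for a branch head change and kicking a build. The Jenkins host and job name are placeholders, not Linaro's real setup, and a production version would persist state and handle authentication.

```python
# Minimal sketch of the watch-and-trigger step: notice that a stable-rc
# branch moved, then kick a build. The Jenkins host and job are placeholders;
# a real deployment would persist last_seen and handle auth/CSRF crumbs.
import subprocess
import urllib.parse
import urllib.request

REPO = "https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git"
BRANCH = "linux-4.14.y"
JENKINS = "https://jenkins.example.org"  # placeholder, not Linaro's real host

def branch_head(repo, branch):
    """Ask the remote for a branch's current commit, without cloning."""
    out = subprocess.check_output(["git", "ls-remote", repo, f"refs/heads/{branch}"])
    return out.split()[0].decode() if out else None

def trigger_build(sha):
    """POST to Jenkins' standard buildWithParameters endpoint."""
    params = urllib.parse.urlencode({"KERNEL_SHA": sha, "BRANCH": BRANCH}).encode()
    urllib.request.urlopen(f"{JENKINS}/job/lkft-build/buildWithParameters", data=params)

last_seen = None  # in practice this would be persisted between polls
head = branch_head(REPO, BRANCH)
if head and head != last_seen:
    trigger_build(head)
```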
Okay, and then when you have a set of data coming from the testing activity, the other thing that we need to do as a project, as I alluded to earlier, is be able to triage that data. We also want to keep track of that data for historical purposes, and we'll see in a minute why that's interesting and important and empowering for developers. But it also means that we ourselves have to, at a minimum, do the triage. We have to sign ourselves up for that on a week-to-week basis, and while we may be looking for help from the community, we know that we may or may not get it. Okay, now, as with science, there's the common saying you've probably heard, that we all stand on the shoulders of giants, and there are numerous other kernel testing activities out there. So when we started LKFT, one of the things we had to ask ourselves was: do we start from KernelCI? Do we build this capability in KernelCI, or do we start with something new? Which set of pieces were we going to use? We had to go through this initial exercise as far as what was the right thing to get the project off the ground, get results, and be effective as quickly as possible. And for us, unfortunately, that meant we needed to make an initial break from KernelCI, but we knew going forward that we wanted these two projects to come back together again at some point in time, as that made sense. KernelCI, for those who may not be fully up to speed on what it does: primarily, they're doing boot testing. They have a minimal amount of user space, or operating system, sitting on top of the kernels they're testing. Now, they have gone so far as to start to enable kselftest, but when we're running test suites like LTP and stuff like that, these are things that tend to have a lot of software requirements in order to run. So that means we need a substantial OS that's gonna be able to support those test suites and work efficiently, as opposed to KernelCI's, which is fairly cut down. The other thing about KernelCI is, of course, that it is a community, and so they have community-driven goals and community-driven consensus. For us, we wanted to hit the ground running as quickly as possible, so we figured if we were gonna be working through KernelCI, we might be on a little bit of a slower track. Maybe that was the wrong assumption on our part, but it was a bit of a concern for us. Another thing that was important to us was the hardware side of things. KernelCI has a very vast set of boards within their farms; they really have an incredible amount of hardware at their beck and call. We knew, because of what we wanted to test, how we wanted to test, and the operating systems involved, that it was probably gonna be a small subset of boards. So, again, that's a little bit different from what KernelCI does. Okay, so moving right along: LKFT. I'm not gonna read this chart here, but it's certainly available on the slides attached to the schedule. We really wanted to make sure we would set up a system that was gonna go from Git repositories to emailed results, and set up a whole infrastructure around that. That infrastructure looks largely like this: you have a Git tree, which is ultimately monitored.
So in the case of the LTS trees, those are Greg Kroah-Hartman's trees; in the case of the Android Common Kernel trees that we work with, those of course come from the AOSP project. As those things change, we detect the change and kick off builds inside of Jenkins. That results in a bunch of test jobs, which are then shipped off to LAVA, and LAVA dispatches them across our board farm. In the case of larger, more complicated test suites like LTP, the tests are sharded, so that we're running subsections of LTP at a time. We're not running all of LTP on one board; we chunk it up into pieces. Those results then get pulled into a database. That database is made available to something called SQUAD, and SQUAD puts out both a web UI and an email report that ultimately goes up to the upstream community. I should say that in the early days, we didn't just kick out the report. We would hold it back and not quite trust it, and really look out for things like false positives and flaky test cases, all sorts of things; we were a little distrusting of the system until we had put a lot of time under our belt. So here we are a year later, we're very confident in our results, and from an RC out to reported results, we're looking at about an eight-hour turnaround time. It's taken quite a bit to get there. The infrastructure I just talked about is almost completely open. It's all open-source pieces, and the individual parts of the system are all out on the web. So if you go to ci.linaro.org, you can see the jobs as they come in. You can watch the board farm at lkft.validation.linaro.org, and then there are the reports that get kicked out. You can go directly to the web interface, or, if you watch the Linux stable project, you'll see those things get reported by the Linaro folks; there are a few people who will post them by hand, even though they're kicked out by the system automatically. Okay, so just to put this in statistical terms: any time a 4.9, 4.14, 4.15, mainline, or linux-next set of changes lands, that kicks off some set of builds. Those builds depend on the board and operating system combinations, and of course the kernel version, as well as the test suite version that you're ultimately going to run. So you've got several different things that you don't want changed, or where you want to minimize change, and you have to put all of those together into a build. Once those builds are all done, or as they each complete, we kick them off into the LAVA farm; that's ultimately about 20 LAVA jobs per kernel version, and then about 5,500 individual tests that are ultimately going to be run, again per kernel version. So it's a lot of activity: when you see a new LTS release happen, or a new RC out on mainline or what have you, it generates some activity.
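The sharding step itself is simple in principle; here's a minimal sketch of the idea. The round-robin split and the stand-in test names are illustrative, not LKFT's exact scheduler:

```python
# Minimal sketch of test sharding: split one large suite into chunks so each
# LAVA job (and therefore each board) runs only a subsection at a time.
def shard(tests, num_shards):
    """Deal tests round-robin into num_shards roughly equal lists."""
    shards = [[] for _ in range(num_shards)]
    for i, test in enumerate(tests):
        shards[i % num_shards].append(test)
    return shards

ltp_syscalls = [f"syscall_case_{n}" for n in range(5500)]  # stand-in test names
for n, chunk in enumerate(shard(ltp_syscalls, 20)):
    print(f"shard {n}: {len(chunk)} tests")  # each chunk would become one LAVA job
```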
That activity shows up in this, so this is LKFT. It's a bit of an eye chart, but down at the bottom there's a list of boards that our data's getting kicked off of; this is a screenshot of an active day in February. At this time, we've got some B2260 boards from ST, we've got the DragonBoard 410c from Qualcomm, we've got the HiKey board, which is a HiSilicon device, and a Juno R2, which is a dev board from ARM; they're fairly expensive, but they're nice boards. We've got QEMU, and in this case we do QEMU for x86, but we're about to get both 32-bit and 64-bit ARM running in the QEMU environment as well. The X15 is a 32-bit ARM device from TI, and then of course there are good old bog-standard x86 devices. And of course these queues will fill up as the LAVA jobs land, things get dispatched, and ultimately we'll start to see some results. So now I kinda wanna go into the reporting side of the talk, and what I wanna do is talk a little bit about some of our experiences along the way, building the system, and what we've discovered. One of the things that hit us early on is that different dev boards with different connectors are actually hard to scale up. We have a standard called 96Boards, and the nice thing about 96Boards is that all of the connectors are in the same place, so everything is more or less standard. That makes that particular board design really nice and really easy to set up in a system like this; that's a really good quality to have when you're trying to build a system like this at scale. Reliability doesn't just happen. You end up having to write health checks (I'll sketch one at the end of this section), and you need above-average hardware. We had a problem because we had low-quality USB cables at one time, and how that shows up in the environment is, well, 10% of your test jobs are just falling over and exploding in large mushroom clouds off into the ditch. What's going on? Is this a kernel bug? Is this a test case bug? Trying to ferret that out is really annoying until you realize, oh, this cable here is a piece of junk; throw it away and pick something that's actually decent quality. USB hubs were another thing that turned out to be very important. We have seen variability in USB hubs where low-quality ones will again cause boards to look like they're failing, when in fact it's not the board's problem, it's the hub's fault. So that was something where, again, skimping on hardware really cost us time and energy, and we had to learn through the school of hard knocks not to skimp on it. Firmware updates: a board manufacturer will come along and say, well, we've got new and updated firmware; maybe it manages power a little better than the old one did, or does something better with booting, or what have you. Changes in that firmware can just cause us to go crazy, because oftentimes those firmware changes include changes to the interfaces, and those interfaces are something the system ultimately integrates with. So if anything changed in, for instance, how you partition the board, or ship something down to it, or otherwise work with that board, the environment now has to catch up to how the firmware just changed, and that takes time. Somebody's gotta come up to speed on what those changes are and what they mean inside of the system itself. So we would love to see standardization in that space in particular, but clearly we're not there yet.
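As the health-check sketch I promised: one cheap way to keep bad cables and hubs from masquerading as kernel bugs is to poke a board's serial console before trusting its results. This is a minimal illustration, assuming pyserial; the port name and prompt are assumptions about one particular lab setup, not LKFT's actual checks.

```python
# Minimal health-check sketch: before blaming the kernel, poke the board's
# serial console and make sure something sane answers. Assumes pyserial
# (pip install pyserial); port name and prompt are lab-specific assumptions.
import serial

def board_alive(port="/dev/ttyUSB0", prompt=b"login:"):
    """Open the console, nudge it, and look for a known prompt."""
    with serial.Serial(port, baudrate=115200, timeout=5) as console:
        console.write(b"\n")
        reply = console.read(256)
    return prompt in reply

if not board_alive():
    print("health check failed: suspect cable, hub, or firmware before the kernel")
```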
Okay, another area from a device perspective: for each board that we connect into the system, there are actually four cables going into that particular device. You've got serial, you've got OTG, you've got Ethernet, and you've got power; ideally you'd just have Wi-Fi, and there's the power brick you've gotta deal with too. That ends up being a mess of spaghetti that is not fun to deal with in a lab environment. Okay, so the other area I wanna talk a little bit about is some of the test suites that we use. kselftest was a natural one that we turned to. This is, of course, the Linux kernel's internal test suite. It's a good test suite. It's still not something that all of the Linux kernel sub-maintainers contribute to or necessarily believe in, but that's hopefully something that will get figured out in time. One of the things that we do, though, is use the latest version of kselftest across the board. So that means we'll take a 4.15 version of kselftest and run it on top of 4.4, and run it on top of 4.9. That was a little bit of a controversial choice, and we had gone out to the community and said, tell us what the right thing to do is, give us some direction here. And it doesn't seem like there's really a good answer one way or the other, because the problem is that with kselftest, we've noticed there aren't necessarily patches going into, say, a 4.4 kernel to update kselftest as fixes for particular problems land in the kernel. So what do you do? You end up using aggressive skip lists for tests in kselftest that are just not appropriate for that particular kernel version; there's a little sketch of that below. It's just the way it goes. Now, across kselftest itself, there are a lot of inconsistencies in how the tests are designed. Sometimes, with the setup required for those kselftests, it's not necessarily intuitively obvious what kernel configurations you have to turn on. So working with kselftest can be a bit of an exercise in frustration, but it's still well worth it. And as a body of tests, if you've got time to put effort into patches to improve kselftest, please do; you are helping the universe at large when you do so. Take that into account if you've got some spare cycles. Another suite that we use is LTP. LTP approaches things from the opposite angle: where kselftest is ingrained in the kernel and tests from the inside, LTP essentially comes down from user space. It exercises things like syscalls and some of the environmental stuff that sits on top of Linux, and it tries to tickle the kernel and find error cases in doing so. It's a test suite that's reasonably mature; it's still updated about every four months, and as a test environment, we've been fairly happy with it. The one downside to LTP is that when we want to do things with Android, there are vast swaths of LTP tests that just don't run on any operating system but traditional Linux itself, and it would be nice if that weren't the case. Okay, so as I mentioned, when it comes to individual test suites, we see some best practices, and some things we'd like to see generally cleaned up.
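Coming back to those aggressive skip lists for a moment: the mechanism itself is simple. Here's a minimal sketch; the one-name-per-line skip file format and the test names are illustrative assumptions, but the key point is that skipped tests stay visible in the report rather than silently disappearing.

```python
# Minimal sketch of an aggressive skip list: when running current kselftest
# against an older LTS kernel, drop tests known not to apply there. The
# one-name-per-line skip file format and test names are assumptions.
def load_skiplist(path):
    """Read a skip file: one test name per line, '#' starts a comment."""
    with open(path) as f:
        return {line.strip() for line in f
                if line.strip() and not line.startswith("#")}

def filter_tests(tests, skiplist):
    """Partition into (to_run, skipped) so skips stay visible in reports."""
    to_run = [t for t in tests if t not in skiplist]
    skipped = [t for t in tests if t in skiplist]
    return to_run, skipped

tests = ["bpf.test_verifier", "net.psock_fanout", "vm.hugetlb"]
to_run, skipped = filter_tests(tests, {"bpf.test_verifier"})  # say, not valid on 4.4
print(f"running {len(to_run)}, skipping {len(skipped)} (reported as skips)")
```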
We do run CTS, and we do run VTS from the land of Android. VTS in particular contains internal copies of kselftest and LTP, highly selective as far as what it runs. It doesn't run all of those test suites; it pares them down to something that's appropriate for the Android environment. And this reflects back to what I mentioned before with LTP, in so much as it makes assumptions about the hardware it's running on. For instance, sometimes a test will go out and make a three-gig file. Well, if you're on a small embedded device, a three-gig file may not fit into the eMMC that you have on the device. So you run into these tests and have to throw them out, because they're just not appropriate for the environment. Unfortunately, across a lot of these test suites, there's just no unified standard for reporting results and logs and errors and skip lists. So in designing a system like this, we had to munge things together. If you were at the talk earlier today by Tim Bird, for instance, he was pointing out that tests really need a universal ID, so that when you do comparisons, you can do them effectively by knowing, okay, this test is really the same test; you just ran it on two different architectures. That's something that should not be hard, and it would be really good if, again, we could find some sort of standard; I'll sketch the ID idea at the end of this section. On the subject of reporting, it's interesting that even VTS and CTS, though they're both from the Android universe, approach reporting completely differently, and that results in inconsistencies. kselftest, when it runs, throws its logs into /tmp. These things, layer by layer, you have to tear apart and work with. Okay, so let's move on to Android. I've mentioned that we work with the Android Common Kernel, and of course this is everything from mainline all the way through the LTS versions: 4.4, 4.9, 4.14. The thing about the Android Common Kernel is that it's largely in sync with LTS. So when a new LTS release comes out, the Android Common Kernel rebases onto it, but it then carries a set of out-of-tree kernel patches. That set of out-of-tree patches has been largely decreasing in size, which is generally a good thing, but it still means the Android Common Kernel is different from upstream Linux. And what we've found in our testing to detect kernel regressions is that OpenEmbedded is the operating system we put the most time and effort into for finding problems. Anything that we find there typically isn't going to spill over into the Android Common Kernel; anything that then shows up as a regression in the Android Common Kernel tends to actually come from that set of out-of-tree kernel patches. So it's kind of a nice way that things work out. But we have a bit of a blind spot because, as I mentioned, VTS contains a subset of LTP and a subset of kselftest, since it's just not able to run everything. So this doesn't quite feel right. We'd like to have parity in our testing, so that it doesn't matter whether we're on Android or on OpenEmbedded, we can run everything. But at least today, OpenEmbedded is really our lead operating system, and it does seem to do a really good job of finding issues.
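And here's that universal test ID sketch: a minimal picture of the kind of name normalization a system like this ends up doing by hand today, mapping each framework's naming onto one suite/name form so the same test compares equal across runs and architectures. The prefixes are illustrative, not a proposed standard.

```python
# Minimal sketch of a "universal test ID": map each framework's naming onto
# one suite/name form so the same test compares equal across runs and
# architectures. The prefixes here are illustrative, not a proposed standard.
def normalize(suite, raw_name):
    """Lower-case, strip framework-specific prefixes, join as suite/name."""
    name = raw_name.strip().lower()
    for prefix in ("kselftest.", "ltp-", "vts_"):  # illustrative prefixes only
        if name.startswith(prefix):
            name = name[len(prefix):]
    return f"{suite}/{name}"

# The same underlying test, reported two different ways, gets one ID:
print(normalize("ltp", "LTP-fanotify06"))  # -> ltp/fanotify06
print(normalize("ltp", "fanotify06"))      # -> ltp/fanotify06
```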
Okay, so as I mentioned before, when it comes to keeping up with LTS as a project, there are maybe one to two, sometimes three LTS releases in an average week. And as I mentioned, we've got that 48-hour window to deal with, and typically our turnaround time is such that we'll get results back in about eight hours. So if you think about it, there are really two ways to look at that process. There's building, running, and results; that's the flow if everything comes back green and everything worked out really, really well. And then there's the situation where you have an error and you have to figure out what's going on. Did something go wrong with your infrastructure? Did something go wrong with the board? Did something go wrong for a particular family, say an error showed up across all of the ARM architecture? Maybe it was something specific to one board and one particular kernel. So you have to look at the data in a number of contexts and figure out what's going on, and ultimately you want to bisect and end up with a fix that you can report back. That's really, really hard to do in a 48-hour window. Matter of fact, I would say that unless it's something pretty obvious, it tends to be next to impossible. You can bisect to say that this is probably the bad patch, but taking it that next mile and saying, well, here's exactly what you need to do to fix it, that's where the difficulty comes in. Okay, so I want to start going through some examples of what LKFT puts out and ultimately what we report. The very first thing I want to start with is our most important reporting mechanism, and that's email. So if you're on the stable Linux kernel mailing list, stable@vger.kernel.org, what you'll see occasionally is these reports popping up from LKFT. And again, it's a little bit of an eye test, but what we try to do in this email summary is say, right at the very top: everything worked, no regressions, no need to look any further in this email. That's all you need to know; everything just worked. If there is a failure, then we'll pop that up at the top, and we'll try to put in as much detail about what's going on as we can, to help the community at large figure it out. What's kind of neat here, too, is that if you look down the list, there's a breakdown of how we shard tests. You can see LTP's filecaps tests, and fs tests, and hugetlb tests, and io, and so on and so forth. That's something we could probably roll up in the report a little bit better, but right now we've kept it broken out just to give everybody an idea of exactly what it is we're running. And then we break it down by the architecture and board combinations, which, since we've added more and more boards, has made that email pretty long. So again, that's something we probably need to look at.
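That top-of-email verdict boils down to a comparison like the following minimal sketch, where a regression is a test that passed on the previous build and fails on the current one, per board and kernel context. The test names and result encoding here are illustrative, not SQUAD's actual data model.

```python
# Minimal sketch of the triage baseline: a regression is a test that passed
# on the previous release and fails on this one. Names/encoding illustrative.
def find_regressions(previous, current):
    """Both arguments map test-id -> 'pass' | 'fail' | 'skip'."""
    return sorted(
        test for test, result in current.items()
        if result == "fail" and previous.get(test) == "pass"
    )

prev = {"ltp/fanotify06": "pass", "kselftest/vm.hugetlb": "pass"}
curr = {"ltp/fanotify06": "fail", "kselftest/vm.hugetlb": "pass"}
print(find_regressions(prev, curr))  # -> ['ltp/fanotify06'], top of the email
```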
Okay, so besides email, we also have a web UI. This is qa-reports.linaro.org, and this is an example page, specifically from 4.14. If we had the whole window here, we'd go down to 4.14.19 and so on and so forth through time; we keep a long history of what's been going on. In one of these, at the very top of the green one, you can see all the test runs that are there, and it's all available via live links. The other thing we have here, about in the middle of the page, is the metadata that goes with this particular build. We think it's very, very important that we give enough information: the commit IDs, the exact branches that were in use, and that includes not only the kernel but the test suites that were involved, the whole kit and caboodle, so that somebody can go and replicate this entire build outside of our own lab. They don't have to pick up our binary images to replicate everything; they can rebuild it from source if they want to. Reproducibility is just huge in our world. Okay, so now for the case where there's an error, I've pulled out an example, again from back in February, a point in time when eight failures had popped out. If you're a triage engineer looking at this and wondering what's going on, you can click on it, and that will take you to a report page that looks about like this. This is one of those cases where you're looking for trends early on at this stage. You're asking: is this something that's just affecting one board? Is it affecting a whole architecture? Is it affecting everybody, in which case maybe this is a generic Linux kernel issue, and the importance of getting to the bottom of it increases? The other thing you can see on this particular page is, just for the LTP syscalls, how many tests there were and how many passed, and you'll also notice that there are skips. The reason those skip numbers are so big is that there are tests in LTP which are not appropriate for all architectures, and that's all conglomerated together in the numbers you're seeing. So now let's look at a particular test run, one of the particular failures; this is the 32-bit one. The test environment is X15, which, as I mentioned before, is a 32-bit TI board. Here are our two tests that failed. Now in this case, it's actually not two tests, it's just one: runltp syscalls is the name of the shell script that runs and kicks off a number of activities, and it's one of those sub-activities that failed. That's the fanotify06 that actually failed. The nice thing that we've got in this environment is that you can click on the link on the right-hand side that says show info, and you'll see the log for that particular test run, for that individual test case. So you're not looking at some huge file if you don't want to; you can zero in on the individual data. Another view that we consider very, very important is historical information. Because, as it turns out, there are test suites that have flaky tests. And it'll be the phase of the moon, different weather conditions, sunspots, whatever you want to call it: that test will work, and then it'll fail, and then it'll work for six weeks, and then it'll fail. Tracking those things down is really not any fun; it's an exercise in frustration. Now in this case, we're looking at something where we've got a real bright line. It fails all across the board, and it was working prior on at least the ARM architectures, while on some of the others it was a skip. That probably brings up an interesting situation: we went from something that was being skipped on a 32-bit ARM device to something that's failing. We'll get to that in a minute.
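Since that history view is what separates flaky tests from real bright-line regressions, here's a minimal sketch of how one might classify a test's history. The thresholds are arbitrary illustration, not LKFT's actual heuristics.

```python
# Minimal sketch: classify a test's result history. A result that flips back
# and forth is flaky; one clean pass->fail edge is a "bright line" worth
# bisecting. Thresholds are arbitrary illustration, not LKFT's heuristics.
def classify(history):
    """history is a list of 'pass'/'fail' results, oldest first."""
    flips = sum(1 for a, b in zip(history, history[1:]) if a != b)
    if flips >= 3:
        return "flaky"                   # bounces around: distrust, don't bisect yet
    if flips == 1 and history[-1] == "fail":
        return "bright-line regression"  # one clean transition: worth bisecting
    return "stable"

print(classify(["pass"] * 6 + ["fail", "fail"]))           # bright-line regression
print(classify(["pass", "fail", "pass", "fail", "pass"]))  # flaky
```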
Okay, so as a triage engineer, what we do is have our team go out and create Bugzilla entries, and we start to collect data and get everything together in preparation for handing off to the appropriate teams. Because if you have a kernel error that's specific to one particular board, what you want to do is bring in the team that supports that board and hand it off to them, because it's probably not a generic kernel issue; it's probably something specific to their environment. So anyway, this is the particular bug entry that went with this particular LTP failure, and it brings up another part of what I wanted to call out: this was a test case issue. If you noticed, back on that trend page, on this one right here, it wasn't running on X15, and then suddenly it was running on X15, or at least it had been attempted, and it failed. Well, that was because the test had been updated. Our system right now doesn't really have a great way of calling out that, oh, by the way, we just upgraded LTP, so the test case might have changed, and therefore something might have come from that. But in this case right here, there was a small update to the test. This is actually the fix that was on the LTP mailing list, and so this one was just a test case issue. It was not something involving the Linux kernel in general; it was not a regression, and for the most part, not a big deal. But that's not always the case. This next one was from the hugetlbfs tests: we noticed a failure in the hugetlbfs tests that we run, and it was something we were able to bisect (I'll show a little sketch of that kind of bisect at the end of this section). We went ahead and threw this out on the Linux kernel mailing list and notified the maintainer: hey, thank you so much for the report. We did our jobs, and so this bad patch was found, it was fixed, everything was kosher, which was great. And that brings me to another thing I really wanna point out: while we're testing LTS, and we're testing mainline, and we're testing Android Common, the place where we find the most regressions is actually mainline. So, just like you would expect, the most interesting place to do Linux kernel development is on the mainline kernel; no surprise there. And you can see the regular cycle: when RCs open up with RC1, you have the most regressions popping out, and that gets detected by the system. As you go through the RC cycles, down to RC6 and RC7, those things disappear. So I think that's affirming that the system is finding the things it should be seeing, and that helps solidify in our minds that this is a good way to detect kernel regressions. Now, we don't run every test suite on the planet; we think there are a lot of test suites that probably still need to be written, more to be done, but at least this is a good start.
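Here's the little bisect sketch I promised, automating the hunt with standard git bisect run. The repository, tags, and test script are placeholders; the script has to build, boot, and test each candidate kernel, returning 0 for good and nonzero for bad.

```python
# Minimal sketch: drive `git bisect run` from Python. All names are
# placeholders; the real work lives in the test script, which must
# exit 0 for a good kernel and nonzero (except 125 = skip) for a bad one.
import subprocess

def bisect(repo, good_tag, bad_tag, test_script):
    def git(*args):
        subprocess.run(["git", "-C", repo, *args], check=True)
    git("bisect", "start", bad_tag, good_tag)  # note: bad first, then good
    git("bisect", "run", test_script)          # git steps through candidates
    git("bisect", "reset")

# e.g. bisect("linux", "v4.15-rc1", "v4.15-rc2", "./build_boot_run_hugetlb.sh")
```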
Okay, getting involved. The biggest area where I think people can get involved and help out is actually in the upstream world. There's the Linux stable list. So when an RC comes out, taking a crack at those patches, picking them up, compiling those kernels, and working with them in your world, that's a contribution to the community, and it's really meaningful. Working with the mainline RCs, same thing; that's a really valuable thing to go off and spend time and effort on. Improving tests and the test suites themselves is also a very, very good thing to do. So getting involved with kselftest, getting involved with LTP, helping out a kernel maintainer who has a set of kselftests, or maybe doesn't have any kselftests at all; maybe you can help convince them to add some, and you can just give them tests. That's a really, really good thing to do to make the universe better, which is kind of what this project is all about: finding these kernel regressions so that we as a kernel community can stand by the promise that we gave you a reliable kernel, we just gave you some fixes, and we didn't regress the kernel, so that you can continue to have confidence in what's been given to you. And for us as a project going forward, we do think that more boards and more eyes are ultimately gonna make a difference. Now, in our board farm we don't have a lot of boards, but we do have plans, and we'll be talking at Connect next week about how others can pick up LAVA and LKFT and run their own remote labs. We think this is something we would like to see more of, and we want to help out and make it easier to do. Working with LAVA is a little bit of a challenge, at least it has been, but we've put a lot of time and effort into making that easier. So look for that talk to be online in, I'm guessing, about two to three weeks. Another piece of this making-the-world-better story is exercising more tests. And that doesn't necessarily mean exercising kernel tests, either; it can be just running user space code as well. Is that necessarily an effective way of finding kernel regressions? Maybe, maybe not. But we have seen instances where, for example, there was an LTS regression that only got detected because DHCP broke. And it wasn't every DHCP client; the embedded DHCP clients were just fine. It was actually the DHCP client on Ubuntu that found it. So don't just take away from this that it's only kselftest, that you have to have kernel tests in order to find things. So that's it. I guess I'll open the floor up to questions, if anybody has one, and thanks for being here. Yes. So with LTS itself, LTS is really, really stable. It's a really rare day that you'll find a regression there. In the case, again, of mainline, like I mentioned before, you're probably gonna have maybe 10 regressions pop up from an average RC1, from what we've seen. Yes, question. Okay, so the question is that the compiler wasn't listed in the metadata. There's a little twisty on that particular page, and if I expanded it out, there's a whole bunch of information that gets included, and the compiler is one of those. LKFT as a project is just about at the point where we're gonna start turning on the ability to run multiple compilers; Clang-built kernels in particular are something that's really, really interesting to us. Well, if there aren't any other questions, again, thanks for coming, and feel free to talk to me later.