Welcome, everyone. Hope you guys are all excited to learn about some kernel CI. I know I'm pretty excited to show you some neat demos that we have planned. So today, I really want to demonstrate Linaro's Automated Validation Architecture and how it can make life easier for everyone in this room right now, by leveraging what we've already done. From this point on, I'm going to refer to Linaro's Automated Validation Architecture as LAVA, just because it's a nice, shorter acronym. So I want to start off: please raise your hand if you've ever used LAVA. OK, we got a few. Have you ever heard of LAVA? Raise your hand. OK, we got a few people. So what I want to do is provide a brief overview of LAVA and how to use it, and then we're going to jump into kernel CI, specifically how to use LAVA for kernel CI.

OK, I'd better start off with an introduction, since a lot of you don't know me. I'm pretty new. I'm Tyler Baker, a technical architect at Linaro. They spelled my name right up there. If you haven't heard of Linaro, please check out www.linaro.org. We do a lot of great things. Really, our mission is to consolidate the efforts surrounding the ARM SoC ecosystem so that we're not fragmenting it. Instead, we're building a common base for everybody to build on. And that's kind of the idea behind LAVA: everybody has their own test automation framework, and we want to provide a general solution that everybody can validate their devices with, so we're not fragmenting that either.

As far as my background: I'm a developer, so yes, I write code. I'm a core contributor to LAVA. I'm an evangelist; I think it's a really good solution. I've worked in product development for many years and found LAVA to satisfy 90% of your needs right out of the box. You have to add that last 10% yourself, but we're all developers here, so that's not a problem. Specifically, I was a platform engineer building rugged Android-based handsets on OMAP3- and OMAP4-based chipsets. So I love to hack on bootloaders, kernels, drivers. Put a low-level embedded engineer on the LAVA team, and here's what you get: you get kernel CI. So that's the background about me.

Now, of course, I'm going to give you an overview of LAVA. You can see here one of our racks with all the boards in it. We basically have lots of hardware at our headquarters in Cambridge. At Linaro, LAVA provides metal to developers. Basically, it's an access layer to hardware, providing authentication and scheduling. Linaro is always on the cutting edge of technology, so a lot of times there are only a few boards available for developers to use, and LAVA enables them to share these resources. That's a big plus for us at Linaro.

LAVA is becoming more mature. Since its inception in 2011, we've run 70,000 jobs through it at Linaro ourselves. Now other member companies are using LAVA internally for their validation. Currently at Linaro, we run approximately 800 jobs per week through LAVA. So it's being proven in the real world, using real hardware and real software that you may or may not be using now. And LAVA is an active project. We have a dedicated team within Linaro focused on core development and bug fixes, so it's not just a dead project sitting out there that isn't moving forward. We release every month, and due to this quick release cycle, we have an automated regression testing system that tests on real hardware.
Now, here's the problem with testing LAVA: LAVA is software that talks to hardware. So to ensure that you haven't introduced any regressions, you actually have to test it on real hardware. When we take the tip of each component and move it into our production manifest, which is what we're going to release, we run through a set of functional test cases, which are essentially LAVA jobs, so that we can validate that everything still works and fix problems before we release. It's harder than you think to deal with hardware and software together in an automation framework, especially testing it and making sure you're releasing quality software every month.

So let's get into it. Here's a 5,000-foot view of all the LAVA components. We have lava-tool, a command line client that you might use at your desk. You have lava-server, which is more or less the web front end. And then you have the dispatcher, which talks directly to the hardware. It's a loosely coupled, distributed automation framework. When I say loosely coupled, I mean you can use any one of those pieces by itself or use the whole stack together. It doesn't matter; you can mix and match depending on what your deployment environment looks like. If you've ever used LAVA in the past, and I know some of you have, there were 20 or so projects that made up LAVA. Consolidation was a must; it was just too overwhelming to deal with that many projects. We're now in the process of finishing up the lava-server consolidation, which will be done by the end of this month. So we'll have four components to deal with, much simpler to use. And by the way, before I go any further, if anybody has any questions, just throw your hand up. As long as it's not a derailment question, I'd be willing to answer it.

OK, so let's talk about the LAVA server. All the web components use Django. Each piece that plugs in there, like the scheduler and the dashboard, is just a Django app. You can easily write a LAVA server extension: a Django app with a predefined interface that you have to implement. The LAVA server says, here are these API stubs; you just have to implement them, and then you can write an app that runs on the LAVA server. I'll show a rough sketch of what one of these looks like in a moment. The LAVA server also provides an extensive API, user authentication, access restriction, and a lot more. So again, decoupled components: you can run the LAVA server, if you choose, and nothing else. It's a nice design.

So here's our metal: well over 100 devices in here. We have our production server, validation.linaro.org, which has most of these devices connected to it. We have our staging servers with another subset of devices, and then we have special devices for our groups within Linaro to use. We're going to quickly look at this just to show you. This is the LAVA server. You notice there's the dashboard, the scheduler, the API, and the documentation. One thing I want to mention: documentation with LAVA in the past has been a little dicey. What we've done is, when you install your LAVA server, if you click documentation, it generates the documentation based on the source code you have. So the documentation is always in sync with what you're actually running. There's none of this fragmented documentation where you read something on the web that's out of sync with the code you're running. It's built when you install or update the code, and then it's statically hosted.
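As promised, here's a rough sketch of what one of those LAVA server extensions looks like. This is from memory of the LAVA v1 codebase, so treat the module path and property names as approximate rather than authoritative; the point is just that it's an ordinary Django app filling in a predefined interface.

    # Rough sketch of a LAVA server extension (names approximate, from memory).
    # A real extension is registered through a setuptools entry point so that
    # lava-server can discover it and mount the Django app.
    from lava_server.extension import LavaServerExtension

    class ExampleExtension(LavaServerExtension):
        """A Django app that plugs into lava-server's predefined interface."""

        @property
        def app_name(self):
            return "example_app"  # the Django app lava-server should install

        @property
        def name(self):
            return "Example"  # name displayed in the LAVA web UI

        @property
        def main_view_name(self):
            return "example_app.views.index"  # entry-point view for the menu

        @property
        def description(self):
            return "An illustrative LAVA server extension"

        @property
        def version(self):
            return "0.1"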
OK, so let's discuss the scheduler now. Again, a simple LAVA server extension. Users submit jobs, defined in JSON, to the scheduler via the XML-RPC API I mentioned earlier; I'll show a little sketch of driving that API at the end of this section. The scheduler is a bit special in the sense that it's not only a Django app; it has a native daemon as well. It manages a few things: job scheduling, device state, device health, and streaming console logs. It also displays all the jobs that have run on the server.

Let me just give a quick overview of multi-node, since people ask about this. We've recently added a feature that allows testing across groups of devices, running netperf between n devices, for example. We can actually do that in LAVA now. We've written a synchronization API that allows you to communicate between nodes in the test. So you could say: give me 10 Panda boards, bring them up with these different images, run these different tests, and have them synchronize with each other. If you want to get into it, LAVA can do some very, very complex testing, if you take the time to understand it. I'm going to leave it at that; if you're interested, come up and talk to me later and we can go over the details.

So here's what the scheduler looks like. You can see that we have our devices listed. Now, these are device types; underneath these, if you click on the links, you'll see the actual devices associated with each. Below, you can see the jobs that are running. And off to the right there are our scheduler reports. These tell us about the health of our devices, and also how many jobs are passing or failing, with per-day and per-week statistics. And with a new feature we've added, you can submit a job directly through the web UI. So if you're on your iPad, you can submit a job; we enable that. Some people really like that feature.

So let's take another look here. This is validation.linaro.org, running live right now. If we go down here, here are the jobs that are running. If we click on, let's say, Panda, we can see how many Pandas are associated with that device type. These represent hardware devices in our lab. If we submit a job, this is where we'd put our JSON. And just for the fun of it, let's submit a job. One moment here... I guess I don't have one handy. There'll be plenty of that later; we can submit a job in a few minutes. Let's continue on with the deck here.
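Before we leave the scheduler: since submission is just that XML-RPC API, you don't even need the web UI or lava-tool for this. Here's a minimal sketch in Python 2, which is what we were all on at the time. The server URL, username, and token are placeholders, and scheduler.submit_job and scheduler.job_status are the calls as I remember lava-tool using them, so verify against your server's API documentation page.

    # Minimal sketch: submit a JSON job straight to the scheduler over XML-RPC.
    # The URL, user, and token are placeholders for your own instance.
    import xmlrpclib

    server = xmlrpclib.ServerProxy(
        "https://user:API-TOKEN@validation.linaro.org/RPC2/")

    job = open("kvm-boot-and-test.json").read()
    job_id = server.scheduler.submit_job(job)   # scheduler validates the JSON
    print "Submitted as job", job_id

    status = server.scheduler.job_status(job_id)
    print "Current status:", status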
OK, so let's talk about the dashboard now. Again, another LAVA server extension. Really, its purpose is to manage test result data; this is where you visualize all the tests that have run in LAVA. Now, I want to make "bundle stream" clear for you, because it's a very misleading name and I get questions on it all the time. Think of it as a bucket: when you're running tests, you put your test results in a bucket. It's a container for test results. That's all it is; just disregard the naming conventions around it. A filter, then, does exactly what you think it does: it filters the data. And we have our image reports, which give you data visualization, so you can plot your data and see trends, those kinds of things. What we're working on right now is some improvements to the image reports, namely filter-versus-filter comparison. So I run the tip kernel versus my custom kernel, I'm running performance benchmarks, and I want to be able to plot the trends and see the intersection points. I want notifications when my custom kernel goes below some baseline; I want an email letting me know. We're going to enable that very, very shortly.

And here we are. This is the LAVA dashboard. You can see our LEG team with their Java ARMv8 work; they're actually using LAVA to test all of OpenJDK 7 and 8 on Fast Models. To the right there, you can see some test results from it, and if you drill down into one of those test results, you can see the pass/fails as well. We can have a quick look at this live, too. This is up to date; this is happening right now. This view refreshed this morning. Each build number represents a build that we do in Jenkins, and you can actually click that build number. And we can dive into these results a little bit. What's that? Yes, absolutely. You can see how you can select a start build number and an end build number, select the test case you want, and then view by percentage, by pass/fail, or by measurements. So we're now able to plot a measurement, a benchmark, on our graph. What we're working on next is this new thing called the image editor, which lets you work with these with a bit more granularity and then publish the result to the image report. So somebody who wants to display test results will use the editor, shape the report, and say: this is what I want to show as an image report, and once it's published, none of it is customizable. Because what happens is people go in, start playing with the data, and it ends up not looking right or not representing the true data, because they don't know what they're looking at. So we're going to abstract that away shortly.

If we go down here, we can have a look. This is the particular job that ran, and here are our test results listed, and we can drill down into those. If anybody knows Java, those look like Java classes, don't they? That's some of our test results that ran through LAVA this morning.

OK, my favorite. Let's talk about the dispatcher. It's really the business end of LAVA: it's responsible for the interaction between the target hardware and the LAVA software. The most important thing to realize is that any device with a console can be connected to LAVA. I don't care if it's serial, ADB, SSH, IPMI; all LAVA cares about is a console session. That's why it's so flexible, and that's why I think it can really be the general solution to automation that we can all get behind and develop into what we need, as it's an open source project. We don't have to keep building automation frameworks to do different things; it's general enough to let you integrate any device. You could put a network switch on LAVA as a device and run tests on the switch if you wanted to. As a user, all you have to do is define a connection command, which is simply a call to a binary, whether it's screen or an SSH call. That's all LAVA cares about; it'll just read standard out from that and parse it. A very simple design for the dispatcher. I'll sketch what a device's configuration looks like at the end of this part.

LAVA also delivers binaries in many different ways. We have web servers. We use fastboot to deliver images. We use PXE. We use TFTP. Bootloaders: right now we're focusing on enabling UEFI testing. We can load up a new UEFI binary on a Fast Model right now, run through some configuration, and actually boot a kernel with it; we're looking to do some validation tests with it for certification. Also, we have the SD-Mux, which is a way of muxing SD cards between the host and the target, so we're planning to do bootloader testing in the near future.
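Coming back to that connection command idea: integrating a device into the dispatcher is mostly a small config file. Here's a sketch of what one can look like. The file name, hostnames, and port are invented for illustration, and the exact set of keys depends on your LAVA version, so check the dispatcher documentation generated with your install.

    # Hypothetical dispatcher device config, e.g. devices/panda01.conf
    device_type = panda

    # Any binary that hands LAVA a console session will do here:
    # screen, telnet to a console server, ssh, adb shell,
    # ipmitool sol activate, and so on.
    connection_command = telnet console-server.example.com 7001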
So LAVA delivers tests to the target, and here's the best part: LAVA doesn't dictate what language you write your tests in. It doesn't matter. You could write them in Python. You could write them in C or C++. You could write them in Scheme if you wanted to. As long as they generate some sort of uniform output, that's all you need. LAVA reads your test output line by line, and you, as the user, define a regular expression that parses it and groups it into a test case ID and a result: pass or fail. We're going to go into this a little deeper, so just hold that thought.

Let's have a look at what the dispatcher looks like. These are some of the job files; this is some of the output. I'm going to do a quick demo for you, because I think, as kernel developers, you're looking at all this web stuff thinking: lots of frilly stuff I have to click through, and that's not going to work for me. I hear it all the time. So I'm going to show you that that's not entirely true. This is my command line inside my VM here, and what I'm going to do is run this command. This is lava-dispatch: I tell it which target to run on, and I give it a JSON file. I'll show the exact shape of this command at the end of this section. This is the same JSON file I could submit to the server; it doesn't matter, because the dispatcher is just the piece underneath the server that actually does the work. If I hit Enter here, it's going to download the image, the KVM image. It's a compressed image, so it's going to decompress it. I've specified tests to run within this job file, so it's going to mount the image, load the tests onto it, and start QEMU. So this is an x86 KVM. And this is literally what comes out in the streaming log file on the LAVA scheduler; it's the same thing. Absolutely.

We can take a look at that while this is running. Let's go back to this. I keep losing my mouse here; I might have to turn this a little bit. OK, so what's happening now? It's got a DHCP lease, and now we're starting to run tests. This is the dmidecode test. You can actually see it says lava-install-packages. Within the test framework we offer, you can specify dependencies for Fedora, for Ubuntu, or for OpenEmbedded, where there's no package manager and you're just assumed to have the dependency on the image. So it's fairly flexible in the way you can write these tests, and we'll go into that again in a little bit. But I just wanted to show you that all of this can be run from the command line.

And the really neat part about this demo is that you can run it at your desk and still submit the test results to the dashboard. It's actually going to submit this to my LAVA server back in Seattle, Washington, and we're going to view the results here. So it's running at my desk, and the results land on a web server elsewhere, which is really nice if you want to do local testing at your desk and then share your results with other developers.
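For reference, the command I ran at my desk is along these lines. The device name and job file are from my own setup, so substitute your own, and check lava-dispatch --help on your version for the exact option spelling.

    # Run a LAVA job directly from the command line, no server involved.
    # kvm01 is a device defined in the local dispatcher configuration.
    lava-dispatch --target kvm01 kvm-boot-and-test.json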
Let me see if I can get a scheduler job up. We'll use my server so we don't disrupt anybody. Oh, I don't want to give it all away, though. Well, we'll do this real quick. What I'm doing is looking for a job that I can just resubmit, so I can show you the streaming console output. Obviously, I run a few jobs here. Here we go. Again, you can look at the job definition right in the web UI. You can look at the complete log with all the console output; this is basically what it's catching off serial, right? And you can also look at the results bundle right from the job. But what I'm going to show you now is a neat little feature we have. I actually can't click over here; this is really difficult. We can click resubmit. Now, whatever that job was, however it was defined, is now resubmitted, and the scheduler is going to pick it up in a moment and start running it.

So let's check on this guy. Ah, OK. What's happened is our tests have now run, and you can see the result was pushed to community.validation.linaro.org, which is a server that I maintain. While we're waiting for the KVM job to spin up, we can have a look to see if it's started yet. It has. You've got to catch these things, because they're pretty quick. You can see that the output streams, which is kind of nice for developers who want to sit there and watch everything. Here's the emulator starting. And my fast ADSL connection at home has to push this back up to the server, so this log output is coming out slower than it looks; usually, on a nicer connection, it's much better. You can see it's starting to stream. Let's take a look at the test results that just got pushed up. We click on bundle streams here, and I've pushed that bundle stream into "anonymous KVM", which is, again, a container for your test results, and "KVM boot and test". This is what I actually just ran on my laptop here. We can look at the dmidecode results and have a look. And you can link somebody to this page and say: hey, I ran these tests at my desk; here are the results. What do you think? They have access to your full console log, so they'll be able to go and help you debug it. It's kind of nice. Yep, absolutely.

OK, so that's that. Let's get back to the deck. Let's talk about a utility we provide called lava-tool. It's a command line utility that uses our XML-RPC API. You can schedule jobs, poll job status, or push results. You can retrieve the console output from the command line, which is really helpful if you want to automate a zero-day type thing: boot test a bunch of boards, pull the console logs down, and run a parser over them to check for any kernel oopses or things like that. You have an API to do so. You can also create bundle streams.

Again, this is just kind of neat to visualize; bear with me one moment. So now I'm issuing a lava-tool command called submit-job. I'm giving my username, so I've authenticated with the server, and now I'm going to push this KVM boot-and-test JSON file. And I get a job ID back: the scheduler has validated the JSON and said, here's your job ID. The next thing I want to show you is how to poll the job status. So we go back here: lava-tool job-status, again authenticating with the server you want to poll. And here's the key: 73209. You just put your job ID in there, and it tells you that it's running. So you could use this API to poll jobs and know their status.
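To pull the lava-tool flow from that demo together, it's something like the following. The server URL and job ID are mine, and details like the default output file name have shifted between releases, so check lava-tool --help on your version.

    # One-time: register your API token for the server
    lava-tool auth-add https://tyler@community.validation.linaro.org/RPC2/

    # Submit a job; the scheduler validates the JSON and returns a job ID
    lava-tool submit-job \
        https://tyler@community.validation.linaro.org/RPC2/ kvm-boot-and-test.json

    # Poll the job's status by ID
    lava-tool job-status https://tyler@community.validation.linaro.org/RPC2/ 73209

    # Pull the console log down (written to something like 73209_output.txt),
    # then grep it for kernel oopses and the like
    lava-tool job-output https://tyler@community.validation.linaro.org/RPC2/ 73209
    grep -i oops 73209_output.txt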
And then, after this finishes, we'll actually pull the console log down, and you can view the console log of the target that ran. What I hear from a lot of kernel developers is: I don't want to deal with that web UI, I don't want to deal with any of that stuff. Well, we have the command line tools for you to use if you don't want to deal with it. I just want to make that point very clear. We can do this later; we've got to get through these slides.

OK, let's get to the meat and potatoes now. You basically have an overview of how the components work, so you can start to put this together in your head. Now, typically, a LAVA deployment pushes an entire system image to the target: bootloader, kernel, modules, root file system. While this is a vital part of testing at Linaro, it's not fast. We're talking, best case, 15 minutes before you get a target like a Panda board booted into a test image, before you can even start testing. For continuous integration, which is what we want to talk about in the next section, we need fast results. If you're a kernel developer: I want to build on every commit, I want to submit it to a board, and I want to know if it boots. Maybe you want to run some additional tests on there too. So it's too heavy to deploy full system images to a board if you want to do kernel CI with the current LAVA implementation.

I've been in discussions with our Linaro kernel developers and maintainers, and the general consensus is that it's too slow. Kernel deployment has to be quicker; they cannot wait 15 minutes just to get a test image booted to test their kernel. Plus, they don't care about the root file system; they're going to give me a RAM disk. That's just how they are. So it's too heavy. They don't want to create a system image or a hardware pack every time they want to test a kernel. They just want to say: here's my kernel, here's my RAM disk, maybe here's my device tree blob, and go. So in the next part, we're going to look at how to deploy a kernel to a target using this new lightweight interface we've designed, specifically for kernel developers to do kernel CI with LAVA.

Here we go. We're going to learn how to define a job. The job file is encoded in JSON. It describes the software to deploy, the bootloader configuration, the tests to run, and the bundle stream for results. The bundle stream for results is completely optional: if you don't want to submit results, you don't have to, which speeds things up. Tests to run are completely optional too; you could just boot test if you want. Pretty standard stuff. Let's whiz through this.

In the job file, you select a target. You can specify a particular device, like I did with lava-dispatch when I named kvm01 as the specific device. Or I could specify, to the scheduler, a device type of kvm; then it would schedule on any available KVM out there, arbitrarily. We also have facilities for tagging devices. You might have some special hardware set up on a particular device, say it's in a Wi-Fi lab: you put a tag on it and enter that tag in your job file. So you could say device type kvm with a certain tag, and the scheduler would only schedule the job on a device carrying that tag. And then, with multi-node, we've introduced a new schema called group, which allows you to define groups of targets to test on: say a Panda board, a KVM, and an Arndale, each with different images to deploy and different tests to run, with access to the synchronization API as well.
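Put together, the selection part of a job file is just a handful of JSON fields. Here's a sketch with illustrative values; job name and timeout are covered next.

    {
        "job_name": "arndale-boot-test",
        "device_type": "arndale",
        "tags": ["wifi-lab"],
        "timeout": 18000,
        "actions": []
    }

Swap "device_type" for something like "target": "arndale01" to pin a specific board; with multi-node you'd describe a group of device types instead of a single one.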
So: job name is optional, it's just a string. Debug level is sometimes nice; that's more of a LAVA thing, and you can usually just omit it. Timeout is optional too. Some people run tests for two hours, some for two days; it's just an integer value. Pretty straightforward.

Now let's talk about deploying a kernel. We've introduced a new action in LAVA called deploy_linaro_kernel. What it says is: we don't necessarily need a hardware pack, we don't need a root file system. The minimum we need from you is a kernel. If you provide an optional RAM disk, that's great too; we'll take a device tree blob as well. I'm working on some server platforms that require different firmware to be loaded, which we can load over the network, so we've added some additional schema to support that.

Now, the URLs that you can use, and you'll see this in our job file in a second: you can use file://, which is a file local to the LAVA server. You can use HTTP or HTTPS if you want to pull something from the web. We support SCP. And something I've been working on is data:, a base64-encoded binary embedded in the job file itself. So if you're a kernel developer and you don't want to go place your binaries somewhere the LAVA server can suck them in from, you'll be able to just submit the job. Sure, it's a 12 MB JSON file, but you don't have to host anything; you can submit it right from your desk. I think there might be some people who'd like that.

Let's talk about booting. The command to boot, in the JSON, is boot_linaro_image. It boots the target into the test image. If you don't define any boot commands, the defaults defined in your device configuration file will be used. But custom boot commands can be embedded in the job file; this is another feature we've just added. As kernel developers, you might want to change your boot args. You're working on a tip kernel, something's not going right, and you want earlyprintk on: you can override the boot commands in the job file, remotely. No lab guy has to go touch anything; you don't have to be a server admin. You simply specify the boot commands.

And the boot commands you can specify are kind of neat. For UEFI, we had to introduce a send/expect stanza. Why? Because we have to script the UEFI interface to configure it. You can say: send this string, expect this string. That way you can script your way through bootloaders that are more complicated than, say, U-Boot. There's also send line: if you just put a plain string in there, it'll be sent as a line, fire and forget. We wanted to give kernel developers the flexibility to change their boot arguments on the fly, device to device. Say you've enabled the NFS driver in the kernel and you want to check it out: you can do that with LAVA, as long as you have an NFS server on the local network.

OK, so let's look at the interactive boot commands for a second. Here's an example. You can see our deploy_linaro_kernel: we're specifying a device tree blob, a kernel, and a RAM disk. And then boot_linaro_image, where we have to set a flag for interactive boot commands to true, and then we basically just encode the boot commands we want to run on the target right in the job file. Fairly simple to understand.
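Here's a sketch of those two actions together in the actions list. The URLs are placeholders for wherever your build system publishes binaries, the U-Boot commands are examples, and the exact parameter name for switching on interactive boot commands varied while this feature was landing, so double-check it against your release.

    "actions": [
        {
            "command": "deploy_linaro_kernel",
            "parameters": {
                "kernel": "http://builds.example.com/zImage",
                "ramdisk": "http://builds.example.com/initrd.gz",
                "dtb": "http://builds.example.com/exynos5250-arndale.dtb"
            }
        },
        {
            "command": "boot_linaro_image",
            "parameters": {
                "interactive_boot_cmds": true,
                "boot_cmds": [
                    "setenv bootargs 'console=ttySAC2,115200n8 earlyprintk root=/dev/ram0'",
                    "boot"
                ]
            }
        }
    ]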
Notice that I don't have send/expect in front of any of these strings, so they're just going to be sent directly to the bootloader, without really expecting anything back.

So let's take a look at a UEFI job. What happens when the boot commands fail? Well, LAVA is pretty good about handling things when they go wrong. It's going to try to boot three times, and if it doesn't boot in three tries because you gave it the wrong kernel arguments, you're going to get a message saying the boot commands didn't work. So you'll know when it didn't work, and you'll have all the output, so you'll be able to see: oh, hey, I should have called DHCP first before trying to network-load something. You'll have all of that to debug with. But the thing we want with LAVA is to keep it general, so we put the power in the user's hands. If you give it bad boot args, LAVA can't really do much for you there; it's just running what you tell it to run.

So for UEFI, we can send the line "a" and then expect "Choice:". You can script over a whole bootloader interface just to get something booted. What we're doing here is setting the kernel image name, then the RAM disk name, then the device tree blob, and then the kernel command line, so that when we boot using UEFI, everything should work properly. I'll show a rough sketch of what this looks like in a moment. What's nice about this: if you're working on something like Xen and you want to boot it with UEFI, the arguments you give to UEFI are going to be slightly different. So LAVA enables you to test different configurations, booting different kinds of kernels, with these interactive boot commands. It's really powerful for kernel developers, people doing low-level development.

OK, so typically, after you boot, you want to run a test. We're really trying to push everybody towards lava_test_shell. I think it's the best solution: it's easy to read. You look at it and say, I can understand that. It's a test description defined in YAML; we're going to have a look at it in a bit. It's got install steps, it's got run steps, and the important thing to realize is that it runs on the target. And, like I mentioned, it parses standard out. For LTP, for example, we pull it right from SourceForge, build it on the target, run it on the target, and we have a parser that pulls out the LTP results. You're probably thinking: what about the TPASS and TBROK that come out of LTP? We have a fixup dictionary, so you can map whatever appears in your log output onto a pass or a fail. If the pass/fail conventions in your log output aren't the same as what LAVA expects, you can remap them. We'll go over that in a little bit.

As for the URL of a test definition: it can be a file:// URL, local on the file system. Again, HTTP, and I should also say HTTPS, so you can pull it from the web. And we use Git, and we use Bazaar. So you can check all your test definitions into a Git repository and reference it in the job file, which you'll see in a moment. It'll clone the Git repository and then run the YAML definition you specify, so everything can be under version control, which is really nice.
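And for the UEFI case, here's that sketch. I want to be clear this is only to convey the idea: the actual stanza syntax in the job schema may differ from what I'm showing, and the menu prompts are invented, so take the key names and strings here as illustrative only.

    "boot_cmds": [
        {"send": "a",          "expect": "Choice:"},
        {"send": "zImage",     "expect": "Kernel path:"},
        {"send": "initrd.gz",  "expect": "Ramdisk path:"},
        {"send": "board.dtb",  "expect": "Device tree path:"},
        {"send": "console=ttyAMA0 root=/dev/ram0", "expect": "Command line:"}
    ]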
And then we submit results. It submits to a bundle stream; you have to put a server in there. Bundle streams are just buckets of LAVA results, as I've said before, and it submits them to the dashboard. Simple as that. Any questions about this whole thing? I know I'm going fast, but we're running a little low on time.

So let's deep dive into a job here. You can see we've got our deploy_linaro_kernel, which we've gone over. We've got our boot_linaro_image, with no boot args defined. Then we have lava_test_shell, just another action in this JSON file. What we're doing here is specifying a Git repository, which is our Linaro test-definitions repository, and then a YAML file path relative to that repository. So if you cloned it and cd'd into it, that's the path you would give to reference the YAML file. You can also specify a timeout for the lava_test_shell: if you have a long-running test, you can bump that up. You'll also want to bump up the job timeout, which is at the bottom here, just to make sure the job timeout doesn't hit before your test timeout does. And then, again, we submit the results.

So I think what I want to do is just go over tests. Excellent, you get some nice Linaro Connect stuff here. I want to give you a brief overview of how to write a test, because I think it's important. This is what the YAML file looks like: it's got your install, your run, and your parse steps, and I'm going to step through it. In the green, and I know that's a little hard to read, are some Git repos that you want in your test directory. So you can not only specify where the test definition comes from, you can also pull source and build it on the target using these commands; you can use Bazaar instead if you prefer. Then the run steps, in the mustard yellow: it's a shell. That's all it is. It just says what to run on the target side to run your test. All this is doing is setting up the test, and at the very end we invoke the test we actually want to run.

What's interesting is this parse pattern down here, which is just a regular expression matching the output into, say, a test ID and a result. And you can see the fixup dictionary below it: PASS maps to pass, and so on. You see the mapping now? So you can have different result vocabularies and fix them up in your test definition.

Now let me show you how that parsing works. You write your regular expression, and I've color-coded it so you can see the matching. The result group is going to match pass, fail, or skip, and since we've defined the fixup dictionary, it maps back to LAVA's pass/fail/skip. Anything picked up in yellow here is going to be considered a result in LAVA. Pretty simple. Then you've got to remember this is a line-by-line thing, so you put a regular expression in for the parts in between that you don't care about. And this test ID, in this case a Java class: you say everything else after that is the test case. So when you look at this in the LAVA dashboard, you get these highlighted in orange as your test case names.
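Cleaned up, a minimal definition in that style looks like this. The section names (metadata, install, run, parse) and the fixupdict key mirror what the Linaro test-definitions repository used, but the repository URL, the test script, and the output format are invented for illustration.

    metadata:
        name: example-smoke-test
        format: "Lava-Test Test Definition 1.0"
        description: "Illustrative lava_test_shell definition"

    install:
        deps:
            - netperf
        git-repos:
            - git://git.example.com/my-tests.git

    run:
        steps:
            - cd my-tests
            - ./run-tests.sh

    parse:
        pattern: "(?P<test_case_id>[\\w/\\.-]+):\\s(?P<result>\\w+)"
        fixupdict:
            TPASS: pass
            TFAIL: fail
            TBROK: fail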
Go ahead. Right, so it's all dependent on how you write your pattern. If the output is swapped the other way, you just move the test case ID group to the front. See, it's a Python regex grouping, right? Yeah, and the dot-star just says: take the rest of the string. That's why I think LAVA is so powerful: any test out there typically generates standard out, and as long as it's somewhat uniform, you can put together a regular expression to turn it into results in the LAVA test format. What's that? Yep, absolutely. Yeah, and in LAVA we've had to deal with that before; we've gone down that path. Is everybody clear on how to write jobs? Because this next part is about continuous integration and how to put all the pieces together, so I just want to make sure everybody's clear.

So what's happening under the hood when all this runs? The LAVA dispatcher is just a command line tool, that's all it is; the scheduler daemon is simply invoking it. It downloads the binaries, powers on the device, and enters the bootloader. This is specific to deploy_linaro_kernel: this is how it works, network booting and everything. LAVA sets up variables for you, relative to its scratch directory, and the TFTP server serves out of that scratch directory. The slightly tricky part is that when the boot commands get run by U-Boot or GRUB or whatever, these variables are set so that you can reference them in the boot commands you specify. So you can reference the kernel in your boot args, and it'll give you the path, a TFTP path or a boot file path, to the kernel, or to the RAM disk, or to the device tree blob. It's pretty flexible; it'll work with a lot of bootloaders this way. So that's what's happening under the hood when we boot a kernel. We run the boot commands; the binaries are served over TFTP from the dispatcher itself; then it waits for a prompt that you specify. Once it finds the prompt, it deploys the tests, typically over TCP/IP. Then it invokes the test runner, we parse results, pull the results back over TCP/IP, and submit the results bundle to the server, that last step being optional.

Let's talk about the assumptions. I know I'm kind of flying through this, but I want to get through it. The assumptions for a RAM disk that will work well in LAVA are pretty light: a POSIX shell, BusyBox, an HTTP daemon, utilities like grep and cat, ifconfig, network connectivity, and some free space to work with. But let me make my point here: network assumptions are bad when you're working on a tip kernel. I'm sure you all know; sometimes you don't have any network connectivity at all. So this won't always work, and I realize, as a former kernel developer, that we need something else, something even lighter than lava_test_shell.

So let me give you a little insight into what we're working on. I have this working now, and it's actually pretty neat. It's going to be called lava command, and it assumes only console access. It'll run any commands you specify in the job file over your console session. Standard out is then logged on the server side, not the target, which enables a lot of different testing: the output is captured on the host side, and the results are processed on the host side, so there's no need to pull results off of the device.
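Since this was still landing at the time, take the action name and parameter names below as speculative; this is just to convey the shape of the idea. Commands run blind over the console, and the parsing happens host-side.

    {
        "command": "lava_command_run",
        "parameters": {
            "commands": [
                "cd /opt/unit-tests",
                "./run-all"
            ],
            "parser": "^(?P<test_case_id>\\S+):\\s(?P<result>pass|fail)$",
            "fixupdict": {"OK": "pass", "NOK": "fail"},
            "timeout": 600
        }
    }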
So you can use something purely console-based, with no network connectivity, to do tests. And the reason this is good, not only for kernel CI but for bootloader testing, is that a lot of the time you're not going to have a network in a bootloader, so you need a way around that. This is something we've been wanting to do at Linaro for a long time: really validate bootloaders. This is going to enable that in LAVA, something ultralight. Go ahead. Yep. And it uses the same logic lava_test_shell does: you can define a parser and a fixup dictionary, and you can even say, if the prompt gives you return codes, grab the return codes. But that's all optional, because obviously bootloaders aren't going to have return codes, and we don't want to force that on users. It's currently under development; it's going to land this month, in trunk, and you'll be able to use it. I think that's kind of exciting.

We can take a quick look at what this might look like. You can see that if you put something on your image, like a RAM disk, and you know your kernel isn't going to have any network connectivity, you can put your unit test or your functional test or whatever you're going to run right on the RAM disk. When it boots, you can cd into that directory, run some binary, and provide a parser for that output. You can even do a fixup dictionary. So it's all the same capabilities lava_test_shell has, just with fewer assumptions. I think that's going to be really powerful for people who want to test bootloaders, or even kernels that are really, really close to tip, or at tip, and might not be fully functioning.

We've talked about this already, so let's get to the meat and potatoes. Let's talk about continuous integration; that's what we're all here for. You now know that LAVA can deal with the hardware and deploy your binaries onto target hardware. How do we get the whole thing hooked up, and how is that going to work? That's what I want to talk about here: we want to build, boot, and test a kernel, and you're actually going to see a full live demo of this happening. So don't fret.

Build. This is the best part about LAVA: LAVA doesn't care what you build with. Whether you build in Jenkins or Hudson, or you've got homebrew scripts that build your kernel, it doesn't matter. What's really nice to have is metadata, and I'm going to show you an example of this. You typically want the Git revision you just built and the kernel version you just built. You can define these as a user, whatever metadata you want to track. In my demo, I'm going to log the Git revision of the build, the kernel version, and also the Jenkins build number, so that people looking at the test results can link back to the actual Jenkins build job and see what got built.

To create a LAVA job, you can do two things; you can probably do more than two. The easiest is to use the JSON library in Python and dynamically generate the JSON you need to submit the job. That's pretty easy. If you don't want to do that, the quick-and-dirty approach I'm going to show you is a sed template: you embed macros inside a predefined JSON file, and then you just run sed over it to replace all the macros. It's quick and dirty, but it works. We ran that approach in automation at previous companies for many, many years, and it was fine. Either way you go, you'll probably be OK.
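Here's a sketch of that sed-template trick with invented macro names. The Jenkins variables (JOB_URL, BUILD_NUMBER) are standard Jenkins environment variables; everything else is illustrative.

    # kernel-ci-template.json contains macros like %KERNEL_URL%, %DTB_URL%,
    # %GIT_DESCRIBE%, and %BUILD_NUMBER% in place of real values.
    sed -e "s|%KERNEL_URL%|${JOB_URL}lastSuccessfulBuild/artifact/zImage|" \
        -e "s|%DTB_URL%|${JOB_URL}lastSuccessfulBuild/artifact/exynos5250-arndale.dtb|" \
        -e "s|%GIT_DESCRIBE%|$(git describe --always)|" \
        -e "s|%BUILD_NUMBER%|${BUILD_NUMBER}|" \
        kernel-ci-template.json > job.json

    # Submit it and capture the job ID for the downstream boot-test job
    lava-tool submit-job https://tyler@community.validation.linaro.org/RPC2/ job.json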
Now, for submitting a job: you can use lava-tool as a command line interface, or you can use the XML-RPC API. And really, what you want for continuous integration is that job ID. I showed you how we were polling on the job ID; that's something you want to capture, and I'm going to show you how in a moment. Let's actually see if this guy is... apparently I can't type. There we go. OK, so let's poll it again. So now it's complete, right? What I want to show you now is going to be really helpful for you kernel guys. Oops. And it was job 73209. What I just did there was use lava-tool to pull the console output. So if we cat 73209's output file, that's everything. If you scroll up here, you can see the kernel init and all the test output right there. So you can actually grep this for kernel oopses, things like that. Really powerful tool.

I'm seeing that we don't have a whole lot of time left and I'm not through my deck, so let's whiz over this. Tell you what: let's have a look at the demo of this all working together. Let me explain what's about to happen. What I have here is an Arndale mainline CI job. It's going to build the exynos5 defconfig and the appropriate device tree blob. Then it's going to take a job file template and sed it, replacing all of the macros with the appropriate links, so that LAVA can pull the binaries right out of Jenkins, load them onto a target, boot it up, and check that the target booted. I can show you that template in a second. Then there's a downstream job called Arndale mainline boot test. After the main job submits the LAVA job, the downstream job gets invoked, gets passed that job ID, and sits there polling on it. It'll exit zero if the job completes and exit one if it fails, so you can look at the Jenkins dashboard and see: hey, is Arndale booting on mainline?

So let's kick this off and have a look. And while we're doing this, are there any questions? We're just about to wrap up, and I want to make sure we get everything answered. Straightforward? That's good. Did I not click on it? Oh, did it log me out? Good old Jenkins. OK, let's actually try to build it now. There we go. Let's have a look, because I've designed it so you can watch. So we're cleaning; this is tip, this is 3.12. And yeah, we could log the GCC compiler errors if we wanted to; I set this particular job up last night, so I just didn't have the time to do it. What it will do at the end is replace the template and then cat it, so you can actually see what the template looks like after all the variables have been replaced, and then it'll say: OK, it was submitted, and here's the job ID. And then the downstream job will run.

Go ahead. So you could definitely do that, absolutely. This demo doesn't really demonstrate it. But what I want to get across to everybody here is that LAVA provides all the tools you need to get access to this data and build something around it. We didn't want to put something out there that was already finished, because everybody works differently wherever they are. So it makes sense to just say: here's everything you need, and you build whatever fits your workflow.
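That downstream job can be little more than a shell loop polling lava-tool. Here's a sketch, assuming the job ID gets passed down from the upstream build, and that the job-status output contains words like "Complete" or "Incomplete" as it did on my instance.

    #!/bin/sh
    # Poll a LAVA job until it finishes; exit 0 on Complete, 1 otherwise,
    # so Jenkins can mark the boot test green or red.
    SERVER=https://tyler@community.validation.linaro.org/RPC2/
    JOB_ID=$1    # handed down from the upstream build job

    while true; do
        STATUS=$(lava-tool job-status "$SERVER" "$JOB_ID")
        case "$STATUS" in
            # Note: test Incomplete before Complete, since "Incomplete"
            # contains "Complete" as a substring.
            *Incomplete*|*Canceled*) exit 1 ;;
            *Complete*)              exit 0 ;;
        esac
        sleep 60
    done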
So let's go back here. You can see the downstream job is now working. Just the build? Exactly. So now what's happened is we've submitted it, and I'm polling on it, so I can tell that it's running. And if we go back to my server here, and this is back in Washington state, you can actually see the Arndale's running. So let's have a look at what this looks like.

OK, let me just hit my last slide, and I'll stop. While this is running, I just wanted to go over some LAVA resources. If you're interested in this project, we have IRC on Freenode, we have a mailing list that's pretty active, and we also have a wiki. And again, if you haven't checked out Linaro and you're kind of interested, I urge you to check out linaro.org and see all the cool stuff we're doing, not just with LAVA, but across all of our groups. So thank you very much for your time. I hope it was helpful.