So, yeah, welcome Antonio, who will talk about full-system testing on real hardware. Please. Thank you. So, as you probably know, Linaro's mission is to improve Linux and related technologies on ARM, and we need to validate the output of the engineering work done by the Linaro engineers. That's how we come to LAVA, which is a platform for system testing on real hardware. The goals of this presentation are: first, to discuss what LAVA is and how it works; then I will talk briefly about how LAVA is used at Linaro; and then I hope we can discuss how we could use LAVA to better test Debian in the cases where we need to test on actual hardware, not in virtual machines and the like.

So, about LAVA. LAVA stands for Linaro Automated Validation Architecture. It's a platform that receives job submissions and allocates those jobs to available devices. It deploys an OS image there; Linaro right now deploys Debian, Ubuntu, Fedora, OpenEmbedded and Android to test Linaro builds and to do kernel continuous integration. LAVA will then boot the device using the image that was deployed, run tests on the device, and get the test results out of the device so that you can analyze them in a dashboard. And, basically, tests can do anything. If you were in the previous presentation in the other room by Holger, where the test jobs in Jenkins are just shell scripts: in LAVA it's the same thing, so you can write any arbitrary command, put it in an SCM repository, and have it run on the device under test.

We support quite a lot of devices. We have development boards, and all of those we have in the Linaro lab in Cambridge. We also support some consumer devices, phones and tablets, and we are able to run arbitrary tests on those devices as well. We also already have support for the Calxeda Highbank server nodes, one of the first ARM servers on the market. And we support virtual and emulated devices: we have support for the ARM Fast Models, which are the proprietary emulators for non-released hardware by ARM. "Fast Models" is the most misnamed product name in history, because they are not fast at all. We also support QEMU and KVM, so we can run both ARM under QEMU and x86 under KVM, which is useful, for instance, for doing kernel CI on x86 to make sure that the ARM work doesn't break x86, because that makes the upstream kernel developers very angry. We don't want the ARM work to break x86 in any way.

So, we have some requirements for a device to be supported in LAVA. In LAVA you have the problem that you must always be able to recover a device. Even if you deploy an image with a broken kernel, a kernel that panics and doesn't boot, or a broken bootloader, you need to always be able to come back to a reasonably good state. For that, the device has to have at least one of two features. Ideally it has some form of out-of-band management, so you can control the device from outside: for instance a bootloader that supports the fastboot protocol, which is the one used by most Android devices, or IPMI, or any other form of managing the device externally, so you can switch the device to boot from another boot source, boot from the network, and so on. Or, in the worst case, you need serial access, so you have access to early boot logs; even if you can't get out-of-band management, as long as you still have serial access you can work around bad kernels and the like.
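To make the out-of-band management idea concrete, here is a minimal sketch, not from the talk, of how a host could recover a board whose bootloader speaks the fastboot protocol; the image file name is a placeholder:

```
# Recovering a board with a broken kernel via fastboot, assuming the lab
# can power-cycle the board into its bootloader (e.g. with a remotely
# controlled power switch):
fastboot devices                          # confirm the board is visible over USB
fastboot flash boot known-good-boot.img   # replace the broken boot image
fastboot reboot                           # boot back into a known-good state
```

With serial access only, the rough equivalent is to interrupt the bootloader on the serial console and point it at a different boot source.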
The requirements for OS images are very low. The image must boot to a shell, and it must not present a login prompt, because we didn't want to mess with handling login prompts. It has to have a POSIX-ish shell; it kind of works with the Android shell, but if you have a POSIX shell like Bash, it's way better. Currently we have been using images for Debian, Ubuntu, Fedora, OpenEmbedded and Android, as I said.

The workflow usually goes like this. You have a VCS where you store both the system source and the tests. Then you have a continuous integration system; Linaro uses Jenkins, just as we are starting to do in Debian. The Jenkins server does the builds, either periodically, say every 24 hours, or using triggers to build on every commit; Jenkins can do either. Then it submits a job to LAVA: it takes the artifacts produced by the build, generates a LAVA job dynamically, and submits it. LAVA will provision a device, run the tests, and provide the results back to Jenkins.

A LAVA job has a few parts. First there is a JSON job definition, where you specify which device type you want. You can say: I want a Panda board, or a Highbank node, or an ARM Fast Model, or a KVM, or anything. You specify which OS image you want deployed on that device, and then you specify which tests you want to run. The test definition is reusable across different jobs, so you can have jobs for different devices that run the same tests, or you can have one job that runs all the available tests. In the test definition you specify the shell commands that you actually want to run. Both of these can be auto-generated; in fact, we are working on a new command-line tool that will auto-generate all this for you, so you only have to care about how to use the LAVA API to report test case results.

You can report test cases either using the LAVA API, which is a set of shell commands available on the device while the test is running, or using pattern matching on the output of the shell commands. I'm going to show examples of both ways. When your test is running on the device, LAVA puts some binaries in the PATH, and then you can just do stuff like this: lava-test-case, then the highlighted part, which is the name of the test case, and after that what the test does. You can run shell commands this way; if the shell command succeeds, the test is considered passed, and if the command fails, the test is considered failed. You can also declare the result explicitly without running anything, so if you have some commands before that, you can test some condition and declare the test as failed or passed. You can also attach measurements, which you can use to store the results of benchmarks and the like, and afterwards visualize how your results evolve as you run tests.

The other way of doing it is, for instance, if you are running some upstream test suite that already exists and you don't want to change it: you can specify a regular expression to parse the output and auto-detect the test results from the output of the upstream test suite. If you do something like this, a Python regular expression, it will match lines in the output of the form "test name = result". There are three possible results for a test: pass, fail, or skip. And if the upstream test suite uses different names for pass and fail, you can specify a mapping, from "success" to "pass", for instance.
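As a concrete sketch of the API style, assuming the lava-test-case helper described above (the test names and commands here are made up):

```
# Style 1: wrap a shell command; pass/fail follows its exit status.
lava-test-case ping-gateway --shell ping -c 4 192.168.1.1

# Style 2: declare the result explicitly after testing a condition yourself.
if [ -e /dev/ttyUSB0 ]; then
    lava-test-case usb-serial-present --result pass
else
    lava-test-case usb-serial-present --result fail
fi

# Attach a measurement, e.g. a benchmark figure, for later visualization.
lava-test-case boot-time --result pass --measurement 12.3 --units seconds
```

A test definition sketch showing the pattern-matching style follows the demo below.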
So, I have a quick demo here; I hope it works and doesn't break anything. Here is a JSON LAVA job definition. Here you have which device type you want to submit to, and here you specify a list of actions, so you can design any kind of test you need. In this case, it's the simplest possible test. You just deploy an image: you say where the image is on the web server. The next action is the lava_test_shell action, which is the action that enables writing those arbitrary tests in shell script. Here you say where you want to pull the tests from; in this case, it's a Bazaar repository. Then you specify where in the repository the test definition is: there is the full value for that, but this is the path in that repository where the test definition lives.

Then there's the test definition itself, which is a YAML file. It has some metadata at the top. You can specify dependencies, and those are handled by the underlying implementation depending on the distribution in the OS image: it will auto-detect whether it's Fedora or Debian and then do the right thing. So you can specify either generic dependency names or distro-specific names here, and it will install them the way that distribution is supposed to install packages. Then, in the run steps, you just put the shell commands you want. You can either put all the shell commands in the YAML file, or just put a single command that calls an external script in the same repository, and that's going to be run. That script also has access to the API calls here, so you can make those calls from the external script as well. And you can also specify a matching pattern to extract results from the output. So you can reuse existing test suites, and in fact we now do that a lot. There are upstream test suites for performance tests and functional tests; there is the Linux Test Project, which has several test suites; you can just reuse those tests and write a matching pattern to extract the test results.

Then we can submit a job here. So, I take this JSON file. Here is the web interface for LAVA. You can see we have quite a few devices in the Linaro lab; right now some eight devices are running tests. Then you can submit jobs. When you submit a job, you get a job ID, and then you can follow the results; you can even watch them live. Here, the test is already running. First it downloads the image to deploy to the device; in a few seconds you will be able to see it. Then here you can watch the console output live. It's already booting. If you scroll down, you can always watch what's happening. Now it's installing the dependencies of the test. And the test finishes successfully. Of course, I'm making an example here using KVM, which is way faster than everything else; with a real device it takes a little longer, because you have to actually write to the SD card or whatever other type of storage the device has.
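For reference, here is a sketch of what the two files from the demo could look like, in the JSON and YAML formats LAVA used at the time; all URLs, repository names and test names are placeholders, not the actual demo files. First the job definition, with the deploy action and the lava_test_shell action pointing at a Bazaar repository:

```
{
  "job_name": "kvm-smoke-test",
  "device_type": "kvm",
  "timeout": 1800,
  "actions": [
    {
      "command": "deploy_linaro_image",
      "parameters": {
        "image": "http://example.org/images/debian-kvm.img.gz"
      }
    },
    {
      "command": "lava_test_shell",
      "parameters": {
        "testdef_repos": [
          {
            "bzr-repo": "lp:~example/+junk/smoke-tests",
            "testdef": "smoke-tests.yaml"
          }
        ]
      }
    }
  ]
}
```

And a matching test definition, showing the metadata at the top, the dependencies, the run steps, and the optional parse pattern with a result-name mapping:

```
metadata:
  name: smoke-tests
  format: "Lava-Test Test Definition 1.0"
  description: "Minimal smoke tests for the demo"

install:
  deps:
    - curl                      # generic name, resolved per distribution

run:
  steps:
    - lava-test-case uname --shell uname -a
    - ./run-upstream-suite.sh   # external script from the same repository

parse:
  pattern: "^(?P<test_case_id>[\\w-]+)\\s*=\\s*(?P<result>\\w+)$"
  fixupdict:
    success: pass
    failure: fail
```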
Okay, I'll talk a little about the architecture. You have three main components. The first is the web UI that you saw, where you have a dashboard to visualize the test results, and where you can submit tests and manage the devices, such as putting devices offline for maintenance. For instance, if you need to take a device out, you can mark it offline, and then no jobs will be scheduled on it. Then you have a relational database where the data is stored. And you have a scheduler daemon, which is the component that keeps polling for jobs and chooses which device will run each incoming job. That scheduler daemon will spawn a dispatcher process, which is the backend implementation for talking to the device. In the dispatcher you have specific backends for each type of device, because depending on the device's capabilities you need to talk to it in a different way: if the device has out-of-band management, that backend has to talk to that mechanism; if it has a serial console or any other type of communication, it's going to use that. Also, the fact that the dispatcher is separate means that you can run it directly without having the entire architecture. You can have just the dispatcher, connect the device to the serial port, and use that for development or for early testing of a test suite.

Now let me talk a little about LAVA at Linaro. We have some use cases there; not only these, but these are the main ones. We do kernel CI for both ARM and x86. We do hardware enablement testing for member hardware: some Linaro members have their own LAVA installations with hardware that's confidential, some others put their hardware in the Linaro lab, and we run enablement testing for that hardware, things like Wi-Fi testing, power management testing, scheduler testing, and all those kinds of tests. We also test the Linaro engineering OS builds: Linaro produces Android, Fedora, OpenEmbedded and Ubuntu builds, and those are tested under LAVA. We are also starting to use it for bootloaders. For the new classes of bootloaders we have tests for UEFI, for instance; we are working on integrating the official UEFI test suite for certifying the UEFI builds produced by Linaro. And, in general, testing software produced for ARM. As I said, Linaro's focus is on ARM, but there's nothing stopping people from using LAVA for any other kind of device.

Now, here's the point where we can discuss how we could use this to improve Debian. First, on the Debian packaging progress: we had some initial packaging done by the Collabora guys, they sent us the packages, and we are working on them. Pretty soon we should have those packages ready, so you can just apt-get install LAVA and you are done; you just have to configure your devices in LAVA. And then, here's where we discuss how we could use this for Debian. I don't think we should use it for everything; a lot of test cases we can already do with Jenkins and the stuff that Holger is working on. So I wanted to discuss what we could do that we could only do with LAVA, and then integrate that somehow into Jenkins. And that's it, this is my last slide. I would appreciate it if you have some ideas on that.

Anybody want the microphone? It doesn't seem to be the case. So, at least, let's thank the speaker for the presentation. Maybe a discussion will come up when we get out and sit in the sun. So, great. Thank you.