Hello everyone, thanks for joining my talk about testing your automotive software and hardware. Let me introduce myself quickly. My name is Jan-Simon Möller. I'm the release manager for AGL, and I'm also leading the Continuous Integration and Automated Testing (CIAT) Expert Group. You can easily reach me by email or on IRC on the automotive channel. The CIAT Expert Group is a group of people interested in continuous integration and automated testing. You don't need to be an expert to join us. We meet every other Tuesday, and you can find the dates and times on the calendar of the Automotive Grade Linux project. You will also find the meeting information and meeting minutes in the wiki. I invite you to join the conversation going forward there.

The goals are to ensure the stability of the AGL Unified Code Base and to provide fast feedback to the developers for each code change, for the platform libraries, for the applications, for each supported hardware platform and for each supported image variant. Going forward, that means multiple boards and multiple images, so the matrix will grow.

Let me introduce AGL's test infrastructure and explain what components we use. We build for multiple target machines and multiple images. The builds are orchestrated by our Jenkins server, which is the scheduler that executes the builds. The builds themselves are done with BitBake, and we host the artifacts on a simple web server. As targets, we have QEMU for x86-64, ARM 32-bit and ARM 64-bit. We have Renesas R-Car Gen3 in the form of an H3 plus a Kingfisher board. We have the AGL reference hardware, which is derived from R-Car Gen3. We have builds for the Raspberry Pi 4, builds for the SanCloud BeagleBone Enhanced, and we also build for the UP Squared, which meanwhile is the same as QEMU x86-64 build-wise.

Once the build is done, we submit a job, actually one job per board, to our LAVA master. The LAVA master then executes those jobs on the requested device under test, and we send feedback to our Gerrit server. Each code submission thus gets a report from a series of tests on various hardware platforms.

To do this, as I said, we use LAVA. LAVA is a project started by Linaro; it is the Linaro Automated Validation Architecture. Essentially, it's a scheduler that runs tasks on real hardware or on virtual QEMU-based hardware. It is a board farm and board lab management tool. It is distributed: we have a central scheduler, and we have multiple remote workers with multiple devices under test connected to each worker. Essentially, what it does for you is take the board maintenance away from your developers and your actual testers. In the end, you basically have a rack of boards in a lab that are there to run tests, and the developer or tester can simply submit tests for execution there. We use network boot in our setup, so we do not require SD cards to be exchanged every time; this speeds up the cycle time a lot.

Here's what the architecture looks like. We have a master for AGL, this is lava.automotivelinux.org, and we have multiple devices under test that are connected to workers. Here's an example of how such labs could look: one option with multiple devices under test in a box is shown here, or you can have stacks of boards in a rack and maintain them in that form.

The input for LAVA are so-called test definitions. A test definition is one test case, and we can then combine them into test suites.
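Before going deeper into the test definitions themselves, here is a rough idea of what a job submission to the LAVA master looks like. In our setup the releng scripts do the templating and submission; the minimal sketch below talks to LAVA's XML-RPC API directly, and the account, token and job file name are only placeholders.

# Minimal sketch: submit one pre-rendered LAVA job definition over XML-RPC.
# The account, token and file name are placeholders; in practice the AGL
# releng scripts render the YAML from templates and submit it for us.
import xmlrpc.client

LAVA_USER = "ci-bot"                    # placeholder account
LAVA_TOKEN = "secret-api-token"         # placeholder API token
LAVA_HOST = "lava.automotivelinux.org"  # the AGL LAVA master

server = xmlrpc.client.ServerProxy(
    f"https://{LAVA_USER}:{LAVA_TOKEN}@{LAVA_HOST}/RPC2")

# The job definition is the YAML "cover letter": which device type to boot,
# which image to deploy and which test definitions to run on it.
with open("h3ulcb-agl-demo-platform.yaml") as f:
    job_yaml = f.read()

job_id = server.scheduler.submit_job(job_yaml)
print("Submitted LAVA job", job_id)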
In short, a test definition is a YAML file that describes the test, the cover letter so to speak, plus a test script or executable that is then executed on the target. There are multiple sources out in the open that we reuse, and some that we added ourselves.

The glue between LAVA and Jenkins in our case are the so-called releng scripts. This is a simple Python tool that we use for templating and for submitting jobs to the LAVA server. The templates describe what is booted, on which board, which tests are run and so on. This is essentially the glue between Jenkins, the test definitions and LAVA.

What do we reach with this setup? We can test all system software like systemd. We can test all libraries like libxml2. Essentially everything where we have a test suite that can be executed. A lot is already available by reusing the Yocto Project's ptest framework, and the ptests can output their results in a LAVA-readable format. So we have a lot of system software and core libraries already covered; we just need to execute those tests. A word of caution: take a deeper look into those tests and make sure they are relevant to you. Don't just enable them, you need to know what's going on.

Now the limitations, what we cannot do yet. There are cases where this setup cannot execute certain actions yet. One is firmware updates or early boot interactions. They tend to require early keyboard input and interaction before the network is up, so we cannot SSH in, and we cannot drive this through the serial terminal alone. File uploads are a topic there as well. Some of these procedures even mean we have to toggle DIP switches, which is hard to automate; essentially you would have to desolder the DIP switch and drive it a different way. Also, HDMI recording is a topic, and in the end, to make things fully transparent to the tester and developer, you want full control exposed over the network for debugging purposes. Essentially, once things fail, you want to be able to preserve the state and expose it to a developer.

In the next chapter, I want to highlight a few problems and solutions that we use or plan to use in our setup and give you a few ideas for your own lab.

One common problem is juggling SD cards. This is a pretty big time sink for each developer, so it's not just a problem in the lab, it's a problem for everyone. To the rescue, there are so-called SD card multiplexers. What do they do? They can basically disconnect the SD card from one system and attach it to another in a rather safe manner. There are special circuits for this so the SD card is not damaged. Just think of it: a write could be going on on the card right now, and this might not even be visible from the outside. So you have to use special circuits, with some timings and timeouts, for this to work safely. To our rescue, there are meanwhile quite a few available, even commercially. One is shown here: the USB-SD-Mux. These are really helpful even on the developer desk, because you don't have to juggle SD cards. You just execute one command and you can write the SD card; you execute a second command and there you go, no juggling required. Essentially, the developer would just need an extra way to power-cycle the board. This becomes essential once we start testing bootloaders and need to rewrite them on the SD card a lot, or if we want to test full image boots without the network boot that, as I said, we use in our lab right now.
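To give you an idea of how little is involved, here is a minimal sketch of that flow, assuming the usbsdmux command line tool is available; the control device, the SD card block device and the power-cycle helper are placeholders for whatever your own lab uses.

# Rough sketch of the "no SD card juggling" flow with a USB-SD-Mux.
# Assumptions: the usbsdmux command line tool is installed, /dev/sg1 is the
# mux's control device, /dev/sdX is where the card appears on the host, and
# pdu-power-cycle is a hypothetical helper for your lab's power switching.
import subprocess

MUX = "/dev/sg1"
SD_DEV = "/dev/sdX"
IMAGE = "agl-demo-platform.wic"

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("usbsdmux", MUX, "host")                                      # attach card to the host
run("dd", f"if={IMAGE}", f"of={SD_DEV}", "bs=4M", "conv=fsync")   # write the image
run("usbsdmux", MUX, "dut")                                       # hand card back to the DUT
run("pdu-power-cycle", "board-42")                                # hypothetical power cycle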
Also, one problem that you will face at some point is that you need keyboard and mouse emulation for things like early interaction with firmware and BIOS, firmware updates, or situations where there is no serial terminal yet and no network. This happens rather early, or whenever you cannot command the board by other means because the network is not up or the serial is not exposed.

There are a few options. One is USB OTG (On-The-Go). USB OTG means that a USB port can act as host and client. We can emulate a keyboard this way, no problem, but USB OTG ports are rather rare. So this is hard to come by, and the few devices that have them usually have just one such port. So this is not something you can use in a lab with a lot of boards. Bluetooth? Yes, but think of the pairing: what happens if you power-cycle the boards? Problematic, and rather complex to automate, especially in a lab setup with a lot of devices.

So one idea is to use an external device and keep it simple. We need a rather simple device that can emulate a keyboard. What's the simplest interface we can easily drive? Well, serial, right? So the idea was born to use two ATtiny85 chips. One emulates a USB keyboard; the other essentially acts as a serial device proxy. These tiny little boards with the built-in USB plug on the PCB are quite easy to get, quite easy to program, and we just need three pins, two plus ground, to make this happen. I show this in the next picture. Essentially, we connect pin 1 of board A to pin 2 of board B, and vice versa pin 2 to pin 1. This is essentially the serial bridge, and we have a ground pin as well. Of course, this is not opto-isolated and whatnot; yes, patches and extensions are accepted.

Now, side one essentially works as a serial terminal over USB, and the host can send characters. We read from the USB serial and write out over the pins. If you look at the code on the right side: DigiCDC is the USB serial, and SoftSerial is the serial port on the pins. So this is rather simple: whatever we read here is sent out, and echoed back as well. Done. On the keyboard side, there is also an Arduino sketch available for a keyboard; we just need to feed it. Here we read over serial and write out through the keyboard emulation. Essentially, whenever a character is available on the serial pins, we read it and send it out over the keyboard. It's just a few lines of code in the end and two tiny boards for a dollar or two each. And we can now emulate an HID keyboard, which we can easily command over serial.

The same we can do in the end for a mouse, because this HID emulation can also do mouse movements. This is now the simplest case with a fixed offset for each move, but it shows that you can emulate it. You could even enhance this and tell it which values it should jump to and whatnot. This is all possible once you change a few lines of code here.

One problem you will face is that you want to know, in the simplest case, that the graphical system is up. In a lot of cases this is your first barrier: you want to know, is my graphical system coming up, or is anything in the way, from libraries to display drivers to whatnot. So what we did for AGL: we added a snapshot mechanism, a screenshot mechanism, to the AGL compositor, and we have a known-good still image that you can enable by writing homescreen-demo-ci into /etc/default/homescreen. Then you restart the weston service and you will get a still image.
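To make the keyboard proxy concrete from the host side: once the two sketches are flashed, driving the device under test is just writing characters to a serial port. The sketch below uses pyserial; the device path, baud rate and the bootloader example commands are assumptions about your particular setup.

# Host-side sketch for commanding the HID keyboard proxy over serial.
# The two ATtiny85 sketches do the forwarding; the host only needs to write
# characters to the USB serial side. /dev/ttyACM0 and 9600 baud are assumed.
import time
import serial  # pyserial

proxy = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

def type_on_dut(text, delay=0.05):
    # Send character by character so the keyboard side can keep up.
    for ch in text:
        proxy.write(ch.encode("ascii"))
        time.sleep(delay)

# Example: stop a bootloader autoboot and inspect its environment.
type_on_dut("\n")
type_on_dut("printenv\n")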
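And once the compositor shows that known-good still image, one way to verify it without touching the target at all is to grab a frame from an HDMI capture device, which, as I will show in a moment, enumerates as a plain USB camera. A naive sketch with OpenCV; the device index, file names and pass/fail threshold are only placeholders.

# Sketch: grab one frame from an HDMI grabber (it shows up as a USB camera)
# and compare it against the known-good still image. Device index, file
# names and the pass/fail threshold are assumptions for illustration.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)      # the HDMI grabber, e.g. /dev/video0
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("could not read a frame from the capture device")

cv2.imwrite("dut-screen.png", frame)

reference = cv2.imread("known-good-homescreen.png")
reference = cv2.resize(reference, (frame.shape[1], frame.shape[0]))

# Very naive check: mean absolute pixel difference against the reference.
diff = float(np.mean(np.abs(frame.astype(int) - reference.astype(int))))
print("PASS" if diff < 10 else "FAIL", "(mean diff %.1f)" % diff)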
Back to the screenshot mechanism: the full procedure is in our screenshot test. Essentially we write this file, we restart the service, we wait for it to be fully up, and then we take the screenshot. This works quite well, and we know that the graphics come up, which is good as a quick first-pass test.

Going further, you might want to record the output of the HDMI port. Thankfully, there are meanwhile a lot of devices available for recording HDMI streams; just google for gaming capture devices and you will find a lot. These tend to show up as a USB camera, so you can just use any video grabber or camera application to record the video feed. It's rather nifty. But watch out for the simpler USB 2 devices. They tend to be smaller as well, but they have, first, no audio, and second, they use a lot of CPU for the actual USB transfer. So you might be able to connect one, two or three of those to your worker, but then it will max out the worker. The USB 3 devices are a little more expensive, but they use less CPU on the host and they tend to have audio as well.

Now let's put the pieces together again. With the extensions and the tips and tricks I just showed you, we can plan a few next steps. We now have the building blocks to remotely control keyboard and mouse. This you can rather easily expose with serial proxies like ser2net or others. We can capture the HDMI output and expose it as a camera. That is rather easy to export with ffmpeg on the command line; look up the command lines, they are rather long and require some tinkering, but it tends to work. Or, if you just want to keep it simple, you can expose it with web-based webcam front ends. That will just work, as the feed is essentially shown as a webcam, so you can use a lot of existing webcam software. With these two, you actually have full remote control over your board in the end.

What that means: we now have a few new capabilities. We can work on procedures for firmware updates, as long as no DIP switches are in the way; if the procedure just requires very low-level commands, we can do it. We can expose the video feed for debugging purposes. We can expose a low-level serial terminal for debugging purposes. We can send characters via the emulated HID keyboard and movements via the emulated HID mouse. And we can record the real video feed.

I also want to add some references to other work that's going on. There is an integration between LAVA and openQA that was presented during the last Open Source Summit, and I want to mention it here. This is a different type of integration, much more high level than what I presented with the tiny keyboard emulators. That integration requires the network to be up, requires the serial to be up, requires SSH, and requires VNC for the video. It's a quite powerful integration once everything is set up.

I hope this was useful for you and provided some interesting ideas for your automation. I will be available now for the Q&A session. I hope you have a nice conference and enjoy the talks, previous and upcoming. Thank you.