Okay, let's start. Thanks for joining. We swapped topics, the scheduled presenter couldn't make it, so I'm here to talk about how we build and test Automotive Grade Linux. My name is Jan-Simon Möller; I do the release management work for AGL and take care of the infrastructure side. So, oh, I should turn this on.

What's AGL? In principle, it's a Linux distribution for automotive. We use the Yocto Project and OpenEmbedded, and we create a platform that can be used for multiple device profiles: IVI right now, telematics in the queue. So, a platform for Linux in automotive. Our goal is open source and code first. And we support multiple architectures: for x86, the MinnowBoard is a reference platform; for ARM32, the Renesas Porter; and for ARM64, the DragonBoard is a nice candidate. I've also seen Matt Porter post on Google+ about the Raspberry Pi 3 with 64 bits, so that might be an interesting target here too.

So: why we do what we do, what tools we use, how we combine them, and what we want to achieve with, well, call it a stack. Our contributors, our developers, are distributed around the globe. We have participants from Japan, so Asia; we have contributors in Europe; and in the US. Basically, all the time zones are involved. We have public code review, as many projects do. Now, what we found out in the code review process: I can stare at the code, right, and say, yeah, it looks perfectly good. I do that pretty much every day. And say, yeah, it works, and it builds fine. But it breaks on board N. And we need to find that early. As you have seen during the conference, that must actually be quite a common problem; there were a lot of good talks about it during ELCE. There are multiple ways to skin the cat here, which is good, because we all have different use cases and, well, special needs. So in the next slides I will describe what we have set up, which we think does the trick for our requirements, for what we want to achieve. If you have ideas, or say, wait a minute, that would be better like this: please, patches welcome.

So, what tools do we use? Pretty much a standard set. Gerrit, yes, we picked Gerrit. Sorry, Greg. For CI builds we use Jenkins; there are other systems out there, for example GoCD, so pick your weapon. For the tests on the hardware we actually use two frameworks right now, and I will tell you which one we use for which part of the game: one is AGL JTA, which is basically Fuego, and the other is LAVA. And the big work-in-progress item is how we post-process all the data we gather; in most cases that's still a question mark, or being worked on.

So, Gerrit. You can quickly take a look at gerrit.automotivelinux.org, over here, number two. We have our main project all within /AGL. So, hopefully the Wi-Fi works. Yep. Everything that's prefixed with AGL, that's our main project. Let me switch back. We have a few repos where we are the upstream, that's in /src, and we have some scratch space where we want to try stuff out, Poky patches and such, that's in /staging. For the main part, we use repo to pull down the repositories; a small sketch of that follows below.

Now, if we want to support all of these boards as we want to: we have the reference platforms, which are right now the blue ones that you can see, ARM and x86 hardware plus an emulator target, which is good to have for quick boot tests, or for tests we can run on an emulator in the automation.
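To make the repo part concrete, here is a minimal sketch of pulling the tree and setting up a build. The manifest URL follows the AGL Gerrit layout, but the branch, machine name, and feature template are assumptions; check the AGL wiki for the current values.

```
# Fetch the AGL layers with Google's repo tool (URL and branch assumed).
mkdir agl && cd agl
repo init -u https://gerrit.automotivelinux.org/gerrit/AGL/AGL-repo -b master
repo sync

# Set up a Yocto/OE build for one machine and build the demo image
# (script and image names follow meta-agl; adjust to your release).
source meta-agl/scripts/aglsetup.sh -m porter agl-demo
bitbake agl-demo-platform
```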
And the green ones are our community platforms. You see that the matrix starts to grow, and if we want to see that it works on each board, then you basically have to test all of them. That's nothing I can do on my desk, because number of boards, times number of tests, times cups of coffee: it would maybe work for a day, but not on the second day, because I didn't sleep all night.

The CI builds happen in a Jenkins, and we use standard plugins here. The Gerrit Trigger plugin is used to watch our Git repos, and we use the OpenStack Cloud plugin, since we build in a cloud, for spinning up the build minions. All of the minions run off an identical base image, so we get reproducible build environments, always the same build environment. And to keep track of our CI jobs, we use Jenkins Job Builder; I'll show a small sketch of such a job definition below. Again, there are multiple ways to skin the cat here: you could also use, for example, the Jenkins Job DSL, which is what Alexander described in his talk a couple of hours ago. So there is no single way to set this up.

Jenkins then gives the Verified flag back to Gerrit, either Verified +1 or Verified -1, and in our setup no change goes in if we don't have the Verified vote. Basically, you can extend that chain of criteria for the CI build. Initially it can be quite simple: it builds. And it builds up to: it builds, boots, runs, passes all tests, updates cleanly, and shuts down properly. That last one is an interesting test which quite often fails, as I've seen on a couple of boards. So pick your criteria. At the moment we are at the stage where it builds, boots, and to some extent passes tests, so we are basically in the middle of that chain.

Well, tests on hardware: are they hard? You need the hardware. For our community boards, that's relatively easy; you can get a Raspberry Pi for a couple of bucks, or one of the bigger boards. For the hardware in automotive, it might be a little harder. Usually you need it on your desk and in the lab. You need to have it close to you and be able to juggle the SD cards, which is quite time-consuming. You need to deploy the firmware, reboot the board, run the tests, collect the results, and, well, interpret the results, right? And then rinse and repeat, for the next board or for the next change. That is quite tedious and time-consuming. So how can we automate that?

Here you see an example setup, with six BeagleBones actually. They can all be power-switched, and they get their file systems over the network. That's how we can automate this in the end. And as we have seen today and yesterday (I just picked a couple of pictures from talks at this conference), there is a pattern: we see more and more of these automated labs. Which is good, because that drives the process and helps us ensure the quality of our code. If you are interested in such a lab setup, I'm in the process of documenting the setup that I use, and we also have documentation for JTA and Fuego; just watch our wiki page, the link is at the end. Speaking of links: the slides are not online yet, they were finished ten minutes ago because this was really on short notice, but they will be on the website soon.

So what tests and frameworks do we use? We use AGL JTA, which is in principle a modified Fuego, and we modified it to run tests on one of our reference boards, the Porter. And we are building up the whole chain. There's an instance live at jta.automotivelinux.org.
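To make the Jenkins Job Builder part concrete: a job is described in YAML and pushed into Jenkins with the jenkins-jobs tool. This is only a sketch; the job name, node label, and project pattern are made up, and the trigger keys follow the JJB documentation for the Gerrit Trigger plugin.

```
# Hypothetical JJB definition: one verify build per Gerrit patchset.
cat > agl-verify.yaml <<'EOF'
- job:
    name: agl-verify-build
    node: openstack-minion        # label of the identical base-image minions
    triggers:
      - gerrit:
          trigger-on:
            - patchset-created-event
          projects:
            - project-compare-type: 'PLAIN'
              project-pattern: 'AGL/meta-agl'
              branches:
                - branch-compare-type: 'PLAIN'
                  branch-pattern: 'master'
    builders:
      - shell: |
          # fetch the change under review and run the CI build here
          echo "build steps go here"
EOF

# Push the definition into Jenkins (credentials come from jenkins_jobs.ini).
jenkins-jobs update agl-verify.yaml
```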
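And the Verified vote itself: the Gerrit Trigger plugin normally reports the result back for you, but done by hand over Gerrit's SSH interface it looks roughly like this (user, host, and the change,patchset pair are placeholders).

```
# Vote Verified +1 on patchset 2 of change 12345; use -1 on a failed build.
ssh -p 29418 jenkins@gerrit.automotivelinux.org \
    gerrit review --verified +1 \
    -m '"Build successful: image builds and boots"' 12345,2
```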
Over here, this one. That instance is able to run a battery of tests: benchmarks, functional tests, that's all built in, and we have our test sequence encoded in there. It produces quite a lot of results, and it embeds a large number of tests. Right now we collect the results in a Git repo: for each revision, for each change set, we get a commit in the Git repo.

What's good about that: we have the large set of tests that come with JTA, and it's basically a Jenkins under the hood, so we have quite a few post-processing capabilities and can use all the plugins that exist for Jenkins. That gives us quite some power to post-process the results or do things based on the tests.

A little hard is the installation; I have a note here. It comes basically as a container, a Dockerfile, so you set up a container, but you still have to inject the board information, the toolchain, and such. So it's not completely turnkey yet. We still have to modify things, which means modifying the Jenkins job XML files, because the jobs are encoded directly in Jenkins XML config files. I'm not really good at encoding stuff in XML; that's not how my brain works. That might be just me and my blinders here. So that's where we can improve.

One assumption is that the board is close to the Jenkins: Jenkins wants SSH access to the board, and Jenkins needs to be able to power the board on and off. So right now there are two sides to the story. In-company: fine, you just put your boards next to the small machine that runs the Jenkins. But with the project hat on: we would host that somewhere, and how do we get the boards there? Logistics-wise, a problem. So that's a little problematic on the project side; for in-house use it's perfectly fine. You set it up with whatever trigger you like: JTA would either poll Gerrit itself, or it could be triggered by your CI build. The latter is what we do right now; we let our CI build trigger a test run.

Also, the boards are set up as Jenkins slaves, so at the moment I would only be able to use one Porter board, or I would have to set up a new slave called porter-1. That's possible, but then I have to clone the configuration files again. So out of the box there is no notion of a board type versus a single board of that type.

Now, we also use LAVA, and you might ask: what's the difference? Well, LAVA is very good at managing your board farm. It knows about classes of devices, and you can add multiple boards to a device type. So we can manage multiple boards, which also helps us run tests in parallel once we get multiple builds at the same time. It grabs a board from the pool, powers it up, boots it, tests it; it can deal with the boot loader and so on. It can also execute tests, but the tests do not come as part of LAVA. There's basically a test shell, and whatever you run in there is, well, a different story: there's a Git repo with test definitions which we can reuse, so we can run tests there as well. As I said, we can do multiple boards per board type. A remote lab is work in progress with the V2 rewrite: we can have kind of satellite dispatchers, so one hosted instance, the master, plus dispatchers where the boards are connected. Yeah.
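A quick sketch of the collection step I mentioned, pushing each run's results into that Git repo. The layout here is made up, and the GERRIT_* variables are the ones the Gerrit Trigger plugin exports into the job environment.

```
# Hypothetical post-test step: archive this run's results,
# one commit per change and patchset.
cd ~/test-results
DEST="change-${GERRIT_CHANGE_NUMBER}/ps-${GERRIT_PATCHSET_NUMBER}"
mkdir -p "$DEST"
cp "$WORKSPACE"/results/*.json "$DEST"/
git add "$DEST"
git commit -m "Results for change ${GERRIT_CHANGE_NUMBER}, patchset ${GERRIT_PATCHSET_NUMBER}"
git push origin master
```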
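And what such a LAVA job roughly looks like, in the V2 YAML style. Device type, URLs, and the test repository are placeholders, and the deploy and boot methods depend on how the board is wired into the lab.

```
cat > porter-boot-test.yaml <<'EOF'
device_type: porter      # LAVA picks any free board of this type from the pool
job_name: agl-boot-and-test
priority: medium
visibility: public
timeouts:
  job:
    minutes: 60

actions:
  - deploy:
      to: tftp
      kernel:
        url: http://example.org/agl/uImage
      nfsrootfs:
        url: http://example.org/agl/rootfs.tar.gz
  - boot:
      method: u-boot
      commands: nfs
      prompts: ['root@porter:']
  - test:
      definitions:
        - repository: https://git.example.org/agl/test-definitions.git
          from: git
          path: boot/smoke.yaml
          name: smoke-tests
EOF

# Submit with the V2 CLI (older setups use lava-tool submit-job instead).
lavacli jobs submit porter-boot-test.yaml
```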
[Audience] So with V2, the installation should be a little easier than with V1? [Jan-Simon] Yes, yes, that's true. I just didn't find the documentation for the remote sites, the installation procedures. Okay, then reference those, please.

So, the remote lab: that's what's especially interesting, because in the end each developer has his one or two boards, and it would be nice if we could pool them to run tests on them. What already works: you can stick a hat with relays on top of a Raspberry Pi, so you have your relays for the power switching, serial connectors, and maybe a network switch, and that is already enough to run two boards. The initial setup, if you follow the instructions, is pretty straightforward: you need a Debian system, and then you can just pull LAVA in from the packages. For the configuration files, the templates for the boards, you probably start from one of the existing ones and modify it until it matches your board. And I didn't find the doc for the satellite setup, but yes, true, I pulled down the slides, I just didn't have time to look at them yet. KernelCI, I will look at it. Yeah.

So that's what we have active right now: porter.automotivelinux.org, over here, you can go to it. That is basically the setup as described, a Raspberry Pi plus relay connected to one of the boards. The docs are here; I keep track of an example installation as I redo it now with V2, so thanks for the hint, and I will update the document at the first link as I go through it, probably Saturday. For AGL JTA and Fuego we have a document in the docs folder, a complete PDF on how to install the container and then configure it. If you have questions about setting that up, just send them to the mailing list and we will jump in. For the whole thing we also have a wiki page. A simple setup is basically less than 100 euros or bucks, plus or minus, depending on whether you count the device under test. That's the minimal setup; you can of course go bigger.

So, post-processing. As I said, right now we collect the test reports as they fall out of the tools, as they fall out of AGL JTA. Can we then post-process them, or better, create trends and graphs? Because the raw data might not be meaningful per se, unless we look at it with a pair of eyes and say: yeah, that's interesting. So how do we find the hotspots in this data set, how do we find trends? That's an interesting topic to work on. And also: what data do we want to track? One example is KernelCI. You guys do great work; you track a lot of boot reports, right? In our case we might want to track, on top of that, CAN buses and CAN messages. So: what are the key indicators that we want to track?

In the end we want to give our developers fast feedback. So in our Gerrit, besides the code review and the Verified that comes from Jenkins, we added a couple of fields which get their tick as the process advances: image build, the simplest case; a simple boot test with some small, short test runs attached; and the full test pass, which in our case is AGL JTA or Fuego, basically. The goal is that the developers can see what's going on.
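To give an idea how simple the power-switching side of such a Pi-based lab can be: a power cycle over a relay wired to a GPIO pin is a few lines of shell. The pin number and relay polarity depend entirely on your wiring; the lab framework then calls a helper like this to switch the board on and off.

```
#!/bin/sh
# Hypothetical power-cycle helper on the Raspberry Pi; relay on GPIO 17.
PIN=17
[ -d /sys/class/gpio/gpio$PIN ] || echo $PIN > /sys/class/gpio/export
echo out > /sys/class/gpio/gpio$PIN/direction
echo 1 > /sys/class/gpio/gpio$PIN/value    # open the relay: board off
sleep 5                                    # let the board power drain
echo 0 > /sys/class/gpio/gpio$PIN/value    # close the relay: board on
```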
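To sketch how those ticks can be driven end to end: the glue between Jenkins, LAVA, and Gerrit can be a few lines in the CI job. The lavacli subcommands follow the V2 CLI, but the grep-based verdict and all names here are simplifications of whatever your lab actually reports.

```
# Hypothetical Jenkins build step: submit the LAVA job, wait for it,
# then vote the result back into Gerrit.
JOB_ID=$(lavacli jobs submit porter-boot-test.yaml)
lavacli jobs wait "$JOB_ID"

# Crude pass/fail check; a real setup would parse the result records.
if lavacli results "$JOB_ID" | grep -q fail; then
    VOTE='-1'; MSG='"Tests failed on Porter"'
else
    VOTE='+1'; MSG='"Tests passed on Porter"'
fi
ssh -p 29418 jenkins@gerrit.automotivelinux.org gerrit review \
    --verified "$VOTE" -m "$MSG" "${GERRIT_CHANGE_NUMBER},${GERRIT_PATCHSET_NUMBER}"
```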
And the question is: does it pass on all devices? Because in reality you test one build, for example on QEMU, you test it for the device that you have in front of your nose, and yeah, it works. But more often than not we have seen it fail on one of the other targets, especially if you manipulate recipes around graphics, or around systemd, or whatever else we have to tweak.

So, how do we combine all of this? At the moment we have the setup over here, as I said: Gerrit; Jenkins for the builds; JTA for running the tests; and at the moment we get the board through LAVA, so Jenkins will basically push a job to LAVA, request the board, access it, run the battery of tests, and we get the messages back in Gerrit. The plan is to add those remote labs, which we should be able to do now, and we also have in mind to look into adding SPDX to the build, but that's mainly a build issue, it doesn't have that much to do with testing.

Long term, let's just brainstorm. Look at how the SDKs develop: the Yocto Project with the extensible SDK; the CROPS project, basically a toolchain in a container with a remote API; Eclipse Che, kind of an IDE in the browser. You end up with an environment where the developer sits in a web browser, doesn't have the compiler on his machine at all, writes the code, and then it's code, save, build. So the idea is that we could build the application in such a web IDE, click another button, and, for example, get a job in LAVA that runs the binary and tests it on the board.

Another idea: what about the UI? UI testing is another interesting topic, but probably a big one on its own. There are systems out there, one of them is OpenQA. In principle it runs virtual machines, takes snapshots over VNC, and you pre-define targets that need to be reached; that's basically how it works. You can imagine that if we work with this sort of snapshot, it's a lot of work to create them. Well, maybe; let's see.

So, what we want to achieve in the end: a stable and tested platform that we can build upon, on a wide range of devices. We want to give fast feedback to our developers, because we work remotely, and we are at conferences like ELCE going, oh, wait a minute, does it work? Yes? No? And all of this will help speed that up. And if we look ahead at how the SDKs develop, the testing should keep pace; otherwise everything moves ahead over here and we have to stop and put it through all the tests again.

Alright, questions? Alright, thanks for joining. If you are interested in these topics, send an email to our mailing list, automotive-discussions, and if you have ideas, please. Thank you.