Hello everyone and welcome to this presentation. Today we will be talking about continuous testing in a cloud-based infrastructure using virtualization and real hardware in the loop. Our agenda will be the following. I will start with a first introduction about QA and our QA goals. Then we will see the concept of RTM and how RTMs are already implemented inside the community stack. After that, we will talk about the real RTM implementation and all the challenges behind it. Finally, we will talk about test reporting, and we will finish with our roadmap and a short conclusion.

Let's start with a short introduction about myself. I'm Armand Benito, a software and QA engineer at IoT.bzh. IoT.bzh is a company located in Brittany, in France. We are near the sea, we are 30 engineers, and our main product is redpesk, which is composed of a software factory and an operating system.

So, why is a QA system needed? There are several points; I will just give three of them here. First, code complexity in systems is increasing, and it is increasing really fast. For example, in 2004 the Boeing 787 was made of 14 million lines of code, while in 2012 an average high-end car was made of 100 million lines of code. The second point is about long-term support. Long-term support is really mandatory for today's industrial systems. Indeed, the average age of a European car is 12 years, and it is approximately the same in the US: industrial systems, and more specifically cars, are made to last. Therefore, the code inside these kinds of objects needs to be made to last as well. And the last point, but not the least, is cybersecurity. It is not an option anymore, really. To give you a little example, in the first six months of 2021, more than 1.5 billion attacks were seen on IoT devices.
So, because of that, an automatic QA infrastructure is needed, and it is a must-have because it systematically tests, for every commit, the possible failure configurations, and thus reduces the risk of bugs hidden in the complexity of the system. It allows maintaining code for 10 to 15 years: this is basically our goal at IoT.bzh, checking automatically that no code deviation is injected through code maintenance. And finally, it enforces security checks early in the development cycle. Because of all of that, it keeps costs and time under control.

Let's start with an overview of our QA system. A QA system never comes alone, since we are not doing this for entertainment but in order to enhance the quality of the resulting code. In our case, the QA is done on binary packages coming out of our build system. Once these packages have passed the QA, they can keep progressing in the factory process.

And here is the global QA workflow. At the beginning of the chain, we have the developers and the managers. The developers commit changes on the Git server, for example. Once these commits are done, the source server initiates a build on the build system. If this build passes, it can move on to the unit tests. If you know spec files well: in the redpesk factory, the unit tests actually correspond to the %check section of the spec file. Once these unit tests have passed, we can move on to the virtual integration tests, which are run inside a virtual target, of course. Once these virtual integration tests have passed, we can move on to the real integration tests on the real target. What we can say about this diagram is that the tests should be written and run early in the CI/CT infrastructure; it is therefore easier to run them on a virtual target first. And the tests should be run, of course, every time a developer pushes a commit into the system.
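The gating workflow just described (build, then unit tests, then virtual integration tests, then real integration tests, with a notification on the first failure) can be sketched as follows. This is a minimal illustration with hypothetical names, not the actual redpesk implementation:

```python
# Minimal sketch of a gating CI pipeline: each commit walks the stages in
# order; the first failing stage stops the pipeline and notifies the
# developers and managers. Names and stages are illustrative only.
from typing import Callable, List, Tuple

Stage = Tuple[str, Callable[[], bool]]  # (stage name, run function returning True on success)

def run_pipeline(stages: List[Stage], notify: Callable[[str], None]) -> bool:
    """Run stages in order; notify and stop on the first failure."""
    for name, run in stages:
        if not run():
            notify(f"stage '{name}' failed")
            return False
    return True

if __name__ == "__main__":
    messages: List[str] = []
    stages: List[Stage] = [
        ("build", lambda: True),
        ("unit tests (%check)", lambda: True),
        ("virtual integration tests", lambda: False),  # simulate a failure here
        ("real integration tests", lambda: True),      # never reached
    ]
    ok = run_pipeline(stages, messages.append)
    print(ok, messages)
```

The point of the early virtual stage is visible here: a failure in the virtual integration tests prevents the (scarcer, slower) real hardware from ever being scheduled.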
And, of course, real hardware needs to be integrated to avoid deviations between virtualization and reality. As soon as a build or a test fails, the developers and the managers are notified, in order to shorten the time between the bug being introduced and its fix. And, of course, the main output of the QA needs to allow anybody to understand in the blink of an eye what is passing the QA and what is not. As in the case here, we have a little dashboard with statistics, the latest builds, the latest test results, etc.

So, a QA system needs to contain a continuous test system. At IoT.bzh, at the heart of our continuous testing system, we have the Rackable Test Modules, the RTMs. They are self-sufficient to run integration tests within the redpesk infrastructure. They are intended to meet user requirements regarding qualifications, certifications, and continuous integration. They can be dynamically started by the infrastructure or by the developer to run integration tests, and they are the heart of the continuous integration and continuous testing inside the redpesk factory.

In this infrastructure, we have both types of RTM, virtual and real. Virtual RTMs are completely isolated: basically a QEMU inside an LXC container with really strict iptables rules. On the left side of the slide, you can see that packages and images created in the CI infrastructure can be accessed by the RTM in order to test them. And on the far left side of the slide, the developer has direct access to the target through a VPN connection between their PC and the CI infrastructure.

A first version of these RTMs is already available in the community stack, so basically it is already available to anybody, for free. Let's see how they work and what is available. In order to make it work, developers need to integrate a few things inside their packaging, and these things correspond to the redtest definition.
One of these things is to have a redtest sub-package of the main package. Inside this package, you need to have a run-redtest script; this script will be run by the CI infrastructure to run the tests. And these tests need to output a Test Anything Protocol (TAP) file. This file will be parsed by the infrastructure to get the results of the tests. The Test Anything Protocol is a rather simple protocol.

In practice, this gives what you see on the screen. From the sources and the spec file, you build the package. From that, you get at least one main RPM, here called the helloworld-binding RPM, and a second one whose name ends with redtest.rpm. The continuous testing infrastructure takes both of these packages, installs them on a device under test, and runs the tests. From that, it outputs the console logs, the TAP files, and a ZIP containing all of that.

And as we can see in this diagram, virtual RTMs are available in our community stack. They allow us to run application tests through the web UI or through the command line interface, rp-cli. But for the moment, only the virtual ones are in our community stack.

And now it is time to see what it looks like to run an integration test on our community stack, so let's watch this. Let's go on the community stack. Okay, so this is the main dashboard, and we are looking for an application test to run. So we go for the helloworld binding, the famous one, and we choose a build to test. Let's start the tests. So the tests are running; they are now pending for an available virtual RTM, which is being started in the background, in the infrastructure. The tests are being deployed: the QEMU is starting and we are installing the packages. Yeah, so the helloworld binding is installed, the redtest package is installed, and the tests are passing. This is the end, and we have the resulting TAP files, concatenated because there were several ones.
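The Test Anything Protocol mentioned here is line-oriented: a plan line like `1..N`, then one `ok N - description` or `not ok N - description` line per test. A minimal sketch of the kind of parsing the infrastructure can do to turn a TAP stream into a pass/fail synthesis (simplified: real TAP also has directives such as `# SKIP` and `# TODO`):

```python
# Minimal TAP summariser: counts "ok" / "not ok" lines in a TAP stream.
# Simplified for illustration; not the actual redpesk parser.
import re

def tap_summary(tap_text: str) -> dict:
    """Count passed and failed tests in a TAP stream."""
    passed = failed = 0
    for line in tap_text.splitlines():
        if re.match(r"^not ok\b", line):   # check "not ok" before "ok"
            failed += 1
        elif re.match(r"^ok\b", line):
            passed += 1
    return {"passed": passed, "failed": failed, "total": passed + failed}

sample = """1..3
ok 1 - binding starts
ok 2 - API answers ping
not ok 3 - verb returns expected payload
"""
print(tap_summary(sample))  # {'passed': 2, 'failed': 1, 'total': 3}
```

This is exactly the kind of synthesis (tests passed, tests failed) that shows up on the left side of the results page in the demo.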
We have a little synthesis on the left side where you can find the results: the number of tests passed and failed. You can download the result files: the Test Anything Protocol results, the console logs (stdout and stderr), and all that. And you can as well directly download the Test Anything Protocol results alone.

So we have seen the RTM overview and the virtual RTMs, and it is now really time to talk about the real RTM implementation. We will see that it brings up a few challenges. One of the challenges is sharing boards between users. Indeed, most of the time in a project you do not have one board per user, because of prices and because of board availability. So the real RTM system really needs to manage user access. Another challenge is how you supply power to the boards. Indeed, you need to be able to stop and start the board correctly before and after the tests. Another challenge is to manage the redpesk OS image loading. Loading a kernel image is one thing, really: the kernel is around 10 megabytes and is easy to load in RAM. But it is not the same when it comes to a full distribution image bigger than 2 gigabytes, with several partitions. And finally, last but not least, you need to manage the board's boot: you need to manage GRUB, U-Boot, the prompt, etc.

We will now focus our attention on how these challenges are addressed. In our case, LAVA is really the missing link for us to integrate the real RTM system into our redpesk QA system. Indeed, it solves a few problems for us. LAVA stands for Linaro Automated Validation Architecture. It is a continuous integration system for deploying operating systems onto physical and virtual hardware and running tests. It is used a lot in kernel validation, for example in KernelCI. It is fully open source and is really interesting for us, especially because it already has existing board definitions and it manages U-Boot, GRUB and fastboot out of the box.
But of course, some work is needed to integrate LAVA as our real RTM system. All the parts where work is needed are illustrated by a working penguin. The RTM front-end client is a new asynchronous microservice that we integrate in our real RTM system. It uses XML-RPC and ZeroMQ to communicate with the LAVA master, and on the other side it allows a homogeneous communication with the backend using a REST API and a WebSocket. Some refactoring is needed as well on our backend, as you can see at the bottom of the slide, to be able to start a real test live. And finally, on the top part of the slide, you have little penguins to say that we need to work to integrate the redpesk boards we want to support in our QA environment. This is what we will be talking about in the next slide: how we integrate these boards inside our QA system.

So, when a board is integrated inside the real RTM, it is linked physically to one dispatcher, one LAVA dispatcher. This configuration can be seen as a bit overkill, as we said, but it allows us to run more complex tests needing, for example, to grab an HDMI signal or an audio signal, things like that. So we really need to have a dispatcher as close as possible to the DUT. In this part of the architecture, we can see as well that the LAVA master completely controls the LAVA dispatchers through the ZeroMQ protocol. Therefore, all the job submissions, the test results and the test logs are fully managed by LAVA, which suits us really well, actually. The communication between the LAVA master and the LAVA dispatcher is not our business, and this is great for us, really.

So we have seen how the LAVA integration answers several of our challenges. Let's see now how the other challenges can be resolved. The first one of these challenges is how to control the power supply. We have two solutions here.
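To make the XML-RPC side of this concrete, here is a minimal sketch of how a client such as the microservice above could talk to a LAVA master. The server URL and job definition are placeholders, and the exact method set depends on the LAVA version (`scheduler.submit_job` and `scheduler.job_status` exist in classic LAVA deployments); treat this as an assumption-laden illustration, not the actual IoT.bzh code:

```python
# Sketch of submitting a job to a LAVA master over XML-RPC.
# The server object is passed in so it can be faked in tests.
import xmlrpc.client

def submit_and_check(server, job_definition: str):
    """Submit a LAVA job definition (YAML text) and return (job_id, status)."""
    job_id = server.scheduler.submit_job(job_definition)  # returns the new job id
    status = server.scheduler.job_status(job_id)          # e.g. a dict with 'job_status'
    return job_id, status

if __name__ == "__main__":
    # Placeholder URL; a real setup authenticates with a user token.
    server = xmlrpc.client.ServerProxy("http://lava.example.com/RPC2")
    # job_yaml = open("healthcheck.yaml").read()
    # print(submit_and_check(server, job_yaml))
```

Results and logs then flow back the other way (LAVA also exposes them over the same API), which is why the talk stresses that master-to-dispatcher communication never has to be managed by the redpesk backend itself.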
The first one is easy, the second one more complex. The first, easy one is simply a remotely controlled multi-socket; I gave you the reference for it. This socket is controlled through Ethernet, and a binary is available in Linux distributions: it is called egctl. It is really simple, just off and on, but it does the job. The second, more complex solution is to use a laboratory power supply; I gave you the reference as well. This one is controlled through RS-485 using the Modbus protocol, and it is able to simulate low or high voltage situations, to test the board at its limits. This is actually really interesting in a few cases.

And now let's face maybe the hardest couple of challenges we have met for this integration: loading the redpesk image onto the board, and booting on this image. The first option is a network boot using PXE, and inside this option, two solutions are available for the file system. The first one is NFS, but we will forget this one right away because it does not propagate the SELinux and SMACK labels. Since we want redpesk to be kind of a reference in cybersecurity, this is really a problem for us, so no NFS. The second solution is NBD, the Network Block Device. It works quite well, really; it is really fast if you have a good network. Moreover, and this is true for all the network boot solutions actually, you do not need to flash the redpesk image locally on the device under test, so you do not wear out the memory. But there is one inconvenience: the storage is either slower or faster over the network, so you do not have the exact same behavior as in production. It can be a bit of a problem to get little errors with a network boot that you would not have in production, or the other way around.

Another really interesting solution is using fastboot. Fastboot comes from Android. In this case, the DUT behaves as the USB storage where the image will be flashed. To enter fastboot, the U-Boot sequence needs to be stopped, and this is great because it is managed by LAVA.
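As a small illustration of the "easy" power-control solution, here is a sketch of driving such a multi-socket with egctl from the infrastructure. I am assuming the egctl convention of one state argument per socket (`on`, `off`, `toggle`, `left` to leave a socket unchanged, with the device name configured in `/etc/egtab`); the device name `lab-socket` and the socket numbering are placeholders for your own setup:

```python
# Sketch: power-cycle one socket of a remotely controlled multi-socket
# via the egctl command-line tool (assumed conventions, see lead-in).
import subprocess

def egctl_command(device: str, socket: int, state: str, sockets: int = 4):
    """Build an egctl invocation that changes one socket and leaves the others."""
    assert state in ("on", "off", "toggle")
    states = ["left"] * sockets     # "left" = leave this socket as it is
    states[socket - 1] = state      # sockets are numbered from 1
    return ["egctl", device] + states

def power_cycle(device: str, socket: int) -> None:
    """Power the DUT off then on before a test run (requires egctl installed)."""
    subprocess.run(egctl_command(device, socket, "off"), check=True)
    subprocess.run(egctl_command(device, socket, "on"), check=True)

# power_cycle("lab-socket", 2)   # example: cycle the board plugged on socket 2
```

The laboratory power supply variant would follow the same shape, but speaking Modbus over RS-485 instead of spawning a CLI tool.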
And this fastboot solution allows us to integrate three boards out of four: the SolidRun, the Renesas Gen3, and the Raspberry Pi 4. That is three out of the four boards that we want to support for the redpesk community. If fastboot is not supported, and this is the case for the last board, we found another solution: USB gadgets. In this case, USB gadgets need to be enabled on the dispatcher, and of course the dispatcher needs to have USB OTG enabled as well. The dispatcher then behaves as a USB storage, so basically the DUT boots on USB. In this case, you do not need to flash the board's eMMC, so you do not wear out the board's memory; of course, you wear out the dispatcher's memory instead, so it is kind of a trade-off. But another thing that can be great with USB gadgets is that they allow us to simulate USB devices like a mouse, a keyboard, and other things, so later, for more advanced tests, we can reuse that. And this is how USB gadgets allow us to integrate the last board, which is the Intel UP board.

Now, a few words on reporting, because it can save users time when a problem appears during a test run. Really, you can save a lot of time with that. Once the integration tests have passed or failed, the reporting needs to be done on the results. You can retrieve the boot logs if they are relevant, for example if you want to test the image itself. You really have to send back stdout and stderr: basically, this is the base for debugging. And then, of course, the Test Anything Protocol files need to be retrieved as well. If the tests are successful, virtual or real, the package can go to the next step: that can be a vulnerability scanner, a license analysis, things like that.

And now, an example of test reporting in our redpesk community stack. You can see really quickly that you have the Test Anything Protocol file in the middle.
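For the USB gadget path, the Linux kernel lets the dispatcher expose an image file as a USB mass-storage device through configfs. The sketch below only builds the ordered list of configfs writes; it is simplified (a real setup also creates the gadget and configuration directories, fills in `strings/`, and links the function into a configuration), and the gadget name, image path, and UDC name are placeholders:

```python
# Sketch of the configfs writes that turn a dispatcher's USB OTG port into
# a mass-storage gadget backed by an OS image, so the DUT can boot from it.
# Simplified and illustrative; directory creation and config links omitted.
from pathlib import PurePosixPath

CONFIGFS = PurePosixPath("/sys/kernel/config/usb_gadget")

def mass_storage_steps(gadget: str, image_path: str, udc: str):
    """Return the ordered (file, value) writes for a mass-storage gadget."""
    g = CONFIGFS / gadget
    return [
        (g / "idVendor", "0x1d6b"),                            # Linux Foundation vendor id
        (g / "idProduct", "0x0104"),                           # multifunction composite gadget
        (g / "functions/mass_storage.0/lun.0/file", image_path),  # back the LUN with the image
        (g / "UDC", udc),                                      # bind the gadget to the controller
    ]

for path, value in mass_storage_steps("g1", "/var/lib/images/redpesk.img", "ci_hdrc.0"):
    print(path, "<-", value)
```

The same configfs mechanism is what makes the later roadmap item possible: other gadget functions can simulate a USB keyboard or mouse for more advanced tests.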
And you have a quick synthesis where you can see at a glance the number of tests passed and the number of tests failed, and you can download the TAP file and the log files. And that's all for test reporting.

So, the real RTM implementation is in progress and should be available soon. But in order to go further, we gave ourselves a little roadmap for the next implementations in the QA system. The first point of this roadmap is to be able to do more advanced tests in the QA system. By advanced tests, I mean tests that need more external processes: for example, an HDMI grab or an audio grab. Typically, these kinds of processes would need to be run on the dispatcher side. Another really interesting point is developers being able to remotely access the board, through a VPN for example, to do little development tests. Actually, this already exists in the LAVA ecosystem: it corresponds to the hacking session. Another really interesting point is adding test libraries shared between projects, and we can go even further, like sharing test plans between projects. And finally, a last point is the integration of external modules, in order to go further in the QA: integrating more and more modules to be able to scan the code for cybersecurity concerns, to be able to output a flow chart, because it is always useful for certification, etc.

And in order to finish this presentation with style, here is the conclusion. In conclusion, continuous testing is a must-have. Really, it is a must-have, because of the increasing code complexity, the long-term support needs of industrial systems, and the cybersecurity concerns. Both virtual and real boards must be in the continuous integration loop, really: for a lot of tests a virtual target is enough, but when your tests are passing on a virtual target, you really want to be sure that they also pass on a production target. At IoT.bzh, we give answers for that.
The first one is the virtual RTMs, which are already available in the community stack. The second one is the real RTMs: we are working on them and they will be available soon. And finally, a continuous testing system is part of a full QA system, and a full QA system is never finished, basically: it has to be able to integrate external modules to become more and more efficient. So, this is the end of this presentation. I thank you a lot for your attention, and I'm now ready to answer your questions if you have any. Thank you. Thank you a lot. Have a good day.