Hello everyone, my name is Paweł Wieczorek. I work at Samsung R&D Institute Poland and I'm currently a member of the Global Open Source Group. I took part in setting up an automated testing laboratory for the Tizen GNU/Linux distribution, which is targeted at embedded devices: TVs, wearables and home appliances. These activities resulted in SLAV, a solution for automated testing of embedded devices, which I would like to discuss with you today. I would like to share what issues we faced and what steps we took to resolve them, but before we go through all of that, let me start with a short note on motivation: why this work was started. Then we will move on to describing the layers of a testing laboratory, and next I will do a case study of the testing laboratory that was developed at Samsung Research Poland. Finally, I will summarize, and then we will have time for a Q&A session. Let's start with something we face every day: the most common use cases for automated testing. Those would be continuous integration systems and direct access for hacking on devices, or rather, I should say, issue investigation. Both approaches require allocating resources, which often just means acquiring a device under test to perform some actions on it, then releasing resources that are no longer useful, and either analyzing the results you got or solving the issue you began with. Although these use cases are often treated differently in various automated testing laboratories, they are pretty similar. This can be seen even better once you abstract out the layers that are often found in such solutions. From the bottom up, we've got the devices under test, or DUTs for short, which are most commonly the boards you are working on, but sometimes something more, like a TV, a fridge or a wearable device.
On top of that, you also need a way of controlling power, providing networking and ensuring communication with your device. To be able to acquire access to those devices, you also need a test scheduler, and for managing all of these actions a test manager can prove helpful. This taxonomy was borrowed from the Test Standards page on elinux.org. Once you have these abstractions in place in your testing laboratory, it often happens that some of them get wrapped by other tools. So once you've got a way of controlling your DUT, you wrap it with a scheduler, and then you've got a test manager on top. If you then want to make adjustments in one of the lower layers, you have to go through all of the upper ones to be able to make your modifications. But if you go back to the initial abstraction layer structure, you can see that keeping these building blocks separate can prove useful when you need to swap blocks easily. And since at some point we got a lot of devices to test on, we moved up from the bottom layer and tried to provide better control over the devices under test. That resulted in a few custom boards. On the top left is the initial SD card multiplexer that we used for quite some time, and underneath it the one that served us for a couple of years. From these SD MUX boards, two new ones were developed, shown on the right-hand side: on the top is the SDWire board, which does not allow full control over the device under test, while the one on the bottom, the MuxPi board, can serve as a test scheduler and even a test manager if you want. All of these devices were designed for specific needs and served them well, but since we already had so much custom hardware, we wanted to make the software solution as generic as possible.
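The swappable-blocks idea described above can be sketched as independent Go interfaces, each layer depending only on the one below it. This is my own illustrative sketch, not SLAV's actual API; all names and the fake in-memory implementations are invented for the example:

```go
package main

import "fmt"

// DUTControl abstracts low-level board operations; any board-specific
// backend (SD MUX, MuxPi, ...) could implement it.
type DUTControl interface {
	Boot() error
	Exec(cmd string) (string, error)
}

// Scheduler hands out access to DUTs matching requested capabilities.
// It only depends on the DUTControl abstraction, not on any concrete board.
type Scheduler interface {
	Request(caps []string) (DUTControl, error)
}

// fakeDUT is a stand-in board used only for this sketch.
type fakeDUT struct{ name string }

func (d *fakeDUT) Boot() error { return nil }
func (d *fakeDUT) Exec(cmd string) (string, error) {
	return fmt.Sprintf("%s ran %q", d.name, cmd), nil
}

// fakeScheduler always assigns the same fake board.
type fakeScheduler struct{}

func (s *fakeScheduler) Request(caps []string) (DUTControl, error) {
	return &fakeDUT{name: "board-1"}, nil
}

func main() {
	var sched Scheduler = &fakeScheduler{}
	dut, _ := sched.Request([]string{"sdwire"})
	dut.Boot()
	out, _ := dut.Exec("uname -r")
	fmt.Println(out) // board-1 ran "uname -r"
}
```

Because the interfaces are separate, a test manager could be layered on top of `Scheduler`, or `DUTControl` could be used directly on its own, without any layer having to know about the others.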
If you are interested in the specifics of any of those devices, the issues we faced are described on the Tizen.org wiki pages, together with the files you need to build those devices yourself. This ranges from the final SD MUX board, as well as SDWire, which did not allow full control over the test board and only provided a way of sharing storage between the test server and the device under test, up to the MuxPi board, which allowed us to unify the testing laboratory we used but had several other issues that I will describe in more detail later in the presentation. So, as I said, we had some custom hardware, but we still wanted to make the software solution as generic as possible. We asked ourselves a few questions. For example, who holds the knowledge of which actions need to be performed once a new issue arises or a new version of software is published, and where these actions can actually be performed; by that I mean which device under test would be assigned for a specific task. Finally, how to do it: what actual actions on the device are required to perform the relevant test plans? But being able to say who knows what is not everything; responsibilities should also be divided between the layers. And once we had the responsibilities set up, we also asked whether sharing devices within a testing laboratory is any different from sharing a device between several developers. If there is no difference, then who can use a given device under test, and how can it actually be used? With these questions in mind, we tried to implement all of the layers I mentioned earlier, focusing on a single key feature for each of them. The test manager we tried to keep minimal. For us, the test manager should act as if it were a person who wants to access the device under test and interact with it in the usual way.
So the test manager, in our case, is the one that initiates all actions on the device, and the service for it only allows us to list currently running actions or cancel the ones we are no longer interested in. The test scheduler is the part we wanted to keep as generic as possible: listing all available resources, and by resources I mean not only all the devices under test connected in your testing laboratory, but also specific features of those devices. With that, the test scheduler should allow us to request those specific features. Once a request is made, we should be able to acquire the assigned resources, and if we need them for a longer period, we should be able to request a prolongation as well. When it comes to controlling the devices under test, this was the layer we had the most issues with. Once you decide to keep everything generic, you have to create yet another abstraction layer, because there are no devices that allow you to perform all of the actions, like flashing new firmware, power cycling the device, or even command execution and file transfer, in the same way. This is why we tried to provide an API for DUT control that would cover the specific commands needed to perform all of these actions, and we had to introduce another layer in the implementation of the testing laboratory. With these layers in mind, let's move on to the strengths and weaknesses of the solution that was developed for Tizen's needs. As for the test manager, as I mentioned, we tried to keep it minimal, so it only required preparing a test plan that would then be executed on the assigned devices. We also tried to maintain a compatibility layer with the de facto standard for automated testing laboratories, so the test plans were semi-compatible with the ones used in LAVA.
And that was the feature that caused the most issues, because keeping compliance with LAVA test plans while catching up with the most recent features introduced in that project was too hard for a small group of people. As for the test scheduler, we decided to treat automated systems and actual users equally, and to keep all resource types similar to one another. But that also required us to declare upfront what features could be requested in our testing laboratory. This could have been overcome with some way of auto-detecting what is available, but defining the available capabilities ourselves was the way we chose, because it was simpler at the time. Also, being able to tell what state the resources are actually in required an additional agent that would take care of keeping this information up to date. As for DUT control, we were happy with the fact that only a little knowledge was required to perform actions on the devices. If you knew what API was available, that there is a command to boot the device or a command to just execute a command, you did not have to care about its internals. And with the custom hardware, we also had the possibility to unify the testing laboratory across all of the device types that had to be supported. But the initial setup of such a testing laboratory may have been too hard, and we faced many issues with that while setting up new instances in overseas centers. Having custom boards often poses the threat of having snowflakes in testing laboratories; by that I mean unique configurations with unique hardware that probably cannot be reproduced anywhere else. And with that, I would like to sum up what we had. With such specific hardware, we were unable to create demonstrations of the testing stack without people who could set up the custom hardware and even bring it to the demo site.
What was also an issue for us was that large-scale deployments were too risky; not everyone was convinced that issues which might occur under heavy load on the testing laboratory would be easy to overcome at large scale. However, the division of responsibilities across all those layers allowed easier onboarding of new people who had to take care of the testing laboratory, and it also lowered the initial barrier of getting to know the laboratory and all of its internals. I would like to leave you with the idea that the user-centric approach, and by that I mean treating both automated testing systems and interactive users the same way, resulted in smaller building blocks. By that I mean the overall testing laboratory structure: no wrapping of the DUT control by the scheduler and then by the manager, because we replaced that with smaller building blocks that were easily swappable and could even be used independently. Some people only needed remote access to the devices under test, and for them just the MuxPi boards were enough. For those who wanted to share access, only the test scheduler was needed, and the test manager came into play only if someone wanted to perform predefined test plans. And one more thing: I mentioned that for a small group of people it was pretty hard to catch up with other, more advanced projects. So instead of rewriting everything from scratch, reusing already available building blocks could prove helpful, and that's why all of the resources that were created for the SLAV project are available on github.com. That's all I've got for you today, and if you have any questions I will be happy to answer them. Sure. So the question was about the features of LAVA, whether it did not satisfy our needs.
The SLAV project started around 2016, and one of its main goals was to provide simple interactive access to remote testing laboratories. For that, LAVA has so-called hacking sessions: a special type of test plan that can be submitted to the LAVA server, which acquires a device, sets it up and then gives you back the access information. But, if I'm not mistaken, at the time there was no maintainer for the hacking sessions use case, since LAVA was focused on continuous integration. There were several other projects aimed at resolving that issue, for example lavabo, by Free Electrons back then, now Bootlin, which is still available for use. But the approach of putting an equals sign between the automated testing system and the interactive user was something we wanted to try out, and that's why SLAV was developed. I haven't tried the hacking sessions recently; I believe much has changed, and maybe the two should be compared to be able to tell how it went. As for the issues we expected in large-scale deployments, they were mostly connected to network throughput. With MuxPi boards, all of the communication with the boards goes through the Ethernet network, so we did not expect problems with multiple devices in a single network like we had with the previous devices. The previous devices, the SD MUX SD card multiplexers, were connected to the test server via USB, and a large number of USB devices often poses the threat of hitting bugs in USB subsystems. We came across such issues, and I have already described them in other SLAV-related talks. That's why the USB connection was replaced with a network one. Even so, we never moved past around 100 devices in a single testing laboratory, even fewer in practice, whereas on the USB transport we never moved past tens of devices. All right.
So, for the API between the layers: that's correct, all of the software layers are written in Golang, and the API between them is HTTP REST. If you want, you can just call curl with some specific parameters; you don't need any other client, and you can create such requests via Postman, Insomnia or any other REST API client. Of course, there are also clients in Golang provided for internal use. And for communication with the DUT control layer, there are shell scripts for the most commonly used actions, like booting, transferring files and executing commands. The internal transport between the DUT control layer and the device under test is HTTP as well. All right. So thank you for your attention, and if these layers could satisfy the needs of your testing laboratories, go ahead and take a look at the sources that were published. Also, let me or any other SLAV developer know whether there are use cases that we haven't thought of that should also be available in such a setup. That would help us a lot, and I hope it could help you as well. Thanks.