My name is Paweł Wieczorek. I work at the Samsung R&D Institute Poland, where I am currently a member of the Tizen Common team. Tizen Common is the common profile of Tizen, a GNU/Linux-based distribution, providing shared support across a range of devices. Today I would like to help you set up your first Lava laboratory and prepare an evaluation environment of your own. I will start with a short introduction to the project: what it is, what its key features are, who its main users are, and how you can get started with it. Then I will move on to the actual setup of the laboratory. I would also like to point out a few of its features that you may find interesting and that will make your work easier. Finally, once the whole laboratory is in place, I will suggest a few directions for further development, give some final remarks, and finish with a Q&A session. As I already mentioned, projects like ours usually require a lot of verification, running tests on operating system images before they can be used. So it is important to ask what Lava actually is. The acronym stands for Linaro Automated Validation Architecture, and it is a system for automated validation of operating systems, which is what matters most for us today. It allows you to deploy onto a device all the elements needed to boot it: kernel, dtb, rootfs and so on. It supports both actual hardware and emulated devices, and today we will focus on the latter. Once the whole operating system is deployed on your device, it allows you to run a whole range of different tests, from boot and bootloader tests up to system-level tests, although some extra hardware might be required for some of the test cases. And the most common starting point for embedded testing might be a single ARMv7-based board, like the BeagleBone Black. Flashing it and communicating with it is definitely not hard. It can be quickly learned by any developer and it does not require much of a developer's time. But this workflow works well only as long as a single execution of your tests at a time is enough.
Lack of parallelism, with just a single device per developer, is not the only problem with this workflow. Let's suppose that your software has to support some other ARMv7 target devices, like the ARTIK 10 on the top left or the Odroid on the bottom right. And what if you had to support a completely new architecture, like x86 with the MinnowBoard Turbot? Sooner or later, these are not the only problems that might come up during development. You'll be expected to get test results as quickly as possible. And even with all the knowledge in place, with all the procedures known by the whole development team, it will be harder and harder to manage the whole board farm. What can we do about it? Preferably, some abstraction layer over all the boards that you have to manage might be introduced. And that's actually what Lava provides, and why Tizen Common became interested in Lava in the first place. Lava unifies management over all the boards that are available in your board farm. From the developer's point of view, it doesn't matter what the procedures are for flashing devices, communicating with them, executing tests or collecting results from them. Any device will be seen equally. Also, you've got out-of-the-box resource allocation, so you don't have to worry about it anymore. As long as the execution of your test cases can be divided onto multiple target devices and the test cases are not dependent on each other, resources will be shared across your whole board farm. Also, scheduling and dispatching all the tasks is done for you, with no interaction required. And how does Lava do that? It provides a unified environment for any device that you add to your laboratory. It collects and tracks all the test results for future investigation, if there is a need for it.
And it still supports direct access to your target device if such a need occurs, either via a built-in solution, which is hacking sessions, or with some external software, like the board overseer by Free Electrons. More on those two features of Lava can be found in the links on the slides, which have already been uploaded both to SCAT and to the events page. So who actually uses Lava currently? Of course Linaro, for both Android and Linux testing on development boards. Also, KernelCI performs its boot tests using Lava board farms. And as for whole distributions, currently both Automotive Grade Linux and Debian perform tests of their operating system images within their Lava laboratories. Now that we know what we can expect from Lava, let's move on to the actual laboratory setup. For a start, we will focus on a standalone instance, which means that we will have all the components within a single machine. Although Lava supports distribution of the whole environment, and boards are not bound to a single physical location, a standalone instance will be best for the initial evaluation environment. We will also focus just on virtual devices, and no actual ones will be used in this case. Also, just the simplest tests will be taken into account, and these might actually be more health checks than actual test cases. And why is that? First of all, to reduce the initial complexity, to get a grip on the key concepts of the Lava laboratory, and to familiarize yourself with the new workflow, which may differ from your current development flow. Also, although your current test cases might be reused within your Lava laboratory, it might be preferable to postpone the eventual migration to the Lava format. So what does Lava require from you? Fortunately, the only strict requirement for now is having a machine with a supported Debian release, which currently means stable, testing and unstable. The experimental branch is used only during freezes of the testing branch.
It might also be worth noting that although Lava is already available in the Debian main repository, it is preferable to use the backports repositories, since they carry the most current Lava release supported by Linaro. Unfortunately, Ubuntu support was frozen, and if you're interested in the reasons for that, you might find more details in the link on the slides. Apart from the strict requirement for the platform your first Lava laboratory will be based on, there are a few files that you should have prepared. First of all, a system image, which can either be built by yourself or taken directly from Linaro, mainly from the images.validation.linaro.org server, which will provide you with some sample images for various devices. Then you'll also have to prepare a health check job, which can likewise be taken directly from Linaro, from their Git repositories with test cases on git.linaro.org, under the QA domain. You will also need a device-type template, but fortunately Lava comes with various built-in device-type templates, and the only file that you'll have to prepare by hand will be the instance definition of your first device, which in Lava terms is the device dictionary. For a QEMU device such a dictionary might consist of just three lines, which tell which device-type template will be extended and specify the only two features that are not set by default in the template. Thanks to the efforts of the packaging team of the Lava project, the only two steps required on the host machine are setting up your database for storing both Lava settings and results, and installing the meta package, which will pull in all the components for a standalone instance. As I mentioned before, the more you learn about Lava, the higher your requirements will be as to the distribution of the environment and having installed exactly the packages that you require, but just to evaluate the technology itself, the meta package will suffice.
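The three-line device dictionary mentioned above is not reproduced in the talk, but for a Lava V2 QEMU device it typically looks like the following sketch (comments added; the overridden values are just example assumptions):

```jinja
{# extend the built-in QEMU device-type template #}
{% extends 'qemu.jinja2' %}
{# the only two instance-specific settings not set by the template (example values) #}
{% set mac_addr = 'DE:AD:BE:EF:28:01' %}
{% set memory = 512 %}
```

Any other variable exposed by the qemu.jinja2 template could be overridden the same way.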
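As for the health check job itself, a minimal Lava V2 job for a QEMU device follows this general YAML shape; this is only a sketch, the image URL is a placeholder, and the working sample definitions from git.linaro.org should be preferred:

```yaml
device_type: qemu
job_name: qemu-health-check
visibility: public
priority: medium
timeouts:
  job:
    minutes: 15
context:
  arch: amd64
actions:
- deploy:
    to: tmpfs
    images:
      rootfs:
        image_arg: -drive format=raw,file={rootfs}
        url: https://images.validation.linaro.org/...   # placeholder, use a real sample image
        compression: gz
- boot:
    method: qemu
    media: tmpfs
    prompts:
    - 'root@debian:'
```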
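The two host-side steps mentioned above can be sketched as follows; the backports suite name (here stretch-backports) is an assumption that depends on which Debian release you run:

```shell
# database backend used by Lava for storing settings and results
sudo apt-get install postgresql
# the Lava meta package from backports, pulling in all standalone-instance components
sudo apt-get -t stretch-backports install lava
```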
Once this is all in place, you'll also have to set up the web UI, the easiest way to manage your Lava laboratory. This comes down to just five steps in the terminal: enabling two additional modules for the Apache server and replacing the default configuration with the pre-installed Lava configuration, which comes with the Lava meta package. Once you have your web UI, your superuser has to be created, and with that in place, all you have to do is tell your new Lava instance what devices you would like to test on. Adding devices to your Lava laboratory is as easy as performing three steps. First, you have to note which device types your laboratory will support, then you have to declare the actual device instance you would like to use, and finally you have to specify all the features that are not already available from the device-type template, using, for example, the three-line file you saw earlier on the required-files slide. Once all of this is done, your first evaluation environment of your own Lava laboratory is ready to use, and although for fully automated use you'll probably be more fond of the CLI client, just for quick and dirty tests the web UI might be your first place to check Lava's possibilities and how it can fit into your workflow. Once all of this is set up, we might consider a few additional tools that will make your work with the Lava laboratory easier in the future. First of all, configuration management software. It is best to have your environment reproducible, not only in the evaluation environment, but also in the future staging and, probably, production environments, so that you know that all of the tests will be executed in the same way in those corresponding environments as well. All of the currently available tools for configuration management are equally good from Lava's point of view, so choose your favorite.
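The five terminal steps for the web UI mentioned above might look roughly like this, assuming the site configuration shipped with the meta package is named lava-server.conf:

```shell
# enable the two additional Apache modules
sudo a2enmod proxy
sudo a2enmod proxy_http
# replace the default site with the pre-installed Lava configuration
sudo a2dissite 000-default
sudo a2ensite lava-server.conf
sudo service apache2 restart
```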
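The superuser creation and the three device-adding steps map onto lava-server management commands roughly as below; the hostname qemu01 is a placeholder and the exact flags are assumptions that may differ between Lava releases, so check `lava-server manage --help` on your instance:

```shell
# create the web UI administrator account
sudo lava-server manage createsuperuser --username admin --email admin@example.com
# 1. declare a device type your laboratory will support
sudo lava-server manage device-types add qemu
# 2. declare the actual device instance
sudo lava-server manage devices add --device-type qemu --worker "$(hostname -f)" qemu01
# 3. import the device dictionary with the instance-specific settings
sudo lava-server manage device-dictionary --hostname qemu01 --import qemu01.jinja2
```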
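In automated use, the CLI client mentioned above boils down to a single submission command per job; the server URL and file name here are placeholders:

```shell
# submit the prepared job definition to the Lava instance over XML-RPC
lava-tool submit-job https://admin@lava.example.com/RPC2/ health-check.yaml
```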
It might be preferable to use the one that is already in place in your infrastructure. As for Tizen Common's needs, we use Ansible playbooks. Unfortunately, due to some formal issues, they were not published before the presentation, but I think that all the necessary steps should be finished by the time of my trip back, so feel free to ping me directly if you would like some further information on the configuration management code from the Tizen Common laboratory. I believe that your first Lava laboratory would probably be set up in a virtual environment, and depending on the time you would like to put into the preparation of the laboratory, you've got two main options. Of course, your options are not limited to these solutions, but they might be good initial choices. If you are limited on time, Vagrant might be your pick, since new machines can be brought up easily and almost instantly, and using the Atlas service, a wide range of prebuilt boxes is available at your fingertips. But do be careful, since you never know what might be inside unofficial boxes from Vagrant Atlas. If you have some more time to spare, libvirt might be the better choice, since it is a much more flexible tool and still comes with a few user-friendly CLI and GUI tools. Once you have made your future deployments easier, let me recommend a few directions for the development of your laboratories. Adding new device types and actual devices, not just the virtual ones mentioned in this presentation, is described in the Lava documentation. The documentation is available at each instance, at your own as well, but the most recent version will probably be on the main Linaro Lava instance. Also, how to write your own tests and how to migrate your current test cases to the Lava format is described in detail in the linked chapter of the documentation.
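If you go the Vagrant route, a minimal Vagrantfile for such an evaluation VM might look like this; the box name and resource sizes are just example assumptions:

```ruby
# Vagrantfile: a single Debian VM hosting a standalone Lava instance
Vagrant.configure("2") do |config|
  config.vm.box = "debian/stretch64"                         # official Debian box
  config.vm.network "forwarded_port", guest: 80, host: 8080  # expose the Lava web UI
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048   # Lava plus QEMU test jobs need some headroom
  end
end
```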
If you'd be interested in using your Lava laboratory in some open source project, feel free to add your laboratory to KernelCI. I'm sure that the more boards are put to testing the recent kernel trees, the better. You also might benefit from familiarizing yourself with the AGL test framework setup instructions, which go into detail on how the infrastructure for the AGL test laboratory is built, and also the testing initiative of the Civil Infrastructure Platform publishes some interesting documentation on the Lava laboratory and how it can be used in different workflows. If you prefer watching or listening to some lectures, let me recommend three interesting presentations. First of all, a much more detailed introduction to Lava V2 by Bill Fletcher from last year's Linaro Connect. If you'd be more interested in having direct access to your devices via different solutions, you might be interested in the Free Electrons presentation from last year's Embedded Linux Conference Europe. And if you'd like to know how Lava laboratories can currently be used for full-stack distribution testing, you might be interested in tomorrow's presentation by Jan-Simon Möller on the integration of Lava with Fuego. Of course, these are not the only materials that you might use. The whole documentation is, as I mentioned, available at each instance of the Lava laboratory. If you have some more specific questions, both the Lava users mailing list and the Linaro Lava IRC channel on Freenode will be most welcoming for all of your questions. To sum it up: thanks to the efforts of the packaging team of the Lava project, the installation of your first Lava laboratory for evaluation purposes is as easy as executing a few commands in a terminal, once you go through the documentation on the preparation of the environment. Also, the setup is almost instant once you know what the requirements are and what you have to prepare prior to the actual installation.
Also, the unification of the environment done by Lava is probably the key feature that might convince you to try it out. With Lava, you get parallelization of the execution of all of your test cases at no cost; it's out of the box. Also, the division of responsibilities, or rather taking the burden of managing the test farm off the developers and moving it to the test farm operators, might be something that your developers will be interested in. Although the whole Lava documentation might be a little overwhelming at first contact, exhaustive documentation actually has no downsides: all the common problems are already covered there and the solutions are available. There is also absolutely no need to try to create some new board farm management software, since the existing one, although perhaps not as popular as some other CI infrastructure tools, is already in place. Also, although the automation tools might seem too costly at the beginning, they will pay off in the long term and with every next deployment you make. That would be all I've got prepared for you today, and if you have any questions, I will be happy to answer them. As for the connections that Lava supports: the basic one is serial, but SSH is also possible, and Lava can be extended with any connection you need. Also, if there is some distribution-specific protocol, as in the Tizen case, it's not hard to add a new one using the templates that are already available in Lava. As for support of the OSes on the devices, Lava supports a few main Linux distributions out of the box: Debian, Fedora, Ubuntu, OpenEmbedded builds, and Android as well. As for rarer distributions, such as Tizen in my case, the support has to be done by the interested party on their own, but the templates that are already available can be easily extended. As for integration with current infrastructure, like GitLab, Gerrit or some other review tools: any test can be submitted to the Lava laboratory.
The Lava laboratory has no interest in the source of the test that has to be executed. All it does is prepare the device, perform the tests, collect the results and publish them further. So any combination of your current infrastructure with Lava has to be done on your own. If you have a current infrastructure with, for example, Jenkins, and you have it set up on the Gerrit event stream or GitLab hooks, it will be as easy as replacing the command that is executed on the event. So, about Tizen Common and its build infrastructure: exactly as you said, OBS, the Open Build Service, is used for the rebuild of all the packages. As for the images of the OS itself, we use MIC, the image creator tool, and this is actually what prepares the OS images; OBS is used only for the rebuild of the packages. The setup with Lava is currently under development, but we will be publishing everything as soon as it is in the production environment. As for scaling of the whole solution: Lava will schedule tasks only on the available devices, on the event of a new task that has to be executed. So, as far as I know, there is no mechanism for dynamically spinning up new machines, since that is not the main use case with actual devices. But I believe it could be included in your load balancer, performing the steps for adding new devices; actually, just the last two, declaring the new device instance and its specification, because the device type would already be known. If any other question occurs to you, feel free to contact me directly. Thank you for coming and thank you for your attention.