What I'll do for other architectures, and maybe other kernels, I don't know yet. We'll have to see who else is interested in doing that, scaling up from the basics of kernel testing and rootfs testing on development boards to see what else we can do and what else can be done with this software. The slides are in the git-annex for the DebConf share; the URL is on there.

The main Linaro lab has 103 devices online at the moment. We have a staging instance, which has another 40 or so, and there are various developers who have their own setups to try to expand the range of the devices available. It's currently based around a lot of OS deployment testing. That's not just Debian based; a lot of the images are still Ubuntu based. We do a lot of OpenEmbedded work, and we've recently started to get a lot more work from Yocto. We've asked for and sought involvement from people doing Fedora testing, and so far we've only had a little bit of interest. We're not sure what the problem is there; maybe it's just the Fedora approach to ARM, or the particular way the images get built. But if there's any interest in other operating systems outside that, we are more than willing to take it on.

The principal thing we've been doing so far is a lot of kernel boot testing: has this particular kernel build improved over the last build? You start off with a simple boot test, but then you start to track performance data; you run LTP tests and track the failure rate across multiple tests and multiple periods of time.

We've recently added MultiNode. MultiNode allows you to have a single test that runs across multiple devices. Those devices can be synchronised purely over the serial connection that all these devices share; they can bring up their own network, they can do whatever they need to do with their own switches and their own environments, and you can have multi-OS deployment as well, so you could have an OpenEmbedded box talking to a Debian box. It's trivial.

The main thing we have to explain about LAVA, to a lot of people at a lot of conferences, is that we don't write the tests; we don't define or prescribe what tests you can run. People often ask: well, what tests can LAVA run? Whatever you can think of. The tests are written by the users who want the data, and they are then developed, upgraded and worked on by the people who want the data out of them. We have our own tests that try to stress LAVA itself, but those are largely irrelevant to the people who actually want data about their systems, because quite often we are testing with a static kernel build and a static rootfs, while they are submitting jobs with a new build every time.

We do track results over a long period of time. One of the main features is that you can go back right through the database, track a particular series of builds and watch how it actually performs. So if I go back to here, that's a simple graph for my local box.
It doesn't go back that far in time; there are other ones on the main servers that go back a lot further. You're just tracking the pass and fail of a particular series of tests, and you can see the various things that were going on. At one point or another my own installation, being a development box, wasn't performing as well as it should for production, so the results dipped. Those summary reports are created by users: we provide the facility for people to write those reports, but the reports are written by the people who want the data. And the data can be exported. That main report is basically a simple overview and a front end to your data, but you can export the data for further analysis, combine it in new ways and work out what is actually happening with the rest of it.

The actual deployments you can do, what LAVA can do, can be expanded with particular hardware. We've got ideas for what we call the LMP; it's still in final testing, but it allows you to put a relay between critical parts of the device and the device setup. That allows us to switch the SD card externally, controlled by the test itself: suddenly your SD card goes away. You can switch away the network, you can switch off the SATA, you can switch the SATA to a different connection. That's not just useful for being aggressive with the test and seeing how the kernel behaves; it's also useful for testing bootloaders, because you can put a bootloader on one SD card, switch it over so that the device can see it, and boot the device from that SD card. Oh, it's bricked. Switch the SD card back to the previous image, boot it again, and it's fine.

A lot of the automation comes down to being able to automatically recover boards. These are dev boards, so there will be situations where something has managed to fry one of the controllers on the board. We've got one particular board that can lock up its SD controller in a completely invisible, transparent and undetectable way. We try to work out in advance whether there's something that indicates a board of that type is suffering that kind of problem; you put some diagnostics into the test in advance to see whether you're likely to hit the problem, and you compensate and work around the quirks of particular boards.

So what does LAVA currently support? There are the ARM boards, not surprising given where we started from, and Linaro has concentrated on ARMv7. We don't really look at ARMv5 or ARMv6; ARMv7 boards are the majority of the lab. ARMv8 is now coming in, with new hardware, emulations and models, but the majority of the boards currently available for testing are ARMv7. We've got support for setups where the boards are running a virtual ARM system on their physical ARM hardware, and that lets you test both sides: you can run tests on the physical hardware whilst it's running a VM, you can run tests inside the VM, and if you've got the right hooks you can communicate across and through the VM, or communicate with another node that's set up to interrogate it in another way. There are also x86 systems in the lab, not just for the LAVA deployment itself but as test devices; there are situations where people need that, so those devices can be available.
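To give a flavour of the MultiNode support mentioned earlier, here is a rough sketch of what a MultiNode submission of that era looked like. It is a sketch only: the device types, URLs and test repository are invented, and the exact field names should be checked against the MultiNode documentation.

```json
{
  "job_name": "multinode-ping-example",
  "timeout": 18000,
  "device_group": [
    { "role": "server", "device_type": "panda", "count": 1 },
    { "role": "client", "device_type": "beaglexm", "count": 2 }
  ],
  "actions": [
    {
      "command": "deploy_linaro_image",
      "parameters": { "image": "http://example.org/images/wheezy-armhf.img.gz" }
    },
    {
      "command": "lava_test_shell",
      "parameters": {
        "testdef_repos": [
          { "git-repo": "http://example.org/tests.git", "testdef": "multinode/ping.yaml" }
        ],
        "timeout": 1800
      }
    }
  ]
}
```

Inside the test itself, the MultiNode API provides helpers such as lava-send and lava-wait so the roles can synchronise with each other over that shared serial channel.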
Via emulation we can do any of the other architectures that QEMU supports. We haven't got a lot of other physical hardware in the Cambridge lab, but with LAVA in Debian you can easily install it yourself and set it up with your own architectures. A very useful feature is the idea of dummy devices. These can be a simple chroot, an isolated chroot environment, or SSH, and they allow you to connect to a device that hasn't actually had a deployment: it sits there pre-configured, it may be serving a lot of different jobs, and your jobs connect into it and do certain things with it. Contention on that box is your problem as the test writer; it's not up to LAVA to sort that out. If you've got two jobs going in over SSH and both of them try to run dpkg, sorry, you have to sort that out yourself.

A lot of the builds coming through are CI kernel builds. We still do a lot of Android testing, and we are working towards increasing support for testing bootloaders. Obviously U-Boot was the one we started with, inevitably, given the ARMv7 dev boards. UEFI is the next big change and the next big bootloader we need to test, and then GRUB is something else we are looking to support; we don't have explicit support for GRUB in LAVA at the moment, but it's on the development line. We are currently only testing Linux kernels. There may well be hidden assumptions in how the structure works for other kernels; we don't know yet. We'll find out when the community tells us.

Test jobs are written basically using shell and markup. You can address and execute any utility, any binary, anything you can build or download or put onto that system. In most cases, if you define the right parameters for the kernel, you'll have a working network interface and you can pull down whatever you need. It's up to you; you set it up in the test. We report results based on whether the deployment itself worked, then on the tests that you wrote, and there are parsers to work out the passes and fails; you can record measurements as well. A lot of the time, when you're running someone else's test suite, LTP or some kind of Python unittest type thing, you have to write your own parsers. Sometimes you can do that in the YAML with a bit of regex; sometimes it's a lot easier to write a custom script in whatever language you prefer, whatever language you can ensure is executable on that platform, and run the test inside that custom script. The custom script then just outputs the test results in a format that's easier to parse; there's a sketch of the YAML route below.

What LAVA does is take whatever the deployment is going to be, the image or the tarballs, and overlay a basic set of data and shell scripts. There's a bit more data and a few more shell scripts if you're running MultiNode, but generally you've got a common interface that's based on POSIX shell compatibility. We don't define what you can run, and we like to think we make no assumptions about what the device is capable of. Sometimes that lets us down, because there are cases where you do have to make some kind of assumption about what the device can do, and sometimes we push back at the device manufacturers and say: look, the way you've asked us to test this board, the way it actually boots, is not going to work; we need something a bit more sane and a bit more reliable, because we are automated, after all.
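Here is the YAML parser route sketched out, using the "Lava-Test Test Definition 1.0" format that lava-test-shell used at the time. The wrapper script and its output format are invented for the example:

```yaml
metadata:
  format: Lava-Test Test Definition 1.0
  name: custom-parser-example
  description: "Run a third-party suite and parse its output"

run:
  steps:
    # run-suite.sh is a hypothetical wrapper kept in the same git repo;
    # it prints one line per test, e.g. "tcp-loopback: pass"
    - ./run-suite.sh

parse:
  pattern: "^(?P<test_case_id>[-_a-zA-Z0-9]+):\\s+(?P<result>(pass|fail))$"
```

The named groups test_case_id and result are what the result parser looks for; anything more involved than a single regex is usually better done in a custom script, as described above.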
And that's the point about automation: you can't have a board that needs a lot of manual intervention just to boot in the first place. We're often booting it three or four times per test, or even more. We sometimes have to push back at the manufacturers and say: give us a board that we can reasonably automate. Other than that, as I said, we work hard to make sure there's an automated way of recovering a bricked device, or what would otherwise be a bricked device, without any intervention from the lab admins; they're busy enough as it is.

So now we're reaching out to Debian. We'd been using Ubuntu to run the infrastructure for a while; we've now got it all migrated over to Debian, all the packages are in Debian, and we're running the packages as they are in Debian. We are looking to test the armmp kernel, and I've been working with a number of people in Debian already on exactly how to do that. We're looking at the overlap between the boards we can access elsewhere, the boards available in LAVA, and the DTBs that are defined for the armmp kernel. I've got some links there to the various information, and that's the summary of the boards we've found so far that can easily be tested with the armmp kernel. I've put in a request for extended support for the Arndale, because that should add another couple of devices to the list.

What we're trying to achieve with the armmp work is to make sure that not only does the armmp kernel boot, but that the hardware available on the board is actually operational under the armmp kernel. We know that for some of the boards on that list, various components, various pieces of hardware, are not operational after you've booted the armmp kernel. This gives us a way of running repeated tests, one armmp build at a time, to track that and see whether we can improve it without causing regressions elsewhere. Those are just the main devices we've got at the moment; we're obviously open to having more devices added to the lab, or to a different lab we have access to. Some of those devices are not physically located in Cambridge, but we would still be able to use them within Debian.

The particular challenges with armmp: we need to be able to put the modules into the initramfs. We can test with the initrd that initramfs-tools makes, but that is using klibc; we'd like to be able to use other initramfs images with the armmp modules put in, so that we've got a glibc environment and can do more testing. We need to decide which DTBs we can actually support with the armmp kernel, put the data out there and let people work out which boards they can reliably use. At the moment it's a bit hit or miss whether the armmp kernel supports your board; let's get the data, let's get the test results and find out what's going on.

We can submit these jobs over XML-RPC to a variety of existing instances, and run a variety of other tests on each one, including testing the Debian Installer on ARM: there are ways of preseeding that and making it work without interaction, so we can test it right through. That will generally mean we need to start testing with dual media, a boot medium and a deployment medium, so we're looking at some of the boards that support SATA for that. This comes back to what we were talking about in the first talk, about what LAVA can do in Debian in conjunction with piuparts and ci.debian.net and the archive rebuilds. LAVA is an opportunity for Debian to start testing beyond the idea of just the one package.
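As a sketch of what an armmp boot test of this kind could look like as a job, here's a rough JSON fragment in the style of the kernel deployment actions LAVA had at the time. The URLs and the choice of board are invented, and the deploy_linaro_kernel command should be checked against the dispatcher documentation for the instance you're using:

```json
{
  "device_type": "arndale",
  "job_name": "armmp-boot-smoke-test",
  "timeout": 9000,
  "actions": [
    {
      "command": "deploy_linaro_kernel",
      "parameters": {
        "kernel": "http://example.org/armmp/vmlinuz-armmp",
        "ramdisk": "http://example.org/armmp/initrd.img-armmp.gz",
        "dtb": "http://example.org/armmp/exynos5250-arndale.dtb"
      }
    },
    { "command": "boot_linaro_image" }
  ]
}
```

The interesting part for the armmp work is the dtb parameter: the same kernel and ramdisk can be resubmitted with a different DTB for each board, which is exactly the repeated, per-build testing described above.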
LAVA allows you to start testing combinations of packages. It allows you to start testing a distribution against a different distribution, whether that's a different suite or a completely different distribution; we can test Debian against OpenEmbedded. We can do a whole range of upgrade tests. We can work out whether things actually work in a MultiNode environment, where you have a client and a server on physically separate devices: you don't necessarily expect the setup to hold up right through, so does the client recover? You can do testing across multiple architectures and, as I said earlier, across or through the virtualisation barriers, depending on what kind of support you compile into the test.

The images themselves, the deployments we support: we can always make better tools for those, we can make use of existing tools, and we can increase their availability. Then there are the upgrades, so that we can work out: can you go from squeeze all the way up to sid with this particular set of packages? Can you throw some dirty data in there? Can you throw in some dirty images with random configuration changes, and does it still work? You can define all of that. You haven't got to say "I just want a basic debootstrap" in each case; you can build the image you want, put in whatever dirty configuration changes you want, throw the images into LAVA and see what you get.

A very useful tip when you're developing tests is hacking sessions; there's a sketch of one below. Basically, you install openssh-server, and LAVA has support for accepting your public key and your IRC nickname as parameters. It'll boot the device you've chosen with the image you've specified, start openssh-server, and then notify you on IRC with a private message saying that the session is available at this URL. You put that into a terminal, connect, and you're in. You're on the test image you've chosen, on a device of the type you selected, in a full LAVA session: all of the LAVA helpers, scripts and overlays are there in your path, and you can see what's going on.

But that's only what we've thought of within the LAVA team as to what we can offer Debian via LAVA. Come back to us when you've played with LAVA: install it on one of your own boxes, see what it can do, see what it can connect up, and tell us what else it could do. Now, after the work on the packaging, and Antonio will know all about this because we've done all the planning together, comes the refactoring. This is intended to make it much, much easier to extend and develop inside LAVA. So if you're looking at LAVA and thinking it doesn't support this kind of device, or asking how you can get it to support this particular feature, we're going to make it a lot easier to work your way through the code base, because LAVA has developed organically and now the refactoring is urgent.
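Here's a rough sketch of the hacking-session part of a job definition. The repository path and the parameter names (PUB_KEY, IRC_USER) are from memory and should be treated as assumptions; check them against the actual hacking-session test definitions:

```json
{
  "command": "lava_test_shell",
  "parameters": {
    "testdef_repos": [
      {
        "git-repo": "http://git.linaro.org/lava-team/hacking-session.git",
        "testdef": "hacking-session-debian.yaml",
        "parameters": {
          "PUB_KEY": "ssh-rsa AAAA...your-public-key... you@example.org",
          "IRC_USER": "your-nick"
        }
      }
    ],
    "timeout": 3600
  }
}
```

The job deploys the image as normal; the hacking-session test definition then starts sshd, injects your key and sends the IRC notification with the connection URL.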
With the refactoring we're looking at a model of components, much more identifiable sections of where things go, and making it much clearer, much more obvious and much more common that when LAVA spots an error, it can say which kind of error it was: that was a LAVA error, sorry, we'll work on it; that was an infrastructure error, because a network switch has failed or someone's switched it off; that was a job error, you've made a typo in one of your scripts; or it was genuinely a test failure. So we're working through those kinds of situations. And when we've got LAVA errors or device errors, something going unexpectedly wrong, what can we do whilst we've still got the connection to the device in that particular mode? Can we get reports of data into the logs and into the actual test results, and say: here's what we tried to work out? Even if it's just what ifconfig says, what route -n says, what nmap says, just get the data out there so that people have a chance of working out what the state was at the time. We don't do enough of that yet.

The refactoring will also concentrate on allowing you to simulate the job in advance, so you'll be able to run through your test definition and see all of the actions, all of the primitives, all the commands the job would run on the actual device, and follow it through step by step. Then we'll get more of the data coming back: at the moment we have the logs and we have the result set, the result bundle, but we don't get enough data back into the results, so that's something we're going to fix.

We're also going to allow people to override timeouts. One of the common problems is that as LAVA developed, there were a lot of devices with quite slow reactions to some of the operations, so some of the timeouts got longer and longer. Some of the Snowballs and some of the Versatile Express boards took such a long time to get through all the different firmware stages in boot that the timeout for a failed boot got far too long, and it became a default rather than something customised to particular boards. We're going to fix that and make sure that for devices that boot much more quickly, if the boot fails you're not sitting there for five minutes waiting for LAVA to hit the timeout and conclude that it didn't boot right.

Sitting at the back there, quietly working on the video, is Andy, who's working on the idea of putting LAVA into hardware. It's an open hardware design that will be based on Debian jessie, and the idea is to let you have LAVA in your bag at a demo or a conference, already set up: support for up to six devices from one little box with a single five-volt input. You'll be able to just connect up the devices. You'll have a PDU inside the box that's controllable remotely, a network switch inside the box, and serial connections with a serial server, again remotely accessible and remotely controllable, so you can turn the serial off completely during a hard reset. There are devices, there's a Cubieboard2 for instance, with a design flaw in the hardware: if you leave the serial connected while the device is hard reset, it locks the bootloader in a bad state and it can't get back into the test image. That was one of the boards that led us to think we have to have an easier way of getting our developers set up with a lab on the desk.
That can be really useful for conferences, and for every developer working on LAVA or just on the test side. You can put these up as a full rack, as a full lab, or you can have one as a dedicated unit with a little board inside and a nice little SATA drive, and you've got a whole lab in one unit.

Now let me just show you; I'll come back over here. This is what LAVA currently looks like. You can see it's using a local file; this test took about 47 seconds. It uses a local image that was built with vmdebootstrap, a nice little tool that Lars originally wrote and which Antonio and I have improved. After that you get a checksum, to make sure that what was downloaded is what you think was downloaded. There's the overlay: we come through, apply the overlay, and pass the image down to QEMU in this case, because this is a KVM test. Normal boot output, all tracked. There's your network address and network information. There's the kernel boot time, 5 seconds; that's automatically tracked, and other things like that will be tracked across the board. You can see here lava-kvm-01; that's where the overlay actually lives. There's a nice little check to make sure that there's some available space inside the KVM before it starts doing anything, and then the test runner does things like dumping the ifconfig output, there it is, that passed; getting some routing information; doing a test ping to make sure the ethernet is sane and working; seeing if it can install a package; and it passed. It came down to the end and you've got the result bundle.

The result bundle is just a way of collecting all the different results you get from the LAVA tests and your own tests. You can see here the LAVA test results: we deployed, we arrived at the test shell, there are the results, and the job came back as complete. Then the user tests work out something like that: the ping test actually failed at that point, mainly because my laptop is not bridged, just because, so when the KVM came up it was behind a NATted address.

This is staging, which is our test instance: a lot more devices, a lot more jobs running. Let's do one of these Foundation Models and see what that looks like when we go through. Again we work through the Foundation Model. You're probably better placed to explain this than I am, but it's modelling an ARMv8. Right, it's not actually an emulation, is it? It models the entire CPU. This one is running on x86 and modelling an ARMv8. So you get various aberrations when you're expecting various facilities to work: it's expecting real hardware and that's not there. You can record that, work on what's going on, and then we're back into a normal test. The point of showing that is that despite the disparity in the actual hardware, you get a very common, very similar interface.

If we go back to a real piece of hardware, let's go for a Panda. That's what the submission actually looks like. We're currently using JSON; we are looking at using YAML for future submissions once the refactoring is in place. You can see there where the image is obtained from, where parameters are inserted into the YAML file that lives in the test definition repository, and where you submit results.
The job says it's a Panda device type, gives the job a name, and sets a default timeout, which each action will use unless it has a specific timeout of its own, just like in the lava-test-shell. If we look at the complete log this time, we have to connect to a real device; it's not a KVM or an emulation on a model, we've got a real bit of hardware. This is where the serial connection comes in: this is one of the staging Pandas, over a telnet interface, and there's our nice familiar U-Boot that drives us all nuts. The Panda uses a partitioned SD card. We boot into what LAVA currently calls the master image; this is one of the ways we ensure that the test image doesn't make our Panda unusable for the next test in the line. So this job isn't testing bootloaders as such, because it relies on the same bootloader, but it allows a test image to be deployed onto the third and fifth partitions of that SD card. We check that the master image coming up is sane, in case something managed to break it in the previous test, and then we get hold of the test image. The Panda does a lot of this work itself: we send the tarballs, modified with our overlay, to the Panda, write them out onto the SD card, sort out the boot partition, check that we've still got a sane system, and then we go down for a reboot.

This is now expected to come back up into the test image. At this point we are able to control U-Boot. LAVA uses pexpect a lot, so we are interrogating the serial output at each point, and LAVA stops U-Boot and says: I'm going to set this command now, I'm going to set that command. That's how, when you reboot into the test image, LAVA makes sure you go into partition 3: it tells U-Boot to load the test uImage from partition 3, not partition 1, where the master image lives. So there's the U-Boot of the test image, booting the other partitions with its own command line, and now you're into the actual test. You're inside the test image, and it's similar sort of stuff: you work through and you've got the same interface again, with your test runner inside the image you've defined, reporting which tests are passing and failing.

That's quite a long test; it's one of our functional tests, a bit longer than some of the others. You can see this test image was an Ubuntu raring image, so this is one of the jobs that will cause trouble if I choose to resubmit it, because Ubuntu have now taken raring off their mirrors and you can't fetch raring any more. These are the kinds of things you need to think about as a test writer: what am I basing this on, and in years to come, if I want to rerun this test, will it still work? If you're relying on third-party sources for the updates and the other bits you're bringing in, and you want to run these things far into the future, put the stuff in your own repository, take a snapshot of it, keep it, so you can run that test in the future. This test suffers from exactly that problem: it's an old test, and I can't get hold of that data any more. There we go: failed to fetch, doesn't exist any more, 404. That's not a disaster; it's just that one of the tests didn't run, so the job goes on, tries to work out what other tests it can run, and reboots into the master image. Once in the master image, it again checks that it's sane, then gathers up the result data from the test image; because the test image is on the same SD card, the master image can easily just read the data off the card and pass it back to the server.
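Pulling that together, a submission for a job like this looked roughly like the JSON below. This is a sketch, not the job from the demo: the URLs, the result stream and the test repository are invented, though the command names follow the JSON job format of the time:

```json
{
  "device_type": "panda",
  "job_name": "panda-functional-test",
  "timeout": 18000,
  "actions": [
    {
      "command": "deploy_linaro_image",
      "parameters": { "image": "http://example.org/images/raring-armhf.img.gz" }
    },
    {
      "command": "lava_test_shell",
      "parameters": {
        "testdef_repos": [
          { "git-repo": "http://example.org/tests.git", "testdef": "network/basic.yaml" }
        ],
        "timeout": 3600
      }
    },
    {
      "command": "submit_results",
      "parameters": {
        "server": "http://staging.example.org/RPC2/",
        "stream": "/anonymous/example/"
      }
    }
  ]
}
```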
You can see there that direct update failed, direct install failed, curlftp failed, simply because the information wasn't there any more. That's also an indication of how you use the measurements and units; there's a little demonstration of that in this particular test. And the result bundle is there: because we had multiple test definitions inside one test job, you've got multiple sets of results. You can set it up so that there's an automatic reboot between each set of tests, or you can combine them and have all your tests running in sequence, in the sequence you've defined in your own test; they're not re-sorted or anything, you define everything about how the test actually runs.

Let's quickly look at what a test definition currently looks like. We're working on this, but this part isn't likely to change with the refactoring; the submission format may well change, but the structure here is not likely to change much. You define a little bit of metadata. It isn't necessarily deterministic: we don't rely on the fact that you've marked which OS you expect, it's just a clue for the result parsing. Same with the devices: that device list is there mainly to help other test writers work out whether the author of this test thought about supporting their devices; it doesn't preclude someone trying it on another device. Then you've got the basic set of run steps; that's a very simple one. If you go up a little further, this one has install dependencies, so you can start with someone else's image and then think: right, now I need to add stuff. If it is an Ubuntu or Debian based test image, then it's simple to install them as packages. I know that Fred's test image over there didn't have these things installed; I'm going to add them in for this particular test. And then you just go through each step and do a bit more testing.

You can see we're calling a check-ip.sh script there, which is part of the git repository that this YAML file lives in, because the test definition repository is cloned directly into your test image and you can run anything that is in your git repo. That's where you do all your scripting, all the things that don't quite fit into a single line of YAML. If you start needing a lot of pipes and redirects, and maybe a bit of regex parsing, that goes into a script in whatever language you want to write it in yourself, whatever your preferred language is, or whatever the test image can support. You can compile it from C, as long as the test image has a compiler; of course, some of them don't.
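As a sketch of the structure just described, in the test definition format of the time; the package names and device list are illustrative, with check-ip.sh standing in for the script mentioned in the demo:

```yaml
metadata:
  format: Lava-Test Test Definition 1.0
  name: network-basics
  description: "Install a few dependencies, then run scripts from this repo"
  os:
    - debian
    - ubuntu
  devices:
    - panda
    - beaglexm

install:
  deps:
    - curl
    - iproute

run:
  steps:
    # check-ip.sh lives in the same git repository as this YAML file
    - lava-test-case check-ip --shell ./check-ip.sh
    - lava-test-case basic-ping --shell ping -c 4 www.debian.org
```

The os and devices lists are only hints, as described above; the run steps are plain shell, and lava-test-case is one of the helpers the LAVA overlay provides.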
Just before Daniel, let me show you the documentation. We've written quite a bit of it, and it's all available: every time you install LAVA you get all this documentation on each of the instances, and it's the documentation for the version you've currently got installed. Naturally, a lot of it was written by the developers, so it may well be something where you need to come back to us and help us improve it; patches welcome. Well, I wrote a lot of this documentation, but I definitely know what needs patching. Right, so that's the link there for the main documentation on the main lab server, and the staging instance is there. If you've got questions after the talk, come to us on the mailing list or on the #linaro-lava IRC channel on OFTC, and there are our git repos. You can come to us directly on bugs.linaro.org with an account, or you can just use the Debian BTS and report bugs against the packages. Daniel?

So, actually, is this working? The first question I had, I think you actually answered, which was whether there's a way to configure how the serial connection is acquired, because you kept saying "it's a serial connection, it's a serial connection".

That comes under the device type configuration. LAVA will work out, or will assume, that Pandas have a particular way of being connected, and you pass the connection command on a per-device basis, so you say that panda06 is on ser2net port 17, for example.

Is that generic enough to, for example, do IPMI connections and that kind of thing?

Yes, there's a whole range of different ones that people are using. It is just a connection command: the entire command, including whether you use telnet or screen or minicom or whatever you actually need to use to get onto that board.

My second question: you mentioned the hacking mode, and that sounds like a really, really useful thing to have. Does that work on systems that don't have networks?

I'm not sure how you could actually get a connection in. The problem with the serial connection is that LAVA requires access to, and control of, the serial connection, because we are running pexpect over it all the time. With the refactoring there will certainly be ways of having secondary connections onto boards, which could allow it if the board supports a second serial port or some other way in; I'm not sure.

You could export the serial connection to a PTY and then let an incoming SSH session to the server talk to it. It'd be really useful.

Patches welcome.

OK, third question: the LAVA overlays, are they static or are they per test run?

There is data that is test-run dependent.

So what do you do if you can't repack the image? Say, for example, that what you're trying to deploy to a system is a signed lump.

We'd have to look at that; it's going to be one of the things we need to look at with the refactoring. Currently we don't have a way of doing it: if we receive an image, we will currently break it up into bits and deploy it onto partitions. But there will certainly be a need for it, so one of the plans is that we will deploy it as signed: we'll use whatever we can, structure-wise, to verify that it was signed, and then just write it over as it is.

You're almost making me forget my last question. No, you have made me forget it. Does anyone else want to go? Otherwise I'll remember it in about another 30 seconds. Oh yes: the tests that you showed, which indicated you could say "I require openssh-server" or "I require NTP", are they declarative enough that, for systems where you can't install packages, like an Android system or an otherwise arbitrarily static system, you can say: ah, I can run these tests because the system is exporting these tags in some manner?

No, that would be down to the test writer. For example, we get this problem with OpenEmbedded. OpenEmbedded is a problem in that, unlike any other distro out there, it does not identify itself: there's no tag, there's nothing in an OpenEmbedded image that says "I am OpenEmbedded". You have to check for everybody else, get to the end and think: right, it must be OpenEmbedded, because it's nobody else we recognise.

Not even an lsb_release?

No, not necessarily; it only has lsb_release if you compile lsb_release into your OpenEmbedded image. It's too much to control for. So, no, we can't necessarily be accurate enough in working out exactly what system you've got on there.
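To illustrate the per-device connection command from the first answer above, a dispatcher device configuration of that era looked roughly like this; the file path, hostname and port number are invented:

```
# e.g. /etc/lava-dispatcher/devices/panda06.conf (illustrative path)
device_type = panda
connection_command = telnet serial-server.example.org 7017
```

Because the whole command is configurable, swapping telnet for screen, minicom, conmux or an IPMI serial-over-LAN command is a one-line change per device.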
So we can't always work out whether what you need is available on the image, and it's not really LAVA's job. You gave us the test image, you gave us the test definition; if you break it, you get to keep both bits. You say "I want you to marry these test definitions to these images", and LAVA will do that and marry them together, but if it breaks, it breaks.

And a final thought on the packaging thing: if LAVA were to provide a way to specify pseudo package names, and I don't mean the Debian pseudo-packages, then you might find that makes the Fedora people rather more likely to come on board, because at the moment you're effectively saying this depends on these Debian package names.

Well, the test definition might be different, but if you can have more test definitions that are generic across distributions, then you might get somewhere with that. And it's easy for someone to come along with a LAVA install script that is based on yum. It is extendable; this isn't hard-coded somewhere in the depths of the Python code of LAVA, it's in a shell script that's copied onto the board, so the Debian and Ubuntu images have a little shell script, the LAVA install-packages step, which calls out to apt-get. Antonio, is there a question?

The way these dependencies work is that, besides having drivers for each type of device, you also have something similar for each type of test image, so you have Debian, Fedora and so on. When your test definition says it depends on foo, it will install foo in the right way on Debian or on Fedora. For OpenEmbedded, where you can't actually install stuff, your test image needs to have that stuff already installed, and the dependency installation step will just give a warning: you depend on these packages, I don't know how to install them, so I'm assuming your image already has them. And in the case where you have packages with different names across distributions, you can specify the Debian package names and the Fedora package names for the foo or bar that you need. So it is possible to keep test definitions generic, so they can be used on every single image.

That's fine, thank you.
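For reference, a purely hypothetical sketch of the per-distribution dependency idea Antonio describes; the talk confirms the capability exists, but these key names are guesses rather than a documented schema:

```yaml
install:
  # Hypothetical keys: the exact schema for per-distribution package
  # names is not shown in the talk, so treat these as illustrative only.
  deps:
    - foo                # generic name, used where it matches
  deps-debian:
    - libfoo-dev         # Debian/Ubuntu spelling
  deps-fedora:
    - foo-devel          # Fedora spelling

run:
  steps:
    - ./run-foo-tests.sh
```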