Hi everybody. I have the pleasure of introducing Antonio Terceiro, a Debian and Ruby developer, with the tutorial "Functional testing of Debian packages".

Thank you, everyone. So, we heard a lot today about testing. I'd like to get an idea of where you are coming from. Raise your hand if you maintain a Debian package. Now keep your hand up if your package supports autopkgtest... and drop your hand. Wow. You are the right crowd for this talk.

Automated tests are great and we need more of them; I hope I won't find anyone who disagrees with that. Of course, they don't solve everything: for lots of types of applications you still need manual tests. But if you have a safety net of automated tests, you can spend your time handling other things instead of doing all the testing manually.

These are the topics we are going to cover. I'll talk a little about the history of this effort to increase the testing coverage of Debian. We'll discuss the specification for declaring the tests that you want to have run for your package. We'll discuss the tools you can use to run tests locally, reproducing exactly the same setup as the Debian continuous integration service. And then I'll give several examples of packages that have tests, and that use several techniques to test their features.

So, autopkgtest was actually not invented yesterday. The very first upload was made in 2006, so it will be ten years old next year, and it took quite a while to get some coverage in the archive. In 2012 a DEP was proposed, DEP-8, which described "automatic as-installed package testing".
On the DEP page you have a pointer to the actual location of the spec, which today lives in the autopkgtest repository. The idea behind the DEP-8 specification is to describe a standard interface by which packages can declare their tests, in a way that lets them be tested against a real system. The point of autopkgtest and DEP-8 is not to do build-time tests; it is to test the packages as installed. It is possible to run upstream test suites, but usually you want to test the actual package: with its binaries in /usr/bin, and with its library code, whether Python, Ruby or whatever, installed in the right place. That way you don't have to mess with library load paths and all that kind of stuff; you test the actual thing users are going to use.

Then there was this idea floating around that someone, somewhere, should run autopkgtest on the whole archive. In January 2014, so a year and a half ago -- I had started a little earlier than that, but in January 2014 I had something working -- I built what I called Debian Continuous Integration. It was very ugly; the HTML was very ugly and very basic. One and a half years later, after we had two GSoC students last year -- one of them worked with me on enhancing the UI and making it ready for testing multiple suites and architectures, so today the UI has support for that even though we don't do it yet -- and after a lot of my own hacking time, today we have ci.debian.net, which is quite nice. On the left it shows the latest results.
I took this screenshot yesterday. You can subscribe to the feed, which is actually very quiet if you take into consideration the number of packages in the archive: just a few entries a day, because it only notifies you when a package changes state. If a package used to pass and now fails, you get an entry in the feed; but if it is always failing or always passing, you don't get noise saying pass, pass, pass or fail, fail, fail. So it's quite nice to watch this feed to get a sense of what's going on: when I break something in Ruby, I get lots of failure entries for Ruby packages, and so on.

Today we also have the same thing for Ubuntu. They run amd64 and i386, and a few of their suites. Yes -- the question was whether they really use this as a gateway to the archive. They do use that data to drive their britney instance.

And this is the number of packages being tested by Debian CI. In the very beginning it was about 190, I think, a very small number, and today we are at 4,300-something. You'll notice that we cheated a little bit there in May 2015, when we whitelisted a bunch of Ruby and Perl packages. All of them have a very similar structure, and you can run the tests in them with common code, so we don't have to update every single package; we just whitelisted a bunch of them, hence the big spike in packages tested. You can also see that most of them pass their tests. Since the very beginning there has been a small percentage that always failed, but we have added far more passing packages than failing ones.
If anyone here works in a packaging team for programming-language packages, those can probably also be whitelisted to run all their tests with a single command. I would imagine that at least the Python ones could have a reasonable common way of running tests; I have no idea about other languages.

Today we have coverage of 18.5% of the source packages in the archive. The coverage is counted per source package, although the tests are supposed to exercise the installed binaries. We started with less than one percent, so it was a reasonable increase given the amount of effort it takes: it's not as difficult as I hope to show you here, but it's not trivial either.

This is how your package page looks for a given architecture. You have the whole history of test runs, with logs and everything. You have the artifacts, which give more details about the environment where the test was run, and the logs also contain lots of information. CI data is also available today on the Debian Maintainer Dashboard on UDD, and on the DDPO pages: if you want to see the CI status for your package, you can just go there. The package trackers also list the CI results -- I'm pretty sure the new one does; I don't remember about the old one.

In the future, we are almost ready to migrate to a distributed setup, where we'll be able to put a lot of CPU power into running tests. That will reduce the lag between an upload to unstable and actually getting the test run, and it will also enable us to run tests for testing, and maybe other architectures; today we only run amd64.

Now, talking about the specs again: dep.debian.net is your friend.
There's a link there. There are two basic elements to the specification. One is declaring, in the source stanza of your debian/control file, that the package has a test suite. You do that by adding a Testsuite field with the value autopkgtest. You can use other values: the Ruby packages I mentioned use autopkgtest-pkg-ruby, and that's what helps us identify how exactly to run those tests. But if you are writing tests specifically for one package, you want the plain autopkgtest value here. The field used to be spelled XS-Testsuite, but you don't need the XS- prefix today; dpkg already supports the field, so it should be fine by now.

And then you have an extra control file, debian/tests/control, in which you list your tests; that's what the rest of the spec is about. You just declare tests: in this case you are declaring three tests, and each one has to be an executable inside debian/tests. It can be anything -- you can build C binaries with those names if you want -- but the most common case is just writing shell scripts, or scripts in another interpreted language. Each one has to be executable; if it exits with zero the test passes, and if it exits non-zero the test fails. It's as simple as that.

If you have a simple way of running your tests, you don't have to write a one-line shell script: you can declare it directly with Test-Command, followed by whatever you want. You can also have extra fields. You can use Depends; the @ symbol means all the binaries built by the source package. When the testbed is prepared, all those binaries will be installed, plus the ones you declare. So in this case I want all my binaries installed, plus some test tools I'm going to use to run the tests. You can also have multiple tests with different characteristics.
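Putting those pieces together, a minimal sketch could look like this (the package name, test names, and commands are invented for illustration; the field names are from the DEP-8 spec):

```
# debian/control -- source stanza
Source: mypkg
Testsuite: autopkgtest

# debian/tests/control -- one stanza per group of tests
Tests: smoke-test command-line-test integration-test
Depends: @, curl

# or, with no separate script file at all:
Test-Command: mypkg --version
Depends: @
```

Each name listed in Tests: has to be an executable file under debian/tests/; with Test-Command there is no script file, just the command line itself.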
You just use multiple stanzas in the control file. In this case, I have one test program that needs a given test tool, and then a smoke test that doesn't need anything besides the binaries. If you don't say anything, Depends is assumed to be all the binaries: "Depends: @" is the default, so if you just need your binaries, you don't need to say anything. You can also ask for your build dependencies to run the tests: in the case where you want to run an upstream test suite that uses xUnit frameworks or something, you can just use that to get your build dependencies installed.

Then you can specify Restrictions on the tests, which are additional requirements. I'll give you an example here. You can say that a test breaks the testbed: something you do in the test script messes with the system state in a way that's not going to work when you run a second test there. In this case there is only one test, but if you had two and one of them declared breaks-testbed, a new testbed would be instantiated for each test; otherwise the same one is just reused.

Someone asked whether this means that the testbed has to break, or that there's a big chance that it breaks. It might mean both. The point is that when this test is done, you don't want to reuse the testbed. Also, if you are running these tests against your real system, you want it skipped: if you run with anything other than a virtual machine or a container, the virtualization driver will skip that test. So if you're running on your main system, especially as root, this test is going to be skipped, and you don't break your working machine.

One more question on that topic: how are those machines reused? Is there a chance that there's clutter from some previous run left in the testbed used for another package later?
No. I'll get there, but autopkgtest supports virtualization backends, so you can choose whether to run a test in a KVM machine, in an LXC container, or in an schroot session. Usually, if you have three tests with the same dependencies and everything else equal, they will all run in the same testbed; but if you use different stanzas, each one gets its own testbed, and if you declare breaks-testbed, the testbed is reset after each test, so you always get a fresh environment.

You can also say that your test needs root. Since the test scripts are just arbitrary programs, you can even test maintainer scripts: you can declare a dependency line that avoids installing your binaries, and then call apt from within the test to install them, so you can run code before and after the installation and check that the maintainer scripts work. So you can use needs-root if you need it.

By default -- and this is a common gotcha -- if your test outputs anything on the standard error stream, it is considered a failure. So if your test sends anything to stderr, you either want to declare allow-stderr, or you want to redirect standard error somehow.

Then you can specify which level of isolation you want from your host system. If you declare isolation-container, only environments as isolated as a container, or more, will run those tests. If you want to mess with system services -- stopping services, restarting services, and stuff like that -- you don't want that to run in a chroot, because that's going to cause problems; so you can use isolation-container.
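A hedged sketch of such a stanza (the test name is invented; the restriction names are the ones from the spec):

```
# debian/tests/control
Tests: service-restart-test
Restrictions: needs-root, allow-stderr, isolation-container
```

With isolation-container declared, runners that cannot provide at least container-level isolation -- a plain chroot, or a test run directly on the host -- will skip this test rather than risk damaging the system.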
It can also Use isolation machine To say that I can this can only be run on a virtual machine or something even more isolated So you want to do that like if you want to load kernel modules if you want to I mean test things you usually things that are related to to the kernel You can also say that you need your recommends Which will not be installed by default, but you can say that you need it Now talking about tools The very simple tool that probably everyone has already installed is sag t which is part of dev scripts It will assume that it's being run from the root of the source package and you just run the test there So it has a few limitations. So if if it finds any restriction that it doesn't Know yet it will skip your tests But it's useful as a first step. So You if you maintain packages you already have it installed you just Create the test definition and run s ad t from your source package and it works And then you you have the The full thing which is ad t dash run from auto package test It's a little more complicated So you you first say the input options For ad t run. So you can run the test from the current directory You can run from dsc You can you can run from changed file. You can pass additional binary dabs You can do lots of stuff. So It's the main pages are useful read and then you use yes three dashes And then the virtualization arguments which says Which kind of virtual environment are going to use to run those tests? So there that you can use So the most common case which is equivalent to what s ad t does is Dot slash and the so run the test from the current source package And on the new virtualization Driver, so that's going to run only a real machine It's not going to be it's not going to run as root, but it will run on your real system. So So in my opinion, this should be the default. 
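The local workflows just described, as hedged command sketches (the schroot name and the .changes filename are invented; the exact runner spelling can vary between autopkgtest versions, so check the man pages on your system):

```
# simplest check, from the root of the source package (devscripts)
$ sadt

# equivalent adt-run invocation: current directory, null virtualization driver
$ adt-run ./ --- null

# test a freshly built upload inside an schroot, using its binaries
# instead of the ones from the archive
$ adt-run mypkg_1.0-1_amd64.changes --- schroot unstable-amd64
```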
So if you just say adt-run with no arguments, it should just do that. I reported a bug about it briefly before flying here; I hope we are going to sort that out.

Then you have the more complicated use cases. This is more or less how the CI does things: it uses schroot, and you give the name of your schroot -- it doesn't have to be this name, it can be anything you want. In this case I'm also passing a .changes file, so it will both read the test definitions from the source package in there, and use the binaries from it for the testing. So if you want to test an upload before actually doing the upload, you can just pass the .changes file, and your binaries from there will be used over the ones in the archive.

You can do the same thing with LXC, QEMU, and SSH. The SSH driver assumes you have some mechanism to instantiate virtual machines in the cloud, or somewhere, that's going to magically create virtual machines for you; it will then SSH into them and run the tests there. There are requests for pbuilder and Docker support, so if you care about those things, you can help write the drivers for them.

Now, let's look at some examples and a few tips and tricks. I forgot to open the terminal in advance, so let me do that -- I think a white background is better to see, right? This is pinpoint, the presentation program I'm using here. It has a very simple test definition. Can anybody see that? Is the font big enough? Okay. It has a simple test script called smoke-test, and it uses its own binaries plus shunit2. This is my first tip: shunit2 is very useful. It's a testing framework like you'd find in any language, but for shell scripts.
You can do everything with it that you do with other such tools. Then there's the actual script. What I use to overcome that standard-error restriction: I just always redirect stderr to stdout, so I don't need to care about the restriction at all. You have the test functions here, like you would in any other language.

Since pinpoint is a graphical application, it's not very practical yet to test the actual user interface, so here I'm testing the PDF output feature. I just create a PDF and then check that it's a valid PDF: I assert that "file --mime-type --brief" on the PDF actually returns application/pdf. That way I'm sure that if this test passes, the PDF output feature is not completely broken. The test below it is just a test for a corner case: when you had an empty background definition, it used to crash.

To use shunit2, you just source it from the bottom of your script: you create functions whose names begin with "test", then source shunit2 at the end, and it runs them. You can run this with sadt -- sadt hides the output from you, it just keeps that spinner going and gives you the result -- or, if you run it with adt-run, it gives you the full output of everything. It does a few checks in the beginning, checking that I have the dependencies: since I'm running the test on my host machine, it will not be able to install things, because it's not being run as root.
Instead, it checks that the dependencies are actually available. So if you're running on your local machine, you have to make sure that you actually have the corresponding binaries installed; otherwise your tests are going to run against the binaries from the archive. It passed, everything is nice. So, tip one: shunit2 -- very nice if you want to write tests as shell scripts.

Now, Rails. Tip two I already mentioned: just redirect the standard error. The test definition is also very simple, a single script called new-app. This script is going to create a new Rails application and exercise some basic Rails functionality, like adding a new table with its corresponding model class, and then running the tests on that, to make sure everything works. In this case I'm declaring allow-stderr, and it needs-recommends, because a full Rails application -- the template one that Rails generates -- actually needs the recommended packages.

And then the script. The tip here is how you interface with autopkgtest: you always have an ADTTMP environment variable, which points to a temporary directory that is private to that single test. You can use it as a directory to change into, for instance if you want to make sure you are running your test in an empty directory, or you can store things there, and so on. So I just create a new application, change directory into it, run some commands, and if everything passes, I'm fine. This has already helped me a lot: Rails has a huge dependency chain, so if something in the middle breaks, this script -- which is very simple, just creating a very basic application and running its tests -- already detects problems with the dependency chain.

Another thing I want to show you is this handling here.
If you check for the existence of ADTTMP, you can know whether you are running inside autopkgtest or not. In this case, if I don't already have one, I create one, so I can just run this test script quickly from inside the source package and it runs everything for me, without needing anything else. This is useful -- do I have ten minutes? okay -- it's useful if you have several test scripts and you just want to run one of them; if you handle these things, you can just do it.

Redmine is another package that has an interesting setup. The control file is actually a little more complicated. I test the combinations of all the supported databases. I use Test-Command because I use a single test script with arguments. In the first stanza there, I'm using SQLite 3, with Apache 2 and Passenger as the web application connector -- how the web server is going to interact with the actual application. Then I have one for PostgreSQL, one for MySQL, and then another one with SQLite again but using a different Apache integration module, so I can test all these cases.

This means that before I upload a Redmine, I need to wait a little while for these things to run: each one is going to install a full Redmine installation from scratch on a new backend, and last time I looked it took like seven minutes to run, because everything starts from the beginning. I won't bother you with the details of the script, but the point is that you can also pass parameters, so you can have the same script do things slightly differently with different backends. Let me show you the script. You can restart services: here I'm changing the Apache configuration, then just reloading Apache, and then checking it. This is very basic, but it also helps a lot.
It just fetches the address where Redmine is supposed to be, and tests that the HTML returned contains something sensible; if that were a 404, a 403, or a 500 error, the test would fail.

Now, a random Ruby package, so you can see how packages handled by common code work. You will notice that there is no debian/tests/control file here, but the source package declares "Testsuite: autopkgtest-pkg-ruby", and the CI environment knows how to handle that. How does it do it? Using the output of autodep8, a helper tool that autopkgtest will use. It's the tool you want to patch if you want to add support for new kinds of packages. For a Ruby package, it will automatically create a control file, autopkgtest will consume that file, and it will run the tests just like that; it just does its thing without anything having to be explicitly declared.

It failed. Nice. I love live demos. All right -- I'm running the test against the old binary; that's what happens when you do that. But I do run the tests before uploading, in my clean environment, just so you know.

autodep8 has support for four types of packages: Ruby, Perl, Node.js, and kernel modules that use DKMS. If you want to add support for new types of packages, you can just look at the commits that added these, because they are very simple. You don't have to touch anything that already exists.
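As a hedged sketch of what that looks like from the command line (assuming an unpacked source package of one of the supported types):

```
# inside an unpacked source package that declares, e.g.,
# "Testsuite: autopkgtest-pkg-ruby" in debian/control:
$ autodep8
# prints the generated debian/tests/control content on stdout,
# which autopkgtest then consumes in place of a hand-written file
```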
You just have to create two new files one to detect whether the The package is of that type and then another to create the actual output for the control file You you can also run look at debc i itself In this case debc i runs its own build time test suite against the installed version so Just call this script here Just a little uh It just makes sure that you are running against Not against the build tree the the source tree because Lots of ruby packages have that so if you are running from inside the ruby tree, it will modify the load path The library load path to load libraries from the source package instead of loading the ones from the system Which is what you want when you are building Otherwise, it's going to run its unit test against the old version that you might have installed but when you're using Autopac says you don't want that so in this case I just Copy every test to a temporary directory Change there and run the test from there. So i'm sure that's not going to load the code in the source tree This test takes a little bit. So i'm not going to run them But then this is one way of running upstream test suites when you If you If that's possible at all, I know lots of test suites Especially for stuff reading c and c plus plus actually require building the whole thing. So That might get complicated, but at least for Interpreted language is usually doable I'm going to skip this one and let's leave some time for questions I'm also going to skip this one this chip is very helpful So if you want to inspect the environment after the test finishes Especially if you're running against a veto machine or a container or a ch route you can pass Dash dash shell which will always run a shell after the test or you can pass dash dash shell dash fail Which will only run a test if the test fail It will only run a shell if the test fails. So you can inspect What's going on there? 
So, please join the movement to increase the test coverage in Debian even more. You can add tests to your packages, and you can add generic tests for a packaging toolchain to autodep8 -- it's very easy. Talk to me: I'm here until the end of DebConf, and we can sit together and do it very quickly. You can also talk to me if you want to help improve and maintain ci.debian.net.

A few acknowledgements: Ian Jackson, for creating autopkgtest; Martin Pitt, who currently maintains it -- he's doing a very good job, with several improvements, and he's very responsive; and Brandon Fairchild and Lucas Kanashiro, the GSoC students from last year. Brandon did a lot of the UI work, and Lucas sent a lot of bug reports with patches, mostly for packages that had broken tests.

That's it, I think.

Q: I remember seeing a check that you actually built the package before you run the tests. Is that logic needed, and why?

A: If you are preparing a new upload and you want to run the tests, you want to run them against the binaries you just built, right? So you either have to install them, if you are running on your current host system, or you need to pass the .changes file, for instance, to get them installed into the chroot, container, or what have you.

Q: What I mean is, in the CI?

A: Ah, no, right. No, we don't build the package; we test whatever is in the archive. autopkgtest can use the source package as input, so you can tell it to actually download the source package from the archive and run the tests from there. That's what we do in the CI.
We just don't build packages, and that way it's much faster, and you are testing exactly what you are shipping to users.

Q: I wanted to know if the Testsuite field can have multiple values. There is autopkgtest-pkg-perl; can we have that together with plain autopkgtest, for a custom test of other functionality?

A: I'm not sure; I'd have to check. What autopkgtest does is: if it doesn't find debian/tests/control, it calls autodep8. So if you do have a debian/tests/control, you would want to include there whatever autodep8 would generate for your package as well. You can also disable the automatic tests: there's a parameter for adt-run -- I'm not sure of the exact name -- where you can say "I don't want whatever autodep8 generates".

Q: Have you ever tested dpkg maintainer scripts, like config, prerm, postinst, something like that?

A: I didn't, but it's possible. What happens is: if you use the default dependencies, your package will be installed before your tests have a chance to run. If you want to do that, you can put something like "Depends: dpkg" instead of "@", so the dependencies are satisfied trivially from the beginning, and then declare that your test needs to run as root. Then, from the test, you can do your preparation and call apt to install your package. This way you can have code running before and after the installation, and test whatever you want.

Q: Is there any existing best practice to integrate this automated testing into an sbuild environment?
A: Sorry, say that again? The last part -- into an sbuild environment?

Q: With sbuild, I mean automatically running the tests after sbuild, for example. I'm just asking: of course it's somehow possible, but is there any existing best practice for it?

A: I'm not sure. What I do is that I always run the tests against the .changes file I just built, after the build, and I have a wrapper script that does that for me.

Q: Okay. I'm just asking because we are actually building in an existing sbuild environment, and I'm just thinking...

A: You mean reusing the chroot that sbuild just used? Maybe that could be an approach. I think there is a wishlist bug against sbuild to call autopkgtest after the build, but I'm not sure.

Q: Okay, so it's still an unresolved question.

A: I would say it's doable, yes.

Q: Okay, as long as that's out there, it shouldn't be a problem. Is there a restriction available to have an X11 server -- an actual, real one -- running outside of the test? And especially, can the CI machines provide this, at least for the tests that require it?

A: I don't think so. I know Ubuntu runs tests on bare metal, so they do have dedicated real hardware to run things like video driver testing, that kind of stuff, but I don't know exactly how that works. What there is in debci is that you can have tags associated with a test run, and then use tags to identify a backend where you want to run it; but we don't have a good solution for this yet. You can use xvfb-run, which creates a kind of fake X server --

Q: Sometimes that's not sufficient.

A: Yeah, I know. I don't have a solution for that yet.

Q: Is there a possibility to run tests against kernels, or tools providing virtual machines, within nested virtual machines?
So let's say we run one The auto test is run in a virtual machine and inside something is tested which Runs or tests virtual machines Yeah, I don't I don't see why not As far as kvm and kvm. It's actually supported on The host you are running I don't see why not. Is it possible to run these tests before A package gets in the archive Uh That's a good question it would be It would be possible. I think what ubuntu does it they have Proposed suite and then Only after a few checks it goes into the Into the suite that they they use that use the actual development version Get so it's possible I mean We have more questions Okay, so thanks a lot. Antonio. Thank you