I'm going to talk about debci and the Debian Continuous Integration project. In the beginning there was autopkgtest. Ian Jackson created it in 2006, so quite a while ago, and it's currently maintained by Martin Pitt as part of his job at Canonical. For a very long time we expected that somehow, somewhere, someone would be running autopkgtest for every package, all the time. It took a while to happen, so I decided to bite the bullet and try it. During the Christmas break last year I started to hack together a solution. It was very crude at the beginning, using the most naive solution for everything, just to make sure I could get something that would work. I made it public in January at a MiniDebConf. The UI was a mess and everything was very suboptimal, but that was fine: people got interested and I got excited about the project. Then in April we got two GSoC students working on debci, which was very cool, and I will show some of the results here. It was very nice to have them. They finished the work in August, but I think both are excited and will probably stick with us and keep working on debci, so that's very nice. Here is one of the first results, from one of the students, Brandon Fairchild. He worked on the web interface. People who knew the earliest versions saw that it depended on JavaScript for everything; the UI wouldn't work at all without JavaScript. There were also several limitations, like hardcoding unstable/amd64 as the only known suite/architecture combination. Brandon worked on making the UI scale to multiple architectures and multiple suites, and on making it usable without JavaScript. So there you have the initial page, where you can browse packages by name, and you can use the search on the right.
There's also a news section on the left, so everything that breaks and everything that unbreaks is presented right there on one page, and you can use the search box on the right. You can also look at the history of a package on a given architecture. As you see, there's a lot of data, and the maintainer needs to know what happened with the tests. Then you have a status page with graphs showing the evolution of the system. When debci started to run, we had fewer than 200 packages with test suites, as far as autopkgtest understands them. Obviously more packages than that have test suites, but they were not integrated into the system. Now, about eight months later, we have close to 600 packages. 400 packages in a little more than six months is very nice, and I hope we can get close to the 20,000 source packages in the archive. These are source packages; autopkgtest works by source package. Then there's the other GSoC project, by Lucas, who is there in the back: finding the packages that have broken test suites and fixing them. As of last night he had reported more than 20 bugs; 12 of those were already closed, and another one is pending. It was very interesting work, where we could understand what kinds of things were broken in test suites and fix them, so that they keep working and only break when the actual functionality breaks, not because the test no longer matches what the package does. It's also important to keep in mind the distinction between debci and Debian CI. debci is packaged in Debian as a solution for having continuous integration integrated with a Debian archive. It will process dependencies and know when to run tests, based on which packages got uploaded since the last time it checked.
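The scheduling rule just described can be sketched as a small simulation (this is illustrative shell, not actual debci code; the dates and variable names are made up):

```shell
# Illustrative sketch of debci's scheduling rule, not its real implementation:
# a package is due for retesting when any of its dependencies was uploaded
# after the package's last test run.
set -e
last_run=20140801                # date of this package's last test run (made up)
dep_uploads="20140730 20140815"  # upload dates of its dependencies (made up)

needs_rerun=no
for upload in $dep_uploads; do
  if [ "$upload" -gt "$last_run" ]; then
    needs_rerun=yes              # a dependency changed; retest the package
  fi
done
echo "$needs_rerun"              # prints "yes"
```

In the real system the comparison is of course driven by the archive metadata, not by hardcoded timestamps.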
And so if your package has dependencies that got uploaded, your package will be retested to make sure everything still works. Debian CI is then the Debian instance of debci, running on ci.debian.net. To be honest, debci still has some things that are hardcoded for ci.debian.net, but the idea is to remove those hardcoded bits and make it general. The team so far is myself; Martin Pitt, who is a Debian developer working for Canonical; and the GSoC students, Brandon and Lucas, who will hopefully stick with us in the future. There are several ways to help. There are the obvious ways, like sending bug reports and sending patches. You can also fix broken test suites, making sure a test suite is okay and is only going to fail if there is an actual problem with the package. You can add test suites to your packages, so we will know if they break in the future. And if you have hardware to spare, especially non-x86 hardware, it's probably going to be very useful: talk to me and we can coordinate with DSA to get the hardware into Debian, maintained by the Debian CI admin team. Then we can have architectures other than amd64, which everyone has anyway. Speaking a little about the Debian CI architecture: the technology we use is autopkgtest plus its backends. Debian CI doesn't deal with the mechanics of running tests; the test runs are all done by autopkgtest, and Debian CI just coordinates, reading from the archive, knowing when to run tests, then collecting the results and presenting them in the web UI. It's written in Ruby and shell. I started with shell, but things got complicated, and at some point you're just not able to keep programming in shell. It has a test suite itself, so, dogfooding for the win: we know if Debian CI itself breaks. And then, how it works now.
So everything was done in the very simplest way, just to make it work, and it all happens on a single node. You have debci-batch, which is the process that runs every six hours, to match the dinstall runs; it runs about three hours after dinstall. It checks which packages need to be run, puts them in a list, and calls debci-test for each package. debci-test runs the package's tests with autopkgtest and stores the results in the data store. An interesting thing is that since the beginning the store has been append-only, so you always just append new results. Then debci's index generation step takes those results and generates what you see in the web UI: the HTML content, plus JSON data files you can use for any kind of automation. But this architecture obviously doesn't scale, because it's all on a single node, and we need something better. For the future we have some very nice opportunities. First, incoming.debian.org is now public, so you can get a package as soon as it has been built on a buildd, without waiting for dinstall. The plan includes reading packages from incoming, so we can run, say, every 10 minutes and know when packages got uploaded. We need distributed worker nodes to be able to scale out with more CPU power. And I have started a conversation with DSA about moving this into Debian infrastructure, so it doesn't depend on me forever. So the future looks good. My idea is to move to something like this, where the gray boxes are nodes. The idea is to have a controller node that runs debci-batch. It will still read from the archive, but this time all the time, because we'd be using incoming.debian.org packages, so we don't have to wait for dinstall. And then every package that needs to be run will be put in a queue.
And then we have several worker nodes reading from that queue and running the tests, sending the results to the append-only storage, which is synced back to the controller node, and the controller can just keep updating the data in a loop. Ideally we will be able to have test results very shortly after a package is uploaded, instead of a couple of days later as it is now. I also plan to do more stuff, like adding more suites: running tests on testing, experimental, stable plus backports, and oldstable plus LTS. There is also work on running functional upgrade tests. piuparts already handles upgrade tests, but it only tests that the upgrade itself worked; you have no way of knowing that the application is actually going to work after the upgrade. There's a patch I wrote for autopkgtest supporting this: it will install whatever package you want on, say, wheezy, then upgrade to jessie, and then run the test suite script. Then you can test whether your upgrade actually works and leaves the system in the desired state. Then there are all kinds of wishlist items. Email notifications: some people want them, some people don't, so we have to find a solution that works for everyone. Another interesting idea is a news feed per maintainer. I didn't comment on this yet, but each package has an RSS feed of state changes: if your package was passing and then it failed, you get a new RSS item, and the other way around. If your package always fails, you won't be spammed with fail, fail, fail all the time, and if it always passes you also won't be notified. Right now that exists either for all packages together or for each individual package, and the idea is to have a per-maintainer news feed, which is more useful: you can subscribe to a single feed and receive everything that should be of interest to you. And then all kinds of requests: if you are using ci.debian.net on a daily basis, you can talk to me and we can put stuff on the to-do list.
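The functional upgrade test described above would go roughly like this (pseudocode; the suite names are from the talk, everything else is illustrative, and the actual interface of the autopkgtest patch may differ):

```
# pseudocode sketch of a functional upgrade test
create a wheezy testbed
install the package under test, plus its test dependencies
switch the apt sources from wheezy to jessie
run apt-get dist-upgrade
run the package's test suite script
# pass only if the application still works after the upgrade
```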
So now that I've talked about the CI itself, I decided to put together a mini tutorial on writing tests for your package. I hope it's going to be useful; we can also schedule an ad hoc session in the following days if you want to follow up and look into actual packages. There are a couple of things you can read. autopkgtest has a lot of README files with documentation on how to specify tests, the actual specification of the test control file format, and how to run tests against different types of testbeds: you have chroots, you have KVM, you have containers, you have running tests against a remote system using SSH to connect. So there's lots of stuff there. There's also the ci.debian.net documentation, which has a small FAQ at the beginning and then explains how to reproduce the tests as they run on Debian CI, which is useful. Then there are two important points to keep in mind. First, the goal of autopkgtest is to test packages as they are installed, so you should not use code from the source tree, except the test suite itself: if upstream has a test suite you can run it, but you have to make sure it's not going to use the local copy of the files in the source directory, and will instead use the installed files. Second, please avoid a full build if possible. It is possible to specify that your test suite requires a full build of the package, but if you do that you are duplicating what the build infrastructure already does; let's leave the builds to the buildds and the tests to the test infrastructure.
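To illustrate the "test what is installed" rule, here is a hedged sketch of the move-aside trick in shell (the directory names are illustrative; this is not code from debci or any particular helper):

```shell
# Illustrative sketch: hide the source tree's code so the test suite can only
# see the installed copy of the package.
set -e
srcdir=$(mktemp -d)
mkdir -p "$srcdir/lib"        # stand-in for the package's source code
cd "$srcdir"

saved=$(mktemp -d)
mv lib "$saved/"              # move the local code out of the way
test ! -d lib                 # imports now resolve to the installed files
# ... run the upstream test suite here ...
mv "$saved/lib" .             # put everything back afterwards
test -d lib && echo restored  # prints "restored"
```

This is the same idea the gem2deb helper applies for Ruby packages, as discussed later in the talk.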
So the basic structure is a debian/tests directory with a control file, which is very similar to the debian/control file: you have one paragraph for each set of tests you want. The simplest form is to just list the names of the tests; you then have binaries or scripts, anything executable, inside debian/tests with those names, and they will be executed. In this example this one will pass and this one will fail, so the entire test run will fail, because of the script at the bottom. And the test can be anything that is a program: it can be a shell script, a Ruby script, Perl, Python; it can be something you build during the build of the package, so it can also be a C binary. Then there are a couple of ways of running the tests. The simplest one is to use sadt, from devscripts: you run it from your source directory and it will run the tests, but sadt is not up to date with the newest features of autopkgtest and the DEP-8 specification. So the next thing you probably want to do is use the actual adt-run runner, provided by autopkgtest. You pass the current directory, and then, after the three dashes, the null runner, which means: don't use any virtualization. That will run the tests on your local system, and it assumes the package you just built is installed on your system. If you don't have the package installed, it will fail, because the test dependencies are not satisfied. You probably want to run the tests against a clean system instead, so you can also use the schroot virtualization backend, in pretty much the same way. Important to note: you want an apt proxy, otherwise it will be downloading stuff from the net every time. As for ways to get a testbed that's not your local system, the easiest way is to install debci and run debci-setup as root; it will create the chroot exactly
the same way it's created on the server, so you have the exact same chroot that runs on the server. Then you add yourself to the debci group, to have permission to use it, and you just run adt-run with --user debci. That's the simplest form, passing the local directory, but if you look at the autopkgtest documentation there are several other things you can pass: binary packages, source packages, .changes files, and it will do the right thing with each one. debci-setup creates a chroot called debci-<suite>-<your architecture>. You can also run tests without those trivial wrapper scripts: instead of specifying a list of test scripts, you can just use the Test-Command field and call whatever you want. If that returns zero your test passes; if it returns non-zero your test fails. You can specify dependencies for your tests. If you don't say anything, it defaults to the @ symbol, which means all the binary packages built by this source package; so by default the testbed will get all the binary packages installed, and then the tests will run. Otherwise you can specify an explicit list of dependencies; for instance, if you're using an external test runner, you can add it to your dependencies. You can also specify restrictions on the environment the test expects. You can say that the test needs to be run as root, and each testbed will support that or not, but most of them do. You can specify that the test also needs the recommends, so the recommended packages will be installed together with the binaries built from that source package. And you can say that you allow output on standard error: by default, if there's anything on standard error, the test is assumed to have failed, which doesn't make sense most of the time, because standard error is heavily abused by all kinds of programs. So usually you will end up either specifying allow-stderr or redirecting standard error to standard output. And
then, in the GSoC project, we found some common problems that you might want to avoid. The first one is missing dependencies; that's why you always want to run your tests in a clean environment, using at least a chroot. Also missing restrictions: sometimes, I guess, people still build packages as root, and then when they run the tests they just assume the test is running as root, and that's not the case in most situations, especially in automation scenarios. So assuming root is a common mistake. Sometimes assuming root is just a matter of assuming the right PATH environment variable: calling stuff in /usr/sbin without a full path, as a regular user, will probably fail. But sometimes it's really permissions: if you need to change system configuration files, then you need root, and there's no escape from that. There are also a couple of simple programming errors, like capitalization issues and all kinds of silly stuff, and some locale assumptions: a clean system is usually using the C locale, and some tests depend on UTF-8, so if you need UTF-8, make sure you export that in the test environment. Looking at a real example, the ruby-ffi package: right now it has two tests, one of which is a simple smoke test, a script in debian/tests. We also just released a new version of gem2deb, the package helper for Ruby, which adds support for autopkgtest: it will run the tests for the package without any of the local code. It moves the local Ruby code away and makes sure the tests run against the installed version of the package, and with that we'll be able to enable test suites for all 500 Ruby packages with simple source uploads. You can see the dependencies there: the test depends on all the binary packages, plus the stuff I need to run the tests. And the smoke test is a very simple test that just exercises the most basic functionality of the package, which is useful, because ruby-ffi is a
very complicated library. People who deal with bootstrapping know that Ruby and FFI separately are usually a problem, and Ruby and FFI together even more so. This test just binds a function from libc and calls it from Ruby; if that works, you can be pretty sure your system is not completely broken. And that's all I had. Maybe I went too fast, 25 minutes, so we have some time to discuss and for people to ask questions, and if there is interest we can schedule an ad hoc session to look at actual packages and do whatever is needed.

Q: I have two questions. First of all, thanks for your talk. As the maintainer of a library that does network communication, there are two things I need from a continuous integration environment: I need to be able to bind ports, so that I can test that my network communication is working, and, as it's a library, I also need to be notified if a change in my library breaks tests for other packages. Are both of those things possible?

A: Binding ports should usually just work. For instance, a test should be able to depend on Apache; Apache will install and bind to port 80, and as long as the testbed doesn't have anything else on port 80, it should be fine. Or, if it's a high port, you can just bind it yourself from the test script; that's just fine. About notification when reverse dependencies fail: that's a good suggestion, and it's probably possible; it just needs the code to do it.

Q: Can you explain a bit how your Ruby helper script moves the upstream source out of the way within the rules of autopkgtest? I don't quite understand what you're allowed to modify in the tree and so on.

A: Okay. There is a restriction you can specify saying that your test needs a writable source tree, and depending on your testbed you may need that, but most of the time you don't: you can just move stuff away from the source directory, because the source package is copied into the testbed. So most of the time you are
not modifying your local copy: if you are building on your laptop and you run in a chroot, it will copy the source package into the chroot and run stuff there, so it's not going to break your local copy. What the gem2deb helper does is just move the files away, run the commands it needs to run, and then move them back. The structure of Ruby packages is such that source files are in specifically named directories: you have the lib directory with pure Ruby code and the ext directory with C extensions, so you know what you need to move away and put back after the test.

Q: Ah, okay, the "put it back after the test" was the key point. Thank you.

Q: One of the questions you raised was how to do email notifications, and I think you just need to hook into the PTS, because that's where you can optionally subscribe to things. I'm not sure who is maintaining that piece of code now, though.

A: Yeah, sure. I think if we do email it has to go through the PTS; that will reach the right people.

Q: But please do it, I want email notifications. Also, a small wishlist bug report, I guess: it seems that adt-run puts markers into the logs, lots of them, which I guess makes machine parsing possible, but you don't use them yet. It would be nice to get a fancy, easily navigable view of the logs on debci, because right now it's not easy to see exactly where it breaks among all the other output, and the information is already there in those markers for the various steps. So, to stress the question: hide all the setup parts and by default just show the output of your tests.

A: It's a good idea.

Q: As I understand it, debci is at the same time a test runner and a test orchestrator. Does it make sense to outsource the parts that functionally overlap with things like Jenkins to external tools?

A: The actual test running is done by autopkgtest, so that's not debci at all.

Q: In that case, how is it different from Jenkins?

A: Well, to do this
I had to... that's a good question. I had to watch the Debian archive to figure out when to trigger tests anyway, so I figured I would just do the rest myself instead of gluing it onto something else. So the primary difference is that it's able to track changes in the Debian archive.

Q: Sorry?

A: The primary difference is being able to track, and trigger off, events happening in the Debian archive, and being able to create your own user interface that does exactly what you need and nothing else. I mean, the Jenkins interface might work for some people, but it's completely confusing to people who have never seen it before and haven't spent a year or three figuring it out.

Q: Yeah, I don't think we can spare one year of every maintainer in Debian to be able to use the results in a useful way. I also see another benefit of the current approach: we can tailor it very much to the kind of metadata and relationship chains that we have, so we don't have build IDs, but rather packages and version numbers, and we can navigate it easily. So I guess we gain something from not using an off-the-shelf product for this, and I think it's reasonable to do our own thing here. I like it.

Q: At one point you said you were going to support multiple architectures.

A: That's the idea.

Q: I had this idea during the previous session, but I'm going to ask you again: how much would it be possible to have a web interface where I can upload a build, like a .changes file with the build, or maybe the source, and get autopkgtest run on mipsel, for example, which I don't have any access to?

A: For running on different architectures you need hardware, first and foremost. In my conversation with DSA they mentioned the possibility of using spare buildd cycles, but I'm not sure that's going to work. As long as we have the hardware, it's fine; the only problem is getting the hardware. About the previous session: I was thinking about a solution for running tests on arbitrary packages. I'm not sure I want to have a web
interface for that, but maybe a special upload queue would work just fine and fit most of our workflows: you upload to it, meaning "I need to test this", and then you have your test run. The web UI is currently just static HTML, and I'm not sure I want to change that, because it makes several things a lot easier, but a special upload queue would be doable for sure.

Q: That would be great. Well, if nobody else has anything... then maybe one last thing: I think this is a perfect fit for Debian; we can really make use of it. I think this will be a very big change for Debian, once people use it more, to have this kind of automatic testing. I want to thank you a lot for doing this. It's great.

A: So, if you want to get in touch: for general discussion, like "why did my package fail, I need help", you can use Debian QA, both the IRC channel and the list. If you want to help with debci development, there is #debci on the OFTC IRC network, and you can also use the autopkgtest-devel list at lists.alioth.debian.org. And if there is interest in a hands-on session later, come talk to me and we can schedule that with the programme people. Thank you.