Okay, so we have a Gobby document inside debconf16, under the title of this BoF. So this is not a presentation; my idea is to get feedback from you, and the main goal really is to get people to work with me. Until now I'm pretty much the only maintainer of debci. We had lots of contributions from Martin Pitt, who is the autopkgtest maintainer and works on the Ubuntu Foundations team, or whatever they call it. He also put together a debci instance for Ubuntu that looks like ci.debian.net, but they had a pre-existing infrastructure for running the tests, so they are using only the web interface; he contributed lots of things to make that possible. And also Brandon Fairchild, who was a Summer of Code student in 2014 and helped me improve a lot of the debci web interface, so most of what you see now is his work.

For those who don't know, debci is a continuous integration system for Debian and derivatives. It can also be extended to do other stuff: if you implement other backends, you can run any arbitrary thing with it. Right now all the backends run autopkgtest, but you can extend it to do other things. It's what powers ci.debian.net, running autopkgtest test suites.

A little plug here: if you are interested in the topic of testing Debian packages, there's going to be a BoF on Thursday at 14:00, where we'll be discussing issues related to writing the tests, testing tools and techniques, and all that stuff. Here we are talking more about the other side, the infrastructure that runs the tests; writing the tests is for Thursday.

Now, since I got Brandon to work on Summer of Code really early in the project, I had to make sure he could run the CI on his machine, so as far as I can tell it's really well documented how to run your own instance locally. I did this before we started, so I cleaned up my tree. Here is the debci source tree, and there is docs/hacking, which tells you how to set it up. There are some dependencies you have to install. The main thing you need to have running is RabbitMQ, which is the message broker that relays messages between the debci master and the workers; we'll get to that in a moment. Usually on a development machine you want to disable the service, so you don't have it running all the time and only start it when you want to test. There's a typo here, as you can see. I already have it installed.

The first thing is to run make, which is just going to build the documentation and generate some of the JavaScript stuff that's used on the web UI. The architecture can be inspected in the Procfile. A Procfile is a configuration file for process runners; in this case we are using one called foreman, and for each line in this file it's going to spawn the process given by that command line. So, let's see that again. We have a web server; in the production environment you would just use Apache or whatever you have, but we have a local one for testing. You have worker nodes, which are the things that actually run the tests; you can have as many as you want. The collector and the indexer are two daemons that run on the master server: the collector receives results from the workers after they run the tests, and the indexer processes them and generates the web interface. So now we have it running, and if we go here, you see an empty environment.
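For reference, the Procfile format just mentioned: foreman spawns one long-running process per line, in "name: command" form. A minimal sketch, with illustrative names and commands rather than the exact ones from the debci tree:

    # Procfile: one process per line, started by foreman
    web: bin/debci-web               # local web server; Apache in production
    worker: bin/debci-worker         # runs the (fake or real) tests
    collector: bin/debci-collector   # receives results from the workers
    indexer: bin/debci-indexer       # turns collected results into the web UI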
But before doing anything, I will just do a quick change. So, the configuration directory: every file ending with .conf will be loaded. We want this here just for testing; we are demonstrating running debci for multiple architectures, and these are the ones we are running on ci.debian.net right now. The fake backend is a backend that doesn't do anything: it's not going to run autopkgtest for real, it just generates fake results for us to test with. So if I restart everything, we now have it running.

If you want to hack on the web interface, it is useful to have fake data, and this script here will generate fake submissions for you. Here you see the worker doing the work, "running" the tests, and then the indexer receives the results and updates the web interface. The web interface will auto-reload every five minutes, but in this case we want to reload now. We can generate a little more data here; we can do this as many times as we want, it will just run fake tests, and the indexer keeps receiving the results. So now you start to have useful data.

And here you have the news feed. It only generates items when a package changes status, so if a package always fails, you won't be bothered with the package failing 100 times; just when it changes status you'll get an item. So it's fairly okay to subscribe to this feed; I do that, and there's not much noise. It's really useful to detect stuff breaking in the archive. A few days ago I uploaded a broken Ruby package, and all its reverse dependencies just started failing in the feed. It's useful to know when things are broken or get fixed.

And here is the same interface you have on the live site. In this case I'm using a feature that's probably useful for people who want to run their own instance: you can set up a whitelist of packages. In this case I'm only running these packages. You can use that, for instance, if you have a maintenance team and want your own instance to run your packages on each commit or whatever: you just whitelist your packages. And if that file is executable, debci will use its output as the whitelist, so you can even have dynamic lists there, like "I want all packages with this maintainer email address" or something like that.

So this is pretty much it; getting it running from scratch is pretty easy, so if you are willing to hack on debci and help improve the tool, it should be fairly easy to get started.

As I said in the beginning, this is not supposed to be a talk or a presentation. I want to hear from you and see what your interests are, and there are a few conversation topics here we can use. On ways to help: you can help improve debci itself, or you can help manage ci.debian.net. Keeping an eye on things, for instance; we have a Munin instance monitoring the systems. Also, there is a status page, with status alerts listing temporary failures on packages; looking at those and identifying problems there would be very useful. For instance, this package here is failing all the time because there was a change in the archive recently that started to include GPG signatures for upstream sources, and apt-cacher-ng, which is the proxy we use on the workers to avoid re-downloading dependencies all the time, doesn't support that; those files are not whitelisted in it. So this package is not being tested, because the package download fails. I'm looking at that one.
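Going back to the whitelist feature: a sketch of what that configuration could look like. The variable names and file paths here are illustrative, the real ones are in the debci documentation; grep-dctrl is a real tool from dctrl-tools:

    # config/local.conf -- loaded because it ends in .conf
    debci_backend="fake"              # fake backend: generates fake results
    debci_arch_list="amd64 arm64"     # architectures to run

    # config/whitelist -- if executable, its *output* is used as the whitelist,
    # e.g. a dynamic list of all packages with a given maintainer address:
    #!/bin/sh
    grep-dctrl -F Maintainer my-team@lists.debian.org -s Package -n \
        /var/lib/apt/lists/*_source_Sources | sort -u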
In general, it would be useful to have other people looking at those kinds of things, to make sure we keep up with the evolution of the infrastructure.

So, things you can do with debci. You can use debci as the full CI system itself, as we do in Debian. You can use only the web UI, as a frontend for autopkgtest data, as Ubuntu does. You can also, as I have shown, run only a subset of packages: if you have a derivative that's almost identical to Debian but has some custom packages, you can run a CI only for your packages, or for your team. And you can do things like build and test on commits: as long as the packages end up in some repository, debci will run them.

Question: do you recommend actually using debci itself if you want to run your own tests, or is that just too much?

You mean running it on your development machine?

I want to test if my autopkgtest still works. Would debci be a good way of doing that, or is that overkill?

I think it's overkill; just using autopkgtest directly is fine. I mean, you can do it, and debci will give you a few nice things, like managing the testbeds automatically for you, so you don't have to know how to create an LXC container or a QEMU machine. Well, QEMU is not supported yet, but in the case of LXC you just run "sudo debci setup" and it creates the container for you. So it gives you that, but if you are okay with managing the testbed yourself, you can just use autopkgtest directly.

At the beginning of your demo I saw that you have a Vagrantfile in the sources. Is that supported as a development environment?

I think it is, yes. It doesn't do much, it just calls this script here, and the only extra thing it does is start RabbitMQ for you. This is more for testing the Debian package; here it's relying on the Debian package, so it's automatically installing the extensions. But if anyone wants to use Vagrant for development, it should just be a matter of automating the extra steps from the documentation, which should be easy.

Another, unrelated question: I'm wondering if debci would be a good basis to restart doing archive rebuilds, because there are lots of use cases not completely addressed by what the reproducible builds people are doing in terms of filing FTBFS bugs. First, do you see big reasons not to use debci as a basis to run a rebuild farm?

Not really.

And one thing I saw is that you specified the architectures list. How flexible is that? Because typically for archive rebuilds you want unstable and testing, but also unstable with custom GCC packages, or stuff like that.

One way to do that: when I did this here, you will notice that every test request submission has to explicitly state the distribution and the architecture. So you could, for instance, use a distribution like "unstable-gcc6", configure the testbed associated with that distribution with the corresponding sources.list entries, and that would just work.
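As a sketch of that "unstable-gcc6" idea: the testbed for such a distribution could carry sources.list entries like the following, where the extra repository URL is made up for illustration:

    # sources.list for a hypothetical "unstable-gcc6" testbed
    deb http://deb.debian.org/debian unstable main
    # extra repository providing the custom GCC packages (illustrative URL)
    deb http://people.debian.org/~someone/gcc6-rebuild ./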
By the way, we don't have live questions, only questions here in the room; so if you are interested, please help take notes, because it's difficult for me to respond and take notes at the same time.

So apparently certain packages have not been run despite new uploads? Yeah, this is an issue right now. I think there's some bug in the test scheduler where, depending on race conditions with the archive updates or something, packages are sometimes not being run. This is something I would appreciate help with, but it's in my plans to, at some point, rewrite the test scheduler to be more robust in the face of these kinds of things. It would also be nice to have a way for people to kick off a new test, to force one; currently, when people ping me on IRC, I just schedule a new test for them. I can probably arrange shell access; every DD should be able to do that. Yeah, I can do that: there are nice SSH tools where you can create accounts with limited permissions, so I could just allow everyone to request new tests. That should be doable, right?

And I guess that answers the question about viewing the queue summary. It's not linked anywhere here, but there's actually a Munin instance running, at ci.debian.net/munin, and there you can see the status of the queue. I guess people have been busy during DebCamp, because the queue is really high now, while in the past few weeks we had a really quiet scenario. You can see here — okay, the resolution is not great; what? I don't have a reason to zoom in, why not? — you can see that the queue was really okay on amd64 for the past few days, then DebConf starts and we get all this. What happens is, if you have a base package with lots of reverse dependencies, including transitive reverse dependencies — say a new GCC upload, or a libc upload — then everything gets tested again.

Here you can see the state of the queue in terms of waiting time. If the queue is really high, you know your package is not going to be tested quickly; if the queue is empty, you know you'll get results really soon. And you can see that arm64 basically has a horizontal line on the queue graph, because we only have two boxes running tests, sponsored by Linaro, my employer. We have two ARM boards — I don't remember the name of the boards — running 24/7, but they are not able to keep up with the load. For amd64 we have 10 Amazon EC2 instances, and they are able to consume that queue reasonably fast.

You can see the status of the whole system here. So yeah, one of the arm64 machines is currently broken; I can't figure out why, I haven't had the time to look at it. This is packages being processed per worker: whenever there's a vertical green line, it means the worker is busy. And here is the number of packages that have been processed by each worker, where each time slice is five minutes. I use this to monitor whether there is some problem with a worker; in this case it's very clear that something is wrong here, and I know about that one already.

I don't know if that answers the question or not. There is a marker saying whether each package is already scheduled, but I don't think that shows up in the web interface yet. I had a branch with a status page like this showing the current queue, but I was never able to finish it. Still, you can get a sense of how the system is doing by looking at this.
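On the limited-permission SSH accounts mentioned above: one standard way to do that is a forced command in authorized_keys. The option syntax below is stock OpenSSH; the retry script name and path are hypothetical:

    # ~debci/.ssh/authorized_keys on the master: this key can only run the
    # forced command, which would validate and enqueue a retry request
    command="/usr/local/bin/debci-request-retry",no-port-forwarding,no-pty,no-X11-forwarding ssh-rsa AAAA...key... some-dd@debian.org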
This uses the Debian single sign-on with client certificates and all that, so all the DDs and DMs should have access; I mean, anyone with a Debian SSO account has access.

Can you explain a little bit — at the Ubuntu website I can see why a package was tested or retested. Is the same mechanism working on ci.debian.net?

No; as I said in the beginning, they use a different infrastructure. They already had their own infrastructure for running tests, and Martin is just using the web UI there, so a few details are different, like this one. You mean why a run was done? Here you have why the run was done: in this case, for this package, I'm retrying because the last attempt failed with an infrastructure problem. Let's see, the website itself... yeah, in Ubuntu it shows up on the website. Maybe that's a nice feature. Where? In the overview of your package, I think it shows why. So that information should go on this overview too. Maybe they have a patch there that never got sent back; I can look at that.

Still about the queuing: is it just first in, first out, or is there a way to tune that, to prioritize a specific test?

It's first in, first out, but I think we added a priority parameter since the beginning... maybe not. But RabbitMQ supports priorities, so it should be easy to add a new parameter to the RabbitMQ calls that put jobs in the queue. I don't think we are using that now, but it should be a patch of a few lines.

I have another question, maybe slightly unrelated to debci itself. Ubuntu is using the outcome of autopkgtest as gating from proposed to the real archive. How far are we in Debian from doing the same?

We are supposed to do it. I had conversations with the release team at the last DebConf about that, but I guess life happened and we weren't able to go ahead with it. The idea is there, though, and debci even already generates britney block hints for that. But those are not really right right now, because we only want to block regressions, not packages whose tests have been failing forever.

And I guess you need to migrate to a debian.org machine?

Yes, that's another thing. Basically what happened was that I, probably wrongly, assumed that having debci as a proper Debian package was the way to go, and it turns out it's not. So I have to figure out how to manage the master instance without having root on the machine: since I use the package, I upload the stuff to jessie-backports and then upgrade using apt, as you usually would, but, for good reasons, DSA doesn't want to have everyone having root on the machines. I just need to plan and do things in a way that I can manage with a regular user account.

But is that a limitation for actually using it for the release?

I don't think so. The release team is okay with it as it is, as long as the data is correct. These britney hints files are not what we need currently; I, or someone else who wants to help, just need to get the list to contain only regressions.
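Going back to the queue-priority question: a minimal sketch of RabbitMQ's priority support, using the Python pika client. The queue name and job payload are made up, and debci itself is not written in Python; this just shows the mechanism:

    import pika  # RabbitMQ client library

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()

    # Declare the queue with priority support enabled (priorities 0..10)
    ch.queue_declare(queue="debci-tests-amd64", durable=True,
                     arguments={"x-max-priority": 10})

    # Publish a job with an explicit priority; higher values are consumed first
    ch.basic_publish(exchange="", routing_key="debci-tests-amd64",
                     body="ruby-defaults unstable amd64",
                     properties=pika.BasicProperties(priority=5))

    conn.close()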
Anyone else have questions here? Can you bring the microphone?

Maybe it's still related to the current implementation: is it possible to get an email if something failed? It would be nice to have something like an automatic bug report, or an email, or something like that. Because at the moment I have something like 80 packages, around 60 of them are in the CI, and I always have to check if they are still in good condition.

Right, so you can subscribe to the RSS feed; that works for me. I subscribe to the whole feed, for the entirety of Debian, and even that is not too much. The information is also presented in the tracker, and in the developer... how is it called? DDPO, the Debian Developer's Packages Overview, also displays the CI results: if you go to your page, the list of your packages, there is a column with CI results.

I know that, but I mean—

You want to be notified explicitly, pushed to, if something fails. Is that something most people want? I don't know. Please raise your hand if you would like to get emails when your packages fail. Okay. Yeah, I guess it should be fairly easy to, say, email packagename@packages.debian.org. Still, I think that's something better addressed in a more centralized tool, I mean the tracker, or UDD. UDD works better for teams, for groups of packages, but I don't think it makes a lot of sense to have the CI re-implement something like that specifically.

UDD has an RSS feed, but it doesn't include CI? I don't think it does. Okay.

Ten minutes? Okay, so we still have ten minutes. If you are interested, I can show you how I currently manage the Amazon infrastructure that runs the tests. There is a configuration repository on collab-maint called debian-ci-config, which is a Chef repository. That's another issue for the DSA migration, because they use Puppet; but that's okay, because since I packaged everything, the Chef code just puts a configuration file there and installs a list of packages, so it's pretty easy. Let's see how much time this takes. I use Vagrant to simulate a production environment, so I bring up a master machine and a worker machine.

Okay, so this is using a tool that I wrote to use Chef without having a Chef server; it's called chake. You can push your stuff to your nodes without having a Chef server running: first, because there is no Chef server in Debian, and second, because for small infrastructures you don't necessarily need the overhead of a centralized server. Then we have lots of commands here. "chake converge" basically applies all the Chef recipes to the nodes, and it should be really fast if everything is already done; yeah, all the machines are in the desired state. And you can use this as a shortcut to log in to a machine, to inspect its status. Oh yeah, I reinstalled everything yesterday, so it doesn't know about anything yet. You can schedule tests — this is fast... maybe something's broken here. And you can use the systemd journal to see what's happening; I think RabbitMQ has some issue here, I don't know why. We can probably just look at the real server instead. (A sketch of this workflow is at the end, below.)

So here you see the results coming in. This is what I use to generate some of those Munin graphs from earlier. Yeah, there's a new result that just came in. Well, it's not that much fun to keep watching this. Basically, the workers have a debci-worker daemon that stays there doing its thing, so it's not very complicated, and the master has a few: it's generating the HTML here, and it's also scheduling new test runs, debci-batch; and that's pretty much it.

I think we are almost out of time. Does anyone have more questions? If not, I think we can finish here. Thanks for coming; I hope to see patches from a few of you in the future. Thank you.
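For reference, a rough sketch of the infrastructure-management workflow demonstrated above. The chake task names follow what was shown in the demo (check the chake README for exact syntax), and the systemd unit name is a guess:

    # bring up a master VM and a worker VM to simulate production
    vagrant up
    # apply all Chef recipes to all nodes; no Chef server involved
    chake converge
    # shortcut to log in to one node and inspect its state
    chake login:master
    # follow a daemon's logs through the systemd journal
    journalctl -u debci-worker -f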