an introduction to jenkins.debian.net by Holger Levsen. And as the subtitle says, we not only want testing, we want more. Please welcome Holger.

Hi. Does it work? Yes. Hello, and welcome to my talk about jenkins.debian.net. I call it "testing is silver, but we really want gold", because testing is not enough. I'll speak about motivation, about the setup, and I have some questions for you as well. I've been using Debian for quite some time. Jenkins I started using two years ago, and last year in October, I think, I set up jenkins.debian.net, which has been running since then, with some groups using it. Who knows what Jenkins is? Or maybe easier: who doesn't know what Jenkins is? OK. Ah, how do I do that again? So again, welcome to the talk. Who is using Jenkins? OK, quite some people. Who wants to use it? Some more. Who plans to attend the BoF? Ah, OK, quite some people.

So: testing is silver, but we really want gold. Gold, for me, means building a team maintaining jenkins.debian.org, having more teams use it, and having more teams really care about the results. Because I can write many tests, but that doesn't help anything if the relevant groups are not reading the test results and doing something about them. There's email and IRC notification, which I'll explain a bit later.

Jenkins is called a continuous integration server, a "cron on steroids". It does that, and it can also do much, much more. There are long explanations of what Jenkins can do for scheduling and so on, but I would explain it like this: Jenkins runs stuff, usually via cron or triggered by SCMs. The stuff is treated as failed if the exit code is non-zero, as unstable if the output matches some patterns, and as good if it exits with 0. That's what Jenkins does. It's nine years old now. Jenkins is actually a fork of Hudson: Hudson was bought by Oracle, then there were some issues, and so they decided to fork it. It's MIT-licensed; some bits are under other licenses.
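That exit-code convention can be sketched in a few lines of shell. This is my own illustration, not part of Jenkins itself; the function name is invented, and note that the "unstable" verdict actually comes from output pattern matching (the Log Parser plugin), not from the exit code:

```shell
#!/bin/sh
# Sketch of how Jenkins classifies a shell build step (illustrative only):
#   exit 0        -> SUCCESS
#   non-zero exit -> FAILURE
# (UNSTABLE is assigned by matching patterns in the output, not by exit code.)

classify_build_step() {
    # Run the given command and print the Jenkins-style verdict.
    if "$@"; then
        echo "SUCCESS"
    else
        echo "FAILURE"
    fi
}

classify_build_step true    # prints SUCCESS
classify_build_step false   # prints FAILURE
```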
It has a very friendly and active community. There are, I don't know, 200 plugins, I think. There are weekly releases, and long-term support releases. Here's a map of Jenkins users worldwide; this is from last year, so I think the world has become a bit more red by now. There are quite some companies and open source projects using it — these here are only open source and free software projects.

There's a Jenkins package in Debian, in wheezy. You can just apt-get install it and use it. It also works with installing plugins from the net, and you can upgrade it to the long-term support version, which is what we use on this machine. The hardware it's running on is sponsored by ProfitBricks, a German cloud provider; they have sponsored it since last October. Currently it's six cores, 12 gigabytes of RAM and rather small disk space. During this conference I got three more offers for hardware, which I don't intend to use at the moment, because I can still get more resources from ProfitBricks. The issue is really not hardware, but writing and reading the tests. If more hardware is needed, I'll happily come back to Thomas, Martin or James to get more, but I don't see that happening now.

So the machine runs wheezy, with the long-term support version of Jenkins, and the KGB client built from source — that's the only non-wheezy package on it. Everything for the configuration of the system is in one git repository: all the jobs are configured there, and the tests are in there. There are also some Munin plugins for Jenkins, which I should really submit to munin-contrib. And I use a modified version of Jenkins Job Builder, which creates all the Jenkins jobs out of YAML files.
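For those who haven't seen Jenkins Job Builder: it turns YAML descriptions like the following into Jenkins job configurations. This is a hypothetical, stripped-down example in JJB's format — the job name, repository URL and script path are invented, not taken from the actual jenkins.debian.net configuration:

```yaml
# Hypothetical Jenkins Job Builder definition (illustrative names and paths).
- job:
    name: example_package_build
    description: 'Build the package on every push to master.'
    scm:
      - git:
          url: 'git://git.debian.org/git/example/example.git'
          branches:
            - master
    triggers:
      - pollscm: '*/6 * * * *'   # SCM polling, as used on jenkins.debian.net
    builders:
      - shell: '/srv/jenkins/bin/example_build.sh'
```

Running `jenkins-jobs update` against a directory of such files creates or updates all the jobs on the Jenkins instance in one go.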
So you don't have to create or modify jobs in the web interface and then copy and paste jobs, or whatever other Jenkins users do. That's also why it's easy to add new jobs: you just send me a pull request against this git repository, that creates the jobs, and it's done. I also have jobs checking whether there are jobs to be created: I scan some repositories and see how I would create these jobs, and if there's a new repository, it tells me "ah, this job is missing, do something". Currently this is all amd64 only; there's no other architecture involved.

I use some Jenkins plugins, obviously the Git and Subversion plugins. The Log Parser plugin is used to parse the output and do pattern matching there. Then there's one I've not seen enabled elsewhere: the read-only configuration plugin. With it, you can view the configuration of all jobs without having an account on this Jenkins instance, which is really nice for copying stuff. This is something I would really like to see on the Ubuntu Jenkins installation, that they enable the read-only configuration plugin, because then one could see what these jobs are doing; without access to the Jenkins instance it's not so useful. There are other plugins too, like the Throttle Concurrent Builds plugin: some jobs are really resource intensive, they need lots of RAM or cores, so for certain job types only two or three at maximum run at the same time. The Green Balls plugin is also really important — every Jenkins user knows and uses that. Or not every one.

The tree of the git repository is like this: job-cfg has all the YAML definitions for the jobs. bin has the scripts which are run. logparse has the log parsing patterns, which are Java regular expressions. d-i-preseed-cfg has the preseed configs for the d-i testing jobs. etc is for the machine itself, so everything which is done on the machine is in this etc directory.
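The patterns in that logparse directory follow the Log Parser plugin's rule format: one verdict keyword plus a Java regular expression per line, where an "error" match turns the build red and a "warning" match turns it unstable. A hypothetical rules file (the patterns here are made up for illustration) might look like:

```
# Hypothetical Log Parser plugin rules file (Java regular expressions).
# "error" lines mark the build failed, "warning" lines mark it unstable.
error /(?i)segmentation fault/
error /unmet dependencies/
warning /(?i)\bwarning\b/
ok /.*/
```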
So if the machine goes down, there's no backup of it, because everything is in version control. Well, there is actually some backup, that's not true. userContent has some minor things, and debian/ is the beginning of a package, which builds and which you can install. I also want to have a package which people can install to reproduce the jobs, so they don't need a full Jenkins running but can just install a small package and then reproduce the jobs easily.

The jobs which do exist: there are the graphical installer tests you've seen in the beginning. There's the rescue mode in 12 languages — Arabic, Hebrew, Punjabi and five other Indian languages, Korean, Japanese, Russian, and so on. The idea there is to add some image detection: a square or a rectangle in the rendered text is usually an encoding problem, so detect that and make the build fail automatically. This doesn't happen at the moment, so for now it's useful to look at these builds and compare them visually, until there's proper detection.

Then there are four different desktop installs in wheezy, jessie and sid — and squeeze, actually, also. GNOME, Xfce, LXDE and KDE are installed. After the installation, the system is booted, the user logs in — with xdotool, so I can send keyboard commands to it — Iceweasel is started, a shell is started, and the system is powered down. If all that succeeds, these jobs are successful. A similar thing is done for Debian Edu profiles, where we test the different combinations Debian Edu has. What I'd like to do, but have not done yet: Debian Edu has this network architecture, so install a main server first in one job, and if that succeeds, fire off other jobs with clients which rely on the main server, and have these clients tested against the running main server. This is not implemented yet, but I'd like to do it really soon, because it would really help us in Debian Edu to check our images.

Debian installer jobs — there are two kinds.
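The keyboard-driving part of those desktop tests can be sketched like this. The helper below is my own illustration of the idea — log in, start Iceweasel and a shell via typed commands — not the actual test script; the credentials and commands are invented, and `XDO` is overridable so the sequence can be inspected without an X server:

```shell
#!/bin/sh
# Illustrative sketch of driving a booted test system with xdotool.
# Set XDO=echo to print the xdotool invocations instead of running them.
XDO="${XDO:-xdotool}"

send_line() {
    # Type a command into the focused window, then press Return.
    $XDO type --delay 100 "$1"
    $XDO key Return
}

login_and_smoke_test() {
    send_line "jenkins"         # username at the login prompt (example)
    send_line "insecure"        # password (example)
    send_line "iceweasel &"     # start the browser
    send_line "xterm &"         # start a shell
    send_line "sudo poweroff"   # shut the test system down
}
```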
There are whatever, 120 git repositories with debian-installer packages; they are built on every commit to the master branch. It would also be thinkable to detect whether, say, jessie branches exist and then build those in a jessie environment too. I've done the same with the installation manuals: for all languages, they are built on commits to that language, and the full manual is also built. Those send notifications to the #debian-boot IRC channel, so if you're there, you've probably seen them.

And there are some chroot installation jobs, where I just create a chroot, basically debootstrap the distributions, and if that succeeds, I again install those different desktops. Sid is done daily, jessie I think every two days, and wheezy monthly now, I think. There's one Haskell job doing a chroot installation with debootstrap which then installs all Haskell packages. I happily don't care about it, because Joachim Breitner and the Haskell group are taking care of this. There's a URL one can embed in a wiki page or something; they have it on their Haskell team page, which shows the results of these Jenkins jobs. So they don't have to go to the Jenkins page, they can just check their own page and see if there's a problem. And Antonio Terceiro — is he here? Oh, hi. He's been working with me on getting Ruby jobs done, which slowed down a bit because I was too busy, and I really want him to do this, so that in the future he can maintain it and I'm not responsible for it.

Then there are also some webcheck jobs, which run the webcheck tool on all the Debian web pages: www.debian.org, the security pages and others. This is a good example of a rather useless test, because I think nobody, including me, looks at these results. They show missing links and such; that was from the beginning, when I experimented with what I could do. And there are some self-jobs which test whether the system is OK: whether it has enough resources, or whether there are leftover files which should be deleted. I'll show this in a moment.
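The frequencies mentioned for the chroot tests — sid daily, jessie every two days, wheezy monthly — amount to a simple distribution-to-schedule mapping; a minimal sketch of that mapping (my own helper, not code from the repository):

```shell
#!/bin/sh
# Illustrative mapping from distribution to chroot-test frequency
# (frequencies as stated in the talk; the function itself is invented).
chroot_test_schedule() {
    case "$1" in
        sid)    echo "daily" ;;
        jessie) echo "every-2-days" ;;
        wheezy) echo "monthly" ;;
        *)      echo "unknown"; return 1 ;;
    esac
}

chroot_test_schedule sid    # prints daily
```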
Yeah, as I've said, I want to make it very easy for people to reproduce the jobs on their machine. Basically I want to include in the job output the commands needed to reproduce the job locally. I've done bits of it, but it's not really finished, so if somebody wants to help with that, that would be great. Before I go on, I'd rather show you the job configuration. If there are any questions, feel free to ask.

So a job configuration is: there's the job name and a description, and then the really important part further down — the git repository where the code is coming from, and the branch to build, which here is only master. I still use SCM polling, which is of course suboptimal. And the command to be executed is just a simple shell script, which runs in the workspace of this job. And the YAML configuration — can you read this? No? Yeah, I see, it's not big enough, right? Is it OK now?

How much spare capacity do you have at the moment on the slaves? — Pardon? — On the build slaves? — There's no build slave at the moment; it's just one single machine. I can add more cores and more RAM to this one, and then I could use one of the three offers to add more slaves. But currently — I have Munin graphs of this — this is the number of concurrent Jenkins builds running at the same time, and it's only three or four, so usually four cores are free.

As a follow-up, I don't know if you were in the last talk, but how much would you have a heart attack if we started running autopkgtests for all package uploads that declare the appropriate header? — I'd love to do this. If more resources are needed, I can add them. So I definitely want to do this. — Great. I shall see if I can put people who know about our setup in touch with you. — Yeah. OK. All right, where did I put this again? Confused. — Thanks for that. Awesome job.
One question that jumped to my mind: how far is this from being able to run autopkgtests? Or was that the same question? — Oh, OK, then maybe I just didn't hear, sorry. Yeah, to answer: it's easy to run autopkgtest or any other thing, just send me these tests. Come to the BoF and see how to do it.

Which teams are interested in doing that at the moment? — Which team are you speaking about, Julien? — As a follow-up to the previous talk, where Colin mentioned that they are running autopkgtest on Jenkins for all uploads to Ubuntu: doing that for all packages uploaded to sid would be interesting from a release team point of view, if we're going to take that into account and export the results to britney for package migrations. — I'm not yet as confident in this setup as I am in piuparts, so I'm a bit wary about having this become part of the mandatory release process. But we can work on this.

I'm curious how you see Jenkins inside the bigger picture. If I, for example, want to run a thousand jobs as a one-off test, to see how much stuff breaks if I change packages in a specific way — say, add systemd support — would you be happy with that? — I'd rather not be so happy with one-time actions. The QA team also has access to other cloud resources, like the Grid'5000 cluster, to do things there; I think that's better for one-time tests like that. — OK, so Jenkins is mostly for continuous stuff. — Yeah. — All right, thanks.

There's not much more coming. I could open the editor and show you the job config, but this can also wait.

As far as I understood, you're just doing this on amd64 now. — Yes. — Did you already think about doing it on other arches too, like with QEMU or whatever? I think that might be interesting, at least for d-i. — I have thought about it. Yeah, but that's it. That's it.
And one other idea I had — I don't know what you'd think about it — would it also be possible to have tests that do rebuilds out of the git repositories of packages, or of d-i, and test that? Something like what zigo is doing for the OpenStack packages currently: he triggers builds on commits and has a Jenkins. — I'm doing this for the d-i packages, and I'm happy to do it for other packages as well. We briefly discussed doing it for the Perl team, which has more than 1,000 packages. And there, again, it was not a resource problem on the Jenkins machine, but rather a resource problem in the Perl team, because they cannot really deal with the results of 1,000 packages, probably. Or maybe they can, if they don't all fail. So the idea is to introduce batches of, whatever, 50 packages, probably the ones with the highest popcon, and slowly grow the coverage. But the problem is really reading the results and dealing with them.

How can people currently get at the results? Do you just have to go to the jenkins.debian.net website, or can you somehow subscribe to results you're interested in? — At the moment I've only enabled IRC notifications: #debian-boot gets them for the d-i packages and the graphical installation tests, and Debian Edu for the graphical tests for Debian Edu. I have also enabled email notification, but those only go to me, and the Haskell group, I think, receives theirs. If the d-i team wants their results on a mailing list: fine, happy, it's easy.

Was there another question, or should I show this job configuration? — Yeah, just to make it clear: building packages is also an option? — Yeah. Right. OK.

I think I'll go through the jobs and show you the scripts which are running, which are really simple bash scripts, so you get a better idea of what's happening there. But I'll first take a question. — From what I understand, Jenkins jobs are not necessarily related to some specific package, are they?
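The batching idea — start with the 50 highest-popcon packages and grow from there — is easy to sketch. This helper is my own illustration (the input format, one package per line sorted by popcon, is an assumption; no popcon data is fetched here):

```shell
#!/bin/sh
# Illustrative batching: split a popcon-sorted package list into
# batches of N, so test coverage can grow one batch at a time.
batch_packages() {
    # stdin: one package per line, highest popcon first
    # $1: batch size; output lines: "<batch-number> <package>"
    awk -v n="$1" '{ printf "%d %s\n", int((NR - 1) / n) + 1, $0 }'
}

# Example: batches of 2 (package names here are just examples).
printf 'perl\nlibdbi-perl\nlibwww-perl\n' | batch_packages 2
# prints:
#   1 perl
#   1 libdbi-perl
#   2 libwww-perl
```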
That's right, yes. — OK. So what I was proposing is maybe kind of moot, but I'd like to have as much information as possible on the PTS, and I think that, given this structure, it's not very doable to add some information about Jenkins to the PTS. Or maybe there is some way? — I think there is. I think it's rather easy, because Jenkins could provide the results per package, and the PTS could grab and scan them for the packages which have them. — OK, so there is some way to build a mapping from packages to the relevant Jenkins results. — Yes. — OK. I think that would be very useful.

More questions now, or should I explain this job thing? — Yeah, I see lots of green on some of these tabs on the Jenkins site, which is great. I see some yellow and some red, which is to be expected. What is the current way that the yellow and the red results — in other words, failed or unstable results — are handled, and what is the desired way? — It depends on why the job fails. For example, take the integration tests running Iceweasel and a shell in a graphical environment. We have this one here, this chroot installation in sid, which was successful all of the last two weeks, and yesterday it failed. I had no idea what was wrong in sid, so you click on this link, it gives you this, and then you go to the console output — just the job output — and usually scroll down, and then you probably see where it fails. Where is it? There: libavutil-dev has unmet dependencies in sid since today. So I could notify an IRC channel, but I don't really know which one — whether #debian-devel is the right one there, or #debian-qa, I don't know.
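One way the package-to-results mapping discussed above could work is a naming convention on the Jenkins side. This is purely speculative — the talk only says such a mapping would be easy to provide; the `<prefix>_<package>` scheme below is an assumption of mine, not how jenkins.debian.net actually names its jobs:

```shell
#!/bin/sh
# Hypothetical helper: derive a source package name from a Jenkins job
# name, assuming jobs follow a "<prefix>_<package>" naming scheme.
job_to_package() {
    # Strip everything up to and including the last underscore.
    echo "${1##*_}"
}

job_to_package "d-i_build_debian-installer"   # prints debian-installer
```

A PTS-side consumer could then match job results against its own package list and show only those that apply.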
So maybe there are really a few useful teams that should surround the service: one maintains the service, one helps teams write tests, and one triages failures and unstable results and directs the resulting errors to the right team to fix them.

Yeah, just one follow-up to the previous question: in fact, Jenkins has a syndication feed of the build results, so maybe you can make some job to extract information from the syndication feed and provide the PTS with the status of the builds, something like that. Or maybe we can use some kind of API — there is an API in Jenkins to extract the build results too — so we can provide some information in the PTS about each job. But one question about this instance: I've already used Jenkins with maybe a hundred jobs, and the performance wasn't great. You were talking about the Perl team, providing them with continuous information about their packages; as far as I know, they have thousands of packages. Do you plan to have some slaves, or something like that, to be able to manage all these jobs? — As I said, hardware is really not the issue. The issue is reading the results, and that is something which, in this case, the Perl team has to do. — Yeah, OK.

So, I'd like to explain the job configuration, because it's really very simplistic bash scripts, usually. I've stripped down the YAML file a bit. There are different templates, which define how long to keep the logs and some links in the sidebar. Then there's a description here, which is the same for all of them, the git repo is in it — the git repo gets replaced — the master branch, this is the build script, and these here are the recipients. If it's something like this jenkins+channel address, that's used by my filter to detect which channel to notify: an email is created, which is then parsed by procmail, and then the KGB client is used to notify the IRC channel.
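The procmail step he describes — match the jenkins+channel address, hand the mail to the KGB client for IRC — would look roughly like this as a recipe. This is a hypothetical reconstruction; the address, channel and wrapper script path are invented:

```
# Hypothetical ~/.procmailrc recipe (illustrative address and path):
# mail addressed to jenkins+<channel>@... is piped to a wrapper that
# calls the KGB client to announce the result on that IRC channel.
:0
* ^To:.*jenkins\+debian-boot@
| /srv/jenkins/bin/notify-kgb debian-boot
```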
So I only use this email notification plugin, because it behaves the way I want: it notifies you when the build fails, notifies you about every subsequent failure, then notifies you about the first success, and then it shuts up again. There is an IRC notification plugin for Jenkins, but it doesn't work this way, so I hacked my way around it with procmail to achieve the same behavior for IRC. This is a d-i test, and as you see, the mail goes only to me. So if d-i wants it on another mailing list — if they don't want it on the main list — they just add the email address there. And that's it for this configuration.

I should rather show the main file, not the example one. Is this not working? This size is probably good enough. As you see, there are now four different templates, because I didn't get the Jenkins Job Builder inheritance completely working, so I have a bit of duplication in here. After the defaults, blah, blah, here are the job templates for the different manual jobs; they just repeat this template, and the URLs get replaced. And this continues for all the debian-installer jobs, because they are all the same: they're all in the same git repository, and they all run this script. Can you read this, or should I make it bigger? Also good in the back? Bigger? Yeah, I know.

So the init-workspace part is just cleaning up. Then it basically always runs pdebuild, and the build script checks in this case: if there's no debian/control, it fails or succeeds accordingly. There's some more: if the package is for different architectures, it's also not built; then the pbuilder base is updated or created; uscan is used to download the source tarballs for 3.0 source packages; and then pdebuild is called. That's all that's used to build the packages. It's really super simple, and the chroot installation tests are more or less very similar. gedit would be the alternative — I'll try this. So, what is happening here?
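The build flow just described — check debian/control, create or update the pbuilder base, fetch the upstream tarball for 3.0 packages, then pdebuild — can be sketched like this. This is a simplified illustration, not the actual script from the repository; `RUN=echo` gives a dry run so the commands can be inspected without root:

```shell
#!/bin/sh
# Sketch of the per-package build flow (command names are real tools,
# the flow is simplified; set RUN=echo to print instead of execute).
RUN="${RUN:-}"

build_package() {
    # Fail early when the checkout is not a Debian source package.
    [ -f debian/control ] || { echo "no debian/control, skipping"; return 1; }

    # Create the pbuilder base tarball on first use, update it otherwise.
    if [ -f /var/cache/pbuilder/base.tgz ]; then
        $RUN sudo pbuilder update
    else
        $RUN sudo pbuilder create
    fi

    # 3.0 (quilt) source packages need the upstream tarball.
    if grep -q quilt debian/source/format 2>/dev/null; then
        $RUN uscan --download-current-version
    fi

    $RUN pdebuild
}
```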
So I first set a trap; then these upgrade jobs — the chroot installation jobs — are called with up to three parameters. The first is the distribution; the second you'll see in the config. Then the "full desktop" test is defined here, what packages that really means. I didn't explain that in the beginning: there are the four different desktops, and "full desktop" is all four desktops together, plus whatever — cups, chromium, Iceweasel, LibreOffice, MPlayer, wine — whatever I thought would be on a standard installation. There are some variants in here, because the package names have changed over time, and these upgrade functions are here. "Prepare bootstrap" is where I write the policy-rc.d file, configure apt and run apt-get update. Then the bootstrap really does a debootstrap and then an apt-get install. I won't go through the whole script now, but it's not that hard to analyze how it works.

I'm relaying a question from IRC, from pere. He asks: when should one test ISO installs, and when should one use chroot tests? Jenkins seems to do both; are there times when one is more sensible than the other? — The chroot and d-i tests are both scheduled based on time. The chroot tests are done daily for sid, jessie and maybe even still wheezy, because they use rather little resources, while the d-i installation tests take more, so I run those a bit less often.

And one other question from him: how hard would it be to integrate the Jenkins results into the fedmsg enterprise bus that's being implemented as a GSoC project? — Into what? — fedmsg. It's — hmm, is Simon Chopin here? — it's a Google Summer of Code project, something that Fedora wrote that lets parts of the distribution let other parts know that things are happening. So when a package build happens, things that want to be triggered off that build can hear about it and do things, et cetera. — It shouldn't be that hard; it's just running another script, basically.
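The chroot test flow walked through above — debootstrap, write policy-rc.d so daemons don't start inside the chroot, apt-get update, then install a desktop set — can be sketched as follows. The package lists, paths and mirror URL are illustrative, not taken from the real scripts; `RUN=echo` gives a dry run:

```shell
#!/bin/sh
# Sketch of a chroot installation test (simplified; paths and package
# sets are illustrative). Set RUN=echo to print commands, not run them.
RUN="${RUN:-}"

desktop_packages() {
    # Map a desktop keyword to the packages to install (example sets).
    case "$1" in
        gnome) echo "gnome" ;;
        kde)   echo "kde-standard" ;;
        xfce)  echo "xfce4" ;;
        lxde)  echo "lxde" ;;
        full)  echo "gnome kde-standard xfce4 lxde cups chromium iceweasel libreoffice mplayer wine" ;;
        *)     return 1 ;;
    esac
}

chroot_test() {
    distro="$1"; desktop="$2"
    target="/chroots/$distro"
    $RUN debootstrap "$distro" "$target" http://ftp.debian.org/debian
    # policy-rc.d exit 101 prevents daemons starting inside the chroot.
    $RUN sh -c "printf '#!/bin/sh\nexit 101\n' > '$target/usr/sbin/policy-rc.d'"
    $RUN chroot "$target" apt-get update
    $RUN chroot "$target" apt-get install -y $(desktop_packages "$desktop")
}
```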
I really suggest you check out this git repository and look at the shell scripts. They are really easy to understand; I've commented them, and, yeah.

You had two more questions yourself here? — Oh yeah. I was thinking about jenkins.debian.org, mostly because other people have suggested it; I don't really care what the URL is. I should someday, probably still during DebConf13, finally announce this — it has also been running unofficially so far.

Yeah, more questions from you? — Well, sorry if I'm mistaken, but switching to jenkins.debian.org would mean that maybe DSA would care about the machine, and maybe that's a prerequisite to actually using Jenkins results in the testing migration scripts. — I don't know, but it could be, yeah. — I mean, if we're going to integrate that into the normal release process, then it should become an official service, so I'd turn that argument around. — I think this is the difficult part, the DSA maintenance, because it's similar to the piuparts setup, which DSA is half happy about. Hello, DSA, thanks for your support!

I would just like to thank you for running this service and making it available for us to improve Debian. — Thank you. This is the thank-you session. I'd also like to thank intrigeri for helping me with the slides. We already had quite a bit of a question session, so are there any more questions? If not, then let us thank Holger.