Okay, I'd like to welcome Michael Prokop, who's giving us a talk about continuous delivery of Debian packages. So give him a big applause, please.

Thank you. I'd like to start with clarifying the terminology. Everyone of you might know continuous integration from software development. Continuous deployment is what we understand as: as soon as the QA criteria are fine, we ship or deploy it, without waiting for any further feedback. Continuous delivery, which is what we are talking about now, means you release whenever you decide it's useful to release. So it's kind of a business decision: does it make sense to release right now, because some customers are paying us for it, or users are waiting for a new release? In that sense we can call it continuous integration taken a step further.

Why are we talking about all of this? What are the benefits we get from continuous delivery, in terms of the Debian project or in terms of what we are talking about here? The cost of a bug fix gets bigger and bigger the further along we are in the pipeline. At the beginning, where we have requirements and design, it's quite cheap to change things and define what it should look like, and it gets much more expensive later on: coding, development, QA, operations. Once we deploy it, all changes to the system get much more expensive.

So what we would like to have is some kind of independence. We don't want to rely on this one specific laptop or this one specific machine where nobody knows how to rebuild it from scratch or how to change anything. What if we just take it out of operations, will anything break? We want to be able to scale, and not just in terms of going from one build to many builds. Of course that's nice, but we would also like to be able to start small and grow as needed in terms of people. Maybe it's a small packaging team of five people; what happens if we grow to 500 people? Reproducible: we would like a way of making sure that a package we built can also be rebuilt two years later. The Debian Reproducible Builds project might be well known to you by now. And what's also important is that it's predictable. We really want metrics to estimate build times and times to fix. If we change anything in our software and the build time increases, it would be nice to identify what was responsible: an infrastructure change, a packaging change, an upstream change, additional build dependencies? So what we would like to know is: assuming we had to fix this or that package, how long would it take us to deliver it to our end user or customer?

I'd like to talk about a few problems we had at the company I'm working for. There was a kind of mess with golden images used to ship a custom software stack to customers. There were long build times: as soon as you changed something, you had to rebuild the whole image and upload it to the customer for one single small change. We also had non-reproducible builds. It was unmanaged infrastructure, so nobody knew on which machine something was built. Developers could even build their own package and upload it, or maybe even a binary, and just include it in the image. Also, the release process was holding back ongoing development: as soon as we were heading for a new release, the code was frozen in version control, and nobody could do any further work in the master branches.
And the whole release process was holding back actual ongoing work. Getting more and more customers meant we had to build even more images, diverging from each other, because customers have specific needs and maybe share nothing at all. We tried Debian source package uploads to a custom build service, not as sophisticated as the Debian one, but similar in the sense that we don't upload any binaries, we just rebuild the package from scratch. But the developers still needed to manually build or release packages themselves. This means they need some of the Debian tools, and they might have problems with actually releasing stuff: what should go into the changelog, what version number do I have to use? So this was kind of a big problem. It solved some smallish problems in the first step, but not the overall vision we had.

So what do we actually want to have? If we look at the famous Continuous Delivery book, this is what a continuous deployment pipeline looks like. We have a delivery team which checks something into version control. We have builds and unit tests, and if they fail, we go back in the cycle and start again. Once they pass, we get to the next step, the automated acceptance tests. If they fail, we go back in the queue: check in, trigger, trigger. Then we have user acceptance tests, and if they pass, we can get to the release.

We transferred this into a workflow for us. What would we like to have? The developer should just have something like git commit and git review: just push it to the code review system. That's it, nothing further. You shouldn't touch debian/changelog at all. You shouldn't care about which release we are actually targeting. If we just want to go for ongoing development, we push to master. Jenkins, our continuous integration server, then verifies what we actually built. It relies on the Debian builds we automatically get: once you do the git review, we get custom PPAs, which can be used for development and testing, as in "I want to test whether what I'm doing actually works." The developer might not even know what the Debian package looks like in the end; he just pushes it, gets a Debian package, and can install it in the environment. If Jenkins says "no, this isn't good", we go back in the queue: well, just fix it, git review again. And once you have the plus one, meaning "I'm good", the code reviewers, other people, look at the Debian packaging or possibly the software itself and say: does this make sense, can we push this, yes or no? Once we're good, we submit it to the master branch, or whatever maintenance branch; we will look at that later.

And we have some different needs. For the internal tooling, since we run our own infrastructure, we don't need a release dashboard; we can just push it and apply it on our infrastructure once we're happy with it. But we also have the product cycle, which is like a Debian release, a new Debian release. There we decided to go for a release dashboard where we have all the projects and just say: we want to release a new version. I don't care about what's in the Debian changelog, we automate this handling. We create the according branches, we create the according tags, we create the according changelog entries, and we run the final build. And then the whole release workflow, automated testing, acceptance testing; the QA team can decide whether we are fit for release or not, continuously.
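To give a rough idea of what such automated changelog and tag handling can look like, here is a minimal sketch built on git-buildpackage; the version number, branch naming and dashboard integration are made up for illustration, and our actual tooling does quite a bit more.

    # Sketch of automated changelog/tag handling for one project; VERSION and
    # the branch naming scheme are hypothetical, normally the release
    # dashboard decides them.
    VERSION=1.2.3
    git checkout -b "release/${VERSION}"

    # Let git-buildpackage generate debian/changelog entries from git history.
    gbp dch --release --auto --new-version="${VERSION}" --distribution=stable
    git commit -m "Release ${VERSION}" debian/changelog

    # Create the according release tag based on debian/changelog.
    gbp buildpackage --git-tag-only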
And then we ship it to customers as Debian packages. It's really just apt-get update, apt-get upgrade, that's it. So that's what we actually wanted to have.

Now, how did we get there? We decided on some principles. We rely on Debian packages and Debian repositories for everything, no exceptions, really nothing else. Only what's under version control matters: you have no chance to sneak anything into the final product, into the final system, without having it under version control; otherwise it can't even end up there. And automate all the infrastructure handling: we don't want to touch any systems manually, configuration management should take care of everything. We are using Puppet and Ansible.

Automation: we have automated Debian changelog handling to simplify releasing new package versions. You don't want to think about which version number you actually need; we know whether this is a new build, a hotfix or a minor change, and we can decide all of that automatically. So developers don't need Debian or Ubuntu at all. They are of course encouraged to use it, but sometimes you have developers working on very specific components of the software who aren't using Debian, or don't want to use Debian for whatever reason. We have automated release branch handling: whenever we have a new release, the according new release branches get created, and the same for a new hotfix. Everything is created automatically. So once you want to fix an existing release, you know you can go to every single project, check out the according branch, and everything is there exactly as released. We have VMs for testing and development. We are using Vagrant, so whenever you have a problem and someone says "there's this bug report, here are the steps to reproduce", all you have to do is vagrant up, take the project or product with the according name, choose the according version, and everything else is set up automatically. We have automated box builds at least once per day, so the so-called base boxes for Vagrant are automatically there for the according releases. And, important: we have PPAs for development, so no version control freezes at all. You can always push to master. Master should always be good, always releasable, but what is actually released lives in separate branches. We have a fast-forward-only approach: you always have to rebase. There's no option for merges where you don't know what you will get; it's just fast forward and the according release branches.

Some improvements we made in this process are the usage of tmpfs and eatmydata, or ccache, just for building faster. We try to get our builds as fast as possible: once a developer pushes a git review, the Debian package should come out as soon as possible.
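As an aside, for those who want to try this kind of tuning themselves, here is a rough sketch of the usual pbuilder/cowbuilder knobs; paths and sizes are just examples, and you should double-check the option names against pbuilderrc(5) for your version.

    # /etc/fstab: run the build location on tmpfs (size is an example)
    #   tmpfs  /var/cache/pbuilder/build  tmpfs  defaults,size=4G  0  0

    # /etc/pbuilderrc snippets:
    APTCACHEHARDLINK=no                   # hardlinks don't work across tmpfs
    CCACHEDIR=/var/cache/pbuilder/ccache  # share a ccache between builds
    # eatmydata can additionally be used inside the chroot to skip fsync();
    # how to enable it depends on the pbuilder version, so check its docs.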
We use dashboards for abstraction, so people can focus on their actual tasks. They don't have to look at Jenkins: is this now blue or yellow or red, where's the error, which build parameters do we actually need to use to end up in the according release? The dashboard takes care of this. It knows which projects should go into a release, which branches we have, which tags we have. So the release manager as well as the developers have front ends for their actual needs.

And very important is the code review system. The code review system of course improves code quality, but it's also nice for sharing knowledge amongst people. You might not be working on some project yet, but there needs to be someone reviewing the code, so you tend to ask: is this useful, what we are doing there? Could you maybe be more verbose in the commit message to explain the actual situation you're trying to fix? And it helps, of course, with introducing new people, because new developers can just start hacking and get good feedback from people used to working with this project. You can actually see the progress of a bug fix, see what other people are fixing and what the workflow is there. So code review isn't just about the quality of code, it's also about the quality of a team.

And what's underneath our system is the so-called jenkins-debian-glue. Many of you might know the saying: the nice thing about standards is that there are so many of them to choose from, which is kind of why I asked in the beginning whether you use cowbuilder or sbuild. Then we have cowdancer and pbuilder, we have debuild, we have dpkg-buildpackage, git-buildpackage, svn-buildpackage. We have tons of wrappers and existing tools, all in different flavors: this one is good here, that one is bad there, whatever. So one of the main points important for me was: I don't want to build yet another tool, I just want to glue existing tools together, so I'm able to replace one component with another if for whatever reason I'm unhappy with something.

It's building on top of Jenkins. Jenkins was called Hudson before; in 2011 it was renamed to Jenkins. They have weekly releases where you can follow current development, and LTS versions, which are recommended if you run it in production or for actual usage. It's MIT licensed, and nowadays there are more than 1,000 plugins available, for good and bad purposes; you need to identify which plugins are actually useful and provide something for you. There are more than 120,000 registered installations as of July. Just as a disclaimer, it's written in Java, but it's absolutely not restricted to Java at all; you can run whatever kind of project with it. It's like cron on steroids: really just a way of scheduling jobs, managing artifacts and such.

So why jenkins-debian-glue? We started to formalize the existing knowledge we have about Debian packaging, provide a framework we can work with, and provide a common ground to base further work on: if we decide to integrate new stuff, it should be built on top of it. We wanted to gather feedback from other users about what they might need and what's definitely useful, and what already happened is that you get contributions further improving your internal system. So it was also a kind of community building: we can talk to each other about what problems other companies and other developers have, and not create new tools or standards. We really just rely on what's available in the Debian ecosystem. And it should be easy to use also for non-Debian folks. There are people developing upstream software who would like to provide Debian packages, and as soon as they have some kind of working debian/ directory, it's quite easy to get according builds with jenkins-debian-glue.

So what's behind jenkins-debian-glue? It's an open source project, also MIT licensed, started in 2011. We have more than 25 contributors so far. It's written mainly in shell, easy to adjust and extend, mainly through hooks and according configuration variables and environment variables. It just relies on a CI server.
So technically it would be possible to switch from Jenkins to something else, but it's the easiest open source CI server option available. It uses cowbuilder/pbuilder as the build environment. It has out-of-the-box support for Git and Subversion; I know that there are users of Bazaar and Mercurial and whatever, but out of the box we support Git and Subversion with ready-to-go scripts. It has repository management included, with reprepro, which everyone of you should know, and the so-called freight, which is a very simple tool but seems to be useful for some specific purposes. And there are plenty of QA tools included, with support for piuparts, lintian, autopkgtest, TAP reporting, perlcritic, shellcheck and Python style checks.

So who is using jenkins-debian-glue? In the Grml project, where we also host the packages, we use it for our tools and all the Grml packages. Postgres has a pretty sophisticated and big setup, building all the supported Postgres versions for all kinds of distributions. LLVM, Kamailio, Wikimedia: there are plenty of users, and we get quite interesting feedback in terms of what they actually need.

If you want to test it after the talk, there's a manual approach where you set up everything by hand, but there's also an automated setup where you just get a script. It's easy to review, it's trivial: it's just a Puppet recipe or module which sets up Jenkins, jenkins-debian-glue and three jobs for playing around. So you get everything you need; it's set up in like five minutes on faster machines, depending on your system, in a few minutes.

And this is what you will get: a Jenkins setup with three Jenkins jobs, jenkins-debian-glue-source, jenkins-debian-glue-binaries and jenkins-debian-glue-piuparts. This is what we will talk about now. The source job of whatever project you're working on generates the Debian source package for you. It relies on the version control system, so it expects that everything is there in version control; everything that's in version control matters. It generates the upstream sources and the Debian source package for you. It's actually executing a script called generate-git-snapshot, or generate-svn-snapshot. This also automates the changelog handling, so you don't have to manually write anything into the changelog; it looks at your history. Thanks for git-dch, Guido, very useful. Important: it needs to be run only once per project, except if you're building for multiple distributions or something special. But out of the box, you just need one source build, as usual for Debian packaging.

In the binaries job we then do the actual Debian binary package build. We have a script called build-and-provide-package. It automates the pbuilder or cowbuilder setup: usually, if you don't have any special needs, it does everything for you automatically, so you don't even have to set up cowbuilder or whatever, everything will be set up for you. And you build once per architecture or distribution, whatever you are targeting, except for Architecture: all packages, of course.

The piuparts job is useful to get automated installation, upgrade and removal tests. It's optional, you don't have to use it, of course, but it's useful since you might forget about it or might not have the according piuparts setup available while you're working on a package. jenkins-debian-glue automates this, so you don't have to take care of it manually either.
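Putting the three jobs together, here is a minimal sketch of the shell build steps behind them; the project name and the exact environment variables are examples, the jenkins-debian-glue documentation has the authoritative job definitions.

    # myproject-source: generate the Debian source package from version control
    generate-git-snapshot            # or generate-svn-snapshot for Subversion

    # myproject-binaries: build the binary packages in cowbuilder/pbuilder and
    # put them into the repository; architecture/distribution usually come in
    # as Jenkins matrix axes (variable names may differ in your setup)
    export architecture=amd64
    export distribution=jessie
    build-and-provide-package

    # myproject-piuparts: runs the piuparts installation/upgrade/removal tests
    # against the freshly built packages via the included wrapper scripts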
We have the repository handling: it automatically manages all the repositories without any manual interaction, so you don't have to call any reprepro command lines yourself, the configuration is set up for you. By default it's included in the binaries Jenkins job, just to keep it easy. Once you scale out or have specific needs, you might want to separate it into a dedicated Jenkins job; by default we assume that's the so-called project-repos job. You can then configure it to only build in the binaries job and only provide in the repos job, so you can be very specific about what you need. You might even want to use dput or whatever other tool to upload to a repository; then you just split off the binaries part and the repository part. It's all about having it configurable as needed.

Then we apply further QA testing. lintian is automatically executed in the source and in the binaries job. autopkgtest is also executed automatically: once you have a debian/tests directory in your package, it automatically invokes autopkgtest. It looks at the according code policies for Perl, shell code and Python. And all the results are available as TAP and JUnit reports for Jenkins usage, like "in this line of code we have a problem". Jenkins can then provide according feedback: do I fail the build, or should we continue because it's fine for me if just shellcheck is unhappy? So all the results are available as according reports.

An example of a build pipeline we use: you push something to review, run the unit tests available in your Python project, for example; if this succeeds, you continue with building the source package, run all the binary builds, run the piuparts checks to make sure the package itself is fine, include the according result in the repository, and then you can just apt-get install the package from the repository you're interested in.

Now, managing many Jenkins jobs without going nuts. Once you have five or maybe even ten Jenkins jobs for every single project, and you have a product with 50 to 100 Debian packages, this is quite difficult to manage manually, and you might want to change the behavior of all the binary jobs or all the source jobs. So what to do? The OpenStack project has a very nice tool called Jenkins Job Builder, and it relies on YAML files for configuration. You just have plain text files describing what your project should look like, and on GitHub, from the Sipwise company, there's the kamailio-deb-jenkins project which has an example of what this can look like. There are plenty of others available as well. It's very nice to use it that way, because you don't have to click anything in the Jenkins web interface at all for handling the jobs. You have the possibility to keep it under version control: you have your repository with all your Jenkins configs in there, and if you apply a change, you just commit it with the according message, and of course you can run it through the code review system again: does this change actually make sense for our infrastructure, will the result look good, include it in testing environments, et cetera.
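Here's a minimal Jenkins Job Builder sketch of what such a plain-text job definition can look like; the job name, repository URL and the jenkins_jobs.ini setup are made up for illustration.

    # Describe a job in YAML (hypothetical project and URL) ...
    cat > myproject-jobs.yaml <<'EOF'
    - job:
        name: myproject-source
        description: 'build the Debian source package of myproject'
        scm:
          - git:
              url: 'git://git.example.com/myproject.git'
        builders:
          - shell: 'generate-git-snapshot'
    EOF

    # ... test it locally, then push it to Jenkins (needs a configured
    # jenkins_jobs.ini with URL and credentials)
    jenkins-jobs test myproject-jobs.yaml
    jenkins-jobs update myproject-jobs.yaml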
Now, I've been talking for 25 minutes, but this took us more than a few years to actually get where we are, and we learned quite some lessons on the way.

Developer needs might be quite different from operational or distribution needs. They might want a specific package which isn't available in the distribution yet, or they might need a specific version of a package which isn't available in the according distribution yet. Of course, you should contribute back to the upstream distribution, in our case Debian, when reasonable; there might be packages which aren't distributable for some reason, but it makes sense to push back upstream as much as possible.

Diverse people improve the overall quality. It's useful to have some common ground of infrastructure for the systems, but for code quality it's really valuable to have diverse people, and this includes different distributions as well: don't just think about what Debian provides, outsiders might bring interesting input from other distributions.

Code review requires a good remote working culture. Open source folks are used to remote working, so I'm not actually here to promote this, because everyone of you might be used to this kind of working style, but it's something not so common in corporate environments that aren't driven by a remote working culture.

External dependencies: we have had failures where GitHub or CPAN was down or unreachable, or PyPI, RubyGems, Puppet Labs, Percona; these are all examples we hit in production usage. So what you definitely want to have is local mirrors of every external dependency you have. It's also good because you get a speedup of your build environment, and you get staging options: before shipping something to the customer, you decide whether you can push an update of this specific module or package, and only then you provide it in the production mirror.

Configuration management: it's essential to have the infrastructure as code. If you want to apply any changes to all your Jenkins slaves, this should go through configuration management; whatever you're using, Puppet or Ansible or Chef, doesn't matter, but it's essential that you have some kind of configuration management to ensure consistency. Also consistent time zones: put all the systems in the same time zone and make sure they use some network time protocol, so you can compare logs from different systems.

Also, the catch-22 situation; we ran into this: the build scripts are broken, but the build infrastructure itself receives its updates through the build infrastructure. So we have a kind of recursion problem: how do you fix a problem with the underlying infrastructure when actually applying those changes is broken? Also, when upgrading from wheezy to jessie everything was working, but the deployment of the configuration management depended on unit tests which didn't work on jessie yet. So because of this recursion problem, you should definitely have some test infrastructure for setup and configuration changes, so you don't break production systems, and this doesn't just mean the production system for the customer, it's also the production system of your own infrastructure. A rebuild of a system might look different from the currently running one, even with configuration management, because just because you install the package now doesn't mean you'll get the same result in two years if you rebuild the system from scratch. So you should also have some testing for the configuration management in place. There are plenty of projects like serverspec, rspec, Test Kitchen and friends, whatever you prefer, but you should definitely have some tests for your own infrastructure available.
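Just to illustrate the idea with a deliberately simple example (not our actual test suite): even a tiny script comparing a rebuilt staging system against production catches a lot of drift.

    # Compare the package lists of the running production host and a freshly
    # rebuilt staging host; host names are placeholders.
    ssh prod.example.com  'dpkg-query -W' | sort > prod.lst
    ssh stage.example.com 'dpkg-query -W' | sort > stage.lst
    diff -u prod.lst stage.lst && echo "no package drift between production and rebuild"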
Now some tips. Regular rebuilds of all packages are very good and important, because you apply recent policies and package build infrastructure changes to the packages. Once you change the underlying build infrastructure, the result might be different from whatever you built before. You might introduce parallel builds for your Debian packages; this is a change in your infrastructure, and you have to rebuild the packages. So what we are doing is: for every single release, we rebuild all the packages. We never take a package from the previous release; all packages are built with the current infrastructure.

If you have to deal with plenty of repositories, no matter whether it's Git, Subversion or whatever, the mr tool, nowadays called myrepos, is very useful for dealing with large numbers of repositories. The Debian Perl packaging group has very good documentation about this on the Debian wiki. And, very important: integrate your continuous delivery system into your monitoring infrastructure. If something is broken there, it should get the same attention as fixing something in customer production, because as soon as you can't build any more packages, all the developers are stuck in development.

It's also interesting to gather some metrics independently from whatever tool you use. If you're pushing data into Jenkins, you get build times and logs and whatever into Jenkins, but once you delete a job, it's gone. Also, there might be infrastructure changes or cleanups of the Jenkins jobs, and then you lose all this kind of data. So it's interesting to push the metrics to some independent project or database or whatever.

We are using Gerrit as a code review system, and if you don't like the web interface, and many don't know this yet, there's also gertty from the OpenStack community, which provides a command line interface to Gerrit. So you can just use it on the command line. It has great support for offline work, so you can actually get on your airplane, hack on it, review, and once you go back online, you can just push all the reviews you did, which is absolutely great for working with it.

Also good is some kind of Jenkins "verified" job; whatever CI or CD server you might use, let's call it a verified job, to ensure the system is actually working as needed. Are the slaves there that I need? Can I trigger a build of some specific job or test job? Does authentication work? Are the users there, do the nodes actually look good? Because once you get problems in your infrastructure when restarting the CI server, things get out of control.
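To illustrate what such a verified job can check, here's a minimal sketch; the Jenkins URL, credentials and job name are placeholders, and depending on your Jenkins configuration the REST calls may additionally need a CSRF crumb.

    #!/bin/bash
    # Smoke checks against the CI system itself (all values are placeholders).
    set -e
    JENKINS=https://jenkins.example.com
    AUTH=verify-user:api-token

    # Is the Jenkins master answering at all?
    curl -sf -u "$AUTH" "$JENKINS/api/json" >/dev/null

    # Are all build nodes online?
    if curl -sf -u "$AUTH" "$JENKINS/computer/api/json" | grep -q '"offline":true'; then
        echo "at least one build node is offline" >&2
        exit 1
    fi

    # Can we still trigger a build of a known test job?
    curl -sf -u "$AUTH" -X POST "$JENKINS/job/jenkins-debian-glue-source/build"

    echo "Jenkins smoke test passed"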
And we identified some anti-patterns for a continuous delivery environment. As soon as you start with a manual SSH to some system, you might change the configuration, you might change the underlying system, install additional packages which change existing behavior. Provide according debugging options instead, like keeping the cowbuilder environment around and providing it on another system or something like that, but try to avoid manual SSH. As soon as tests go okay, not okay, okay, not okay, and you don't know why, people will just stop looking at them and taking them seriously. They won't have any trust and they won't care at all. So once you integrate a new QA tool, make sure it's working properly before making it mandatory for acceptance in the pipeline. Like, if we want all the piuparts jobs to be okay, they should at least once be okay everywhere.

Also try to avoid polling or cron jobs at specific times; this is like the dinstall runs in Debian, where you know you have to wait four hours to get your change in. Instead, trigger immediate actions: once you push something, immediately start with it. In Jenkins jobs, don't poll for version control changes; instead, once you know that something has gone up to the Git server or to Gerrit or whatever code review system, provide according triggers to start the according builds. You want immediate actions and effects.

Manual setup of machine configs: I'm not sure if it's an official term, but I just recently read it from Martin Fowler, the "snowflake" server. They all look alike, but they are still different. As soon as you start to run an installer manually and there's one single change in it, the result might just be different. It really should be all about automation.

What was also kind of a problem for us, and we have several wrappers in Jenkins for that purpose, is tools with no standardized output; it makes parsing harder. If you're developing a new tool, make sure one can rely on the output and parse it appropriately. Checklists are a good way to miss that something is going wrong, because you might just skip something from the checklist, or the checklist might be out of date; use automation instead. If you want to check for something, use a Jenkins job. Hardcoding IP addresses, host names, port numbers and whatever, instead of making them configurable, is a bad thing. If you build the same thing in the continuous delivery pipeline again and again, don't do this; it should be built once and then reused for tests, for deploying, whatever. And if you don't have according notifications, developers start to wait and poll for something instead of just continuing to work on something else.

We have some unresolved problems, actually. Dependency management is kind of an unresolved problem: if package foo depends on package bar, you need the other package built first, and it's kind of tricky to get this automated, so we are researching on that front. We have Build-Depends and Depends, but we have no test depends, so we can't build packages which say "I need this and this package, but just for testing", not for building and not for shipping at runtime. Once you have high-frequency continuous delivery, the Debian repositories cause apt to fail quite often, because the mirror is updated underneath and we get hash sum mismatches. We are seeing this more and more often, and I think we have some ideas how to tackle it, but I'd like to talk to the according maintainers first. piuparts: we had some successful runs even though the package couldn't be installed, because the actual "solution" for piuparts was to remove the package, the very package you were trying to test, and then everything was fine.

So, just to give you an idea, these are the projects that might be worth a look: Debian, for everyone of you hopefully. Jenkins and jenkins-debian-glue might help you. Vagrant is useful for all this automated testing. As a developer, Gerrit and gertty for code review are really nice. Jenkins Job Builder is definitely worth a look. Everything under version control; automation is really important. Use dashboards for abstraction so users don't have to get into the details of Jenkins. Tests, tests, tests, really. And build on established workflows and tools; the git-buildpackage workflow works very well for us.
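As a small illustration of that workflow, here is roughly what the day-to-day git-buildpackage usage looks like; the repository URL is a placeholder and the branch layout follows the gbp defaults, which may differ from yours.

    # Get the packaging repository with all gbp branches set up.
    gbp clone git://git.example.com/packages/foo.git
    cd foo
    # Hack, commit, then build in a clean cowbuilder environment ...
    gbp buildpackage --git-pbuilder --git-dist=jessie --git-arch=amd64
    # ... and push the change for review instead of uploading manually
    # (git-review talks to the Gerrit code review system).
    git review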
And the only bad thing about this is that once you're used to this working, it's really horrible to move outside of such environments. Those of you who are interested in getting deeper into this: we have a jenkins-debian-glue BoF on the 21st in room Helsinki, you're invited to show up there. I think time is over. Yeah, are there questions?

Hello, yeah. So I'm the Debian OpenStack maintainer, and I've been exposed to a lot of CIs from the OpenStack world. I've been using Jenkins to build packages for like two years without jenkins-debian-glue. The thing is, I'm very happy to see that this kind of CI usage is spreading and that you are using it too. But what I would like to happen is that we use it widely inside Debian. To do that, unfortunately, we have to package Gerrit. So I started to do that during DebConf; I've packaged one Java library. I invite everyone to join that effort. I don't know much about Java; before DebConf I didn't even know how to maintain something with Maven. So obviously I need help, and I don't want to do it all by myself, I simply won't have the time to do it alone. So first, please join me in that effort. I know everybody hates packaging Java in Debian, but we have to do it. That's the first thing.

The final goal would be: we already have dgit, right? dgit makes the whole Debian archive available for everyone to use via Git. What I envision would be using Gerrit on top of dgit. And then the people in the Uploaders field would be set as core reviewers in Gerrit, and we'd have this kind of workflow we were talking about: building the package, running piuparts and lintian and whatnot, and once that's done, the people in the Uploaders field would be able to vote plus two, workflow. So that would be the goal. And I think it's really, really important that we do that and that we have it available on Alioth or another machine. The DSA, I already talked to them, and they refuse to run Gerrit in Debian, and I think they are right to reply this way. So please join me in packaging Gerrit.

Do you know about how many dependencies are left to package in Java before it will work? The question was how many packages are left... Pardon? About 60. Gerrit once upon a time used Ant to build; now it uses Buck. So before we can package Gerrit, we need to package Buck, which alone is a lot of work, maybe 30 packages. Then, once we have packaged Buck, we have to do it for Gerrit, which is maybe again 30 to 40 packages in Java. I'm not sure, we will discover it as we try to build it.

So, just a short question: what are you using for the dashboards? We wrote our own ones. We started in different languages actually and just continued, but nothing highly sophisticated; it's really just an abstraction. If I were starting from scratch nowadays, it would be some mixture of Go and Django, which would actually be preferred in our stack.

Just a quick comment about the test dependencies. We ran into the same problem, that it's very hard to find out which packages test-depend on me. We discussed this with the package maintainers a while ago and we actually have a plan how to implement this. It's just not done yet, but we have their agreement and we know how it's going to look. So that will help us all with figuring out reverse dependency testing properly. Nice, I would like to chat about this later.

Just a technical question.
So the packages you deliver into production, they are all coming out of the continuous integration environment. How are they signed then? Do you have trusted keys on the infrastructure? We have trusted keys on the infrastructure, so the signing is essentially anonymous, so to speak. Once you have access to the review system and you get approved packages, they might end up in the product if everything goes straight through the pipeline; there is no manual signing. If you want to tag something for whatever reason, for example for internal tooling or stuff like that, you're encouraged to sign the tag, but no manual signing is needed. So personal signing happens in version control, and for the resulting packages you don't rely on that. Okay, thank you.

Any further questions? Okay. How much effort do you think it would take to write a wanna-build frontend for it? Like, say I'm adding a new distribution, stretch comes out, and I want to rebuild everything from scratch, and the system needs to figure out which packages are still missing. I don't want to recompile everything, but just add the missing bits. No idea, but it might be worth a try; we might join forces on that.

Sorry for the question, but are you still of the opinion that it's not the right idea to package jenkins-debian-glue in Debian? It's on my to-do list for this week, actually. Yay! The main concern I have is how the binary packages should be split so that they really fit; so far I basically had the vision from different companies of what's working for them. So I'm looking at that, but it's definitely on the to-do list; there's a bug open on the GitHub issues.

There's time for one more question. No questions? So thank you very much for the talk, and good luck.