Welcome to my talk. My name is Hubert, and I'm going to be talking about whalebuilder, which is a tool that I wrote for building Debian packages using Docker.

Most of you probably know this already, but it's generally not a good idea to build packages in your working environment. When I first started building packages, about 15 years ago, I would just do the packaging and run dpkg-buildpackage right in that directory, and it would spit out a package. For the packages I was doing back then, that worked fine, but there are problems with that approach. Depending on what you're building, you might have packages with conflicting build requirements, and you don't want to be constantly installing and uninstalling packages. You also want to make sure your build dependencies are correct: if you're building packages on your main computer, you might have packages installed that your package needs but that you've forgotten to list in Build-Depends. Some software compiles differently depending on what packages you have installed: a lot of packages detect which libraries you have and enable certain features if they're present. Or you might want to build for a different release than the one you're running. For example, if you're running stable on your computer and you want to compile something for, I don't know, oldstable-backports or for unstable, it might require different libraries than what you have installed.

The solution, of course, is to build your packages in a separate, isolated environment, and ideally a minimal one: one that contains only the packages you've specified in your build dependencies, plus build-essential. So how do you do that? There are various tools available for doing that.
There's sbuild, pbuilder and cowbuilder, my own tool whalebuilder, and travis.debian.net, which is slightly different from these tools, but I'm mentioning it as well because it uses the same basic idea. I know there are other tools out there too.

So, with pbuilder, cowbuilder, and sbuild already out there, why did I write whalebuilder? pbuilder was just taking too long for me, especially on my computer, which still uses a five-year-old hard disk drive. For those of you who don't know, pbuilder creates a minimal environment and stores it on your disk, either as a tarball if you're using pbuilder or as a directory if you're using cowbuilder. When you tell it to build a package, it copies that environment into a build directory, installs the build dependencies, and then builds your package in that directory as a chroot. But all of that takes time. It takes time to copy, and it also takes time to install the build dependencies over and over. For example, I maintain a package called noweb, which build-depends on TeX Live, which is fairly large. Once all the build dependencies are installed, the package only takes about two minutes to build, whereas installing the build dependencies takes, I haven't timed it, but over 15 minutes. And if there's a bug in there, and it's happened before, I try something to fix it, it doesn't work, and I have to rebuild the package again, which means waiting for TeX Live to install all over again. Maybe it still doesn't work, so I try something else, and over and over I have to wait for pbuilder to do all this again.

Whalebuilder is different because it uses Docker, and Docker uses union filesystems. I'll talk about this a bit later, but basically it means that you don't need to copy anything.
There's support in the kernel for union filesystems, where you basically say: I want this to be my base directory, and I'm going to write stuff on top of it, but the writes don't touch the base directory; they go to some other location. So you can use that same base directory over and over, without copying it and without changing it.

Whalebuilder also creates reusable images with the build dependencies already installed. That means I don't have to keep installing TeX Live, because I already have an image with TeX Live installed.

Docker also gives us some extra features for free. It can create a build environment that doesn't have any network access. There's a lot of software out there these days that, when you try to build it, will try to fetch stuff from the internet, which we're not allowed to do in Debian, because we need all the build dependencies to be within Debian itself. By making sure the build environment doesn't have network access, we can be sure we're not fetching stuff from the internet while building the package. It can also check for filesystem changes outside the build tree. So, for example, if your build creates a temporary file or writes a configuration file somewhere, you can ask Docker what has changed in that directory and it will tell you.

So, I'll do a quick demo of those. I've created a couple of packages. Oh, actually, before I do that, I'll just show you what it does. This is the debian/rules file. All it does when it builds is an apt-get download of whalebuilder, so it tries to fetch the whalebuilder package from the network. (That's just because I wrote the wrong thing. There we go.) Basically, while we're waiting for it to run: whalebuilder creates this package here, which is just a dummy package that depends on the build dependencies of the package that I want to build.
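The union-filesystem behaviour described here can be sketched with a plain overlayfs mount. This is a minimal illustration of the kernel mechanism, not what whalebuilder itself runs (Docker may use a different storage driver), and it requires root; all paths are made up.

```shell
# Prepare a read-only base layer and an empty writable layer.
mkdir -p /tmp/base /tmp/upper /tmp/work /tmp/merged
echo "original" > /tmp/base/file.txt

# Mount /tmp/merged as the base plus a writable layer on top of it.
mount -t overlay overlay \
    -o lowerdir=/tmp/base,upperdir=/tmp/upper,workdir=/tmp/work \
    /tmp/merged

echo "modified" > /tmp/merged/file.txt   # the write goes to the upper layer
cat /tmp/base/file.txt                   # the base directory is untouched: "original"
cat /tmp/upper/file.txt                  # the change lives in the upper layer: "modified"
```

The same lower directory can back any number of overlay mounts at once, which is why the base image never needs to be copied.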
So, it installs that dummy package in the Docker image, and then it runs an apt-get update and an apt-get install -f, which installs all the dependencies of that package. This package in particular doesn't have any build dependencies, so it's just going to do the apt-get update and then nothing. Then it copies in whalebuilder, and now it's going to try to build the package. Here it's trying to do an apt-get download of whalebuilder, and it says it can't find deb.debian.org. That's because Docker has told it not to have network access, and not because of flaky conference Wi-Fi.

I'll also show that if I try that again, this time Docker has cached the results of the previous run, so it doesn't do the whole apt-get update and install step. If I fix the package and then try to rebuild it, it will jump straight to the package-building step.

And a quick demo of the other feature: the build step for this package just touches a config file outside of the build tree, and whalebuilder tells us "detected filesystem changes outside the build tree" and reports which file it was.

Okay. Just a very high-level overview of how Docker works. It has two basic components: images and containers. An image is basically a base filesystem, and it has layers. You start with a base layer, and then you say: now I want to add these files on top of it, and then more files, or any arbitrary filesystem change, on top of that. A kernel module handles access: when you try to read a file, it first looks at the top layer to see whether that layer defines the file, and if it doesn't find it there, it goes to the next layer down, and so forth.
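The dummy-package trick can be sketched as the sequence of commands a tool like whalebuilder would run inside the image. The commands and the filename here are inferred from the description in the talk, not taken from whalebuilder's actual source, so treat them as an approximation.

```shell
# Hypothetical dummy package whose Depends field lists the target
# package's build dependencies. Installing it fails dependency
# resolution at first, which is expected:
dpkg -i whalebuilder-deps.deb || true

# apt-get install -f ("fix broken") then pulls in everything the
# dummy package depends on, i.e. all the build dependencies.
apt-get update
apt-get install -f -y
```

Because these steps run while building a Docker image, their result is cached as a layer, which is why a second build of the same package skips straight past the dependency installation.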
A container is basically a running instance of an image, and it's isolated from the rest of the system in certain ways. A hand-wavy description is that it's more isolated than a chroot and less isolated than a VM.

So whalebuilder, as we saw, starts with a base system image, which you can either create yourself or download pre-built. Then it creates a new image with only the build dependencies installed, builds the package, and copies the result out of Docker.

To create an image, you use the whalebuilder create command. You can base it off of an existing Docker image, which you either pull from somewhere on the internet or build locally. You can specify the distribution you want, sid or testing or whatever, and then you give it a name. There have been questions about whether you can trust Docker images that you didn't build yourself, so it might be a good idea to use debootstrap, especially since you only need to do that once. A fun story: on Sunday, while preparing for this talk, I was looking at my images and noticed my base image was a bit out of date, so I figured I should probably rebuild it. I tried using debootstrap and it broke, so I frantically tried to fix the bug. It should be fixed in whalebuilder 0.5, which I uploaded yesterday.

If your base image is out of date, you can either create a new one or use whalebuilder update, which basically just runs apt-get update and upgrade and then creates a new image based on that. But even if you just do an update, it's probably a good idea to rebuild occasionally. For one thing, your base image will continue to grow, because you have a bunch of layers and each layer takes up space: even if the whole thing represents a fixed amount of data, the extra layers add disk usage. And it also keeps you from accumulating obsolete packages.
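The create/update workflow just described might look roughly like this. The option and image names here are guesses reconstructed from the talk, not verified whalebuilder flags, so check `whalebuilder --help` for the real syntax.

```shell
# Create a minimal base image for a given distribution (hypothetical
# flag and name; the talk only says you pick a distribution and a name).
whalebuilder create --distribution sid my-sid-base

# Later, refresh it in place: roughly an apt-get update/upgrade that
# produces a new image layered on the old one.
whalebuilder update my-sid-base
```

Since each update adds layers, recreating the image from scratch now and then keeps disk usage down and drops obsolete packages.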
I have some pre-built base images, so if you use whalebuilder/debian, it will pull those images from Docker Hub. I have sid, testing, stable, and I think oldstable, and I'm going to make a bunch more images as well. But again, you may or may not want to trust those images; it's up to you.

As we saw before, this is how you build a package. When you build a package, whalebuilder saves the dependency image under a specific name, and you can force it to use that previously built image instead of building a new one: you specify the image name and say no-install-depends.

If the build fails, whalebuilder does not remove the container (sorry, wrong command, that was not supposed to happen: it's not supposed to remove the container), so that you can inspect it, either with docker export, which gives you a tarball, or with docker commit, which creates an image. What I'd really like to do is just jump into that container, but I can't figure out how to do that, because it's a stopped container and Docker doesn't allow me to do that. So if there are any Docker gurus out there, let me know.
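The two inspection options mentioned here are standard Docker commands. This is a generic sketch of how they'd be used on a leftover build container; the container ID and image name are placeholders.

```shell
# Find the stopped container that the failed build left behind.
docker ps -a

# Option 1: dump its filesystem as a tarball for offline inspection.
docker export <container-id> > failed-build.tar

# Option 2: freeze it into an image so it can be examined further.
docker commit <container-id> failed-build-image
```

Committing to an image also suggests one route to the "jump into the container" wish: once the stopped container has been committed, a new interactive container can be started from the resulting image.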
Unfortunately I'm running out of time, so I'll have to pick and choose which of these to talk about. I think this one's interesting. Say you have a bunch of packages with similar build dependencies. Your base image doesn't have to be a completely minimal Debian-plus-build-essential image; you can add stuff into it. For example, if you maintain a bunch of Qt packages that all build-depend on qtbase5-dev, you can create a Dockerfile that looks like this and run docker build, giving it a name. The dot at the end says to use the current directory to find the Dockerfile. Then you can use this as your base image instead of the bare minimal base.

Similarly, say you're packaging new software and you're not sure exactly which dependencies you need. For example, upstream just says it needs Qt 5, but in Debian we have a whole bunch of Qt 5 development packages, so which ones do you need? You can start off with a best guess: qtbase5-dev is probably a safe guess, and maybe the thing you're packaging uses WebKit, so libqt5webkit5-dev is probably a safe guess too. Then you try to build it, and it gives you an error message because it can't find something, so you update your build dependencies. Now you can use the dependency image that whalebuilder created previously as your base image, so it won't reinstall qtbase5-dev or libqt5webkit5-dev again, because it's already installed those; it will just install the new build dependencies that you've added, and that saves you time. (Oh, you can't see the text highlighting.) And you can just repeat that as needed.

I think that's about all the time we have. Are there any questions? If not, thank you for coming.
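The shared Qt base image described above might be built from a Dockerfile along these lines. The FROM tag and the exact package set are illustrative assumptions, not taken verbatim from the talk.

```dockerfile
# Hypothetical shared base image for Qt packaging work.
FROM whalebuilder/debian:sid
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        qtbase5-dev libqt5webkit5-dev && \
    rm -rf /var/lib/apt/lists/*
```

It would then be built and named with something like `docker build -t qt5-base .`, where the trailing dot tells docker build to look for the Dockerfile in the current directory.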