Okay, welcome to my talk, Building Containers All Day. I want to talk about why you should care about building container images and how you actually do that. My name is Cornelius Schumacher. I'm a distinguished engineer at SUSE. I have done a lot of open source work on KDE, on openSUSE, on Cloud Foundry. At the moment I'm mostly concerned with Cloud Foundry, the platform-as-a-service solution, running it on Kubernetes. So we are involved with a lot of containers.

Containers are everywhere. You have already heard a lot of stories at this conference about containers, what can go wrong, what can go right, how to use them, and you will hear more. There are container runtimes, there are orchestration systems. Kubernetes has brought a lot of new momentum there, because it's a way to actually run containers at large scale. And it goes beyond the technology: it's also modifying, improving, changing the way people run software, run systems, and handle all the components in them. That's where all these things come into play and where containers are a really important element.

So what is a container? When I'm talking about a container, in the end it's just a Linux process. But it's held together by cgroups, by namespaces, by a root file system. It's encapsulated in something, and it abstracts certain elements. What you get is basically a box with a base operating system. You put some stack on it, maybe some more stack and an application, and that's the unit you operate with. This allows you to abstract things in a way which is different than before, because you get more isolation, more encapsulation. And usually you don't have only one container, you have a set of containers.
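To make the "just a Linux process" point concrete, here is a small sketch using unshare from util-linux. It needs root and is only meant to illustrate the namespace idea, not to replace a real container runtime:

```shell
# Run a command in its own PID and mount namespaces.
# --fork makes unshare fork before exec, so the command becomes
# PID 1 in the new namespace; --mount-proc remounts /proc there.
sudo unshare --pid --fork --mount-proc ps aux
# ps now only sees the processes inside the new namespace,
# not the rest of the system.
```

A container runtime adds a root file system, cgroup limits, and more namespaces on top of the same mechanism.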
So you run a container with your application, you run another container where you have maybe an update to your stack, you have another container which runs a different application, and you can run all of that in parallel on one system or on multiple systems. It doesn't really matter, but you get a lot of different elements and a lot of different containers. So what is in there is actually quite important.

One thing which I think is most important about containers is that containers isolate state, and that is what allows these new operation models. You have the content of the container, which is more or less static. You have the network you talk to, and you have the volumes which provide persistence. What you get in the end is basically an abstraction which makes a container stateless if the application is stateless, and you can run it in different contexts.

What this makes possible is something people call immutable infrastructure. As you have separated the state out of your application, you can trash your servers and burn your code. That's how Chad Fowler put it when he came up with this term. You think of systems in a different way, because your infrastructure is no longer made of systems where you log in, which you manage, which you try to repair when they break; systems are no longer elements you want to maintain and run for a long time. Instead, you don't change them anymore: if a system is broken, or needs an update or something, you just throw away the container, add another container, start that, and you're fine. These components become disposable, which means you can easily create them. Containers are easy to start, it costs almost no time, because they are just Linux processes. So what you're doing in the end is really just throwing them away and starting new versions again. And there's this one number Google published some time ago: Google starts 2 billion containers per week.
That's quite a bit. And even if you're not Google and you're only a millionth of the size of Google, you will still start thousands of containers a week. This makes it really important to be able to manage, create, and build containers in a way that lets you actually reach these numbers, because that's not possible by just managing systems anymore.

We heard this before, and you can believe this number or not, it doesn't really matter: if you look at the container images on Docker Hub, there was a study some time ago which said 30% of them include security vulnerabilities. And sure, if you publish something and then don't maintain it, you end up with outdated software and things stop working. If you followed the presentation this morning about the crazy container debugging, there was this example that the official Redis image on Docker Hub contains a slightly outdated glibc version, and that makes bash crash when you run it on a kernel which is not compatible with it, which can easily happen.

So what is really important is that you have this under control. You don't want to blindly trust what is out there. You don't want to stay on outdated versions; even a slight mismatch causes problems for security and for reproducing things. That's why, if you run containers, you need to build containers, and you have to do it continuously, because your dependencies change all the time, your software changes all the time, and you want to deploy all the time. So: if you run containers, you need to build containers continuously.

There are many ways to do that. You can do it manually. You can just use Docker to build a container. You can use other tools. You can do it in a very abstract way using a platform as a service, for example. But I want to show you one system which does that in a pretty nice way and which has some unique features, and that's the Open Build Service and Kiwi.
The Open Build Service is the build tool SUSE uses for building all its distributions. It's open source. It not only builds SUSE; it also builds packages for many other distributions. I think we support something like 20 different distributions there. It provides the automated infrastructure for building packages, and now we can also build containers. Kiwi is SUSE's imaging tool, and it also isn't restricted to SUSE only. It's what we use to build all kinds of images: be it virtual machine images, be it container images, Kiwi can do that. And with the combination of the two, I will show you how to set up a continuously updated build pipeline for your containers, so you are better off than if you just take something from Docker Hub.

Let's get started with a page which is pretty new in the build service: image templates. You can see the nice black boxes here; those are templates for containers. That's the easiest way to get started with building containers in the build service. You just go to this page, build.opensuse.org/image_templates, and then you choose whatever template you want. There are all kinds of templates for virtual machines as well, but I want to focus on the container templates. And I want to start by explaining how to build a container using the standard Docker tooling. That means you can just start with what you have. When you create a project in the build service (a project is basically a build environment which can have a set of packages), you get a project with a few files. If you follow this template, it starts with just having a Dockerfile. There are two more files here, among them an index.html; that's some HTML I added to actually build a Docker image which runs a web server and shows a page. The important part here is the Dockerfile. You can use the build service web UI to see it, even to edit it. And that's a standard Dockerfile.
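As a sketch, the kind of Dockerfile such a template gives you might look like this. The base image name, tag, and the exact Apache start command are illustrative, not taken from the talk:

```dockerfile
# Tag for the resulting image; in the Open Build Service this is
# declared in the Dockerfile itself rather than on the docker build
# command line (tag name is illustrative)
#!BuildTag: apache-demo:latest

# Base the image on a SUSE Linux Enterprise Server base image
FROM suse/sles12sp2

# The build service parses this line, so it knows the image
# depends on apache2 and can rebuild when apache2 changes
RUN zypper --non-interactive install apache2

# Copy the page into Apache's document root
COPY index.html /srv/www/htdocs/

# Start Apache in the foreground when the container runs
CMD ["apachectl", "-D", "FOREGROUND"]
```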
It's something which you could also use if you just ran it locally with docker build. You base it on a base distribution, in this case SUSE Linux Enterprise Server. You install a package, in this case apache2; that's what we want to add, and you just use zypper for that. Then you copy the HTML file into the server directory, and you define the command which is started in the container to actually run the web server. There's no magic here; it's just a standard Dockerfile. There is one extra line which adds a tag; that's what you would usually provide on the command line when you run docker build. And there is more magic which the build service is doing which is not visible here: it parses these zypper install lines. The build service then knows that you want to install Apache and makes sure that Apache is available to the image.

So you upload this to the build service, you let the build service do its magic, and you end up with a successful container build. Let's have a look at how that looks. You get a tar file which contains the container image and some metadata. You can download this tar file, import it into Docker, and run it, and then you will get the web page. This works nicely by just taking a Dockerfile you already have, putting it into the build service, and letting it build. And this is what makes it possible, if you run containers, to build the containers continuously. What the build service does there is automatically rebuild stuff: it handles this just the same as it handles RPMs. If a dependency changes, it rebuilds the container. And that's exactly the thing which is missing in most other build tools.

Now let's go one step further and show an alternative way to do that. If you use Kiwi, our image building tool, to build a container, you get a slightly different set of files.
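As a preview, a minimal config.kiwi along these lines might look roughly like this. This is a sketch only; element names and the schema version vary between Kiwi releases, and the image and package names are illustrative:

```xml
<?xml version="1.0" encoding="utf-8"?>
<image schemaversion="6.2" name="apache-demo">
  <description type="system">
    <specification>Container image running an Apache web server</specification>
  </description>
  <preferences>
    <version>1.0.0</version>
    <!-- declares that a Docker image should be built -->
    <type image="docker">
      <containerconfig name="apache-demo" tag="latest">
        <!-- the command started when the container runs -->
        <subcommand execute="apachectl">
          <argument name="-D FOREGROUND"/>
        </subcommand>
      </containerconfig>
    </type>
  </preferences>
  <!-- the package list: a machine-readable declaration of the content -->
  <packages type="image">
    <package name="apache2"/>
  </packages>
</image>
```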
So it's not the standard Dockerfile anymore; you get a configuration file, the config.kiwi, and you get a root file system which contains our index.html file. The Kiwi file looks a little bit different, because Kiwi is a declaration of how an image should look. You have an XML file which contains all the information about what should be in the container. There are two elements which are important. One is that you declare an image type of docker; that means a Docker image is built. You define the command which is run; that's basically the same content, but defined in XML. And you add the package in the packages section of the Kiwi configuration. That means you now have a machine-readable definition of what is in your container. It's not the Dockerfile, which basically contains whatever, the dump of your bash history or whatever you have put into it.

Then the same magic happens. When Kiwi builds the container, it shows up under images, not containers. It succeeds, and you actually get some more files here. You get the docker.tar.xz, so that's the actual container image, but you also get a checksum, some metadata, a report of the build, and a list of the packages, because Kiwi really knows what is in the container. And the result is the same.

So to put that on one slide, as a kind of how-to: if you want to build containers in the build service, start at the image templates page; that's the easiest way to get started. Choose one template. Then you can run osc to edit your files locally; that's the standard process you might be familiar with if you're using it for building packages. Edit your files, upload them, and let the build service do its magic. Download the images, import them into Docker with docker load -i and the name of the file, and then you can do whatever your magic is with containers: deploy them with Docker, whatever you want to do.
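Put together, the how-to above could look something like this on the command line (project and file names are placeholders, not from the talk):

```shell
# Check out the project you created from an image template
osc checkout home:myuser:containers apache-image
cd home:myuser:containers/apache-image

# Edit the files locally, then upload the changes
$EDITOR Dockerfile index.html
osc commit -m "update container definition"

# Once the build service has rebuilt the image, download the result
osc getbinaries

# Import the image into Docker and run it
docker load -i <downloaded image tar>
docker run <image name>
```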
The benefits of this are that you get automatic rebuilds on dependency changes. Containers are in the same line of dependencies as all the packages, so if glibc changes, your container automatically gets rebuilt. You get the version history and the changelog, the same as you would get for every other package. You get reproducible builds, in the sense that you have it all in the build service and you can reproduce the build whenever you want; you have the sources stored there, and all dependencies are actually stored in version control. And you get the build service development workflow; that means you can do submit requests, you can share work, you can accept submit requests. In the end, you're building at the same place where you build packages, so you get a really nice, consistent view on it. And with Kiwi you get some more advantages, as I said: some more metadata like the checksums and the package list, and this well-defined description of how the container actually looks. So if you run containers, the build service can build them for you continuously, and you get all these benefits. That's what I wanted to tell you. If we have some time for questions, I'm happy to answer them. So what are your questions? There's one in the back.

Is this actually integrated with Docker content trust or something like that, so that it's signed? The Docker build feature is pretty new; it was just released, and there's work in progress at the moment to integrate the signing of containers. That will be there in the next couple of weeks or months. So that's underway, yeah.

And can you push directly to a registry from the build service? That's another part which is not implemented yet. You can add a hook to push to Docker Hub, but the build service will get a registry to publish the images, so you don't have to do this manual step of downloading and importing.
So you will get a registry you can directly pull from. Yeah. Last question.

Do you already have, or are you planning on implementing, security scans for those Docker images? In the context of the build service, I'm not aware of any plans there. We have some tooling for doing security scans in containers with zypper; there's a zypper plugin which can do that. And if you build with Kiwi, you have this package manifest you can use for cross-comparison. But there's probably a lot more which could be done, yeah. Okay. Thank you very much.