My name is Laurent, and I'm from Quebec, Canada. It's my first time here in Europe, and I'm quite excited about it. I work for Docker, maintaining the Docker Official Images. Before joining Docker, I entered the Docker ecosystem in 2016, when I became an official maintainer of the Node.js Docker Official Image. I wrote the Alpine Linux variant of that image, along with many security improvements and the tooling for maintaining it.

If you're here, I assume security is very important to you. I also assume you know that security is a lot of work. The distroless image gives us the promise of reducing the amount of security work we do, so we only spend effort on the things we actually use and actually need.

Before we can define what distroless is, we should define what a distro is, because distroless only makes sense in comparison to a distro. Here's a pretty good definition I found: a Linux distribution, often abbreviated distro, is an operating system made from a software collection that includes a Linux kernel and often a package management system. A typical Linux distribution comprises a Linux kernel, an init system, GNU tools and libraries, documentation, and many other types of software. When we're talking about container images, it's practically the same; the only difference is the lack of a kernel, because the kernel is shared with the host. So this definition still applies in the world of containers.

Here's a list of common Linux distros you're probably aware of: Debian, Ubuntu, Fedora, Red Hat — which seems to have a pretty sizable representation in this room today. One of the many ways these distributions differentiate themselves from one another is where they land on the usability-versus-security dilemma. Linux without the GNU tools is very hard to use, so each distribution has to choose how many tools it provides to let its users do the work they do.
But everyone has different workflows and does different things, so it's a bit of a guessing game. Some distributions prefer to provide more tools; others take a more minimalist approach and let the user figure out which additional components to add. One really important thing to note, though: this dilemma does not make one image more or less secure than another. The distributions that fall on the usability side — like almost every Linux distribution — have very busy security teams that constantly put out patches and address security problems. What it does mean is that it puts the onus on the user to stay up to date.

So what's a distroless image? It's an image that contains just a minimal amount of software: no package manager, no shell, and no web client of any kind that could fetch external content from the web. So if your container is ever compromised, the amount of damage that can be inflicted is greatly reduced. That's a pretty minimalistic example; we'll get to a more complicated one.

Creating distroless images is more difficult, but thankfully we now have modern tools that let us create them in a much more user-friendly way. One of those tools is the multi-stage build. I was around in the Docker ecosystem before that was a thing, when you had to write monstrous Dockerfiles with hundreds and hundreds of lines of installing and cleaning up. With multi-stage builds, you don't have to do this. You can strategically determine what your build-time dependencies are and keep them in a stage that can be cached — which speeds up your build — without them ending up in the final image. It's a very powerful tool that allows us to create much more minimalistic images. The example on the slide is an actual viable way to create a distroless image using Go, because Go compiles the runtime into the binary.
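A minimal sketch of the kind of multi-stage Dockerfile being described — the application path and names are placeholders for illustration, not from the talk's slides:

```dockerfile
# Build stage: has the full Go toolchain, none of which reaches the final image.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
# Static binary, so it runs without a libc in the final stage.
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Final stage: starts from scratch — no shell, no package manager.
FROM scratch
# TLS root certificates, copied over in case the app makes HTTPS calls.
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

Only the artifacts explicitly copied with `COPY --from=build` end up in the final image; the cached build stage is discarded.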
So in this case, you build the binary in the first stage, copy it into the second stage along with the certificates if you need them, and there you have it: a distroless image. It's not always that simple.

The second tool that helps us create more complicated images while reducing the complexity is BuildKit. BuildKit brought a completely new architecture to image building and introduced the notion of frontends, which give you a different, more declarative way to create Docker images. Instead of saying how, you define the what, and what happens in the backend is hidden from you. An example is a frontend from cmdjulian called mopy, which lets you create a custom Python image. It looks very different from your typical Dockerfile, but in the end the result is the same as if you were using a Dockerfile.

With that in mind: is your image really distroless? A great many images have gone to the step of starting from scratch, not including a base distribution, and therefore not inheriting a package manager. However, many of them still include a shell and tools. Doesn't that defeat the purpose of distroless? There's a very good reason why it's done, but it doesn't have to be that way. The reason is similar to how multi-stage builds allowed us to separate build-time dependencies from runtime ones: there are also configuration-time dependencies that are only needed for the first few seconds of the container's lifecycle. All these tools you don't really need linger on, and they're potentially the most problematic software to keep: having bash and busybox in a compromised container makes it possible to pull in external, untrusted content. There has to be a better way to do this, and thankfully there is — the init container to the rescue. Init containers let us do the exact same thing as multi-stage builds, except using two containers instead of one.
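For context on how a frontend gets selected: BuildKit reads a `# syntax=` directive on the first line of the build file, which names the frontend image that interprets everything that follows. The standard Dockerfile frontend is pinned the same way a custom one would be (the custom reference below is a made-up placeholder, not a real image):

```dockerfile
# syntax=docker/dockerfile:1
# The directive above tells BuildKit which frontend image interprets this file.
# A custom, more declarative frontend is selected the same way, e.g.:
#   # syntax=example.org/my-python-frontend:latest   (placeholder reference)
FROM python:3.12-slim
```

This is the mechanism that lets a file that looks nothing like a Dockerfile still produce a regular image through the same build backend.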
One of them can be full-fledged, with bash and all the tools you need; it can be pretty big. The thing is, it doesn't have to be exposed to the internet. The only thing it does is configure the runtime of your application. Usually all that's needed is a shared volume mount between the two: the first container starts, does its configuration, shuts down successfully, and then the main container starts. That gives you a much smaller image, but it also creates images that are very purposeful.

If you're working with a relational database, you might be working with schema migrations, so chances are you're already doing something similar: before starting a new version of your app, some steps have to be applied to the database, and an init container runs those steps. We can use the same pattern for configuration dependencies.

So now on to a demo. This is not the way to create a distroless image, but it's a really good way to show how it could work behind the scenes. This is a distroless Postgres. The first part here creates the user — the configuration is a bit particular with Postgres. The biggest part is here: for my distroless build I still used Alpine, but Alpine is only used behind the scenes. I'm not inheriting the package manager; I'm using it at build time to fetch all the dependencies. I give it the package list that Postgres needs, and instead of installing the packages, I put them in a separate folder that I can copy into the final stage. So that's what we have here: we copy the user configuration, we copy the packages, and then — just for simplicity — I'm including the Alpine Postgres binaries from the DOI (Docker Official Image), though it could install from source instead.

I'll show the Kubernetes way first, because Kubernetes is where init containers came from, but it's still possible to do this with Compose.
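A minimal sketch of a pod along the lines described — the image names, password path, and mount path are placeholders, not the ones from the talk's repo, and the `docker-ensure-initdb.sh` entrypoint is the "initialize, then exit" script shipped by the Postgres Docker Official Image that the talk refers to:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: distroless-postgres
spec:
  volumes:
    # Shared between the init container and the main container:
    # the init container writes the initialized data directory here.
    - name: pgdata
      emptyDir: {}
  initContainers:
    # Full-fledged image with a shell and initdb; runs once at startup
    # and is never exposed afterwards.
    - name: init
      image: postgres:16-alpine                 # placeholder tag
      command: ["docker-ensure-initdb.sh"]      # initializes, then exits
      env:
        - name: POSTGRES_PASSWORD_FILE
          value: /run/secrets/pg-password       # placeholder path
      volumeMounts:
        - name: pgdata
          mountPath: /var/lib/postgresql/data
  containers:
    # Distroless image: no shell, no package manager; it only needs
    # the already-initialized data directory from the shared volume.
    - name: postgres
      image: example/distroless-postgres:latest # placeholder
      volumeMounts:
        - name: pgdata
          mountPath: /var/lib/postgresql/data
```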
So even if you're not using Kubernetes, you can still take advantage of this solution. In Kubernetes, this is a simple pod. I have the main container here — the image is my distroless Postgres — and I set up the configuration for the users. The most important part is the shared volume. For the init container, we have the parameters that are required for initialization. A really cool perk of this is that the admin user never gets exposed: the final Postgres container doesn't even know what its password is, which is pretty cool, and the password doesn't have to stay in an environment variable after it's no longer needed. So I mount the password here and add the same volume mount. The init container runs a separate command: it's from the Postgres DOI, but with an alternate entrypoint that initializes the database and stops short of starting the server. Once it's done, it exits successfully and allows the next container to start.

Now my init container is commented out, so this should fail. Oh — this is probably because I don't have internet. Does anyone know what the Wi-Fi is here? There you go, it's creating. But now we're in a CrashLoopBackOff: the container doesn't have the correct configuration, so it refuses to start. If we start again with the init container, the status shows initializing for a few seconds, and then the instance is running.

For Compose, it's pretty similar. We have our main container here and our init container here, the volume here, and the environment variables here. The part of Docker Compose that acts the same way as the init container is `depends_on` with the condition `service_completed_successfully`, which runs a container once another one has succeeded. If we run this, we see the typical wall of text we'd get from a Postgres container, but now it's separated into stages: the yellow part here is all the initialization.
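The Compose setup described can be sketched like this — again with placeholder image names and password, and `docker-ensure-initdb.sh` as the initialize-and-exit entrypoint from the Postgres Docker Official Image; the `service_completed_successfully` condition is standard Compose syntax:

```yaml
services:
  init:
    # Full-fledged image, used only for one-time initialization.
    image: postgres:16-alpine                   # placeholder tag
    command: ["docker-ensure-initdb.sh"]        # initializes, then exits
    environment:
      POSTGRES_PASSWORD: example-password       # placeholder
    volumes:
      - pgdata:/var/lib/postgresql/data

  postgres:
    # Distroless image; only starts once init has exited successfully.
    image: example/distroless-postgres:latest   # placeholder
    depends_on:
      init:
        condition: service_completed_successfully
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```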
And it tells us here that the initialization is done, and then the main container starts successfully. So this was a demonstration of how we can make our distroless images truly distroless, by getting rid of configuration-time dependencies using the advanced tools at our disposal to create better images. Thank you. Here's a QR code for the repo with all this code — if you want to play with it or comment on it, it would be my pleasure to respond. Are there any questions?

[Audience question about frontends.] So, the frontend — what it is. The Dockerfile is now one frontend for the build system, but it's not the only way. The Dockerfile frontend translates the instructions in your Dockerfile into low-level build commands. But you can have other frontends — other files that do the same conversion — so that you have different ways to programmatically create images. Does that make sense? Thank you very much.