Welcome, everybody. Thank you all for coming to this session about BuildKit. My name is Tõnis Tiigi. I'm a software engineer at Docker. We're both maintainers of BuildKit.

What is BuildKit? Before we get into this, maybe we should take a step back and look at how container images are built today. The way it most likely looks is that you have a Dockerfile, which is basically a sequence of steps that execute in containers for your build, and then you just run docker build to build this Dockerfile. What that does is use the build component in Docker Engine to actually invoke the build. This is how almost all containers are built today, and the build component has been there since the very early days of Docker.

So why did we need to change this? What's the issue with the builder, and why do we need to rebuild it? The main reason is that the builder was started in the very early days of Docker, and we have quite a lot of different use cases now. We need to run much more complex builds, and we also need a much better user experience and developer experience. One of the issues is that the builder is very tightly modeled after the Dockerfile, which means we basically can't change anything, because the Dockerfile has compatibility guarantees. I also think we could do better on performance. Another issue is that it's not completely isolated from the rest of Docker Engine. For example, you could do a docker stop on a container that's running as part of your build, and in the old builder that will stop your build, which is definitely something that shouldn't happen. And because it's bundled into Docker Engine, you can't really use this code for anything else; it's just for the Docker use case.

BuildKit was made to solve all of those issues, so it comes with dozens of new features and bug fixes. It's way faster, we have better caching, and we can parallelize work. And it's not specific to Dockerfiles anymore.
You can basically build almost any language and dynamically load the language definition with a Docker image. It's properly componentized: for example, the Dockerfile part is completely separate from the BuildKit core, so if you don't build Dockerfiles, you don't need it at all. It's not really a new opinionated builder; it's more like a toolkit for building other builders. So it's very flexible and just solves the hard problems for you.

BuildKit is a completely new project with a completely fresh code base, but of course we don't want to reinvent the wheel when we don't have to. So it's based on containerd work. It uses lots of code from containerd: whenever we do an image pull, we use the containerd code, and we use containerd snapshotters for storage and things like that. You can even use BuildKit directly with the containerd daemon and let containerd manage the storage for you. We also want to play nice with all the open container standards. For example, when you execute a process as part of your build, it runs through the OCI runtime spec, so if you have a different implementation of that specification, it's very easy to plug it in. And your build results can be exported with the OCI image spec, as OCI image tarballs. So it supports all the new standards.

So this was a quick introduction to BuildKit. Next we will talk about a couple of new features in BuildKit. It has a bunch of new features, and we don't have time to cover all of them, but I will look at some of them, such as the optimization of docker build.

With the legacy docker build, dependencies across Dockerfile instructions are not computed correctly. If you modified a line in the Dockerfile, the cache for the next line was always invalidated. In this example we have three lines: the first line is the base image, and the second line specifies which TCP port to expose.
And the third line runs apt-get update for installing packages. If we modify the second line and change the port number, the cache for the next line, apt-get update, was always invalidated, even though that instruction doesn't depend on which TCP port is exposed. So it was not very effective.

The legacy docker build also has issues in scheduling. In this Dockerfile we have three stages, and stage two depends on stage zero and stage one. So in theory we should be able to execute stage zero and stage one concurrently. But in the actual legacy docker build implementation, everything was sequential. So we have no concurrency in the legacy docker build.

BuildKit can analyze dependencies accurately by using LLB, which is a new low-level format for building images. It's very similar to LLVM IR, but for building images. LLB has a graph structure, so we can analyze dependencies accurately, we can do efficient caching, and we can do concurrent execution for multi-stage Dockerfiles. LLB is not for humans; it's for machines. It's encoded in Protocol Buffers. LLB is typically compiled from human-readable languages such as the Dockerfile, but you don't necessarily need to use a Dockerfile. We also have frontends, which are programs that compile high-level languages into LLB.
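The multi-stage scheduling point can be sketched with a minimal Dockerfile. This is an illustration, not the exact file from the slides; the base image, file names, and commands are made up:

```dockerfile
# Stage 0 and stage 1 do not depend on each other, so BuildKit's
# LLB dependency graph lets them build concurrently.
FROM golang:1.12 AS stage0
WORKDIR /src
COPY server.go .
RUN go build -o /out/server server.go

FROM golang:1.12 AS stage1
WORKDIR /src
COPY client.go .
RUN go build -o /out/client client.go

# Stage 2 depends on both, so it starts only after they finish.
FROM alpine
COPY --from=stage0 /out/server /usr/local/bin/
COPY --from=stage1 /out/client /usr/local/bin/
```

The legacy builder would run these stages strictly top to bottom; BuildKit walks the graph and runs stage0 and stage1 in parallel.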
For example, we have Buildpacks, ported from Heroku and Cloud Foundry. We also have Mockerfile and Gockerfile; these are very similar to the Dockerfile, but optimized for very specific use cases. We also have Docker Assemble, available in the Docker enterprise products.

This is how LLB works. We have a source op for FROM instructions, and we have an exec op for RUN instructions. So LLB ops are very similar to Dockerfile instructions, but not always the same. For example, we don't have an LLB op for the EXPOSE instruction, because EXPOSE is just metadata; it's not a real instruction. And we have the dependencies across the file op and the stage 0 ops, so in this example we can execute stage 0 and stage 1 in parallel. So it's very fast. For example, if we build github.com/moby/moby with the legacy docker build, it takes 5 minutes and 42 seconds, but with BuildKit it just takes 2 minutes and 50 seconds. So it's two times faster.

And with BuildKit, you can extend the Dockerfile with custom syntax. In the first line of the Dockerfile, you can specify a frontend image that translates the Dockerfile into LLB. For example, you can specify: syntax = docker/dockerfile:1.1-experimental. You can also specify your own frontend, and you can extend the syntax as you like.

For example, with dockerfile:1.1-experimental we have a new instruction, RUN --mount=type=cache. With a cache mount, you can preserve caches for compilers such as go build, and caches for package managers such as apt-get, yum, npm, or whatever. In this example, you specify the cache volume for /root/.cache, which is used for caching Go build objects. This is very fast. For example, if we build BuildKit itself using docker build with the legacy builder, it took 139 seconds. If you enable BuildKit, it just takes 31 seconds. And if you also enable RUN --mount=type=cache, it just takes 3.9 seconds. So this is more than 33 times faster than the legacy docker build. And you can also use RUN --mount=type=secret
with the experimental Dockerfile frontend. With RUN --mount=type=secret, you can access private assets, such as private GitHub repositories or private S3 buckets, without leaving the credential file in the final image. In this example, you specify RUN --mount=type=secret, set id=aws, and set the target of the mount point, such as /root/.aws/credentials. Then you can run the aws s3 cp command to access files in your private S3 buckets. And you can use this Dockerfile with buildctl, passing --secret id=aws,src=$HOME/.aws/credentials.

Some of you may think that you can just use COPY instructions to inject credential files into the container, and delete the credentials after you run the aws s3 cp command. But please don't do this, because the credential file still remains in the layer archive even after you run the rm command to remove it. Some of you may also think that you can use docker build --build-arg to inject credentials as environment variables, but this is not secure either, because build-arg values are shown by the docker history command. So anyone who has read access to your image can read your credentials as well. This is not secure.

So these were a couple of examples of new features in BuildKit. Now let's see how you can actually use it. The main takeaway from this is that there are many different ways to start using BuildKit. The simplest one is to just use it through Docker: it's integrated into docker build, so you can run docker build and it will just switch to the BuildKit backend. We have a new product in Docker called buildx that I will show a little bit later; it's also using BuildKit. There are other tools as well; for example, img is basically a standalone version of BuildKit for the cases where you don't want a Docker daemon. And there are various projects that integrate with BuildKit, such as Tekton.
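The two RUN --mount examples above can be sketched in one Dockerfile. This is a minimal illustration, not the exact file from the slides; the bucket name and paths are placeholders, and it assumes the aws CLI is available in the image:

```dockerfile
# syntax = docker/dockerfile:1.1-experimental
FROM golang:1.12

# Cache mount: /root/.cache persists across builds, so go build can
# reuse compiled objects even when the layer cache is invalidated.
COPY . /src
RUN --mount=type=cache,target=/root/.cache \
    cd /src && go build -o /out/app .

# Secret mount: the credential file is visible only during this RUN
# step and is never written into any image layer.
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials \
    aws s3 cp s3://my-private-bucket/asset.tar.gz /tmp/
```

A build with buildctl would then pass the secret in from the host:

```shell
buildctl build --frontend dockerfile.v0 \
  --local context=. --local dockerfile=. \
  --secret id=aws,src=$HOME/.aws/credentials
```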
Basically, you can use it in any combination: inside a container, in Kubernetes, with a containerd daemon or without one, rootless, and so on. But yeah, the simplest one is docker build. It's integrated into docker build since 18.09, so it's in the current stable version. We don't have Windows support yet; that's why it's opt-in. You need to set the environment variable DOCKER_BUILDKIT=1. If you do that, then once you run docker build again, your output will switch to the new BuildKit output. You will see steps running in parallel and how much time each takes, so you know that you're now using BuildKit. All the flags are the same and should just work, so the migration should be very smooth.

So now about docker buildx. Buildx is a CLI plugin on new versions of Docker. It's basically a next-generation build command for Docker. It's very similar to docker build: it has the same build UI and the same flags, so it's very easy to start using. But it's using a full BuildKit daemon as a backend, not only the integrated part in the Docker Engine, and it has a bunch of new features beyond the single build command. It can create namespaced instances of builders. It can basically create a build cluster, target a set of nodes with it, and manage how you want a specific build to be run. And buildx supports a driver concept; this is how buildx executes builds internally. For example, with the container driver, BuildKit itself runs inside a Docker container. What's cool about that is that you can use buildx with basically any version of Docker Engine. For example, when we add some new build features, you don't necessarily need to upgrade to a new version of Docker Engine to start using them. You can use them with any older Docker Engine.
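On the command line, the opt-in and the buildx variant just described look roughly like this; the builder and image names are made up for the example, and this assumes Docker 18.09+ (and 19.03+ with buildx for the second part):

```shell
# Opt in to the BuildKit backend of the classic docker build.
export DOCKER_BUILDKIT=1
docker build -t myapp .

# Or use buildx: with the container driver, BuildKit runs inside a
# Docker container, so it also works against older Docker Engines.
docker buildx create --name mybuilder --use
docker buildx build -t myapp .
```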
It just runs inside a container, completely standalone. And because buildx uses the full BuildKit, we can do some extra features that we can't do with the Docker integration yet. Here are a couple of examples. For example, we can do remote caching. This is very important for your CI, for example, where your build starts on a fresh machine that doesn't have the local cache anymore. Now you can connect it to an external cache source, and you can still get a much faster build this way and basically save some trees.

We also have very good support for multi-platform images in buildx. For this, you can just use the --platform flag and set all the platforms that you want to build for. We will build for all of those platforms and combine them together into a multi-platform image. For example, in this case your image will run on both amd64 and arm64 machines. If you have QEMU emulation support on your machine, we will automatically take advantage of it. It can also use multiple native nodes: for example on AWS, you can have an amd64 and an arm64 node, connect them both into buildx, run a single build, and it will actually build on both of those nodes in parallel. And we also have very good cross-compilation support through multi-stage builds and the Dockerfile.

So let me quickly show buildx as a demo here. As I said, we have a docker buildx command, and you can see that there are a bunch of subcommands. The most important is probably build. Build is basically very similar to docker build; you can just switch to it and it will all be familiar. But there's a bunch of other commands here. For example, there's a command for creating new builders, so let's create a new one.
And let's inspect what we just created, and boot this one as well. So now we're booting this builder. You don't actually need to do this; it's done as part of your build flow as well. You can see that this is our new namespaced builder, and these are all the platforms that this machine supports. And it's running inside a Docker container: if I do docker ps here, you can see that there's a new container, and this is the container where the builder is actually running. So it's not using the daemon directly; it's just running inside this container.

And I have a simple project here, just a simple Dockerfile, and let's try to build it into a multi-platform image. So let's do docker buildx build, specify some platforms (let's do amd64, arm64, and arm), give it a name, and push it right away to the registry, because this is how you mostly want to handle multi-platform images. And let's build this one; see, it's quite fast. And it's done.

What you see here is that we actually executed all of those Dockerfile commands three times. You can see that this command here, for example, ran for arm, then for amd64, and then for arm64. We created a manifest for each of those platforms, and in the end we joined them all together into a manifest list that makes up the multi-platform image.

So let's inspect what we just built: imagetools inspect. Here's a command that you can use to inspect things in the registry. We'll see that this is the image we just built; it's a multi-platform image, and these are the three sub-manifests that will be used when you run it. So if I just run this image here, you will see that it will greet me and say that this machine is x86. But let's take another one, for example this arm64 one, which will run on a native arm machine automatically, and run it with Docker.
We'll see that this sub-image is now arm64. So this was a super easy way to do a single build and get a multi-platform image, and how, in the case of arm for example, you can start to take advantage of all the optimized hardware that's coming out now.

And it's not only for the well-known platforms like x86 and arm. We've also used the same thing for some other crazy experiments. You can use the same tools to build WebAssembly containers, and you can actually run them either with containerd, using a special shim, or there's also a tool I created that allows you to run those wasm containers on basically any machine, without any requirement for a Docker daemon, for example. And there's also RISC-V, which is picking up popularity, so you can use the same tools already to build some early RISC-V containers. If you're interested in any of those topics, you can follow up from those links.

So this was how you can use BuildKit with Docker. Now, why do we want to build images on Kubernetes? I think there are two different motivations. The first one is CI/CD.
We have a bunch of BuildKit pods in a cluster, and we can load-balance across this cluster. And we have some pod for connecting to the BuildKit pods; that pod can be Jenkins or Tekton or any CI/CD platform, probably invoked by some webhook. The second motivation is developer experience. You write code on some laptop with a poor CPU and RAM, flaky Wi-Fi, and a poor battery. This laptop is enough for writing code, but it's not good for building complex images. So you can just migrate your builds onto the cluster and load-balance across these pods, which have rich CPU and memory, a stable and fast network, and a stable power supply.

Previously, the common pattern for building images on Kubernetes was to run a Docker pod with a bind mount. But this is not secure: if the pod gets compromised, the host can get compromised as well, because /var/run/docker.sock provides full privileges over the host. Another pattern was to run a Docker-in-Docker pod with securityContext.privileged, but this is not secure either, apparently.

For BuildKit, we support rootless mode. That means running the BuildKit daemon as a non-root user, so as to protect the host from potential BuildKit vulnerabilities. Even if BuildKit gets compromised, the host cannot be compromised. This is implemented using user namespaces. For RUN Dockerfile instructions, you can gain fake privileges inside a user namespace, so you can still run privileged-looking commands such as apt-get, yum, dnf, or whatever. To run BuildKit on Kubernetes, you don't need any extra securityContext such as securityContext.privileged, but currently you need to disable seccomp and AppArmor, because we need to nest containers on top of the BuildKit container.

There's also a very similar tool called kaniko, but it's different. Kaniko still runs as the root user, but it's kind of unprivileged, so you don't need to disable seccomp and AppArmor. So kaniko might be able to mitigate some vulnerabilities that BuildKit
cannot mitigate, and vice versa. Rootless BuildKit might still be weak against some kernel vulnerabilities that kaniko could mitigate, and kaniko might still be weak against some runc container-breakout vulnerabilities that rootless BuildKit can mitigate.

The next topic is deployment strategy. We can deploy BuildKit as just a Deployment, or a DaemonSet, or a StatefulSet, or even just as a Job, without separate daemon pods. The most typical choice is a Deployment, but you can also consider using a DaemonSet: it has optimal load balancing, but it's not optimal for caching when you have a lot of nodes. You can also consider using a StatefulSet; it's good for consistent hashing, which we will discuss later, but it has drawbacks in scheduling. And you can also consider using a Job. With a Job, you need to put the client and an ephemeral daemon in a single container, so it has some drawbacks for caching, but you don't need to manage the lifecycle of the daemons. So probably a Job is the easiest deployment.

For caching, each BuildKit daemon pod has its own cache, and the BuildKit pods can share cache using a registry. For load balancing, we can just use a headless service with round robin. When you build an image, the request is handled by some BuildKit daemon, which can import cache from the registry and export the updated cache back to the registry. But the remote cache on the registry is slow compared to the daemon's local cache. For example, a build takes 2 minutes and 50 seconds without cache; when you have cache in the registry, it takes 36 seconds. So that's very fast, but it's still slow compared to the daemon's local cache, with which it just takes 0.5 seconds. That's more than 70 times faster than the remote cache on the registry. So if you want to make use of the daemon's local cache, you should consider using consistent hashing, so you can stick a build request to a specific pod in a StatefulSet and almost always hit the daemon's local cache. For example, we have three
BuildKit pods, buildkit-0, buildkit-1, and buildkit-2, and we have three Dockerfiles. We apply the same hashing function to the BuildKit pod names and the Dockerfile names in a circular hashing space, so, for example, two of the Dockerfiles get assigned to buildkit-2, and the third gets assigned to buildkit-1. Even if you modify the content of a Dockerfile, or you add or remove nodes in the cluster, you can almost always hit the cache in the same daemon pod. But this is not optimal for load balancing.

Yeah, so just a recap. BuildKit is a modern builder toolkit built on top of the next generation of container tools. It has significant advantages over any previous tools; basically, it will beat any other builder in any benchmark, as far as I know. You can start to use it today. You can use it with Docker, you can use it with Kubernetes; basically any tool you use probably has some integration with BuildKit. And it's an open platform for collaboration around builds and for new innovative solutions for building containers. So make sure you opt in to BuildKit in Docker and start using it. If you're interested in the project, there are lots of interesting things in the works, so make sure to join us in the repository on GitHub. Thank you very much.

I think we have a little bit of time for questions. Any questions?

Sorry if I missed it, but can BuildKit do deterministic builds?

Sorry?

Can BuildKit do deterministic builds?

Depends on how you define deterministic. In most cases, yes; in almost all cases it's good enough. But if you're going after preserving timestamps and things like that, then at the moment you will need to do some things manually inside the Dockerfile for that.

You can strip all timestamps?
Yes. For example, we do some things better than the old builder, which for example timestamps all your images and so on. We try to avoid that: if you're using cache, then we don't timestamp it again, and we do some things like that.

Awesome. Thank you.

Hello. My question is: you just showed us docker buildx. I want to know whether buildx is formally supported by Docker or just maintained by yourself?

So buildx is part of the Docker organization, and it's still open source. It ships with the current beta version of Docker, so you can get it as part of the Community Edition, and you can contribute to it and things like that. But it's a little bit more opinionated than BuildKit itself; that's why it's in the Docker organization and not in the Moby organization at the moment.

Another question: are there any new features or big changes in the pipeline for the next release? Any such plan?

What's the question?

My question is: are there any other features in the pipeline that may be implemented in the future?

Yeah, you can look at the GitHub repository. We have a lot of issues filed with feature requests and enhancements. I think the big ones are that we want to do fully distributed builds, and we want to make the developer flow much smoother, so we can add some more debugging capabilities and things like that as well. Those will be really cool next features.

Okay, thank you.

We also have a plan for nested Dockerfiles, so you can reference another Dockerfile from your Dockerfile.

Okay.