Hi, I'm Daniel Hiltgen. And I'm Patrick Devine, and we're engineers at VMware. We want to introduce a new project we've been working on called BuildKit CLI for kubectl, which we realize is a bit of a mouthful.

We've been working with containers and container technology for about seven years now, and we actually worked at Docker for about five years. One of the things Docker did really well was provide a simple way of building and running container images: it's just two steps in the CLI, a docker build and a docker run. When I was working at Docker, I worked on the Docker doodles, if anyone knows what those are, and being able to iterate with build and run made creating those doodles a really fast process. Kubernetes is great as an operations platform. It's not quite as easy to use as a developer platform, but we think it could be.

So I'm going to do a quick demo of a new doodle that I've created. Let me explain a little bit about what we've got here. This is a single-node Kubernetes cluster set up on my Mac laptop, running minikube, but this should work just as well on any other flavor of Kubernetes. I've already installed the kubectl build CLI plugin, which I'm just going to run here. Because I haven't run kubectl build yet, it's setting up a builder; on subsequent runs you won't need to go through that step. Now it's taking the Dockerfile sitting inside this directory and building the various stages. In fact, there was a cache miss, which is why it's pulling part of Alpine. It's fetching some of the dependencies specified in the Dockerfile, and now it's actually compiling this particular doodle. Now that the image has been loaded by the builder into the local Docker runtime, I can do a kubectl run to create a pod and run it directly. And there we go, there's the doodle.

All right, let's take a look at how this actually works. As you saw in the demo, the first thing that happens when the CLI runs is it checks to see if there's an existing builder running. If not, it starts one up for you with default settings. Once the builder pod is running, the CLI uses the equivalent of a kubectl exec to get a pipe into the pod, then uses the BuildKit gRPC API to talk to the builder over that pipe. By using exec to talk to the builder, we're able to rely on Kubernetes-native RBAC for access control.

Now, by default these builder pods are privileged: we mount the container runtime socket so they can talk to the runtime for that cluster node. What's cool about this is that every image you build is immediately available for the Kubernetes cluster to use. You wouldn't want to run the builder this way on a production cluster, since anyone with exec permissions into the pod would be able to inject images into your cluster, but for a development cluster this is extremely powerful and efficient. You can choose to disable this and run a non-privileged builder, but then you'll have to push the image to a registry or save it off locally in order to use it. If you do want to push to a registry, we use standard Kubernetes image pull secrets so the builder can push directly.
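In command form, the inner loop from that demo boils down to a couple of commands. A sketch, with illustrative image and pod names (not the ones used in the talk):

    kubectl build -t my-doodle .      # first invocation also bootstraps the in-cluster builder
    kubectl run doodle --image=my-doodle --image-pull-policy=Never
    kubectl logs doodle               # watch the doodle's output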
So if you have multiple nodes in your cluster, you can scale up the deployment for the builder to get pods running on all the nodes. When you build an image, it gets built on one node, and at the end of the build the CLI helps transfer that image across to all the other pods, which makes your image available on every node. So if you try to run a pod with the new image you just built, it won't matter where Kubernetes schedules it. This does mean you'll need a fast network between the CLI and your cluster, unless your images are really small. If you push to a registry, we skip this replication step. In the future, we're planning to implement a pod-to-pod transfer model so that you can build larger images on a distant cluster without a performance penalty.

Okay, so why did we build this thing? We know there are a lot of different ways to build container images in Kubernetes out there, and some of those tools are actually really great. There are, however, still a lot of people using docker build. It's just really easy; it just works. A lot of those other tools, though, require you to use a registry, many of them require a lot of setup, and the images aren't available locally. The other thing we see happening around Kubernetes is that people are moving away from dockerd as the default container runtime, but people have built up scripts and automation and muscle memory around building images. So we wanted to create something close to the experience they already have.

The BuildKit project was started back in 2017 to create a more powerful toolkit for converting source into build artifacts, like containers. Patrick and I were both working at Docker at the time. We were not part of the BuildKit project itself, although we were working on some downstream projects that use BuildKit. BuildKit itself is a really great tool, and it's compatible with the latest Dockerfile features. It takes your Dockerfile, creates a graph of all the build steps, and then runs those steps in parallel to create your container image. It only transfers files from your local directory if they're actually used during the build process, and it's smart about tracking whether files have changed, for better incremental builds. To make incremental builds faster, it can cache build results locally on the builder or even within a registry, which enables multi-node build farms to build faster. One thing I wanted to call out is that it's also really good at multi-architecture builds, which is what I use all the time for things like those Docker doodles we were looking at earlier.

So let's take a look at another demo, this one showing the power of a fast developer inner loop. Let me briefly describe the setup I've got in my environment. I've got a Mac laptop running Fusion. If I do a vmrun list, you can see I've got one VM running right now. Let's do a kubectl get nodes. You can see this is actually a two-node cluster; right now I'm not running my second node, though sometimes I'll use it for demos. For this demo I'm just going to use a single node. It's a simple Ubuntu VM running containerd as the runtime. If I do a kubectl get pods, you can see I've got nothing running, so I haven't booted up my builder yet.
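In command form, that environment check was roughly (vmrun ships with VMware Fusion; the rest is stock kubectl):

    vmrun list            # one VM running
    kubectl get nodes     # a two-node cluster, with the second node currently down
    kubectl get pods      # nothing running yet: the builder hasn't been bootstrapped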
So let's look at the demo setup itself, starting with a really simple Dockerfile. The Dockerfile uses busybox as the base layer, and all it does is copy in the command script and set that as the entrypoint. So let's take a look at that command. It's just a really simple shell script to demonstrate a fast inner loop, simulating the code you'd actually be adding to your application. All it does is spit out a date and a hello world with a number, and we can increment that number to exercise the inner loop, again simulating what a developer would be doing while writing code.

All right, then let's look at our application definition. What I've got set up here is, again, optimized for a developer inner loop. In the deployment I've got the strategy set to Recreate, so instead of doing a rolling update, it's going to immediately terminate the old pods and spin up the new ones. A few other settings of note: I've got the imagePullPolicy set to Never, to make sure I'm always using the image I've built locally. We're not trying to pull from a registry; we're trying to do a fast inner loop with local development on my local Kubernetes environment. We've got the restartPolicy set to Always, so it's always going to restart the pod, and a terminationGracePeriodSeconds of zero, which makes it faster to terminate and shut down the old pods. All of this should help build up that fast inner loop: as a developer, I write some code, compile, and then test. In a production environment you'd obviously use different settings, but this helps optimize the inner loop.

All right, so let's go ahead and actually build the image. We'll just cut and paste that in. What it's doing right now is booting up the builder, and you can see it attempted to use Docker, but the Docker runtime failed, so it's going to retry with containerd. The reason we do that is that dockerd is still the most popular container runtime out in the wild, even though a lot of folks are switching over to containerd. As many folks know, Docker actually has containerd under the covers, so if we started with containerd by default, we might incorrectly conclude that your cluster is using containerd and not realize that Docker is sitting on top of it. So for simplicity, for now, we default to Docker and then fall back to containerd if Docker is not detected. You can see that was all automatic; I didn't have to do anything. If you're manually creating the builder, you can explicitly specify that you want the containerd runtime, which short-circuits the auto-detection logic. You can see it took about 17 seconds to attempt Docker, fail, and then retry with containerd. At this point the builder is already running, so for any subsequent builds it's immediately available and I can jump right into the build.

Let's look at a few other things in the output here. We can see it's pulling the base layer, library/busybox, which it was able to resolve in the containerd image cache. Then it copied in the command, and that was pretty much it for this Dockerfile; there was no compilation or anything like we saw in Patrick's demo. Finally, it exports the result to an image and tags it with the tag I specified.
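For reference, here's a sketch of the two demo files described above; the file names and exact contents are reconstructions from the talk, not copies:

    $ cat Dockerfile
    FROM busybox
    COPY cmd.sh /cmd.sh
    ENTRYPOINT ["/cmd.sh"]

    $ cat cmd.sh
    #!/bin/sh
    # Spit out a timestamped hello world every few seconds; bump the
    # number to simulate a code change in the inner loop.
    while true; do
        echo "$(date) hello world 1"
        sleep 3
    done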
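And a sketch of the deployment, abbreviated to the settings called out above (the app name and labels are assumptions):

    $ cat deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      strategy:
        type: Recreate                       # terminate old pods immediately, no rolling update
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          terminationGracePeriodSeconds: 0   # shut old pods down fast
          restartPolicy: Always
          containers:
          - name: my-app
            image: my-app
            imagePullPolicy: Never           # always use the locally built image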
So at this point, that image is now loaded into my container runtime and available on the node. If I had my second node running, the image would have been transferred over to that second node as well, and both nodes would be able to run it. So let's go ahead and get the application running. If we do a kubectl get pods, we can now see both the buildkit builder, which is running and ready for the next time I build an image, and my app running as well.

What I'm going to do over in this terminal window is a little loop: an infinite while loop around kubectl logs --follow with a label selector for app=my-app. This lets me keep following the log output in this terminal even as I iterate. We'll go ahead and start that up, and you can see it's just continually spitting out hello world with a date stamp every three seconds or so.

So let's do a simulation of a developer inner loop: I'm going to write some code, make some changes, recompile, and see my code running. We'll modify this program and, to keep it simple, just make it say hello world 2, and write that out. Now I'm going to repeat the build command, but with one little difference: after the build, if it succeeds, I'm going to run kubectl delete pod -l app=my-app. So if the build fails it stops there, but if the build succeeds it deletes the pod where app=my-app. That lets the deployment and the Kubernetes scheduler detect that the pod is dead and automatically restart it, which picks up the new image I just built. Let's see that work. Voilà, there we go: within a second or so we've got hello world 2 coming out, and you can see the build took much less time this round. We didn't have to bootstrap the builder; it was already present and running. All we did was copy in the new command and export the layers. So there you go: a nice, fast inner loop, optimized for developers, with your images immediately available on your local system. Works on single nodes; works on multiple nodes as well.

All right, so let's look at how you'd go about customizing the builder if you wanted to. First, let's delete the builder we already have. One thing I should mention real quick: we've been showing kubectl build as the primary UX, the primary CLI command you'd run, because it feels very similar to the way docker build works. That's actually an alias for kubectl buildkit build, and we have a number of different commands underneath kubectl buildkit. If we run kubectl buildkit help, we can see build is one of the subcommands of buildkit, which again is just aliased to kubectl build. There are other commands under the buildkit top-level command: you can create a new builder, list existing builders (you can create multiple builders with different configurations), and remove builders. I was just using kubectl delete on the deployment, but you could also use the rm command under buildkit. So let's take a look at kubectl buildkit create help. There are a lot of command flags here; we're not going to go through all of them, just a quick summary.
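Spelled out, the command family looks roughly like this (subcommand names as described in the talk; treat the exact spellings as best-effort):

    kubectl buildkit help             # top level; `kubectl build` aliases `kubectl buildkit build`
    kubectl buildkit create --help    # flags for explicitly creating and tuning a builder
    kubectl buildkit ls               # list existing builders
    kubectl buildkit rm <builder>     # remove a builder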
So this allows you to manually, explicitly create a builder so you can tune and optimize its configuration for your environment. This is how you would set it up to run rootless, if you wanted a deprivileged environment. You can pass specific flags to the BuildKit daemon inside the builder to tune or modify BuildKit's behavior, and you can pass in a specific BuildKit config file if you already have one set up or want to customize it further. What I'll show real quick as an example is the runtime flag. By default it's set to auto, which does the auto-detection logic I talked about, but you can also specify the runtime explicitly. For this one I'll use containerd explicitly to skip the automatic detection: kubectl buildkit create --runtime containerd. Now the builder boots up with containerd immediately; we don't have to try Docker, fail, and then fall back to containerd. So you can see it was much faster to get started, and I'd be ready to run builds now.

So I mentioned multi-arch support a few minutes ago, and I wanted to talk about that a little more. When I say multi-arch, what I mean is that you might have nodes in your cluster running, let's say, x86 Linux or Arm Linux or even Windows. For example, I'm running a personal cluster at home with a mix of PCs and Raspberry Pis running Linux to do some home automation. So how do you get a common image tag to run across all of those nodes? You could make images with different tags for each of those architectures, but if you want a single deployment to scale across all of those nodes, it has to have a single image definition. You can already see examples of these types of images on Docker Hub right now: most of the library images are already multi-arch, so if you pull something like the Postgres or Redis images, they'll just work on whatever architecture you're using.

What's cool about BuildKit is that it lets you easily build these types of images, and there are three different ways of doing it. You can build natively on the target architectures, which we don't support quite yet but are working on. You can use an emulator such as QEMU to emulate the different architectures. And if you're using an interpreted language like Python or Node, or a compiled language that supports cross-compilation like Go or Rust or even Java, it's pretty easy to cross-compile. You can do this in languages like C and C++ too, but it usually takes a bit more effort to get your toolchain working. One big caveat I should mention with multi-arch images is that, for now, you have to push your images to a registry when you're doing the build, but we have been looking at ways to assemble them locally, which wouldn't require that push.

It's still a little tricky to get cross-compilation working; there are a bunch of changes you'll need to make to your Dockerfile. If you're familiar with multi-stage builds inside Dockerfiles, though, this should look relatively straightforward.
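Here's a sketch of what such a Dockerfile can look like, reconstructed from the description that follows; it assumes Go as the build language, and the image tags, registry name, and build flags are illustrative:

    $ cat Dockerfile
    # Stage 1: runs once per target platform, cross-compiling on the
    # build host's native architecture.
    FROM --platform=$BUILDPLATFORM golang:1.16 AS build
    ARG TARGETOS
    ARG TARGETARCH
    WORKDIR /src
    COPY . .
    RUN GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/app .

    # Stages 2 and 3: pick the correct base image per target OS.
    FROM scratch AS release-linux
    COPY --from=build /out/app /app
    ENTRYPOINT ["/app"]

    FROM mcr.microsoft.com/windows/nanoserver:1809 AS release-windows
    COPY --from=build /out/app /app.exe
    ENTRYPOINT ["app.exe"]

    # Stage 4: resolved per platform by BuildKit to assemble the final image.
    FROM release-$TARGETOS

    $ kubectl build -t registry.example.com/my-app --push \
        --platform linux/amd64,linux/arm64,windows/amd64 .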
In this particular example there are four different stages. In the first stage, we build for each of the target platforms; that stage actually gets executed once for every platform you're targeting. The second and third stages of the Dockerfile are required to get the correct base image for whatever architecture you're targeting. In this case there's a release-linux and a release-windows stage. For Linux I'm building from scratch and just copying in the built executable. For Windows I have to use Nanoserver, so unfortunately it's a lot bigger than building from scratch, but then I copy the newly built executable over to it. And of course I set up the entrypoints for both of those. The last stage is the magic sauce BuildKit uses to assemble each of the architectures. The TARGETOS, TARGETARCH, and BUILDPLATFORM variables you can see in the Dockerfile are set automatically by BuildKit; they're pulled out of the platform argument you pass when executing the kubectl build itself.

All right, so what environments can you run this on? We've tested this on Kubernetes versions 1.14 and up, so pretty much every supported stable version of Kubernetes today works. As far as runtimes, we support both containerd and dockerd, and additional runtimes could be added in the future. As far as distros, most should just work. Some Kubernetes distros, like K3d, use special tricks in how they set up their container runtime; in those cases we currently aren't able to mount the container runtime socket to load up the images. You can still use the tool on those distros, but you'll need to push your images to a registry, so you won't be able to do the immediate build-and-run we showed in the demo today.

All right, so this is a really young project. We do have native OS packaging for macOS, Windows, and Linux to make it a little easier to get installed. Go take a look, give it a try. We're always looking for help, so if this is interesting to you, give us a hand: pick up an open issue, submit a pull request, join the community, and help us define what the 1.0 release should look like. Thanks for coming and watching our talk; we'll open it up for Q&A now.