Hello, everybody. Thanks for virtually attending our talk on Cloud Native Buildpacks. My name is Jesse Brown, and I work at Salesforce. I'm also a maintainer on the Cloud Native Buildpacks project, which just graduated from CNCF Sandbox to Incubation in November.

So let's talk about what buildpacks are trying to accomplish. The primary objective of buildpacks is to take your source code and convert it into an OCI image. It doesn't just execute a single build script and then, poof, you have an image. Instead, it creates distinct layers that make sense: you've got your application layer; your runtime layer, which might have your Ruby runtime; a dependency layer, which might have your Ruby gems; and the stack layer, which is the base image that your application runs on. And this is all without you having to write any Dockerfiles.

Taking a step back, Cloud Native Buildpacks is built on the rich and successful history of buildpacks. Buildpacks started back in 2011 at Heroku. Pivotal adopted buildpacks shortly thereafter and went its own direction with them, and Cloud Native Buildpacks represents the convergence of those two lineages into a new API that takes into account our containerized world today.

Looking a little closer at an image that comes out of Cloud Native Buildpacks, it's got a few key features. First, your app image is reproducible, which is important for security purposes: as long as you give it the same source code, the same base images, and deterministic buildpacks, you'll produce the same image every time, which matters both for security and for application developers. Second, all images are built with metadata that can be inspected after the build. This can contain all sorts of things depending on the buildpack, but primarily things like your dependencies and which version of Ruby or Go you're running, as well as other metadata about the build that can be very important when you run containers at scale. And as I mentioned earlier, the layers themselves are logically mapped: a layer for your Go or Ruby runtime, a layer for your gems or your npm packages, and a layer for your application. It's up to the buildpacks how to divide this up, but the model gives them the tools needed to create fast, reproducible builds.

The Cloud Native Buildpacks project is responsible for multiple specifications, as well as a reference implementation of those specifications. The pink box on this slide is the reference implementation, which we call the lifecycle. It implements the two primary specifications. One is the platform API on the left, which describes the interface between the lifecycle and a platform, a platform being something like Fly.io, Tekton, pack, kpack, Heroku, Salesforce, VMware Tanzu, or Google Cloud. On the other side of the lifecycle is the buildpack API, which describes the interface between buildpack authors and the lifecycle that executes those buildpacks. As you can see on the left, quite a number of platforms have adopted Cloud Native Buildpacks, and we're continuing to see more and more services and applications adopt buildpacks.
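To give you a feel for the buildpack API side, here's a minimal sketch of a buildpack, which per the spec is essentially two executables, bin/detect and bin/build. The package.json check, the "node" plan entries, and the layer name are illustrative, and the exact layer metadata format depends on the buildpack API version:

    #!/usr/bin/env bash
    # bin/detect: exit 0 to opt in to the build, 100 to opt out.
    # $1 is the platform dir; $2 is the build plan file this buildpack may write to.
    [[ -f package.json ]] || exit 100
    cat >> "$2" <<'EOF'
    [[provides]]
    name = "node"
    [[requires]]
    name = "node"
    EOF

    #!/usr/bin/env bash
    # bin/build: contribute layers under the layers dir.
    # $1 = layers dir, $2 = platform dir, $3 = resolved build plan.
    set -euo pipefail
    mkdir -p "$1/node"
    # ...download and unpack a runtime into "$1/node" here...
    printf 'launch = true\ncache = true\n' > "$1/node.toml"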
And on this slide, this shows the flow that takes your application source code and a builder image and pushes them through pack, which is our CLI, to create your application image. But let's take a look at that a little closer.

Let's start off by building this simple Node.js application. As you can see, there's no Dockerfile. So what we're going to do is give this application source code to our CLI, pack, and tell it to build us an application image. I'm going to give it a Docker Hub URL, and I'm going to tell it to publish this application image to Docker Hub at the end.

One of the first things you'll see here is our phase markers, and you'll see "detecting" here. What's happening is that I'm actually using a Google builder, and Google builders are packaged with a bunch of buildpacks. You can see that we chose four buildpacks here: a Node.js runtime, a Node.js npm, a config entrypoint, and a utils label buildpack. Each buildpack in the builder has an opportunity to opt in to the build based on the application source code. So a Node buildpack would look for a package.json, and then it would get the information about which version of Node to install from that package.json. The npm one is very similar: it's looking for the package.json, and also checking whether there are any packages that need to be installed. So you can see that those buildpacks were chosen. In this builder there are also other buildpacks, like a Go buildpack or a .NET buildpack, and all of those buildpacks had an opportunity to opt in to this build process but decided not to, because there was nothing for them to do.

In the building phase here, you can see that it's outputting extra information: it's actually going to go and get the Node runtime that we need, then install all the packages for our Node application. And finally, at the end, it exports to the image that we told it to export to on Docker Hub.

Let's run the image first, just to show that this image does work. It's going to run this application. Boom. So we've got a Node.js getting-started application running on port 5000.

And let's take a look at some of the metadata that we talked about earlier. I used the pack inspect-image command and gave it an image. Using the metadata that's on the image, we can see that the stack was the Google stack. We can see that there was a run image, with a particular digest, that it was built on top of; that will be very important later when we talk about rebase. You've also got metadata about which buildpacks were selected to run and which ones contributed layers, as well as the processes that were defined by those buildpacks. And you can go a little further: buildpacks can contribute very specific data that would be useful for their platform or for the developers that use the buildpack, and that's the bill of materials. You can see here that in the bill of materials, a node entry was created by the Node buildpack with a version of 14.16.1. At scale, you can imagine how useful it is to know which applications, which containers in your cluster, are running which runtimes that need to be patched, all without cracking the image open, right?
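Just to recap that demo as commands, here's roughly what I ran. The app image name is made up, and treat the exact builder tag as an assumption on my part:

    # Build from source (no Dockerfile) and publish straight to Docker Hub
    pack build myuser/node-getting-started \
      --builder gcr.io/buildpacks/builder:v1 \
      --publish

    # Run the resulting image locally
    docker run --rm -p 5000:5000 myuser/node-getting-started

    # Inspect the metadata baked into the image's labels
    pack inspect-image myuser/node-getting-started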
All of this is built into labels on the images that are output by the Cloud Native Buildpacks tooling. And let's go back to the slides.

Now that we've built an image, let's talk about this slide in depth. In the middle, you'll see two images referenced with the label of a stack: a build image and a run image. This is conceptually very similar to multi-stage Dockerfiles. The build image is a beefier image that has all the supporting libraries for building your application, and the run image is generally a thinner image, or a completely different image, that can run the result of your build. The build image is used as part of the builder image; earlier, I used the Google builder image. A builder image consists of the lifecycle component, which is the reference implementation mentioned on the previous slide, as well as the buildpacks that are bundled with that builder image. Combined with your source code, it executes all those buildpacks, which then create dependency layers. So, as we talked about earlier, there's a Node engine dependency, an npm dependency, and the packages for your specific application; all of those can be logically put into different layers, exported, and spliced together on top of the run image that is defined for that stack. And finally, your application source code goes on top of that.

One thing to point out here is that no two applications have to be alike. That builder image we looked at earlier may have .NET, Ruby, and Node.js buildpacks, but the process is the same for the application developer. You get that consistency of being able to check out some source code from a centralized repository, run pack build on it, and have that one builder handle all of your applications. Your Elixir, your Go, your Ruby apps all work the same way. They're all going to be built on a consistent set of principles. The layers are going to be logically divided, which means that when they get pushed to a registry, they will be shared whenever they can be shared. They're built on the same run images if you use the same builders, which means you can enforce, and just leverage, the fact that those run images are the same and patch them appropriately. And you can patch with Cloud Native Buildpacks' rebase, which we'll get into a little later.

A more recent addition to the buildpacks project is a public buildpack registry. Similar to Rust's crates.io, npm, and other distribution solutions, this is targeted at application developers who are looking for buildpacks to meet their needs, as well as buildpack authors who wish to share their work with the larger buildpack ecosystem. You can search for specific versions of buildpacks published on this registry and use them from pack. In this example here, we've got a Minecraft server buildpack published by user Joe Kutner, and the registry gives you usage instructions for pack as well as the supported stacks; in this case, heroku-18 and io.buildpacks.stacks.bionic. This helps application developers choose a stack that can accomplish what they need.

So let's take a look at this real quick. Here I've got an example Go application that only uses the Heroku Go and Heroku Procfile buildpacks. Let's see what it would look like to add another buildpack. I looked at the buildpack registry and saw that Joe Kutner had a very useful SSHD buildpack.
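In pack terms, pulling that buildpack from the registry looks roughly like this. The builder, buildpack IDs, and registry locator syntax here are my best-guess illustrations, so double-check them against the registry's usage instructions:

    # Passing --buildpack overrides the builder's default order,
    # so list every buildpack the app needs
    pack build myuser/go-web-app \
      --builder heroku/buildpacks:18 \
      --buildpack heroku/go \
      --buildpack urn:cnb:registry:jkutner/sshd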
So I'm going to build this just like I did previously, and we'll see that in addition to the Go buildpack, we also get the buildpack from the registry, sshd, and you can see it there in the detecting phase. So it ran through the detection phase; I believe it just automatically tells Cloud Native Buildpacks that it wants to participate in the build. And now when we run this image, we should see a log line from the application after we pull the image from the registry. You can actually see here, even though I got an error starting the application, that the SSH service was started, and it told us that the user is going to be heroku, which is the user defined on that Heroku stack I'm using. If I want the error to go away, I can just do the Heroku thing and set a port with an environment variable. And now my Go web service here is running, and it also has an SSH service on it that I can connect to over port 2222. And that's how easily you can use buildpacks from the registry. I strongly encourage everyone to check out that registry and find buildpacks that work with, or replace, the buildpacks you use today. Thanks.

Hi, my name is Stephen Levine. I'm a core team member on the Cloud Native Buildpacks project, and I work at VMware on VMware Tanzu. Jesse just showed you what a Cloud Native Buildpacks build looks like, what it looks like to build an application using buildpacks. Today I'm going to go into a little more detail about what that build process looks like: how does a buildpack build the application image? I'm also going to talk about how the Cloud Native Buildpacks model, keeping these application layers separate, lets us patch CVEs in operating system packages, whether in at-scale scenarios or just really easily locally using the pack CLI. And also about how we plan to extend this API to let us install additional packages on a per-application basis in the future, and what it looks like to patch CVEs in that case, too.

So just to kick off: as Jesse mentioned, to do a build, you have a builder image that has a build image and buildpacks, you have source code, and a platform with the pack CLI takes those artifacts, does a build, and exports the new application layers on top of a runtime base image that might live on the registry. This process is actually six different steps. There's a phase at the beginning called detection, where buildpacks can opt in or opt out of the build and also detect versions of dependencies to install. There's a restore phase that restores the cache from the last build. There's an analyze phase that looks at the remote image and figures out if there are any layers that are already good, don't need to get rebuilt, and can stay on the registry when the next image gets built. There's a build phase that builds new layers, and an export phase that puts those layers on the registry to create the new image. And at the end, there's a caching phase that stores any build-time artifacts that might be needed next time.
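If you want to map those phases onto the actual lifecycle, they correspond to a handful of binaries that the platform runs inside containers. This is just a sketch: I'm eliding all the flags, and the exact ordering and set of binaries varies across platform API versions:

    # Inside the build container (flags elided; ordering varies by platform API)
    /cnb/lifecycle/detector   # which buildpacks opt in; resolves the build plan
    /cnb/lifecycle/analyzer   # which layers on the registry can be reused as-is
    /cnb/lifecycle/restorer   # restores cached layers from the previous build
    /cnb/lifecycle/builder    # runs each selected buildpack's bin/build
    /cnb/lifecycle/exporter   # assembles the image and updates the cache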
To give you an example of what a build looks like, imagine you have a Node.js app: say it has a package.json, a package-lock.json, and some Node.js source code. And say you've selected a Node.js buildpack to build it with, and also maybe you have a custom metrics agent and you've made a custom buildpack that installs your metrics agent into the application.

In this case, the Node.js buildpack could be a meta-buildpack. Like the real Paketo Node.js buildpack in this example, it's a buildpack that just describes a composition of other buildpacks. So let's say your Node.js buildpack is composed of a Node.js engine buildpack, a yarn buildpack, and an npm buildpack, and the configuration is such that there are two groups: it could install Node and run yarn install, or install Node and run npm install. If you combine this with the metrics agent buildpack, you get two groups that include the metrics agent as well.

And so what this build process looks like is: the first group runs during the detection phase, and the yarn buildpack immediately opts out. It says, nope, I can't help here, because the application doesn't have a yarn lock file, and so that group fails. Then the next candidate group comes along. In this case, the npm buildpack says, yep, there's a package.json; I see the app needs Node 12, so I require node at version 12. That's matched up with the Node.js engine buildpack, which said, yep, I can provide node. And the metrics agent buildpack is always just going to install the metrics agent. After that detection process, the build phase kicks off, and the Cloud Native Buildpacks tooling tells the metrics agent buildpack, okay, go ahead and install the metrics agent. Then it tells the Node.js engine buildpack, the npm buildpack said it needs Node 12, go install Node 12, and it installs Node and the npm CLI into separate layers. Then the npm buildpack runs and generates node_modules into its own additional layer. And maybe the modules also get cached in a local inter-build cache. On the next rebuild, some of those layers don't necessarily have to get rebuilt, because of the analyze and restore phases you saw before: some layers can just stay around on the registry and be part of the next image when it gets exported, and some layers may get cached locally as well.
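To make that meta-buildpack grouping concrete, here's a hedged sketch of what such a buildpack.toml could look like. The api value, IDs, and versions are all illustrative; this is not the real Paketo configuration:

    cat > buildpack.toml <<'EOF'
    api = "0.4"

    [buildpack]
    id = "example/nodejs"
    version = "0.0.1"
    name = "Example Node.js meta-buildpack"

    # First candidate group: install node, then run yarn install
    [[order]]
      [[order.group]]
      id = "example/node-engine"
      version = "0.0.1"

      [[order.group]]
      id = "example/yarn-install"
      version = "0.0.1"

    # Fallback group: install node, then run npm install
    [[order]]
      [[order.group]]
      id = "example/node-engine"
      version = "0.0.1"

      [[order.group]]
      id = "example/npm-install"
      version = "0.0.1"
    EOF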
Given this API, one really nice thing about generating these application layers that are contractually separated from the rest of the operating system is that we can patch CVEs in the operating system package layer very quickly, for lots of images, with really minimal data transfer and without even starting build containers. To do this, we rely on ABI compatibility, which is a contract provided by operating system vendors, like Canonical with Ubuntu Bionic, where they provide an LTS version of the operating system that just gets security patches, with a strong guarantee that those patches won't change the behavior of code that's linked against operating system code. Heroku and Cloud Foundry are examples of platforms that have used LTS operating system container distributions to patch CVEs at scale, in production scenarios, for a long time.

So I'll run through an example of what it looks like to patch operating system packages in this model. Imagine you have a Docker registry, which has manifests, which are essentially container images, and layers, which are the file system layers those manifests point to. Say you have three applications, each with their own application layers, and you have a runtime base image that's just a set of operating system packages. When there's a vulnerability in the operating system packages, we're going to update that runtime base image and upload a new set of operating system packages. And all we have to do to patch all those applications is make a quick change to their image manifests to point at the new set of packages. This doesn't require any additional uploading; it's just a metadata change to those JSON files on the registry.

You still have to deploy those images, but that deployment process, thanks to containerd, is also very efficient. Say you're deploying the applications to Kubernetes after they get built, so you have all these applications from before running on your Kubernetes nodes that are vulnerable. All we have to do is update the deployments for those applications to point at the digests of the new image manifests. That triggers the new set of operating system packages to get downloaded exactly once for each VM, right? Not once per application; with very minimal data transfer, each VM just has to download the newly updated packages once, for every app on it. Then the apps restart with their new image digests that point at the new layers. And that's how we can patch a large platform of many applications without doing any rebuilds, essentially.
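And to tie that back to the tooling Jesse showed: with the pack CLI, that manifest-level swap is one command. The image names here are illustrative:

    # Swap the run-image layers under the app layers; no rebuild, no build container
    pack rebase myuser/node-getting-started \
      --run-image gcr.io/my-project/run:patched \
      --publish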
Now, this looks a little different if you have customizations to your base image. Say your organization doesn't just use Ubuntu Bionic for all your apps, but you also need some additional operating system packages, or CA certs, or whatever. The process of rolling out patches to CVEs in that case looks a little bit different. Imagine you have this extra OS extension set: extra packages, CA certs, whatever. When there's a vulnerability, that lower operating system package layer is vulnerable, but it's important to note that we can't just replace that operating system package layer underneath those extensions, because, unlike the application layers, that set of extensions is not kept separate from the base operating system. It's a set of changes to the base operating system that depended on the original files. So we have to rebuild those layers, unlike the application layers, which we can just keep on the registry and swap things out from under. When this happens and we get a new updated run image that has a new version of Ubuntu Bionic or whatever it is, we have to take that patched set of packages and, maybe locally in a Docker container, maybe on a CI server, or maybe on Kubernetes with kaniko or something, reapply those extensions, maybe with a Dockerfile, and then re-upload them to the registry to create our new customized version of the run image. At that point we can do the same thing as before: point all the application manifests at the new layers, redeploy everything, watch the packages get downloaded, watch the apps spin up pointing at the new operating system packages, and, just like before, we've patched the platform.
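As a concrete sketch of that reapply step, assuming you do it locally with Docker, it might look something like this. The image names and packages are made up for illustration:

    # Rebuild the customization layer on top of the freshly patched run image
    docker build -t registry.example.com/acme/run-custom:patched - <<'EOF'
    FROM gcr.io/my-project/run:patched
    USER root
    RUN apt-get update && apt-get install -y ca-certificates imagemagick
    # run images typically execute as a non-root user such as cnb
    USER cnb
    EOF
    docker push registry.example.com/acme/run-custom:patched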
A much more interesting case to talk about here is what it looks like to patch operating system packages when you've installed operating system packages for specific applications. There's a new proposal in the Cloud Native Buildpacks project called stackpacks: special buildpacks that, on a per-build basis, allow you to install additional operating system packages. This is a really commonly requested feature. Say you have a Ruby app that needs ImageMagick to do image processing, but you don't want to put ImageMagick on all of the applications that use that base image, and you don't want to maintain your own special base image just to add ImageMagick. This is something we'll be able to support soon, but it took a lot of thought to create an API that still preserves this rebasing ability, that still makes it easy to roll out CVE patches to lots of applications at once.

To do this, we took the set of six steps we had before and introduced a new step before build, called extend. What extend does is take the builder image and run image that were provided and run the stackpack on each of those images in separate containers, to generate two new layers, one for each image, with additional build-time packages and additional runtime packages. So the build can happen with a set of build-time packages that's different from the set of extra packages that gets installed in the application image that's exported. In the end, you get an application that has new packages installed.

To do the rebasing process in this case, to replace the operating system packages when they're vulnerable, we had to get a little creative. I'll run through an example of how this works. Imagine you have app one, which has its own set of packages, but apps two and three are still pointing at the shared base image that everything else uses. When there's a vulnerability in that operating system package layer, we have to rebuild the packages on top of it that app one is using; you can see that orange layer in this diagram, which is app one's packages. When the new base image comes in with the CVE patches, we have to take that base image and, either on Kubernetes with some build containers, or in build containers in your CI system, or, in the case of the pack CLI specifically, in some extra containers run in parallel, extend the runtime base image with the new packages. Once we have our special base image for app one, it gets uploaded back to the registry, or in the local case it can just live in the Docker daemon, and we point all of our application images at the newly patched packages.

And just like before, everything can get redeployed by updating the digests on the cluster, or wherever they're deployed, so that the images get swapped out and we end up patching all the apps on the platform. So that's how we kept the ability to patch CVEs at scale, but still introduced the ability for individual applications to install operating system packages specific to those applications during the build process.

And that's all we've got. Please check out buildpacks.io if you want more information. We're super active on Slack; it's slack.buildpacks.io. So please come say hi, and feel free to join our mailing list too. We have a monthly newsletter. Thanks, everybody.