Hello, everyone. Thanks for attending our talk on Cloud Native Buildpacks. My name is Sambhav, and I work at Bloomberg. I'm also a maintainer on the Cloud Native Buildpacks project. I'm joined today by Javier, and together we'll be presenting an overview of what buildpacks are and how they can help, and later Javier will be ending this talk with some really cool demos. Without further ado, let's get started.

So what exactly are buildpacks? Cloud Native Buildpacks transform your application source code into runnable container images without Dockerfiles. Let's do a deep dive into the Buildpacks API that makes this possible.

First up, we have buildpacks themselves. At its core, a buildpack is just two executables: one called detect, which detects whether a buildpack is needed or not, and the other called build, which does its part in building the final runnable image. For instance, while a Java buildpack may look for the presence of .java files, a Node buildpack can look for the presence of package.json. At build time, these buildpacks may download dependencies as needed, compile from source, generate build-time or runtime bills of materials, or set start commands or entry points. Interestingly, multiple buildpacks can work together. For example, you can combine your Node buildpack with your Ruby buildpack, and this combination allows you to utilize a variety of different buildpacks in building separate parts of your final application.

The Cloud Native Buildpacks project, interestingly, doesn't produce any buildpacks. Rather, we define a specification and the tooling, which is then utilized by a variety of different vendors to create the actual buildpacks. We also maintain a registry that allows developers to discover these buildpacks from various vendors. At this point, the most well-known buildpacks are produced by Google, Heroku, and the Paketo project.

Speaking of discovering and reusing buildpacks, this brings us to the concept of builders, one of the key ways we distribute buildpacks. Builders are an ordered combination of buildpacks with a base build image and a run image. They're a convenient way of distributing all the build logic for buildpacks in the format of a normal OCI image. The build image provides the base environment for the builder, for example, an Ubuntu Bionic image with all the build tooling, and the run image provides the base environment for the application during runtime. A combination of a build image and a run image is called a stack. It can be really helpful to have these two things be different: build-time dependencies can be left out of the application image, making it smaller and lowering the attack surface area. As a platform operator, you can choose which builders are safe to use, and you can construct them as you'd like, precisely defining what sort of applications or language versions you want to support. And you can also inject any necessary environment variables, settings, or certificates, as the case may be.

Finally, we have the platform. A platform is any tool that takes the builder together with the application source code to produce the final image. A platform can range from a local CLI tool like pack, to a cloud-native platform like kpack, or it can even be built on top of existing CI/CD platforms like Tekton. App developers don't really have to know how any of this works. They just have to write their application.
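To make the detect phase concrete, here is a minimal sketch of what a Node buildpack's bin/detect executable might look like, following the Buildpacks spec's detect interface (the platform directory is the first argument, the build plan path the second); the provides/requires entries are illustrative, not from any shipped buildpack:

#!/usr/bin/env bash
# bin/detect <platform-dir> <build-plan-path>
# Pass detection only if the app looks like a Node project.
if [[ -f package.json ]]; then
  # Declare what this buildpack provides and requires in the build plan (TOML).
  printf '[[provides]]\nname = "node"\n\n[[requires]]\nname = "node"\n' >> "$2"
  exit 0
fi
exit 100  # per the spec, exit code 100 means "detection failed"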
Under the hood, the platform uses the lifecycle bundled in the builder to run and orchestrate all the buildpacks, running their detect phases, then running the build phases of all the buildpacks that pass detection, and exporting the final image to the registry. This allows us to have a single tool that can take different builders and build all sorts of applications automatically.

Now, this is how the image build operation typically works in the buildpacks ecosystem. However, we also expose a special kind of image creation operation unique to the project, called rebase. Rebase allows app developers or platform operators to rapidly update an application image when the stack's run image has changed. By using image layer rebasing, this command avoids the need to fully rebuild the application. At its core, image rebasing is a simple process. By inspecting the application image, rebase can determine whether or not a new version of the app's base image exists, either locally or in the registry. And if it does, rebase updates the app's layer metadata to reference the new base image version.
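With the pack CLI, a rebase is a single command; a minimal sketch, assuming a hypothetical image name:

# Swap the app image onto the newest run-image layers without rebuilding;
# only the layer metadata is updated. The image name below is an example.
pack rebase registry.example.com/myapp:latest

Now that we have a better understanding of the Buildpacks API, let's take a look at how these abstractions, and getting rid of Dockerfiles, help us. We'll focus on three main benefits.

First, it allows app developers to focus on what they're building, and not on how to support it in production or build a container image of it. Beyond that, buildpacks also have the added benefit of building and packaging the application better and faster than the app developers may have been able to do by themselves. Buildpack authors can take care of internalizing both container and language ecosystem-specific best practices, and produce buildpacks that minimize the image's attack surface area, handle caching for you, inject appropriate build and runtime dependencies, and more.

Second, it gives platform operators precise control over what build inputs are permitted, and enables them to enforce policies on what the app images should contain, using the builder concept we just talked about.

Lastly, the abstraction of built applications as a collection of distinct layers joined together into an app image allows your DevSecOps team to detect and patch your images at scale. This is because of the rebase operation, which allows them to precisely switch out one layer, for example the OS layer, without disturbing any of the other application layers. As we will see, this can have dramatic consequences for large-scale reactions to critical vulnerabilities (CVEs).

Let's take a look at that last point in more detail via two example scenarios. First, let's say we have a bunch of applications built by buildpacks. Because the layers have semantic meaning and are enriched with metadata through an accurate bill of materials, we can have a good idea of the exact dependencies each app has. We can use this to identify vulnerable images. After we've identified these images, we can selectively patch the application dependencies by updating the relevant buildpack or builder. For example, let's say we have a Python buildpack that provides the interpreter, which has a security issue. We can update this buildpack's logic to provide a patched version of the interpreter, and we can use this new buildpack to rebuild just the Python-related layers of our app images.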
If you're using a cloud-native platform like kpack, you can declaratively update your builders and buildpacks, and it will automatically handle finding the affected applications and rebuilding the appropriate layers. We can imagine a similar workflow with base images and OS vulnerabilities. The platform operator can identify and patch the base images declaratively, and the plus side is that, because of the way the rebase operation works, changing the base image, which can be a particularly expensive operation in the Dockerfile world, suddenly becomes a simple point of change in the registry. Apart from the build-time implications, it also has runtime implications, since you don't need to re-download all the app layers on each node. Rather, you just need to download the base layer once and, poof, all your applications are able to reuse it and have been patched.

Next up, Javier will be taking over, and he'll be talking about how you can use all these tools in practice, along with some amazing demos. Over to you, Javier.

Thank you, Sam. As Sam mentioned, my name is Javier. I'm a software engineer at VMware and one of the maintainers of the Cloud Native Buildpacks project. I primarily focus on platforms that run buildpacks. Let's answer the question of where buildpacks can be used. As you can see on the slide, there are many platforms, and this is just a small sample of many more. What you'll notice is that there are different types of platforms. Buildpacks can run locally on your machine with the help of anything that can run containers, as well as built into large cloud platform providers and various CI/CD systems. The ones we'll be looking at today are pack and kpack.

pack is the Swiss Army knife of all things Cloud Native Buildpacks. It is able to build and rebase app images, as well as provide a plethora of utility commands to help you inspect, create, and publish buildpack components. pack was intended to be used primarily for local development, but it quickly made its way into many CI/CD pipelines.

kpack is a Kubernetes-native implementation of a Cloud Native Buildpacks platform. It works by allowing users to declare their images, as well as other components, as Kubernetes resources. These declarative resources are then managed by kpack itself. Images may be automatically updated as new buildpacks or base images become available.

First, we'll take a look at pack for local development. We'll clone our app repository. In this case, it's a vanilla Spring Boot Java application. And we'll cd into the app directory. Next, we're going to set a default builder. As previously mentioned, the builder has all the information necessary, as well as all the bits, to build our application. In this case, we're going to be using a sample builder, not intended for production. Now that that is set, we won't have to declare it every time we attempt to build our application.

Next, we build our application. Simple as that, right? We run pack build, and we specify that the image name we want created is petclinic-demo. As you'll notice, we don't pass a source directory. This is because, by default, it'll use the current working directory as the application source. Now, step by step, the first thing it'll do is pull the latest version of the builder image. It will then proceed to execute the various lifecycle phases, the first one being detection. As you can see, it has detected that this is a Java Maven application. Next, during analyzing, it'll see that the image has not been previously built.
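The local demo flow looks roughly like this; the repository URL and builder image are assumptions (the talk doesn't name them on screen), and older pack versions use set-default-builder rather than config default-builder:

# Clone the sample Spring Boot app and enter its directory.
git clone https://github.com/spring-projects/spring-petclinic.git
cd spring-petclinic

# Set a default (non-production) sample builder once, so later
# `pack build` invocations can omit the --builder flag.
pack config default-builder cnbs/sample-builder:bionic

# Build the image; the current working directory is used as the app source.
pack build petclinic-demo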
It tries to gather information about previously built images so it can provide potential optimizations during the build process. An example of this is using various methods of caching. If this were our second build, we would see that certain forms of cache are restored during the restore phase. Next, the actual build occurs. Because this is a Java application, it'll be providing the JDK and executing the necessary Maven tasks. We'll skip past all these dependencies, and after waiting for the Java application to compile, we can see that it did so successfully. On to the next phase. During export, all the layers created by the buildpacks are either cached or added to the app image. The container execution command is also set, and the image is sent to the Docker daemon or registry. As you can see, the image was built successfully.

Let's go ahead and try to run it. We're going to run this just like we would any other Docker image: running it in the background, binding it to port 8080, setting the container to be deleted when it's stopped, and giving the container a name of petclinic-demo, just like our image, because I'm not creative. Now that it's running, we'll open up the browser and check it out. There we have it. We have the application, the pet clinic. You can click around. There's not much to see here, and we really don't care too much about what the application does for this demo. But what I do want us to look at is this welcome message here. We're going to go ahead and do a rebuild by attempting to change that message.

Going back to the terminal, we're going to go ahead and stop our container. Now that that's stopped, we're going to look for this message, the welcome message we just saw. I just happen to know that it's in this messages properties file. There you go. Now we're going to go ahead and use some sed magic to replace that with "welcome back". Now that we've got that, we're going to go ahead and run a pack build again. And what you'll see is that it'll still try to pull the latest version of the builder, but there are a couple of other little things here that change. For instance, the analyzing phase. Now it actually found a couple of items that it can retrieve from cache, whether from the app image or a volume cache, which is what we're using here. Once it does that, the restoring phase, or restore, is what pulls those artifacts and metadata down. And we'll see that this time the compiling or building of the application is a lot faster because it uses that cached information. All right, there you go. We have successfully built our application in a couple of seconds instead of minutes, as we did prior. You'll also see during the exporting phase that we're reusing a couple of layers. These layers are not pushed off to the daemon; they're reused as-is. We still set the process type, which is the execution startup of the container, and then we put the image back up into the daemon itself.

All right, now that we've got that, let's go ahead and run our application again. We'll open up our browser, go back to the application, refresh, and hopefully we should see "welcome back". There you have it. Now that we are done with that, we can move on to our next demo. We'll go ahead and stop this. For this demo, we'll take a closer look at kpack and how it can help keep a fleet of app images up to date, resolving the concern of unpatched vulnerabilities.
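The run-and-rebuild loop from this demo looks roughly like the following; the path of the messages properties file is an assumption based on the standard Spring PetClinic layout:

# Run the image like any other Docker image: detached, bound to port 8080,
# auto-removed on stop, and named after the image.
docker run -d --rm -p 8080:8080 --name petclinic-demo petclinic-demo

# Stop the container, tweak the welcome message, and rebuild.
docker stop petclinic-demo
# Assumed location of the welcome message in the PetClinic source tree:
sed -i 's/Welcome/Welcome back/' src/main/resources/messages/messages.properties
pack build petclinic-demo  # much faster this time, thanks to cached layers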
Before we begin, I want to reiterate that these images and other components are registered just like any other Kubernetes resource. If we look at this sample's repository, what we'll see is a couple of resources. We'll take a look at the builder, and we'll see that the builder has a couple of things defined. It has the stack, which is a cluster stack, and a store, where the buildpacks are stored. It also defines the order in which the buildpacks from the store will be used. We can then take a look at the stack. The stack defines what build image and what run image it will use. From there, we can look at the store. The store is what tells us where these buildpacks are coming from and what buildpacks are available. Next, we'll look at an image resource. This image resource defines the builder that will be used for this image, as well as where the source for this application will be coming from.

Now that we have that understanding, let's see it in action. We'll jump over to the terminal, and we'll actually open a nifty little demo UI, which allows us to see what images are already registered and managed by kpack. We can see that this tool displays what stack and buildpacks are being used to build each one. If we go back to the terminal, we can see that while we were distracted with the previous page, we got an alert that our NGINX applications may have a vulnerability. We'll go back to the UI and simply mark the ones with a vulnerability. This is strictly so that we can see which ones should be getting updated when the buildpacks provider patches the vulnerability. In the terminal, we're going to simulate the buildpack getting patched. In the real world, our buildpacks provider, which could very well be our own DevOps or DevSecOps peers, would push an updated buildpack image. This would trigger a rebuild of our application images. Going back to our GUI, we can see that the highlighted images are getting rebuilt. In this case, we only have one, and it should be getting updated here very shortly. There we are. As you can see, this is being rebuilt, meaning it is pulling down the latest version of the builder and buildpacks and creating a new app image. We'll give it a few seconds to finish. It shouldn't take that long. And there we go. We can see that our NGINX dependency has been updated.

Back in the terminal, we've gotten another alert: a vulnerability has been discovered in our run image. We're going to go back to our GUI and mark the stack that has the vulnerability. We're going to use the SHA in order to mark it. And as you can see, it affects practically every single image that we have here. OK, we're going to go back to the terminal, and we're going to simulate again that our buildpack provider has updated the stack. Because this is the run image, we will be taking advantage of the rebase operation mentioned before. This should take less than a few seconds for each image once we get that update. As you see now, every single image has been queued, and we'll see how long it takes. The first one has just started. We're updating... we've updated five... we've updated all of them. So as you can see, we've just updated a whole fleet of images, and it took less than a few seconds per image, because the rebase operation simply swaps out the base layers. That concludes our demos. Let's go back for just a few more slides. Now that we've gotten a better understanding of what buildpacks are and how we can use them, let's talk about the future.
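For reference, a kpack Image resource of the kind described above might be declared like this; this is a sketch assuming kpack's v1alpha1 API, and the names, tag, service account, and repository URL are illustrative, not the ones from the demo:

# A minimal kpack Image declaration (names and URLs are examples only).
kubectl apply -f - <<'EOF'
apiVersion: kpack.io/v1alpha1
kind: Image
metadata:
  name: petclinic-image
spec:
  tag: registry.example.com/apps/petclinic    # where built images get pushed
  serviceAccount: kpack-service-account       # registry and git credentials
  builder:
    name: demo-builder                        # the Builder resource to use
    kind: Builder
  source:
    git:
      url: https://github.com/spring-projects/spring-petclinic.git
      revision: main
EOF

Because the Image is declarative, updating the referenced builder, buildpacks, or stack is what triggers kpack to rebuild or rebase the affected images automatically.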
This year, the project has identified a few key areas it would like to focus on. These are a few highlights of what's top of mind and what's going on this year.

Configurability. As users migrate from Dockerfiles to Cloud Native Buildpacks, they see the value, but they also miss the flexibility and extensibility available in Dockerfiles. The project has a couple of good ideas on how to safely provide some of that desired flexibility while still maintaining the same level of core functionality: things like inline buildpacks, which allow users to create ad hoc buildpacks as part of their configuration, along with additional OCI-specific configuration and more extensive modifications to run images during the build process, in order to enable more advanced use cases.

Supply chain security. Security is a core value proposition of buildpacks. While buildpacks may already provide bills of materials, we want to do more. We want to make them a core part of the project and align them with existing standards. We also want to enable better image-signing workflows, something we are working with the Cosign project to achieve.

More cloud-native integrations. As the ecosystem evolves, we want to make sure we continue to align with it. We want to take advantage of the great projects in the ecosystem and enable users to pick and choose which tools they want to use alongside Cloud Native Buildpacks.

As I mentioned before, this is just a peek at what's in the works. To learn more about these and other items, check out the official roadmap on the buildpacks.io website. That concludes our talk for today. You can go to buildpacks.io. You can find us on Slack, along with the rest of the community, at slack.buildpacks.io, as well as on Twitter and GitHub. We have two GitHub locations there, one for the Buildpacks project as a whole and another for kpack, which is, at this moment, a separate repository or project altogether. Thank you for coming. See you next time.