Hi, everyone. I'm David. And I'm Natalie. Thanks so much for joining us. We'll be discussing how you can use buildpacks right now in your production CI/CD systems. We're going to start with a refresher on cloud-native buildpacks, defining them, explaining their benefits, and explaining some of the concepts in the CNB ecosystem. Next, we're going to demo buildpacks in a few different contexts, some of which will be more interesting for developers, while others will be more focused on DevOps interests. Lastly, we'll conclude with some resources for learning more about cloud-native buildpacks. You may be wondering, who is this talk for? We hope to cover usage of buildpacks that will be relevant for app developers and platform operators. For app developers, we hope to show how buildpacks can make your work easier and faster by reducing developer toil and building containers more efficiently. For platform operators, we hope to show how buildpacks can enable more control and compliance within your organization and help with your security requirements. We'll assume some knowledge of cloud-native buildpacks and the pack CLI, but as a refresher, we'll cover some key elements that will be relevant. What are cloud-native buildpacks? Cloud-native buildpacks, to put it simply, transform your application source code into runnable images without the help of Dockerfiles. Why is that helpful? There are three main benefits we're going to talk about. First, it allows application developers to focus on what they're building and not on how to support it in production. It also has the added benefit of building and packaging the application better and faster than they may have been able to do by themselves. Second, it gives operators precise control over what build inputs are permitted, using the builder concept that we're going to introduce in a few slides. And third, treating the application image as a collection of distinct layers stitched together allows a system to precisely swap out one layer, for example an operating system layer, without disturbing any of the other layers of the image. As we'll see, this can have dramatic consequences for large-scale responses to operating system vulnerabilities. What is a buildpack? A buildpack is really just two executables: one called detect, which detects whether the buildpack is needed, and one called build, which does its part in building the runnable image. For instance, while a Java buildpack may look for a .java file, an npm buildpack may look for the presence of a package.json, and a yarn buildpack could look for the presence of a yarn.lock file. At build time, buildpacks can download dependencies as needed, compile whatever source code needs compiling, or set start commands. Multiple buildpacks can also work together. For example, you may have a combination of node and npm buildpacks working on one application, or Ruby and bundler, downloading the Ruby binary and doing a bundle install. This lets you combine buildpacks and utilize a variety of them in building separate parts of your application.
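To make that concrete, here is a heavily simplified sketch of what those two executables might look like for a hypothetical yarn buildpack. The real buildpack API passes more arguments and expects TOML layer metadata for caching, so treat this as an illustration rather than a working buildpack:

```bash
#!/usr/bin/env bash
# bin/detect: opt in to the build only if this app uses yarn
[[ -f yarn.lock ]] || exit 100   # non-zero "fail" code: this buildpack opts out
```

```bash
#!/usr/bin/env bash
# bin/build: contribute dependencies and set a start command
layers_dir="$1"                  # directory where this buildpack writes its layers
yarn install --frozen-lockfile   # download dependencies at build time
cat > "$layers_dir/launch.toml" <<'EOF'
[[processes]]
type = "web"
command = "yarn start"
EOF
```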
The Cloud Native Buildpacks project doesn't produce buildpacks itself. Rather, we define a specification that is then followed by a variety of different vendors. At this point, the most well-known buildpacks are produced by Google, Heroku, and the Paketo project. This brings us to the concept of builders, one of the key ways we distribute buildpacks. Buildpacks are packaged as container image layers, together with a program called the lifecycle, in an image called the builder. That image contains, as its base image, a so-called build image, which is used at build time to construct the runnable image. Platform operators can choose which builders are safe to use and can construct builders as they'd like, to precisely define what sort of applications or language versions they want to support, as well as to inject necessary environment variables or settings, as the case may be. The lifecycle, which we just saw in the builder, is an executable that runs in several phases to transform application source code into a runnable image. Those phases are detect, analyze, restore, build, and export. Detect runs each buildpack's detect executable and determines which group of buildpacks will participate in the build. Analyze analyzes the application image from previous builds, while restore brings back data from previous builds from the cache, if it's relevant. Build runs each buildpack's build executable, while export produces the final app image and updates the cache. The lifecycle also has another executable called the creator, which executes all the phases in one go, hiding some of the complexity from the platform. A platform runs the builder together with the application source code to produce the app image. App developers don't have to know how any of this works; they just push their code. The app image is a layered image. Application layers and build-time dependencies are cached, so builds will be faster over time. This makes for happy developers. Finally, stacks are a CNB construct that links together the builder's base image, which is the build image, and the application's base image, which is the run image. The build image is used when constructing the app image, while the run image contains everything the application image should need when running. It can be helpful to have these be different: build-time dependencies can be left out of the application image to make it smaller and lower the attack surface. Again, operators can decide which stacks are safe to use. I'm going to pass it over to Natalie now to actually demo the use of some buildpacks. Thank you, David. Just getting set up. So now we will look at buildpacks in action. The first tool that we're going to talk about is the pack CLI. pack is maintained by the CNB project. It is a command-line tool that can be run locally or, as we'll see, in automated systems. pack requires a Docker daemon that it uses to bring up the containers in which the lifecycle will run. So just to illustrate what we'll see: pack has access to a Docker daemon; it brings up containers based on the builder image and, together with the source code, produces the application image. This image can then be saved to the Docker daemon, or it can be published to a registry such as Docker Hub, GCR, or your favorite container registry. There is an optimization, as David mentioned, called the creator binary, which will run all of the lifecycle phases in a single container to make things faster. To get started with pack, just visit the buildpacks.io website and choose your installation method of choice. You'll notice that pack is supported on Linux, macOS, and Windows, and as of recently, pack also supports building application images for Windows containers. So let's get started with our demonstration. I am using pack on a Mac. As you can see, we can run pack suggest-builders to get a list of builders that pack is aware of.
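In shell form, the demo that follows boils down to roughly these commands. The image name hello-pack is the one used on screen; the app directory is a stand-in, and the builder reference assumes the Google builder's public image, so double-check the exact URI against the pack suggest-builders output:

```bash
pack suggest-builders                            # list well-known builders
cd my-java-app                                   # hypothetical app directory
pack build hello-pack --builder gcr.io/buildpacks/builder:v1
docker run --rm -p 8080:8080 hello-pack          # optionally run the result locally
```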
So you can see, as David mentioned, Google, Heroku, and the Paketo project all have builders available. I'm going to go into my application directory. You'll see that it is a Java app. And let's build it: pack build hello-pack, where hello-pack is the name of the image that I will build, and I've given it the Google builder as the builder. So let the build run. Looking at the logs, we can see that we have pulled the Google builder down. We're running the different phases: detect determined that four buildpacks will participate in this build, and analyze and restore are bringing back data from the cache that we will use. And now we build the app. You can see the build has completed. Here we're exporting the application image, in this case to the Docker daemon. So that was pack. You may be thinking at this point that pack seems like a pretty good tool, and we encourage you to download it and check it out. But you may be wondering how you can use buildpacks in your automated system of choice. So in the next part of the demonstration, we're going to show buildpacks in a few CI/CD systems, starting with GitLab. To give a high-level overview, a typical CI/CD pipeline might include the following phases: some development, testing, building of an application image, testing of the output artifact, and deploying to production. For our demonstrations, we're going to focus primarily on the build part. So let's talk about GitLab. The cloud-native buildpack support here is maintained by GitLab itself. It's used with their Auto DevOps tooling, which uses Kubernetes, and it runs pack under the hood. To use cloud-native buildpacks in GitLab, you'll want to configure Auto DevOps, which is its own process; in this presentation we've linked to the instructions. You can see that we'll create a GitLab project and, from within GitLab, create a Kubernetes cluster where the app will be deployed. And with that, we just need to add a little bit of extra configuration to our source to tell GitLab that we want to use buildpacks. So here in .gitlab-ci.yml, as you can see, we've set AUTO_DEVOPS_BUILD_IMAGE_CNB_ENABLED to true. And just to illustrate, similarly to what we've seen previously, GitLab is using pack with Docker, the only difference being that here pack itself is running in a container, as well as Docker.
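For reference, the .gitlab-ci.yml change is tiny. This is a sketch; the variable name reflects GitLab's Auto DevOps buildpacks flag at the time of this talk, so check GitLab's docs for your version:

```yaml
# .gitlab-ci.yml: enable cloud-native buildpacks in Auto DevOps
include:
  - template: Auto-DevOps.gitlab-ci.yml
variables:
  AUTO_DEVOPS_BUILD_IMAGE_CNB_ENABLED: "true"   # build with pack instead of the default
```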
Alright, so let's see GitLab in action. Going into my app, you can see that it's the same Java app from before. If I open up one of the files, I can make an edit. I can now push that change; just to illustrate, I'm pushing it to GitLab.com. I've made my commit, and now I push it. And let's go see that commit triggering a new build. So here I am in my repository. Let's go to our pipelines: that push triggered a new build. Here's the build getting triggered; it's in progress. For the sake of time, I'm going to show the logs from a previous build instead. Starting from the top, you can see we're running Docker-in-Docker. Here we're pulling in the Heroku buildpacks; those are used by default, but it's also configurable. Now we're pulling in pack. And here's the output that we should be familiar with: the detect phase is running, it looks like we didn't find anything in the cache, and now we're downloading build dependencies. Down at the bottom, you can see we're done downloading the dependencies, and now we have a successful build. And here the application image is being exported, in this case to the local Docker daemon, and then it gets pushed up to GitLab's registry. I'm now going to hand it back to David, who's going to take us through the next part. Thanks for that, Natalie. CircleCI is a continuous integration and delivery platform. The CNB project maintains an integration called an orb, which allows users to utilize pack inside their pipeline. It also takes advantage of caching to keep information from previous builds and speed up future builds. You may already be using CircleCI; to use the CNB integration, all you need to do is declare that you're using the buildpacks pack orb and then utilize it in your workflow. As mentioned, caching is enabled by default to allow for faster builds. You can also note that we're using the Paketo project buildpacks in this case. Let's demo CircleCI. I already have the repo set up. I'm just going to echo a change, and I'm going to push it up. You should be able to see fairly quickly that CircleCI picks up the change and attempts to build it. And in fact, it's running already. For the sake of time, let's look at a previous build. You can see that it spun up the container as necessary, it downloaded pack, and finally it ran a pack build. Something interesting that you haven't yet seen before is the caching in action: you can see in analyze and restore that it restores data from previous builds from the cache. All the platforms that we've seen so far have used the pack CLI under the hood in order to construct the image. Another way platforms integrate with the CNB project is by directly executing the lifecycle phases, which can give a bit more control over specific usages. For example, pack doesn't currently support caching against a registry, while the lifecycle does. One CI/CD system that does directly execute the lifecycle is Tekton. Tekton, if you haven't heard of it, is an open-source CI/CD platform that runs on Kubernetes. There are two Tekton tasks maintained by the CNB project that use the lifecycle directly and also take advantage of caching. We'll introduce a number of concepts that Tekton uses in order to better understand the demo. First of all, each published action in Tekton is called a task, which is itself composed of a series of steps. Tasks have inputs and outputs that they then pass to other tasks. A group of tasks is associated together within a pipeline. Finally, in order to run a pipeline or a task directly, you create a PipelineRun or TaskRun object. To make this a bit more concrete, in the example pipeline we'll be demoing, we've used two tasks: git-clone, to clone the repository we're interested in monitoring, and buildpacks, to build and push the resulting image. Here we can see successful pipeline runs which have pushed their changes to the resulting image. As mentioned before, there are two ways to utilize the lifecycle: platforms can either run the phases separately or use the creator executable, which orchestrates the phases in a more performant manner. As such, the CNB project maintains two different tasks for Tekton. One, buildpacks, which is shown here, just prepares the pipeline and then uses the creator binary; this is intended to be used when you trust your builders, and it is faster. The other, buildpacks-phases, runs the phases separately. This is more secure, since it doesn't pass registry credentials to phases that don't require them.
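Here's a rough sketch of what wiring those two tasks into a pipeline can look like. Parameter and workspace names vary between versions of the git-clone and buildpacks catalog tasks, and the repository and image references are placeholders, so treat this as illustrative:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: buildpacks-pipeline
spec:
  workspaces:
    - name: source                # shared workspace the tasks hand off
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone           # Tekton catalog task
      workspaces:
        - name: output
          workspace: source
      params:
        - name: url
          value: https://github.com/example/my-java-app   # hypothetical repo
    - name: build-image
      taskRef:
        name: buildpacks          # CNB-maintained task (creator-based)
      runAfter: [fetch-source]
      workspaces:
        - name: source
          workspace: source
      params:
        - name: APP_IMAGE
          value: registry.example.com/my-java-app:latest  # hypothetical tag
        - name: BUILDER_IMAGE
          value: paketobuildpacks/builder:base
```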
In order to set it up, we'll need to define a pipeline resource defining the image output; a pipeline defining what tasks we intend to use, the git-clone and buildpacks tasks; and finally, a PipelineRun in order to execute that pipeline. Just run kubectl apply to create those resources. Let's see it in action. If we go over here, I'm going to apply a change. If we go to our status page, we can see that a run has been kicked off. For the sake of time, let's look at a previous run. You can see that it successfully completed. And if we look at the logs, much the same: we can see that it successfully restored the layers from the cache. So far, we've seen three different CI/CD platforms that accomplish two of the three main buildpacks benefits. They've freed the application developers from concerns over how to support their application in production, and they've given operators precise control over the build inputs through the use of builders. We're going to examine one last platform, kpack, which takes advantage of the CNB spec to allow for so-called rebasing of images from a flawed operating system base layer onto a secure one. Let me pass it over to Natalie now. Thank you, David. So now we're going to take a look at kpack, which is an open-source tool maintained by VMware, which we both work for. You may think that because kpack includes the word pack, kpack runs pack, but it actually uses the lifecycle directly. kpack supports building and rebasing your apps, so it will keep application images up to date when inputs change. If there's a new version of a buildpack, kpack will rebuild the app, and if there's a new version of a stack's run image, kpack will rebase the app. It uses Kubernetes primitives, and we think it's most interesting to platform operators. To get started, just download the latest release YAML from GitHub, run kubectl apply, and you'll see a bunch of custom resources get created. Wait for everything to become ready, and now we're ready to create our resources. The kpack tutorial, which we'll link to, has some example configuration files to get started with. You can see that many of the concepts we're familiar with are being referenced here. We're using the Ubuntu Bionic stack from the Paketo project, pulling in Java and Node as our buildpacks, which we're saving to a custom builder. And here's where we've configured the source code that we will build, with the git repository and revision, as well as the tag where we'd like the final app image to end up. Just run kubectl apply, and those resources are created.
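An image definition along those lines looks roughly like this. The field names reflect the kpack release current at the time of this talk, and the names and URLs are placeholders, so compare against the kpack tutorial before using it:

```yaml
apiVersion: kpack.io/v1alpha1
kind: Image
metadata:
  name: hello-kpack
spec:
  tag: registry.example.com/hello-kpack     # where the built app image is pushed
  serviceAccount: kpack-service-account     # holds registry and git credentials
  builder:
    name: my-custom-builder                 # the custom builder defined above
    kind: Builder
  source:
    git:
      url: https://github.com/example/hello-kpack   # hypothetical repo
      revision: <commit-sha>                # a specific commit; CI keeps this updated
```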
And now we're ready to see it in action. Here I just want to show the image resource definition again; you can see that there's a specific git revision that we're building. Let's update that definition to be something new, and I'll just apply it. One thing about kpack is that it is intended to be meshed with another CI/CD system that would be in charge of keeping the git revision up to date. You can imagine that if you push your source code, some other element of your CI/CD system runs tests against that commit and then provides it to kpack. So let's watch. You can see that a new kpack build was triggered because of that revision update. We'll just wait for it to complete. And it's finished. So I'll copy the pod name and take a look at the logs. Here we can see that we're using the Paketo buildpacks; there are six buildpacks participating in the build, and we're able to restore some data from the cache. Here is the build happening. And now we are exporting the final image, which should now be available in our registry. VMware, which maintains kpack, also put out a kp CLI to make it easier to get information about builds and other resources. Here I'm running kp build to see the status of my hello-kpack image. You can see that the build was successful, and that it was triggered because of a new commit. There's also additional information that can be useful, like the builder that was involved, the run image that was used, and the buildpacks that participated. So that was a build with kpack, but we may also want to see what happens when build inputs need to change. For example, you could have a vulnerability in one of your application dependencies, which would require a rebuild. Or you could have a vulnerability at the operating system level: one of the packages installed on the run image could have a critical vulnerability. This brings us to the concept of rebasing, which involves taking the application image with the vulnerable run image layer, producing a new, secure run image, and then swapping that run image layer into the application image without having to rebuild everything. At this point, you may be wondering how it is that the Cloud Native Buildpacks project enables this to happen. The details are beyond the scope of our talk, but at the highest level, it is the buildpack specification that defines, for example, where in the file system buildpacks are allowed to write, and that gives platforms knowledge about which buildpacks and stacks are safe to work together. That's what allows safe swapping of the run image layer. To learn more about that, we encourage you to check out other buildpacks talks, or take a look at the buildpacks specification, which is available online. So let's see rebasing in action with kpack. I'm just going to describe my image again, for the purpose of showing the stack: that is the Ubuntu Bionic stack that we defined earlier, which we called base. If I describe that stack, I can see that the run image being used is the Paketo buildpacks run image at a specific tag ending in 94. So I could update that stack definition, replacing the 94 tag, which we can imagine has a vulnerability, and swapping in the 95 tag, which has now been patched. Now, if I apply that, the configuration has taken hold, and we can watch for new pods. That happened very fast: you can see that it was build 18, which was initialized and completed in about ten seconds. And if I were to pull the latest version of my application image, its identifier would be different, because the run image has now been changed.
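For reference, the stack edit behind that rebase is just a tag bump in the stack definition, roughly like this. The kind and API version depend on your kpack release, and the image references are placeholders standing in for the Paketo images from the demo:

```yaml
apiVersion: kpack.io/v1alpha1
kind: Stack                                  # ClusterStack in later kpack releases
metadata:
  name: base
spec:
  id: io.buildpacks.stacks.bionic            # the Ubuntu Bionic stack ID
  buildImage:
    image: <paketo-build-image>:0.0.94       # placeholder repository
  runImage:
    image: <paketo-run-image>:0.0.94         # bump to :0.0.95 and re-apply to trigger a rebase
```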
Anyway, we've now seen all three of the benefits that we promised with cloud-native buildpacks: giving application developers the ability to focus on development, giving operators control over build inputs by choosing which builders, stacks, and buildpacks are safe to use, and enabling fast patching of operating system vulnerabilities through the rebase capability. There is a large and growing ecosystem of platforms that use cloud-native buildpacks, and we hope that your favorite platform is among them. But if not, please come talk to us; we love to hear from buildpacks users, we love to get feedback, and if there are new features that would make cloud-native buildpacks a better solution for you, we'd love to hear about it. So here are some resources: links to the documentation and the demos that we used in our talk. You can find us online at buildpacks.io, and on Slack, Twitter, and GitHub. Thank you.