All right, let's get started. We'll have the stragglers come in as time progresses. Welcome, everybody, to my talk. I'll just kick off the demo here and switch to slides, and then we'll come back to the demo when it becomes relevant. My talk today is about a particular tool known as kpack. Anybody familiar with kpack in the audience yet? One or two hands — that's always good. I'll start at a rather introductory level and then build on that as the session goes. A little bit about myself: I am the chief evangelist at the Cloud Foundry Foundation, which means I get to present all of this cool stuff that the community has built while collecting a bunch of free t-shirts. But the other reason I decided to give this presentation is that there's a relentless pursuit of automation within the larger DevOps community, and within the Kubernetes community in particular, and there are two areas — build automation and supply chain security — to which kpack provides an excellent answer. So I thought I'd focus much of this presentation on those aspects of kpack. Like I said, I'd like to introduce kpack as a tool to as much of the audience as possible. There's also a lot of work that the OpenSSF is doing. Again, show of hands — do folks know the OpenSSF at all? Yes, no, maybe. I'll quickly introduce some of the work they're doing and put it in context: what does Kubernetes build automation have to do with security? I'd also like to give a quick shout-out to the first keynote we had earlier today. The speaker talked about engineering scalability — how you can get your engineers to work better — and made a case for repeatable engineering work without all of the trouble that usually comes along with it. A large part of this presentation is going to follow the same theme, so I thought I'd echo the message from that earlier keynote.
So kpack is defined as a Kubernetes build automation tool. It runs natively on Kubernetes and helps people generate OCI-compliant container images out of the source code they have. If you're already using a docker build command as the first step of your process, then kpack is an alternative to that docker build. Now, there are about half a dozen reasons I can think of why somebody should use kpack instead of docker build, and they'll become very apparent as we go through the presentation. But if you're using privileged containers, for example, or you have to maintain a different Docker build setup for staging versus test versus local, then those are all good reasons to ditch docker build in favor of kpack. kpack provides a parameterless, repeatable way of building these containers, and the containers are built natively on Kubernetes. So if you have Kubernetes-based infrastructure and you're already taking advantage of Kubernetes for your applications, their reliability, and all of those things, then I think it's easy to get the same advantages at the container-build level as well. The other reason I love kpack is that, at its heart, it enables Git-based automation. If you really love the notion of GitOps and you want to get on the GitOps bandwagon in some form or manner, kpack lets you do that very easily, because it's designed to work either in a pull manner, where it polls your Git repos for the latest revisions, or in a push manner, where you trigger a kpack build every time you update a Git repository. You pass along a Git commit hash together with the URL, and kpack is capable of generating a build from it. We'll take a look at how it does that. A lot of the kpack architecture is designed around declarative files, so if you love YAML engineering, kpack is the tool for you.
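To give a taste of that declarative style, here is a minimal sketch of a kpack Image resource — the resource that ties a Git repo to an output tag. The tag, builder name, service account name, and repo URL are all placeholders for this illustration:

```yaml
# A minimal kpack Image: watch this repo, build it with this builder,
# and push the result to this registry tag (all names are placeholders)
apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
  name: demo-app
spec:
  tag: docker.io/<your-username>/demo-app
  serviceAccountName: kpack-service-account
  builder:
    name: demo-builder
    kind: Builder
  source:
    git:
      url: https://github.com/<your-org>/<your-repo>
      revision: <commit-sha-or-branch>
```

Updating `spec.source.git.revision` (or letting kpack poll the branch) is what triggers a new build.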
And if you feel that you were missing YAML early enough in the build process, kpack makes it happen. All of these things combined make it one of the best ways to go from code to container on your clusters. That's really the whole raft of things I like about kpack that make it worthwhile for me. Let me quickly show you some of this in action — I think it's only fair that in the advanced track we look at some YAML as soon as possible. In this folder, which I'm using to store some YAML files for the purposes of this demo, I have all the constituents of the kpack tool. Like I mentioned, kpack works with a handful of YAML files. kpack takes what is known as a service account — a typical Kubernetes service account — and uses it to push to a container registry. I think everybody is familiar with the build paradigm: you have source code, you create an image out of it, you upload that image to a container registry, and then you deploy from that registry to your production container runtime, whatever it might be. Considering that you're here, it's probably Kubernetes with something running there — and that's the same workflow kpack follows. kpack takes care of the lifecycle between the time you supply code to it and the time an OCI image lands on a container registry, and that lifecycle is handled through a combination of YAML files that define the tool itself. The first step is to create a service account. Next, kpack has this notion of what is known as a builder, and this builder is made up of two pieces: a stack and a store. Let's think of it that way. Internally, kpack makes use of Cloud Native Buildpacks. Anybody heard of Cloud Native Buildpacks at all? Yeah, great. Cloud Native Buildpacks is a CNCF project.
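The service account just described can be sketched as a registry-credentials secret attached to a plain Kubernetes service account. This follows the pattern in kpack's documentation, where an annotation tells kpack which registry the credentials belong to; the names and credentials below are placeholders:

```yaml
# Registry credentials that kpack will use to push built images
apiVersion: v1
kind: Secret
metadata:
  name: registry-credentials
  annotations:
    kpack.io/docker-registry: https://index.docker.io/v1/
type: kubernetes.io/basic-auth
stringData:
  username: <dockerhub-username>
  password: <dockerhub-password-or-token>
---
# The service account kpack builds run as, referencing that secret
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kpack-service-account
secrets:
- name: registry-credentials
```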
kpack has been part of the Buildpacks org for about six months now, and it consumes Cloud Native Buildpacks. Cloud Native Buildpacks are essentially an evolution of the buildpacks that were available back in the Heroku days — they're still available in Heroku and Cloud Foundry and tools like those — but now they've evolved to export OCI-compliant container images, which they did not do before. They used to produce immutable artifacts that were specific to either the Heroku ecosystem or the Cloud Foundry ecosystem, whereas now, instead of creating an artifact that only runs in one of those closed ecosystems, the project generates an OCI image that you can run on pretty much any container runtime. The builder kpack uses here references two buildpacks in particular: if you look closer to the bottom, you have a Java buildpack and a Node.js buildpack. Obviously this can contain more buildpacks — you could have PHP buildpacks, Rust buildpacks, whatever it is you want — but for the purposes of this demo it's limited to two. There's also this thing called the stack, which is the base image used during the build process; it also supplies the base layer of what is known as the run image, which is what will actually run inside the container. And there's a store that kpack references. Remember that Java buildpack and Node.js buildpack I talked about? The store is basically the list of all the different buildpacks available for reference by kpack, and if you want kpack to be able to build more language families, you add to it. So when you install kpack, you install all of these YAMLs one after the other.
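The store, stack, and builder described above can be sketched roughly as follows. These are hedged examples based on the kpack resource types; the buildpack and base-image references are illustrative placeholders, not the exact ones used in the demo:

```yaml
# Store: the catalog of buildpacks kpack can draw from
apiVersion: kpack.io/v1alpha2
kind: ClusterStore
metadata:
  name: default
spec:
  sources:
  - image: gcr.io/paketo-buildpacks/java
  - image: gcr.io/paketo-buildpacks/nodejs
---
# Stack: the build-time base image and the run-image base layer
apiVersion: kpack.io/v1alpha2
kind: ClusterStack
metadata:
  name: base
spec:
  id: <stack-id>
  buildImage:
    image: <build-base-image>
  runImage:
    image: <run-base-image>
---
# Builder: combines the stack with an ordered list of buildpacks
apiVersion: kpack.io/v1alpha2
kind: Builder
metadata:
  name: demo-builder
spec:
  tag: docker.io/<your-username>/demo-builder
  serviceAccountName: kpack-service-account
  stack:
    name: base
    kind: ClusterStack
  store:
    name: default
    kind: ClusterStore
  order:
  - group:
    - id: paketo-buildpacks/java
  - group:
    - id: paketo-buildpacks/nodejs
```

Adding support for another language family is then just a matter of appending a buildpack to the store and the builder's order.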
And that's how kpack effectively gets installed on any Kubernetes cluster you might have. So in this example, we have kpack running — let's see if I can... I'm using K9s to connect to a local Kubernetes cluster, and these are the two pods that make up kpack. If I describe one, you can see all of that information gets applied here, and this is the kpack control plane that references all of those buildpacks and other pieces. So it's a very simple thing to do: on your Kubernetes cluster, whether it's local or remote, you can install kpack and have it create all of these images as you go along. Yeah, internally it uses buildpacks. And one of the things I like about kpack is all of the native integrations it comes with for a lot of the supply chain security work that I've recently discovered. I have always been an engineer, which means I don't care about security — or I don't know if that's fair to say, hopefully my manager is not watching — but I've never been one to pay attention to security from the get-go. That attitude is changing with a lot of the recent supply chain attacks and things like that, and what's facilitating a large part of that change is how many of these aspects kpack comes with support for. Now I wanted to highlight the OpenSSF a little bit. This is Honk, the OpenSSF mascot. The OpenSSF — the Open Source Security Foundation — has a mobilization plan that lays out ten streams of investment. Among those ten streams, they advocate the use of SBOMs, they advocate the use of digital signatures, and they advocate creating more secure and more isolated build environments. Those are three areas where I was performing a bunch of experiments over the summer.
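The install itself is essentially one apply of the kpack release YAML, followed by the resources shown earlier. This is a hedged sketch: the release URL and version tag are illustrative, so check the kpack releases page for the current one:

```shell
# Install the kpack control plane from a release manifest
# (the version tag here is a placeholder)
kubectl apply -f https://github.com/buildpacks-community/kpack/releases/download/<version>/release-<version>.yaml

# Confirm the kpack pods came up
kubectl get pods -n kpack

# Then apply your own resources: service account, store, stack, builder, image
kubectl apply -f service-account.yaml
kubectl apply -f store.yaml -f stack.yaml -f builder.yaml
```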
And I noticed that of all the tools available, kpack was one of the more convenient ones to apply these principles with. Some of these principles also include building more trust along build pipelines. If you're aware of the CNCF TAGs — the technical advisory groups through which the CNCF organizes a lot of its work in different areas — there's a TAG for supply chain security within the CNCF that published a reference white paper on how to do supply chain security when you work with CNCF projects. That TAG also published what is known as FRSCA — the Factory for Repeatable Secure Creation of Artifacts — a reference architecture that uses various CNCF projects to do repeatable builds of software artifacts: basically, to repeatedly create container images that are also signed and somewhat more secure than regular container images. I took that same reference architecture and tried to apply different tools to it while preserving the same principles. The first thing I hit upon was generating SBOMs. When you use buildpacks, the specification dictates that every single build that makes use of buildpacks generates SBOMs automatically — it's part of the spec. So if you're in the business of creating containers and you want to know what's inside your container, but you don't really want to do a lot of extra work exposing what's inside it, then buildpacks, through kpack, export SBOMs in a very native fashion. When you use kpack, because of its use of buildpacks, you get SBOMs for free, basically. The other area where a lot of effort has gone into highlighting is what is known as SLSA — Supply-chain Levels for Software Artifacts. There are a lot of areas involved when you create a container image, or any software artifact in general, and the SLSA levels are meant as guidelines for helping secure each of them individually.
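One way to pull those automatically generated SBOMs back out of a buildpacks-built image is the `pack` CLI's SBOM subcommand. This is a sketch — the image name is a placeholder, and the exact output layout can vary by pack version:

```shell
# Download the SBOMs that the buildpacks lifecycle attached to an image
pack sbom download docker.io/<your-username>/demo-app

# The SBOM files are written locally, one set per buildpack layer
# that contributed dependencies (commonly CycloneDX/SPDX/Syft JSON)
ls layers/sbom
```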
So there are different best practices you have to adhere to when you're writing code, for example; different practices when you're doing deploys; and different things for the build phase. SLSA covers a lot of these areas. A lot of it is based on build practices that happened at Google, were open-sourced later, and that a lot of different members of the community then came together to improve. The project website is slsa.dev — you can learn more about the project there, and you can contribute security protocols and best practices if that's your wheelhouse. They basically put out information about how to design your build infrastructure to be more secure. The other thing I toyed with when using kpack, and was really happy about, was its native integration with Cosign. Any users of Sigstore projects, or Cosign, here? Good. Cosign is what is known as a Sigstore project. Sigstore is a collection of projects that basically exist to build more trust and verify identity around container images. The goal of the Sigstore projects, collectively, is to enable people to verify the authenticity of who built a particular image. So if you're in the business of doing a lot of image builds and you want consumers of those images to be able to verify your identity, then the Sigstore family of projects, particularly Cosign, is very useful. There's native integration for Cosign within the kpack ecosystem, which made it another reason why it's a great choice for doing builds. This is an example of the buildpacks architecture — I'm making use of Paketo Buildpacks. Paketo is a family of open-source buildpacks suitable for production-grade workflows; we know tons of people who are using them in production.
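Wiring Cosign into kpack boils down to giving the build service account a signing key. A common pattern, sketched here with placeholder names, is to let the Cosign CLI write its key pair straight into a Kubernetes secret and then list that secret on the service account kpack uses:

```shell
# Generate a Cosign key pair directly into a Kubernetes secret
# (namespace and secret name are placeholders)
cosign generate-key-pair k8s://default/cosign-keys

# Attach it to the service account that kpack builds run as;
# kpack then signs each built image with this key before pushing
kubectl patch serviceaccount kpack-service-account \
  --type json \
  -p '[{"op":"add","path":"/secrets/-","value":{"name":"cosign-keys"}}]'
```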
So this demo makes use of the Node.js family of Paketo Buildpacks. And this is an example of what the different SLSA levels mean for a build process. Because kpack focuses on the build, I'll just highlight the build aspects of the SLSA levels. As you can see, it's an incrementally growing set of best practices for how to design, or how to architect, your build environment. At the lowest level, you have maybe an undocumented script doing a build. At the highest level, you have a repeatable and, as they call it, hermetic, parameterless build — which basically means your build happens in an environment that is sufficiently isolated that only certain people have access to it, and changes feeding into it go through a two-person review. The other levels sit somewhere between that kind of automation and sophistication and having nothing at all, so it allows people to incrementally build a better build process for themselves. This is the handshake within the Cosign — or really the Sigstore — family of projects. Cosign is at the center of it, and then there are what are known as Fulcio and Rekor: Fulcio lets you run a certificate authority, and Rekor is a transparency log used to verify signatures. You can run these on any private infrastructure you own and make sure that images you publish go through a verification stage before they're output. Again, all of these projects are independent of kpack itself. You can take the principles here and apply them on your own; it doesn't have to be done in the context of kpack. I'm demonstrating it in the context of kpack because it's just a more convenient way to make it happen, and kpack has all of those other advantages that it might make sense to take advantage of as well. So, like I've mentioned four times now, all of these are natively supported with kpack.
And let's look at a quick demo. For this demo I have, like I showed, a kind cluster running on my laptop. Again, not the most secure way to do things, but it's more for illustrative purposes — your Kubernetes cluster could run anywhere, and this demo and the principles contained in it are just as portable and functional. Like I mentioned, kpack sits at the heart of everything. It will take source code from a GitHub repo, create a container out of it using buildpacks, sign that container, and upload it to a public Docker Hub instance. What I'm also doing in the demo is showing a rebuild: for one change that I make on GitHub, it will rebuild the container, sign it once again, upload it once again, and verify it once again. So let's quickly walk through what that demo looks like. I started this at the beginning, and the first thing it does is create a kind cluster — again, slightly outside the scope of this demo, but I wanted to invent the universe from scratch. What's happening here is that kpack is being applied as a resource to the Kubernetes cluster, and that installs kpack on the cluster. Then it makes use of that service account I showed you, which has two parts: regular Docker Hub credentials and Cosign credentials. I create those as Kubernetes secrets and attach them to a service account here. Once that's done, you can see the store and the stack and all of those pieces that basically make up kpack being created within the cluster. Once that's done, it gets into the first image build — even that is a kpack resource. So this is how the image is being built: the image basically consists of a URL pointing to a remote repository somewhere and a Git revision.
These are the two things kpack needs in order to trigger a build from that particular code in that particular repo. If you've been paying attention and not sleeping after those wonderful snacks and lunch and what have you, you'll notice that this is where kpack kicks things off. From here on, the buildpacks lifecycle takes over. The first stage of the lifecycle is to detect the language of the code being used — here it detected a Node.js application — and then it goes through the process of building the different layers of that application. As it builds each layer, it generates an SBOM for that particular layer. The Paketo buildpacks go through the whole lifecycle and, like I said, generate SBOMs for each of the participating layers, and finally a container image is exported. This is the name of the image, and before completing the export and uploading to Docker Hub, Cosign kicks in, signs the image here, and then pushes the image along with that signature. Next, I've written a cosign verify step just to make sure that the image we built was actually built by the identity I supplied — in this case, my personal credentials. And this is where a rebuild starts: instead of the original Git revision, we now supply a new Git revision in the same manner. kpack detects that a change has occurred and goes through the exact same steps, except a lot of parts are simply restored from cache — both data and metadata for the various layers — and this kind of caching can give you significant speed advantages when you're using kpack. It goes through the same process, essentially, signs the build a second time, and verifies it a second time. If we check Docker Hub, we'll notice that the signature file sits alongside the image there.
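The verify step and the rebuild trigger from the demo can be sketched roughly like this. The image name, public-key file, and commit SHA are placeholders, and the patch assumes the Image resource from earlier in the talk:

```shell
# Verify the pushed image against the public half of the signing key
cosign verify --key cosign.pub docker.io/<your-username>/demo-app

# Trigger a rebuild by pointing the kpack Image at a new Git revision;
# kpack notices the change and reruns the (mostly cached) lifecycle
kubectl patch images.kpack.io demo-app --type merge \
  -p '{"spec":{"source":{"git":{"revision":"<new-commit-sha>"}}}}'
```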
So basically it'll have information about all of this: along with the different image layers, there's also a signature layer here, which represents the signature that Cosign uploaded. Finally, the same image also carries all of the SBOMs. If you want to know what's inside the image — what the different dependencies are, and what licenses are associated with those dependencies — you can see them here. If you were in my previous talk, I sort of promised to show this to folks, and so here's a way of knowing exactly what license each of the different dependencies in this application has. So, yeah, it's basically the trifecta of who is building the image, what is being built, and where it is being built — and all of that together can answer a lot of the supply chain security questions that people might have. Again, kpack is fully open source and part of the Buildpacks community under the CNCF; you can check out its GitHub repo. Obviously, the project needs as much love as it can get, and the community is always looking for contributors and maintainers. So if you're even remotely interested in an automated Kubernetes build service, I would highly recommend getting hands on with kpack. That being said, thank you for coming to my talk. I'm happy to take any questions, and feel free to connect on any social media — I'm Ramayangar in most places. Thank you so much.