It's like being the referee at the start of a rugby game, looking over to the sideline and asking: when's the camera starting? Okay. Bonsoir et bienvenue, welcome to Paris. It's half five in the evening and we have a full house. I'm really impressed; well done on your staying power, folks. This is really cool. As you probably know, because you've read the title of the talk, Juan and I are here to represent the Cloud Native Buildpacks project. I suppose the first thing to say up front is that I'm going to do a little introduction for the first ten minutes or so. This is a maintainer-track talk, so some folks are going to expect more technical details, which Juan is going to go into, and then we'll conclude at the end with some stuff that everybody might be interested in as well.

So, Cloud Native Buildpacks. What we do in Cloud Native Buildpacks is develop a specification, hosted at buildpacks.io. There are multiple implementations of this specification, but all of them share the same intention: we want to transform source projects into production application images. Buildpacks do this by understanding your existing application's build system, for example understanding npm if you are building a Node.js application. We essentially orchestrate the build using your existing build system, but output an OCI image, a Docker image, that you can then use to deploy on Kubernetes.

So, we want to start with some kind of application source. I've got a demo here, whose source I'm not going to show you, but trust me, I'm a professional. It has a back end written in Go, and it's got a front end written in TypeScript, which needs to be compiled down to JavaScript and then served up using nginx or something. What we're going to do to build this is install the pack CLI from buildpacks.io; that's our command-line tool, called pack. Then I'm going to choose a builder image; you can see the command at the top of the screen. A builder image is a set of buildpacks, a collection of buildpacks, which we package into a single OCI image for distribution. Here I'm going to use a set of buildpacks from the marvellous Paketo project, an open-source project. However, I could have chosen buildpacks from Heroku, who make an excellent set of buildpacks, or I could have chosen a set of buildpacks from Google, because they specialise their buildpacks for Google Cloud Run and some of their other platforms. I run pack, as you can see, in my source directory, passing in the builder image, and after pack runs through all the stages specified in the Cloud Native Buildpacks specification, we end up with an image that we can use, and we can execute that output image in the usual manner, as you can see on this slide.
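As a rough sketch, the local workflow just described looks like this; the image name and builder shown here are illustrative, not the exact ones from the demo:

```shell
# Build an OCI image from the current directory with the pack CLI,
# using a Paketo builder image; no Dockerfile is needed.
pack build my-app:latest \
  --builder paketobuildpacks/builder-jammy-base \
  --path .

# The output image runs like any other container image.
docker run --rm -p 8080:8080 my-app:latest
```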
I'm using podman to launch the back-end service on port 8080, but you could use Docker, or you could use Kubernetes, or, if you're really cool, you could just directly invoke runc. So this launches my Go back end, and then I'm choosing a separate entry point on the image to launch my front-end service on port 3000, and this launches my React front end. Visiting localhost in a web browser shows the hello-world message, in tiny writing at the bottom, that's passed through from the back end to my front end.

Now, pack is a great tool for local development, but most of us want to scale our image builds by incorporating pack into existing CI, into existing continuous-integration systems, and pack can be integrated into your existing Jenkins build. You can go and write your own Jenkins pipeline if you'd like to; it's straightforward, and I've done it many times. Or you could adopt the open-source Project Piper Jenkins library, which you can see on the right-hand side, and that Jenkins library can be used to run a CNB build, a Cloud Native Buildpacks build. If instead you're a GitHub Actions user, you can use a GitHub Action to make pack available, and then your project can invoke pack to build and push images to a registry. The project documentation at buildpacks.io/docs demonstrates how to integrate pack with Tekton as well. And if I've not mentioned your favourite CI system, then please do look at our documentation for a longer list of integrations; we're open to PRs if you've got an integration that we don't have documented.

In addition to the pack command-line tool, we maintain a Kubernetes operator, which is cunningly called kpack, and which acts as an image-build cluster.
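A minimal kpack Image resource, of the kind discussed next, might look like this; every name, tag, and URL below is a placeholder rather than the real demo configuration:

```yaml
# Illustrative kpack Image resource: kpack watches the Git source and
# rebuilds the image on every change to the tracked revision.
apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
  name: demo-app
spec:
  tag: registry.example.com/demo-app        # where built images are pushed
  serviceAccountName: kpack-sa              # holds registry/git credentials
  builder:
    name: demo-builder
    kind: ClusterBuilder
  source:
    git:
      url: https://github.com/example/demo-app
      revision: main
```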
On the left-hand side of the slide you can see an example Image custom resource definition, with some fake bits put in, because I didn't want to give you the real location of this image; I had to pay Google to host it, and that would kill my billing at the end of the month. The Image CRD is applied to the kpack cluster. On the right-hand side you can see a real example, where I actually do build the application straight from the source repository on GitHub. So kpack is monitoring the source repository on GitHub, looking for changes, and any time there's a change to the mainline it builds a new image, progressing through each of the steps defined, again, in the Cloud Native Buildpacks specification.

Now, sticking with the kpack-built image, I'll create a deployment on my local k3s cluster, which is working on my actual laptop. You can see in the deployment that there is a single image that contains the default entry point to run our back-end service. We apply that to our cluster, et voilà (that's for the French people): we can hit the back end with a GET request. And if we wanted to create an HTTP ingress to the cluster, well, then it will have taken us a total of 6 minutes and 10 seconds to go from source code to a working deployment. If we were to automate all of that, then the time gets reduced to simply the amount of time it takes to build the image, plus some change. Now, at the start of the presentation,
I stated that buildpacks build production application images. These are the kind of images that are intended to run as web applications on a service such as Heroku or VMware's Tanzu Platform, or possibly on a functions-as-a-service platform such as Google Cloud Run. They may even be things that you want to run as jobs in an Argo workflow. Or, if you attended the keynote yesterday, my colleagues were talking about using buildpacks to build images that run as machine-learning training jobs, or AI services running on top of Knative. But the thing is, by production images I mean images with the kind of properties that we've listed up here: they have a small attack surface, they've got a full software bill of materials, we design things to run with reduced privileges, so there are non-root users on them, and, ideally, reproducible builds.

By small attack surface I mean that, ideally, images contain only the dependencies that your application requires, and a minimal run image would be a scratch image.
That's using a non-root user; that's pretty much the smallest attack surface that we can present to any external interface. But in reality, to support something like a Python runtime, you probably need a libc on that image, and, to get even closer to reality, your minimal image, if it needed to support a JVM, certainly needs to have libc on it, and libfreetype. I'm not sure why, and if somebody can tell me I'd be really interested. In these cases the run image generally does not need a shell, unless your application code uses one, so you can put a run image into your buildpacks build that doesn't contain a shell. Now, I've used the default Paketo run image in this case, which is described by the upstream folk, the Paketo folks who we know and love, as an Ubuntu Jammy image with some common dependencies, like tzdata (the time-zone data) and OpenSSL. Using small images improves the performance of image distribution simply by being easier to cache and smaller to download, which makes sense.

Because buildpacks orchestrate your application build, the system is aware of all the build-time dependencies that your application uses. For example, if you had a Python application and it used GCC to compile some native dependency, well, buildpacks then have enough knowledge to add GCC to your software bill of materials at build time. It's kind of easy to overlook the value of software bills of materials; I know we've been talking about them for a few years at this stage, but particularly from an EU perspective right now. Obviously not in the Middle East, but, you know, sorry,
I've got to cater to this audience. From an EU perspective, the software bill of materials is a cornerstone of many of our technical processes for addressing the recent Digital Operational Resilience Act obligations that we have.

The final thing I want to talk about here is reproducible builds. If you use pinned dependencies (Go does this by default with your go.sum file, Node has its package-lock.json file, and in Python you can pip freeze to pin your dependencies), and if you build the same commit of the same source code using the same builder, then two separate image builds using pack will produce images that are byte-for-byte equivalent to each other. You can see a really interesting side effect of this if you exec into one of the running images in a pod and then ls -ld the /cnb directory: you'll see that the creation time for all the files is the 1st of January 1980. This makes sure that simple things like the creation time or modification time of files on the image do not make the images differ, and we get the byte-for-byte reproducibility.

So, in summary: we've looked at how to build images using the pack CLI, we've had a little look at how to build images using the kpack Kubernetes operator, and we've had a look at the properties that we expect from output production application images. So far, fantastic. At this stage, I'd like to check with you all to see how much of what we've covered is new. Could I get you to wave at me if that was new content to you? Cool. Okay. Wow. So, about 60% of the room.
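As a quick aside before moving on: the 1980 timestamps mentioned a moment ago are the mechanism behind that byte-for-byte claim, and the idea can be sketched in a few lines of Python. The file names and contents here are invented for illustration, and the real lifecycle differs in detail; the point is only that normalising file metadata makes two archives of the same content hash identically:

```python
# Sketch: if every entry in a tar layer gets the same normalized
# timestamp and ownership, building the layer twice from the same
# content yields byte-for-byte identical archives (identical digests).
import hashlib
import io
import tarfile

FIXED_MTIME = 315532800  # 1980-01-01T00:00:00Z

def layer_digest(files):
    """Build an uncompressed tar layer in memory and return its sha256."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in sorted(files.items()):  # stable entry ordering
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            info.mtime = FIXED_MTIME              # normalized timestamp
            info.uid = info.gid = 0               # normalized ownership
            tar.addfile(info, io.BytesIO(data))
    return hashlib.sha256(buf.getvalue()).hexdigest()

files = {"app/main.go": b"package main\n", "go.sum": b""}
assert layer_digest(files) == layer_digest(files)  # identical across builds
```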
This is brand new to about 60% of you, so we've just introduced buildpacks to you. And about 40% of the room are saying: we want more, Aidan, give us more. Okay. So, we've had a look at building images using pack and kpack, and the question at this stage is: does it scale? The answer is that pack and kpack scale to VMware Tanzu scale, or, similarly, pack and kpack scale to Heroku scale, and both VMware Tanzu and Heroku provide collections of buildpacks with wide support for many language stacks. By contrast, in yesterday's keynote my colleagues from Bloomberg described how buildpacks are used to build training workloads on Bloomberg's data-science platform, and Leon also described how buildpacks are used to build production AI services at Bloomberg. Another example of the kind of scale that can be achieved by buildpacks is in functions-as-a-service, which you can see as an example on the right-hand side, on Google Cloud Run. This is me actually executing that gcloud build command down here and capturing the output, and you can see in the output that it's running through the same steps that we expect from the buildpacks specification. Google's gcloud command-line tool can build your function using the Google collection of buildpacks.

So at this stage we've looked at some of the existing features of buildpacks that allow them to scale to large-scale image-build platforms, and we've even seen examples of some of those platforms. I am now going to hand over to Juan to show you the progress that we've made on new features over the past year.

Thank you, Aidan. Cool: the future. Right, let's see what we have here. One of the things that's being asked for by the community is: when are we going to have better support for multi-arch? The idea behind that is that right now most people are probably using an M1 or M2 local machine, and they want to actually build their images on those local
machines, right? So let's start by answering this question: what is a multi-architecture OCI image? A multi-architecture OCI image is based on an OCI image manifest, and that manifest basically describes your image uniquely; it has all the layers inside of it. When you run, for example, pack build right now and you inspect your image in the daemon, you will see that manifest. Now, if you want to support multi-arch, then you need an image index in front of your manifest. An image index is just a collection of manifests, plus some metadata that specifies which platform each manifest supports. That's all part of the OCI spec; nothing new, we knew this already, and that's cool.

Now, just to make things clearer, let's take a look at what an image index looks like inside. It's just a JSON file. If we use the crane tool to inspect the busybox image that everybody is probably using right now, you will see that we get this image index. For the purposes of the demo, the image index just has two manifests inside of it, one for amd64 and one for arm. Now, why is this useful? Why do I need to care about this? Let's put ourselves in a developer's shoes: you just want to run docker pull and docker run, and you don't want to worry about your host OS and your host architecture. You just want to do docker pull, and that's it. That's the benefit, that's the idea behind the image index: you don't need to care. So we want to do the same thing. The point is, we want to run pack build, as Aidan already showed before, and we want to do it on an Arm machine or an amd64 machine, and we expect that everything should work. So, what is Cloud Native Buildpacks doing to support that?
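Concretely, the platform matching that docker pull performs against an image index can be sketched in a few lines of Python. The field names follow the OCI image index schema, while the digests here are invented placeholders:

```python
# Sketch of the selection a container runtime performs when pulling a
# multi-architecture image: walk the image index and pick the manifest
# whose platform matches the host. Digests are fake, for illustration.
index = {
    "schemaVersion": 2,
    "manifests": [
        {"digest": "sha256:fake-amd64",
         "platform": {"os": "linux", "architecture": "amd64"}},
        {"digest": "sha256:fake-arm64",
         "platform": {"os": "linux", "architecture": "arm64"}},
    ],
}

def select_manifest(index, os_name, arch):
    """Return the digest of the manifest matching the requested platform."""
    for m in index["manifests"]:
        p = m.get("platform", {})
        if p.get("os") == os_name and p.get("architecture") == arch:
            return m["digest"]
    raise LookupError(f"no manifest for {os_name}/{arch}")

print(select_manifest(index, "linux", "arm64"))  # the arm64 digest
```

This is exactly why a developer on an M1 or M2 machine never has to think about architecture: the runtime does this lookup on their behalf.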
Building multi-architecture images. Let me frame the problem; it's something like this. I already told you we just need image indexes, and image indexes are part of the OCI spec. Cool. Right now we could build those image indexes, but the problem is that our friends the buildpack authors and providers do not have the tooling to make that work easy. If we look at this compatibility-check table, on one side you will see all the different components that we distribute in a registry: buildpacks, builders, and the lifecycle. And on the other side you will see that for Linux amd64 we are okay, there are no gaps, but for Arm machines, for the Arm architecture, we have technical debt on buildpacks and builders, and that's the problem we have right now. Buildpacks and builders are part of the job that buildpack authors are doing.

So, to answer the question of what we are doing: there is an RFC to update the pack CLI to allow the pack buildpack package command and the pack builder create command to handle the image-index creation for you. We want to make authors' lives easier. You can scan the QR code if you want more details, but in summary the RFC proposes two main things. The first one is a new folder structure to organise your binaries: you are a buildpack author, and you want to provide the different binaries you need for different OSes and architectures, so you will now be able to organise your binaries according to the OS, the architecture, and maybe the variant, all the flavours you need. The other change is that you will have to update some configuration files, maybe the buildpack.toml or the package.toml, depending on what you're building, to include targets. Those are the two changes, but let's try to see how it will work in a real demo. Give me just one second here. Okay: we are going to use our samples repo.
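The folder structure the RFC proposes looks roughly like this; the exact layout is defined in the RFC itself, and these paths are illustrative:

```
hello-world/
├── buildpack.toml          # gains [[targets]] entries (shown below in the demo)
└── linux/
    ├── amd64/
    │   └── bin/
    │       ├── build
    │       └── detect
    └── arm64/
        └── bin/
            ├── build
            └── detect
```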
Okay, the buildpack samples repo. Inside that repo, in this path, inside the buildpacks folder, you will find several example buildpacks. Let's take a look at the hello-world buildpack as it is right now, the one that we have. If we inspect what's inside that folder, you will see there is a bin folder with two binaries, build and detect (that's part of the spec), and then you have a buildpack.toml and a package.toml. If you just inspect that, you will see the problem: if I'm a buildpack author, what do I do if I want to create multi-arch images, if I need to create different build and detect binaries per architecture? That's the real problem. Now, what we're going to do is apply the change proposed in the RFC, and see how the hello-world buildpack is going to look.

Cool. So we just reorganised the binaries, probably recompiled them, and we put them in different folders according to the RFC. That's the first change. The other change: let's take a look at the buildpack.toml, because we need to make some changes there. What are the changes? We need to add targets. If you see the mark there on the left, it's exactly what we just changed: we added the targets we want to support. Now, let's use the pack binary compiled with the new features (which are still in progress) and take a look at what happens. We are going to run the pack buildpack package command, save the result to some repo (I'm running just a local registry), and publish it. Now, what just happened? pack is going to read the buildpack.toml, it's going to find more than one target, and for each target it's going to create a single image and push it to the registry, and once that's done it's going to create the image index for us.
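The targets change to buildpack.toml that the demo describes looks roughly like this; the id, version, and API value are illustrative:

```toml
# Illustrative buildpack.toml declaring two build targets.
api = "0.10"

[buildpack]
id = "samples/hello-world"
version = "0.0.1"

[[targets]]
os = "linux"
arch = "amd64"

[[targets]]
os = "linux"
arch = "arm64"
```

With that in place, a single `pack buildpack package <image> --publish` run builds one image per declared target and then assembles the image index, as described above. A `package.toml` for a composite buildpack, and a `builder.toml` for a builder, take the same kind of `[[targets]]` entries.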
So let's take a look, using crane, similar to what we did before with busybox, and you will see that, effectively, we have this image index, and now we are supporting amd64 and arm64 for the hello-world example, with those two changes. Nice. But sometimes things are more complicated, right? (I did some stuff behind the scenes; it doesn't matter, it's just for this demo.) What happens if we have a multi-architecture composite buildpack? What do I need to do if I'm a buildpack author? Well, the answer is the same thing: you just need an update, but in this case it's not the buildpack.toml file, you will need to update your package.toml. And what do you need to add there? The same thing: you need to define the targets. The other changes are just for the purposes of the demo: we are using a local registry where we already published the hello-world multi-arch buildpack, and behind the scenes I just built the hello-moon buildpack, just to save some time. Cool.

Now we've updated our package.toml for our composite buildpack, and we want to package it. We are going to execute the same command, the pack buildpack package command, but in this case we're also going to pass a flag with the configuration file that contains the changes we just made. Let's run it and, similar to the previous example, pack is going to read the package.toml, notice there's more than one target, create all the intermediate images, and at the end of the day create the image index for you. Awesome.

Now, things could be more interesting. You can also inspect the manifest for this composite buildpack. I also built the builder and the run images behind the scenes, again just to save some time. So right now we already have multi-arch buildpacks: hello-world,
hello-moon, and hello-universe, which is the composite buildpack, plus a builder and a run image that are also multi-arch. Now it's time to create a multi-arch builder. How do we do it? No magic behind the scenes; it's just a matter of updating your builder.toml. You need to add your targets, and then again execute the pack builder create command, with the configuration file you just updated and the publish flag to save it into the local registry we are using, and then we will have this multi-arch builder. pack is going to do all the heavy lifting, looking for the correct image according to the OS and platform that you are building on, and at the end you can inspect the builder, distribute it to your team, and you're done. Awesome. And you're going to ship that now? Yeah, that's right, after the talk. Can we get a coffee first?

Right. I need a drum roll. Natalie, can you start a drum roll? Come on, people, I need a drum roll; it's late in the day, we need your energy. We are delighted to announce that kpack now supports the highest level of SLSA compliance. For those of you who are not aware, SLSA (at slsa.dev) is a security framework for artifact build systems. The slide here shows a summary table that I've taken from their documentation at slsa.dev, and that documentation describes levels one through three of SLSA compliance. From a European perspective, and we are in Paris, slsa.dev gives us a list of technical objectives that we can then relate back to our obligations under the various network and information security directives, and particularly under sector-specific obligations such as the Digital Operational Resilience Act. I'm a technologist:
I need more operational things than the legislation, and that's what slsa.dev gives me at the moment. So, kpack with the SLSA attestation feature enabled provides SLSA level 1 compliance. This is because it comes with a consistent build process and it generates a provenance document; if you're interested, it's an in-toto attestation, to which we add the build provenance. But kpack with the SLSA attestation feature enabled, and with a signing key provided, provides SLSA level 3 compliance, and level 3 describes what they call, upstream, hardened builds. The build occurs on a Kubernetes cluster, and usually this means it's on dedicated infrastructure. The signing private keys are provided via a Kubernetes Secret, which uses RBAC to ensure that there's minimal access to that secret. The pods running the builds are themselves isolated from each other via the standard Kubernetes primitives, and builds are run using a buildpacks builder, which is an immutable OCI image, as Juan just showed us, and this prevents tampering with the build steps during the build process. Finally, the private keys that are used to sign the attestation only become accessible during the completion stage of the process, so they're not accessible at any other step in the build process. That's how we get to level 3 compliance, and we are very grateful, and we would love to thank the kpack team, as part of the Buildpacks organisation, for all their work in getting to this milestone. Thank you very much, kpack team; this is often unseen work.

So, what have we done?
We've demonstrated the operation of pack, and hopefully convinced a bunch of you to just try pack as a command-line tool. We've also demonstrated kpack, and shown you that there's a Kubernetes operator there to build all your images directly from your source repositories. And we've provided examples of how buildpacks build production images at scale. In addition, the Cloud Native Buildpacks specification itself, and pack and kpack, were designed from the ground up with security in mind; this was with things like DORA in mind, but done five or six years ago, because, you know, some people are very clever. We have an ongoing investment in the buildpacks project on things like SLSA compliance, to ensure we have the highest level of security, and we've got an ongoing investment, as you've seen, in multi-architecture support. What Juan just demoed hasn't shipped yet, because we've got to crack that whip and get them to ship it, but this is on top of our five-plus years of existing production-deployment pedigree. So we feel like we're delivering trustworthy systems that can be trusted in production. People of Paris, thank you very much for staying late to listen to us, and we're happy to take any questions you can throw in our direction.

Hi, thank you very much for the great project; I think it's a wonderful project. I was a big fan of Heroku way back in the day, and that whole experience of just git push and it just works. You demonstrated this application with a back end and a front end, and I'm not familiar; there was this thing that was deprecated, I can't remember the name of it, but it stuck. Is there something that allows me to generate separate images for the back end and the front end? I mean, I would probably have my own builder for that, but... or are there builders that do that already?
Yeah, so I intentionally made that demo just a slight bit more complicated, because it was building on a talk that we gave last year about ways to extend and configure the build process; Natalie gave a talk last year on how to extend the buildpacks process. What that demo actually does is pull together a bunch of different buildpacks, mixing and matching them to get the output that I want. Now, can you build the Go code in that repository and output one image, and then the Node.js code, the TypeScript code, in that repository? Yes: you'd end up executing pack twice, once to build just the Go application in that repository, and a second time, with a Node.js-only builder, to build the TypeScript code in that repository. Does that make sense as an answer?

It does. The thing that I'm concerned about here is artifact distribution, right? Let's say I have a big monorepo and I don't necessarily want to build all those services and squash them into one big image; I'd like to fan out.

Yeah. My understanding right now is that the only way we could achieve that is by running pack once per application that you're building from that repository, so you'd effectively have multiple targets and you'd run pack for each target. Now, because you are running pack in that repository, I'm not going to comment on how the caching gets shared, because the people who know about caching better than I do are sitting in the front row, and maybe we can talk about that afterwards. But it's a feature that has come up before. We have not yet written an RFC, a request for comments, which would be the first stage in adding the feature to pack, but if it's something that you're interested in, we'd happily help you write that RFC and see how we could get it implemented. Sure, yeah, thank you very much. I
So what if I do have Basically sequential build steps. So imagine I have some protobuf definitions in my repository that my go back and Needs and you need to generate types out of that. Can I model that with a build pack? Good question. Um I'm not as familiar with protobufs as I should be if I gave you an area code generation step ahead of Well, if I gave if I used a Node.js example, um, so the The the package of Jason file. Thank you. We've got two minutes left the package.json file would allow me to chain build steps in that package.json file and as build packs all we do is orchestrate the underlying build system that you're using so that if your build system is capable of Expressing that generate these files before you do that the build then yes, we could orchestrate that But for example with protobuf if I need to run protoc or CLI or something before running go build then that probably wouldn't work well No, the real answer here is actually and I don't know why I'm just not you build packs are composable units So if we're missing a build pack that does something like say runs protoc Then we would invite you or anyone in the community just to write that build pack and click it in as one of the Lego bricks inside one of these builders Then your build pack would examine to see is code generation needed if it is it will run If it isn't it will opt out of the build and just let the build run unfettered. Is that a better answer to your question? Yeah, thanks a lot. Good. Thank you We are out of time. There is a big triangle at the back of the room saying that we've run over time So thank you very much, but you know, we're hanging around to answer more questions. So yeah