Hello. Welcome. It's after lunch, so I know it's a tiring time; you've got to stay awake for all these talks. And just for the avoidance of doubt, this shirt does not mean I've lost a bet. It means I'm winning one, so I'm happy. Natalie and I are here today to talk not about awesome, violently pink shirts, but about buildpacks, and in particular about customizing the build process around buildpacks. What we want to do is introduce a handful of features, some of which you may know, some of which are very, very new, that let you customize the buildpacks build process. So I'm going to start with a really short demo, then we'll give an overview of the rest of the talk, and then we'll dive deeper into each of those features. The question I always try to ask myself is: what problem are we solving? The problem that buildpacks are trying to solve is: given my application source code, how do I get an OCI image out the far side? That's it. Given my application source code, how do I generate an OCI image, a.k.a. a Docker image, something you can deploy on top of Kubernetes, something you can run with docker run or podman run or whatever your chosen runtime environment is. Now, how many people here have used our pack tool before to generate an image? Can I see some hands in the air? Oh wow, that's roughly half the audience. Fantastic. That's more than we expected; we must be doing some good work. So I've got a quick example here of using pack to build an image. The typical user experience with the pack command-line tool is simply pack build and then your image name. I've explicitly added the --builder flag here to show that we're using the Paketo builder in this particular instance.
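In shell terms, the demo boils down to a single command. This is a sketch: the image name is illustrative, the specific Paketo builder tag is my assumption, and you need the pack CLI plus a container daemon installed for the build branch to actually run.

```shell
# Minimal sketch of the demo: build an OCI image straight from source.
# "example/polyglot-app" and the builder tag are illustrative choices.
if command -v pack >/dev/null 2>&1; then
  pack build example/polyglot-app \
    --builder paketobuildpacks/builder-jammy-base | tee pack-build.log
else
  # pack isn't installed; record that so the sketch still produces output
  echo "pack CLI not found: see buildpacks.io docs for install steps" > pack-build.log
fi
```

The same command works unchanged in a CI job, which is part of the appeal: there is no per-application Dockerfile to maintain.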
Now, we'll talk a bit more about builders and the entire buildpacks ecosystem in a bit, but it suffices to say for the moment that there are multiple providers of builders. The Buildpacks project is a vendor-neutral CNCF project; there are builders from Google, Heroku, VMware, and other providers I've probably forgotten, which is a good sign. But whatever builder you're using, the output image will have the following characteristics. The output image is going to be small, as small as possible, packing in only your application's dependencies. The output image is going to have a full software bill of materials: we know about the build process because we control the build process, and because we control it, we can tell you, for example, what version of the Go compiler was used to build your Go application. So the SBOM is pretty neat. Interestingly, we also have byte-for-byte reproducibility on images. If you build an image, don't change your application's dependencies, and rebuild that image, you will get a byte-for-byte identical image. That's particularly interesting for experimental workflows, whether in science, in data science, or in other regulated industries. And byte-for-byte reproducibility allows us to deploy some fairly advanced caching strategies. If we build an image layer that contains only your application dependencies, and those dependencies don't change, then we can reuse that layer when you rebuild your application: your application source code might change, but that layer doesn't necessarily need to change. I'm going slowly, sorry. We don't use root when we're building, which is very important; Natalie will come on to that a bit later.
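The caching strategy rests on that reproducibility, and the idea can be illustrated in miniature with plain shell. This is a toy sketch, not the actual lifecycle: imagine a layer keyed by a digest of its inputs, so that unchanged inputs mean a cache hit on rebuild.

```shell
# Toy model of layer caching: hash the dependency list; if the digest is
# unchanged on rebuild, the cached "layer" can be reused as-is.
printf 'express@4.18.2\nreact@18.2.0\n' > deps.txt
first=$(sha256sum deps.txt | cut -d ' ' -f 1)

# ...application source changes between builds, dependencies do not...
printf 'express@4.18.2\nreact@18.2.0\n' > deps.txt
second=$(sha256sum deps.txt | cut -d ' ' -f 1)

if [ "$first" = "$second" ]; then
  echo "cache hit: reuse layer $first" > cache-decision.txt
else
  echo "cache miss: rebuild layer" > cache-decision.txt
fi
```

The real lifecycle does this per layer against an image registry, but the decision rule is the same shape: identical inputs, identical digest, no rebuild.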
It's particularly important if you're the person providing the build farm for your application developers; that's me, and I don't want to give people root on all my boxes. Then Natalie will talk a little bit about rebasable images. What we've done in this demo is build a multi-language application: the application has a Go back end and a React front end in TypeScript. What's been produced is an image with two entry points, and you can spin up each of them. We can spin up the default entry point, which runs the back end, and we can spin up the entry point called web, which runs the front end. You can see a screenshot of it in the bottom right. Now, that image, the buildpacks.io image, I'll come to in a few slides' time, but you can see the amazing hello world that is returned as the payload from the back-end service. So what we have done, reasonably quickly, with just pack build example, is build a multi-language application from a monorepo. Now a little word on what goes into that build and what comes out of it. What goes in is your application source code. And you saw me explicitly use --builder. What is a builder? A builder is, again, just an OCI image. It's an easy way for us to distribute collections of buildpacks. In addition to the collection of buildpacks distributed on the builder, it will have a build image: that's the image the buildpacks actually run on whilst they do their work. And the builder will generally contain a pointer to a run image, which generally lives on a registry. That run image is used as the base layer in your output application image. As I said, if we have a look at the build output, what you'll see at the base layer is the run image, the run image that's pointed to by the builder.
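To make that anatomy concrete, builders are themselves assembled from a TOML description that pack consumes. This is a sketch from memory of the builder-definition format, with every image name and buildpack ID an illustrative placeholder; the exact schema has evolved across pack versions, so check the current pack documentation before using it.

```shell
# Sketch of a builder definition: buildpacks + detection order + the
# build image the buildpacks run on + the run image for output images.
# All names below are illustrative placeholders.
cat > builder.toml <<'EOF'
# Buildpacks to include in the builder
[[buildpacks]]
uri = "docker://docker.io/example/go-buildpack:latest"

# Detection order: which buildpack groups to try, in sequence
[[order]]
  [[order.group]]
  id = "example/go-buildpack"

# The image the buildpacks execute on at build time
[build]
image = "docker.io/example/build-image:latest"

# The run image used as the base of output application images
[run]
  [[run.images]]
  image = "docker.io/example/run-image:latest"
EOF
```

A platform operator would feed a file like this to pack's builder-creation command to publish an in-house builder with corporate defaults baked in.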
What you'll see now is that I'm going to build up this image in terms of layers. If you've not used pack before, it's very much akin to a multi-stage Docker build, but we would argue there are more controls and guardrails in buildpacks than in something like a multi-stage Docker build. What you can see here are the different layers being added to my application image. The Node.js buildpack in particular will contribute a node engine, the Node.js runtime, to the output image, and it will also contribute just the set of node modules that my application uses. Slightly by contrast, the Go buildpack contributes the output binary, the back-end binary, to my output image, but the supporting buildpacks that provide the Go compiler and a mechanism for downloading the vendored modules are not part of the output image. They run on the build image, but they don't have to be part of the output image. This allows us to keep the output image as small as possible and ship only the things that are necessary at runtime. If you've used buildpacks before, you'd expect all the nice goodness we claim: you get a software bill of materials that, as I said, is pretty complete; we'll ship your application source code on top of that; and we'll put the entry points in your image. Oh, and one more thing: I did mention caching. I'm not going to go into detail, but the byte-for-byte reproducibility of these layers does mean that we can cache most of them, so that when you rebuild, if any of these layers haven't changed (you're using the same node engine, for example) we just reuse that layer from the registry and don't have to rebuild it. There's a lot of efficiency built in there.
So that's a quick demo, and hopefully at this stage you'll agree with me that pack build is a nice, neat, straightforward way of taking your application source code and turning it into an output image. Now, there may be people in this audience saying: my build process isn't that straightforward, I need to customize one or two things. That's what we're going to talk about for the rest of the talk: five ways of customizing the buildpacks build, in roughly increasing order of complexity. We'll first talk about environment variables. We'll then talk about remixing the existing buildpacks. We'll talk about something we call inline buildpacks; that's where my little logo example comes back in. We'll then talk about a new feature that was released, when? Yesterday. Yes, this is hot off the press, folks: a new feature released yesterday called Dockerfile extensions. And then we'll point you to some documentation about writing your own buildpack. Right. My image contains a Go back end and a React front end. Can we customize the Go version used in that image? This used to work for another Irish guy. If I say "can we customize the Go version", you say what? Yes. Yes, we can. A great Irish guy came before me, Mr. Obama, and he popularized this. So if I say, can we customize the Go version used in the image, the answer is: yes, we can. And can we customize the Node.js version also used in the output image? The answer is... fantastic, thank you very much. And how do we figure out how to customize these things? Well, I'm using the Paketo builder. What I'm going to do is use my pack tool again, just to inspect the builder; it will print out all the buildpacks used in that builder, and then I can jump to the documentation that the Paketo folk have already written for us.
And you can see that their documentation tells you precisely what environment variables are available to customize things at build time. There is a BP_GO_VERSION environment variable that I can pass either on the command line or, as you can see in the bottom right-hand corner, put into a project.toml file in my source code, and then point that at Go version 1.20.3, which I think is the latest Go version as of today, if I wanted to. Now, I'm also using something slightly more advanced from the Paketo folk: their BP_KEEP_FILES environment variable. I went to their documentation and read that usually the Go buildpack reads the Go source, compiles the binary, and then doesn't put the Go source code on the output image, because that just doesn't make sense in Go land. But if I'm building a polyglot image, where I've got Go source code and TypeScript source code, I will want to keep all those files on the output image, and having read their documentation, I can use the BP_KEEP_FILES environment variable and keep all that lovely TypeScript on my output image. But environment variables aren't just for application developers; they are for platform operators as well. You're a platform operator if you're the person who maintains the CI pipelines that use pack, maybe a GitHub Action or a Jenkins pipeline of some sort, or if you use the Kubernetes operator kpack to monitor Git repositories and produce images any time those repositories change. As a platform operator, I can choose either to default end users, application developers, to specific values of these environment variables, or to override end users' environment variables, and that makes it easy for me to customize, support, and even enforce corporate standards.
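Concretely, the two styles look something like this. The variable names are the documented Paketo ones, but the values, the file-glob, and the project.toml schema details are illustrative, so treat this as a sketch rather than a copy-paste recipe.

```shell
# Option 1: pass build-time environment variables on the command line
# (shown commented out, since it needs the pack CLI installed):
#   pack build example/polyglot-app \
#     --env BP_GO_VERSION=1.20.3 \
#     --env BP_KEEP_FILES=frontend/dist/*

# Option 2: check a project.toml into the repository root instead,
# so every build of this repo picks up the same settings.
cat > project.toml <<'EOF'
[[build.env]]
name = "BP_GO_VERSION"
value = "1.20.3"

[[build.env]]
name = "BP_KEEP_FILES"
value = "frontend/dist/*"
EOF
```

The project.toml route is usually preferable for teams, because the customization is versioned alongside the source it applies to.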
So if we say that we only support Python 3.10 and Python 3.11, well, I can default to Python 3.10 and allow the end user to choose Python 3.11. The second thing we can do, in increasing order of complexity, is to start remixing the build order. Again, I'm using a builder from the Paketo project, and the Paketo project has amazingly wide support for a lot of language stacks: Ruby, Python, PHP, and more. But by default their builder builds either a Ruby application, or a Python application, or a Java application, and in my polyglot case I want to build a Go and TypeScript application at the same time. English is a terrible example language when you want an exclusive-or and an and in the same sentence, but hey: I want to build a project that is both a Go project and a TypeScript project. So the question is, I suppose: can we remix the build order of the buildpacks? And the answer is, thank you, glad I didn't forget that one, yes. It is another feature of this project.toml file I have here on the left-hand side. What I'm doing here is picking out two buildpacks in particular and saying that I want to build my project using only the Node.js buildpack and only the Go buildpack, and to use them both at the same time rather than exclusively. This allows us to reuse an awful lot of pre-existing functionality without going out and rewriting source code, or even going as far as writing a buildpack. Then there's a third way of extending the buildpacks build process, and here we're talking about something Natalie will return to later, something more like an escape hatch. Sometimes, just sometimes, you really need to run a shell script. I know it's not pretty, people, and I know we don't want to admit it, but: can I use curl to download an image or picture at build time and put it on the application image?
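The remixed project.toml from that slide looks roughly like this. The buildpack IDs are the public Paketo ones, but the schema details are a sketch from memory, so verify against the current project descriptor documentation.

```shell
# Sketch: pin the buildpack group explicitly so both language families
# run in one build, instead of the builder's either/or default order.
cat > project.toml <<'EOF'
[[build.buildpacks]]
id = "paketo-buildpacks/nodejs"

[[build.buildpacks]]
id = "paketo-buildpacks/go"
EOF
```

With the group pinned like this, detection runs exactly these two buildpacks, in this order, against the monorepo.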
Yes, we can. I can use curl, if curl exists on the build image, to take the buildpacks logo from buildpacks.io and write it out to logo.svg on the application image, and that's quite cool. In the last five or six lines here I've actually written a small buildpack in shell script. It's a restricted buildpack, so we tend to call it an inline buildpack, but it's a script that runs as a buildpack called example/logo. That brings us on to the more complex ways of doing things, and that is where I have to hand over to people who know more than I do. So, Natalie, please. Hello! OK, so we did say at the beginning of this talk that buildpacks take your source code and turn it into OCI images without the use of Dockerfiles, so you might be wondering at this stage what we mean by Dockerfile extensions and how they fit into a buildpacks build. To understand the need for this escape hatch, we have to talk a little bit about the CNB specification. We mentioned that buildpacks are a spec: anyone can write a buildpack, but those buildpacks are expected to conform to certain limitations. Among them, buildpacks run rootless; they're limited in what they can do. And this is by design. This is the value of Cloud Native Buildpacks: you're putting constraints around your build process and not allowing just anything to happen, only the things that you've allowed. But sometimes you just need to do something custom on a per-application basis. It might be downloading an image from the internet, or it might be installing an operating-system package that isn't on the base image, and in that case a buildpack is really too limited, so we need something else. And I have to ask you: can we use Dockerfiles to extend our build-time or runtime base images? As of, really, as of yesterday, yes. So let's talk about how we used to solve this problem. Here we have a base image; it could be our build-time base
image or our runtime base image; the strategy is much the same. We need a custom package for one of our applications, so we just install it ahead of time into the builder. But as you can imagine, the number of packages you've added to your base image can grow quite large, and each package may only be needed by one application. This is not ideal. So the new way we've enabled is: you start with your base image, which you can keep very lean, and then at build time we detect what additional actions might be required, and we generate a Dockerfile that is then applied to the build-time or runtime base image in order to extend it before the buildpacks run. After this whole process happens, you get the buildpack-provided layers, the SBOM, and all the things you've come to expect from your build. So how does it work? To take a step back: a buildpack, all it is, is a piece of software. It reads your application directory, it determines what actions might be needed to build that application, and it contributes dependency layers. We've introduced an analogous concept called an image extension that does something very similar: it reads your application directory, determines what actions might need to be taken in a Dockerfile, and generates that Dockerfile for the application as part of the build process. These are just examples of Dockerfiles that might be output. On the left here we have a build.Dockerfile, which, as you might expect, is applied to our build-time base image, and all it does is install curl. And we have a run.Dockerfile that will be used to extend the runtime base image, and it does the same thing. The run.Dockerfile is a little bit special: in the case of a build.Dockerfile, we really don't want to change the build-time base, because that already has the buildpacks that we've detected we need.
So we have this ARG base_image, FROM ${base_image} pattern, which basically just means: I don't care what build base image I'm starting from, just do this stuff on top of it. But in the case of a run.Dockerfile, we can actually use that FROM statement to switch to an entirely different base image, and this is actually quite powerful, because you could imagine having a fleet of run images available, each targeted at a particular language family, so you can keep those run images, again, very lean, with only the dependencies that are needed. To explain how this all fits together, we need to go into a little more detail about what actually happens during a buildpacks build, and I want to emphasize at this point that not every user, or even most users, of buildpacks needs to understand the process at this level of detail. We want to keep the app-developer experience as simple as pack build my-app. But as an operator, you might need to know a little more. So this is how we use Dockerfiles as part of a CNB build. We start with our application source directory and put it through a process called detect, where each of the buildpacks and the extensions gets a chance to look at the source code and determine whether it's actually needed during the build, and we also generate any Dockerfiles that might be necessary as part of that step. Then we branch in our flow: we apply the build Dockerfiles to the build-time base image and then run the buildpacks, so at this point the buildpack that needs to use curl will find curl installed. At the same time, we apply any run Dockerfiles to the runtime base image, and the resulting extended run image becomes the base of our final application image. So it all comes together: the extended run image plus the dependency layers that the buildpacks provided make the final image. So, at this point, as an app developer,
you might be thinking about all the things you can do with a Dockerfile, and you could even imagine providing a Dockerfile at the root of your source directory with all kinds of things you want to add to your application as part of the build process. As a platform operator, you might be a little worried about enabling something like that. You could write an extension that does exactly that, but more likely you're going to want some additional control over what actually gets installed as part of this process. So we've given an example here of an extension. It's just a shell script (and buildpacks can also be as simple as a shell script). It's given an output directory where it's expected to write files, and here it just writes out that run.Dockerfile that does the curl install. This is a really simple example, but you can imagine something more complicated: you have a list of approved packages, ones that are vetted and trusted by your organization; you detect whether the application requests any of those, and only install the ones you've already pre-approved. So as a platform operator you still retain total control over what goes into your build. Let's go back to our example of the pack build and all the nice things we say come along with using buildpacks: the small output image, the SBOM, the reproducible build. Some of those things might not be available if you're using Dockerfiles. In particular, some of us may be familiar with the limitations of caching with Dockerfiles; the rootless build, that's gone out the window; and if you're familiar with the rebasable feature of buildpacks, that's something that may no longer be available, since Dockerfile-provided layers might not be safe to rebase. So we do view this feature as something to be used carefully. It's an escape hatch, but it's something that we hope will help people
surmount the barrier of those first builds, and it could even be used as a way to iteratively improve your build process. So, finally, I think you already know the answer to this, but: can you write your own buildpack? Yes, you can. Again, with CNB, the project provides the specification and the tooling, but we look to our community to provide the buildpacks, and there are some really great providers, which we'll mention on the next slide. But yes, you can write your own buildpack, and you can develop it in any language. It can be as simple as the shell script we saw on the previous slide, or, and we recommend this, you can write it in Go; we're Go developers, and there's some great tooling you can take advantage of. We have a buildpack author guide and previous talks that go into much more detail about what a buildpack is expected to do and how it communicates with the CNB tooling, and we recommend you check those out, or come ask us, because we're always happy to talk about how buildpacks work. So, finally, just to summarize what we saw today: Adan gave us a wonderful demo of a pack build of a multi-language monorepo, so you can really see how powerful buildpacks are and the nice features they offer. At the same time, to customize your build, you can go from something as simple as providing an environment variable or remixing the build order, to writing your own very small, simple inline buildpack, to the more complex Dockerfile extensions, and on to writing your own buildpack. Again, on the ecosystem: there are a number of companies and organizations providing platforms, buildpacks, or both, and you could be one of them. We are an open-source project; we love to talk to contributors, and we love to hear from users and hear your feedback. We are in the project pavilion, we have this lovely swag here, so please come by and talk to us. Thank you! Fantastic. So we have about five minutes for questions, if there are any questions in the
audience. The gentleman there in blue? Oh, fantastic. "All right, so my question is mainly from my experience with Java native-image buildpacks: is it normal that a buildpack downloads over a gigabyte of compilers and libraries and everything, every time you build an image?" It depends; it really does depend. I don't have a huge amount of experience with the Paketo Java buildpacks, but I do know they'll point to a JVM that they need to pull down and extract, and then, if you're using something like Maven, they'll resolve your Maven dependencies, and resolving Maven dependencies, I mean, that's just a Maven problem, and it can take a lot of time. Now, having said that, depending on how you're doing your caching, the layers can be cached for subsequent builds, and by default pack should cache those layers for subsequent builds if it has enough space, so the subsequent builds should be cheap. But it's the same with any kind of process when you're building a container: if I were building a Java container on a RHEL base image, I'd have to pull down the JDK RPM and install that, and the JDK runtime is just the size that it is, similar to the fact that in my simple Node.js example, the Node.js runtime is about 250 megs, which is a huge amount. "Right, and is it then only cached after a successful build of the complete thing, or also after the subsequent steps?" Please, somebody who knows, answer! "Yes, we write the cache after we export the image, so if your build fails, that's too bad. But we do offer, depending on the platform you're using, the pack tool for example offers support for arbitrary volume mounts, so you can mount in a local cache. But we kind of put red flags around that, to say: really be sure you know what you're doing here, because that can introduce problems." The interesting thing about Java in particular is that the Spring Boot tooling itself uses a
core component of buildpacks, the lifecycle, to produce an image, so you don't actually even have to go as far as running our pack tool: the Spring Boot tooling uses the lifecycle, the core component of buildpacks, internally, to spit out an output image. So if you're using Spring, you can take that route. "Right, thank you." Cool. I see a question there, at the end of that row. "Thanks. Regarding the Dockerfile extension, I'm just unsure how I feed that Dockerfile into the build process. Is build.Dockerfile a set name, or how does that work?" Yeah, so the Dockerfile: it's the responsibility of the extension, which you can think of as kind of like a buildpack, it's the responsibility of that component to put the Dockerfile where the CNB lifecycle expects to find it. So you could write an extension that says: I'm just going to look in the application source directory, and if I see a build.Dockerfile there, copy it to the output directory. But we have the extension as that intermediary entity to inspect the Dockerfile and make sure you're only doing things you approve of in that step. "OK, thank you." Cool. Next, the person here in red, I think. "Hello, I have a more general question. I'm pretty new to buildpacks, and I was wondering where I should put buildpacks as a tool, and what buildpacks is responsible for. We are designing a CI/CD pipeline, we are migrating to a new stack, and I was wondering whether buildpacks is a tool I can customize to add some additional actions besides building, let's say running tests or static code analysis, or should I treat it as a narrow-responsibility block and put testing, static code analysis, and all the other stuff elsewhere? If you could clear that up." No, it's a really good question, and it's come up a bunch of times: what exactly is the responsibility of pack build? In its narrowest conception, we take your application source code and turn it into an image; what
you do with that image afterwards is up to you. You can then use that image, maybe, to run tests as part of a subsequent CI pipeline step, or you can go ahead and deploy it if you want to. Now, the question has come up so often that there is a proposal now to delineate execution environments and make the buildpacks process more amenable to building a test-focused image and then a subsequent production-focused image. So it's something we've had requests for and that we're working on. "OK, so I understand: this is just a dedicated stage of building, and that's it, and if I want to do something before buildpacks or after, it just lives elsewhere, right?" If I go back to what Natalie said earlier, the key point here is: we want to build images from our application source code. What I often don't want is to give tens or hundreds of developer teams control over that process, because they have too much else to worry about, and when there's a need to update a Python runtime, or a need to update a base image, they often don't have the capacity to deal with that. So if I can take a lot of that policy and pull it more centrally, into a builder with some environment variables preset in it, then if I run that pack tool as part of a CI process, I'm going to be able to control the output images for those dev teams, and let them, as Natalie was saying, focus on actual development rather than the DevOps they're less comfortable with. "Right, thanks for clearing this up. So, you said that by default buildpacks are rootless. Does this mean that I can also run it inside a container, if I have, for example, a GitLab Kubernetes runner? Is this possible?" Yeah, I mean, one of the constraints, again, I'm taking Natalie's words right from her mouth: buildpacks are a component of the Auto DevOps feature of GitLab, so we know they work in that environment. But to answer your question: buildpacks run
rootless, and that enables them to run in a variety of environments that require that. "OK, but when I now think of an extension with a Dockerfile, then I would lose this feature. Do you have plans to maybe integrate something like kaniko, so that I'm still able to build it with a Dockerfile in a container environment?" Yes. I probably should have mentioned that when I went over the flow, but we actually use kaniko to apply the Dockerfiles to the build-time and runtime base images; under the hood, that's how we generate the Dockerfile layers. We only require root if the commands you're running actually require root. "Thank you." I think we probably have time for one more question, but again, if people want more detail, we have the booth, so come on over. "OK, maybe just a general question. We're using Bazel, and Bazel more or less follows an approach where you build images with a Dockerfile and so on. Can we use buildpacks inside Bazel? Is there any kind of integration there?" I am not personally aware of any integration with Bazel. I know Google uses buildpacks in GCP and in Google Cloud Run, so I know they provide a builder and they provide buildpacks, but I don't know much about Bazel, I'm sorry. "Thank you." Cool. I think we've run out of time; our friend at the back is furiously waving at us, and I don't want to get manhandled off a stage by somebody. So thank you very much, and I'll see you at the pavilion.