So we're here to talk about how Buildpacks is tackling multi-architecture image builds. The QR code in the top right is for rating this talk; it'll be up again at the end, so you don't need to scan it right now. My name's Terence Lee. I'm one of the co-founders of the Cloud Native Buildpacks project, and I work at Salesforce on Heroku. Juan was on the talk description, but unfortunately he couldn't be here due to visa issues. He's actually the one who did the bulk of the work we're going to be talking about, and we'll have a video from him later on. He'll say he's a contributor, but we've actually elevated him to a maintainer, so we're super excited about that and the work he's done. Replacing him in this talk is Joe, who also works at Salesforce, in a different part of the company, and co-founded the Buildpacks project with me.

So in this talk, if you don't know anything about buildpacks, we'll do a quick intro on them, then we'll talk about why you should care about multi-arch at all, dig into how buildpacks can help you deal with it and how we're approaching and solving it, and then we'll have a short video from Juan with the key takeaways from the talk. Let's get started.

If you're not familiar with buildpacks: we take your app source code and transform it into production-grade images, with a focus on developer experience and things like security. The quickest way to get started is a tool called pack, which is part of the project. It's a CLI that interfaces with the Docker daemon, and this is an example of the output you get when you run a pack build, very similar to a docker build. What you get is an image fine-tuned for your application, where we can split out build and launch concerns. Buildpacks can optionally produce a software bill of materials (SBOM) at build time, so the things that actually end up in your image get documented. Given the same inputs, builds are reproducible. We care about security, so we do things like running as non-root. I mentioned the ability to put just the right things in the image; that comes from more advanced caching and more granular control than you normally get with a Dockerfile, and it also lets us rebase images. What I mean by that is that with buildpack-built OCI images, we know where your app layers end and where the base image starts. With that data, given ABI-compatible images, we can replace the underlying base image without rebuilding your entire app, so you don't have to redo that compute across your whole fleet of images.

I've been talking about pack; it's what we call a buildpack platform. There are a handful of them out there, and pack is the main one you'll use when getting started. People use it in their CI/CD pipelines, either as a quick way to get started or all the way to production.
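As a minimal sketch of that workflow, the builder shown here is one public Paketo builder, not a recommendation; substitute whatever builder you use:

```sh
# Build an OCI image straight from app source, no Dockerfile required.
pack build my-app --builder paketobuildpacks/builder-jammy-base

# Later, swap in a patched, ABI-compatible base image
# without rebuilding the app layers.
pack rebase my-app
```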
As part of the project, we also provide a GitHub Action that makes pack available, so you can leverage it in a GitHub Actions workflow. kpack, which was recently brought into the project, is our Kubernetes operator. And at the other end there's the Spring Boot integration, which isn't part of the project but gives you a way to build images with buildpacks right from Spring itself.

The easiest way to distribute buildpacks is through a concept we call the builder image. A builder takes the buildpacks you want to run and bundles them into an image, along with a concept we're moving away from called a stack, which is essentially your build and run images. All of those things combined turn into an image you can treat like any other image, and you can use it to distribute buildpacks as well as the base images you're building on. Then there's the lifecycle. The project itself is mostly a specification that defines the interactions between what a buildpack author needs to do with buildpacks and how a platform interacts with them, and the lifecycle sits at the center as the buildpack runner that combines those pieces and lets us do all the magic we're able to do as a project. Those are the core concepts you'll need for this talk. It's really that simple; that's all you need to know to get started.

But this talk is about a lot more than getting started. It's about how we use buildpacks to deliver multi-architecture builds, and we're going to use Salesforce as a case study. As Terence mentioned, I'm a principal architect working at Salesforce on our Hyperforce platform. Hyperforce is a fancy way of saying Salesforce running in the public cloud. It was delivered about four or five years ago, so we've been running in the public cloud with great success for some time. Our customers are happy with the scaling, the elasticity, the data residency, and all the things that come with the cloud. But we also have to make our shareholders happy, and that means raising the bottom line. There are a few ways to do this, but many software organizations are doing it by driving down cost to serve, which is really just another way of saying wasting less money in the cloud. There are lots of ways to drive down cost to serve: you can right-size your services and applications so you're not wasting money on excess resources you aren't using; you can strategically locate your compute, arbitraging where prices are lower; you can write more efficient code; or you can simply pay less for infrastructure. With the introduction of AWS Graviton, that last option is very real, and it provides a lot of incentive for large organizations like Salesforce. AWS Graviton is a series of server processors based on the ARM architecture, as opposed to the x86 architecture we've been using for a couple of decades. ARM is by design more energy efficient: it uses less power and generates less heat, so platforms like AWS can charge less for it. A typical software organization might save as much as 40% on compute spend by switching to Graviton. And Salesforce has a very large AWS footprint.
We have our core Salesforce CRM product, and we have properties like Heroku, MuleSoft, Tableau, and Slack. I won't quantify it, but I will say our footprint is larger than your footprint. So we stand to gain a lot by switching to Graviton. But cost to serve is not the only reason to do this: we also have internal developers working on Apple Silicon now, so even on the developer-experience side of our products we have to deal with multi-architecture images. All that said, multi-architecture is not only coming, it's here. And as software developers, we have to accept that the free lunch is over. We've gone twenty-odd years not really worrying about things like processor instruction sets, and now we have to. It's very similar to when multi-core processors came about and we had to start worrying about concurrency and making sure our code was safe for it. But before Salesforce could start this journey and actually shift its services onto new architectures, we had to do some work in pack to enable it to produce builds targeting the architectures we deploy on. Terence is going to talk about that.

Thanks, Joe. When we talk about multi-architecture for buildpacks, what you care about depends on where you sit in the project's spectrum of end users. At one end of the spectrum, as a business you ultimately want to deliver multi-architecture images that can run in production, like Joe was describing. Those are the app developers: they don't really care how they get these images, they just want to be able to build them. Then you have buildpack authors, who, in order to support those app developers, have to be able to produce multi-architecture buildpacks. And then you have platform operators actually running platforms like Hyperforce, Google Cloud Run, or other buildpack platforms, who have to support these use cases: pack has to run on all the architectures you want to support in your CI/CD pipeline, the lifecycle has to as well, and the builder images that contain the buildpacks, along with the base images, also need to support them. So we had to go support a bunch of different things, and as a project we created a checklist of what was needed to support something like ARM64. Luckily for us, we've been building pack and lifecycle binaries for both AMD64 and ARM64 for a while, so you can already pull those off our GitHub releases. But most buildpack vendors aren't producing ARM64 buildpacks yet, which makes it difficult to support these use cases. For the stack images, the build and run images, you can already inherit from an ARM64 base image, which makes those much easier to get. But builders combine those two things, so without ARM64 buildpacks, you're not going to get ARM64-compatible builders. Those are the areas we're really focusing on. The first place we wanted to target was enabling buildpack authors to produce multi-architecture buildpack images.
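For grounding, here is what a multi-arch image looks like to plain Docker tooling; a sketch, using a public image purely as an example:

```sh
# A multi-arch tag is really an image index (manifest list) that maps
# platform -> per-architecture image; the client pulls the matching one.
docker manifest inspect ubuntu:22.04
# ...the output includes one entry per platform, e.g.:
#   "platform": { "architecture": "amd64", "os": "linux" }
#   "platform": { "architecture": "arm64", "os": "linux" }
```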
To start, we wanted to provide the low-level primitive, similar to docker manifest, for producing multi-architecture buildpack images, because at the end of the day a distributed buildpack is just an OCI image, or a layer inside an OCI image; that's one of the main distribution mechanisms. Here's the breakdown of what that looks like: an OCI image with the buildpack layer, and it's pretty simple, just configuration plus the detect and build executables needed to run the buildpack. If you docker inspect the OCI image config, you'll notice the architecture field is empty. The buildpack image isn't architecture-dependent, which makes things difficult when you actually try to run it. So what we have to do, if you're not familiar with image manifests in OCI or Docker, is produce the image for each architecture independently and then link them together in a manifest list. The manifest is just JSON that points to the full per-architecture images for each architecture we want to support.

In the next release of pack, we're going to have commands for this, so you don't have to go out to podman or Docker: you can use pack to create the manifest list, annotate it with the architectures you need (in this case AMD64 and ARM64), point it at the images it needs to reference, and push it to a registry so you can actually use it. That's still a bunch of manual work, so we're not entirely happy with it, but it at least gets you off the ground; and as I mentioned, podman and Docker let you do this today. We do want to bring more of this in-house, though. There's a current RFC from Juan, what we're calling phase two, about improving the developer experience here. We'll ship those phase-one commands in the next version of pack, coming out soon, and then we'll replace that whole dance with a single command: you'll run pack buildpack package with multi-architecture support enabled and it will build the images, create the manifest list, and link everything for you, without doing it by hand or reaching for other tools in the ecosystem like you have to today.

The way we can get away with that is two key changes to the system. The first is a new buildpack directory layout that is multi-architecture aware and can differentiate the per-architecture binaries of a buildpack. Using our current example, you get a directory structure that lets a single directory hold each architecture you want to package the buildpack for. The second is that we've been slowly moving away from the stack concept toward targets, which aligns more closely with how things already exist in the OCI image spec.
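As a sketch, a buildpack.toml declaring targets might look like this; the field names follow the targets RFC as I understand it, and the buildpack id is made up for illustration:

```toml
# buildpack.toml (illustrative; see the CNB targets RFC for exact fields)
[buildpack]
id = "example/my-buildpack"
version = "1.0.0"

# Each target declares a platform this buildpack ships binaries for.
[[targets]]
os = "linux"
arch = "amd64"

[[targets]]
os = "linux"
arch = "arm64"
```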
Targets give us the OS and architecture information directly, so as part of the buildpack configuration, when we run that command, we know which architectures the buildpack supports, as declared by the buildpack author. And to make this compatible with builders, we want to bring the same concept forward to builders too. That lets us, once we have those buildpacks and can point to them, create a builder with a single command instead of stitching everything together by hand after the fact.

So a lot of what you've seen is coming in the future, but what's there today is enough to start doing multi-architecture builds with pack; we just hope it gets easier over time. At Salesforce, we're doing exactly that: taking advantage of what's there today, with a bit of manual work and some extra steps because of the friction. I want to share some of the lessons we've learned and the decisions we've made on this journey, because I think they can help you.

At the beginning, we made a few strategic decisions. The first was that we weren't going to cross-compile. There are a lot of edge cases in cross-compiling: you hit one dependency, or one language ecosystem like Ruby that doesn't play well because it has native dependencies, and you're in for a world of hurt. Next thing you know, you're contributing upstream to an open source project just so it can cross-compile. We really wanted to avoid that, so we decided to spin up build infrastructure matching the target infrastructure we were deploying to. Ultimately, I think that was a good decision: we run pack build on the same architecture as the environment we're going to deploy to, and that simplifies a lot of things. The trade-off is that it creates circular dependencies: to get anything running on the new architecture, you need things already compiled for it. So we did have to go through an awkward manual bootstrapping process to get the images, our runner for pack, and the buildpacks themselves compiled for the new target architecture. But once those were in place, everything ran smoothly. Definitely a decision we're happy with.

The second strategic decision was that we would always build both architectures in parallel and stitch them together at the end with docker manifest commands, since some of the things Terence described aren't available yet. Because we build both architectures in parallel, the developer experience for our end users, our internal Salesforce developers, stays simple: they express in their CI matrix that they want the new architecture, and from there we spin up the new build infrastructure, produce the images, push them to the registry, and everything downstream is enabled. Doing it with docker manifest also created a level of consistency, of homogeneity, across all of our images. We use buildpacks a lot, but not exclusively, so a lot of what I'm describing we had to do for Dockerfile builds as well. Done this way, the runtime clusters don't really care how an image was built.
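Here's a sketch of that stitching step, assuming the per-architecture images were already pushed with -amd64/-arm64 tag suffixes (the registry and tag scheme are illustrative, not our actual naming); pack's upcoming manifest subcommands are meant to cover the same flow:

```sh
# Per-arch images built in parallel and already pushed:
#   registry.example.com/my-svc:1.2.3-amd64
#   registry.example.com/my-svc:1.2.3-arm64

# Create a manifest list referencing both per-arch images.
docker manifest create registry.example.com/my-svc:1.2.3 \
  registry.example.com/my-svc:1.2.3-amd64 \
  registry.example.com/my-svc:1.2.3-arm64

# Annotate entries with their platform metadata.
docker manifest annotate registry.example.com/my-svc:1.2.3 \
  registry.example.com/my-svc:1.2.3-arm64 --os linux --arch arm64

# Push the manifest list; clients now pull the right variant automatically.
docker manifest push registry.example.com/my-svc:1.2.3
```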
That uniformity makes things a lot easier for us, and it makes adopting the new architectures easier for our internal developers.

Now, some lessons learned. The first: do not couple the migration to multi-architecture to anything else. Many teams in your organization will see it as an opportunity to fix technical debt and the things they've always wanted to fix. At Salesforce, we did have to upgrade our operating system to RHEL 9 as part of getting onto the new Graviton processors, but with that came a lot of "oh, we should also change the way we create users on the operating system," and so on and so forth. That starts to transform what should be a simple "compile it on the new architecture" into application developers having to deal with interface changes. So, good practice: cut scope, focus on multi-architecture, and don't let those other things creep in.

The second lesson is that services are going to need to run in a mixed mode. I don't just mean longer term, though definitely longer term some services will run on one architecture in one environment and another architecture on another substrate. I mean that during the transition, they're going to run in mixed mode within a single environment, because this is not flipping a switch. You'll build on the new architecture, run some instances, test them, find some problems, maybe roll them back, and iterate. That can take weeks, even months, and during that time development teams can't just stop coding; they need to keep shipping features. So we needed an approach that supported running both architectures at the same time and moving back and forth between them.

The final lesson is really about the user experience. Thinking about how your internal developers will adopt this and interface with it is really important. As I mentioned, we gave them a single entry point, an architecture definition in their CI build matrix, which made it really easy for them. On the Kubernetes side, we used things like admission controllers to determine which services went to which architectures. We streamlined things so developers didn't have to think too much about the inner workings of configuring their Helm chart or their deployment pipelines.

In addition to the services themselves, as I mentioned, we had to build the buildpacks for the new architecture. We didn't have the advantage of the per-architecture directory layout Terence described, so we made some compromises: upgrading dependencies in some places and removing dependencies elsewhere to get the buildpacks built. But once they were built, they provided a lot of leverage. If you compare buildpack multi-architecture builds to what developers do with their Dockerfiles, it's something we did once, in one place, and every service developer benefits from it, whereas a Dockerfile becomes a set of instructions that people follow and copy-paste around.
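To make that "single entry point" concrete, here is a hypothetical sketch of the CI matrix idea in a GitHub Actions workflow; the runner labels, action versions, and registry names are illustrative, not our actual pipeline:

```yaml
# Hypothetical workflow: one matrix entry per target architecture,
# each building natively on a matching runner (no cross-compiling).
jobs:
  build:
    strategy:
      matrix:
        include:
          - arch: amd64
            runner: ubuntu-24.04
          - arch: arm64
            runner: ubuntu-24.04-arm   # label varies by CI provider
    runs-on: ${{ matrix.runner }}
    steps:
      - uses: actions/checkout@v4
      - uses: buildpacks/github-actions/setup-pack@v5  # the project's action
      - name: Build and publish the per-arch image
        run: |
          pack build registry.example.com/my-svc:1.2.3-${{ matrix.arch }} \
            --builder my-org/builder-${{ matrix.arch }} \
            --publish
  # A follow-up job stitches the two tags together with
  # `docker manifest create/annotate/push`, as shown earlier.
```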
So we're looking forward to all the phase two work. When it's delivered, maybe as soon as pack 0.33, it'll make the experience a lot easier and hopefully make multi-architecture real for you.

I am a CNB contributor. Unfortunately, I was not able to be with you today on site, but here are the key takeaways from this amazing session. Support for the ARM architecture is really important for the CNB tooling ecosystem, but today buildpack authors and enterprises struggle with hacky CI/CD processes to achieve it. We are working toward making ARM64 and multi-arch images a first-class experience in the Buildpacks ecosystem, with functionality that will let you create and maintain multi-arch images effortlessly. As always, we are happy to hear from you, and your feedback is very valuable to us. Please keep in touch on our social media or join our weekly working group meetings.

That's all we had for the talk today, so thank you. Again, this is the QR code for the rating. Please join us in the Slack. I'd say come to the booth, but I think it closes in the next eight minutes. Reach out to us on Twitter and through the buildpacks GitHub org. And with that, we're open for questions.

People will be excited to hear that you're focusing on target-based builds and moving away from stack-based builds. One consideration I wanted to ask about: avoiding duplication of files and artifacts that are architecture-neutral. If you have data files going into the image that are pretty much identical for Intel and for ARM, how is Buildpacks going to avoid that wasted duplication?

Let me repeat the question and see if I understand. You're asking about artifacts or things in the service code that are either architecture-neutral, or that maybe you don't want in one architecture, and how to...

No, that are architecture-neutral. If you have huge static data files that are simply the same for both architectures, are you going to avoid storing them twice, once in each of the two images?

Yeah, I assume layer sharing is the only answer there. One of the advantages of buildpacks is that, given a buildpack that's creating a layer, whether it's compiling Java code or installing the JDK, it can produce a layer whose SHA is identical to another execution of the buildpack that did the same thing. And it doesn't matter where that layer sits in your image; it can be deduplicated. It all depends on the design of the buildpack, but if the buildpack is written so that it takes that artifact, that large file, and stores it in a layer consistently, which the buildpack execution environment helps with by doing things like stripping timestamps so you actually get the same SHA, then the deduplication can happen automatically, so to speak. But it does require effort on the buildpack author's side to make that happen.

All right, well, we'll be around for the rest of the day. Please grab us if you'd like to chat; we're always happy to talk about buildpacks. Thank you, everyone. Thanks, Naran.