Hi all. I'm Aiden. I'm one of the buildpacks.io team leads, and I want to talk today about multi-architecture images. Surprisingly, or I don't know what, I don't really have any answers in this talk, but I would like to ask a lot of questions. But firstly, as I said, I'm going to introduce myself, and then I'd like to get a picture of your general knowledge of buildpacks. I've spoken to some people already just to see where things are.

So, as I mentioned, I'm Aiden. I'm part of a team at Bloomberg that develops the platforms on which our AI engineers depend. For us, buildpacks are a critical component because they allow the Bloomberg AI engineers to achieve high velocity when they iterate on experiments. As a buildpacks.io team lead, I mostly help with documentation, so if you find any problems with the documentation, you can just shout at me.

In this talk, I want to give a brief overview of the state of Buildpacks in 2022. I want to present some motivation for why we might want multi-architecture images, and then I'm going to present three high-level approaches to implementing them. As I said, it's important to note that we don't yet actually have a solution for this. I'm going to ask a lot of questions, and I'm not going to get down deep into the code, so it should stay high-level. The key focus, though, is to accelerate the discussion around multi-architecture images and to encourage design questions around them.

So, I've spoken to some of you already, but could I get a hands-up if you're interested in buildpacks but haven't yet done very much with them? Oh wow, interesting. That's over half the room. So there's certainly some material in this talk, certainly early on, that will hopefully whet your appetites, and you can talk to me later at the buildpacks.io booth for more details. Hands up if you've got a reasonable amount of experience with buildpacks. There's a couple of people here with a reasonable amount of experience; I'm certainly going to be leaning on those of you to contribute to this talk, or to the questions later on. And I do recognize some faces, Stephen and Matt. You are seasoned Buildpacks users, but are there any other seasoned Buildpacks veterans out there in the audience? Oh, you're hiding in the back there. Fantastic. Matt's given me kind of a half-raised hand. Yeah, he's only a kpack developer, but hey.

So, "what are buildpacks?" is kind of an interesting question to ask in the first place. The way I like to talk about buildpacks is to say that they are a declarative way to translate application source code into a production image. And I can hopefully give you a little demo of what I mean by this. I'm going to start with a fairly standard hello-world Python project. Yep, it's got an example.py; it's a FastAPI service. What I'm going to do is, what am I going to do, I'm going to cd into the directory, and then I'm going to eventually run pack, a CLI tool, to build the output image. So again: I've started with a standard Python project and run the pack tool. The buildpacks process will introspect your project source code to determine which buildpacks contribute to the build. Then, during the build phase a bit later on, what buildpacks commonly do is provide an application runtime (in this case, a Python runtime) and install application dependencies.
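To make that concrete, here's roughly what the demo looked like at the shell. Treat this as a sketch rather than a word-for-word replay: the Procfile contents and the builder tag are illustrative stand-ins, not exactly what was on screen.

```bash
# A minimal FastAPI project: the source, its dependencies, and a Procfile.
$ ls
example.py  requirements.txt  Procfile

# A Heroku-style Procfile naming the entry point (contents assumed here).
$ cat Procfile
web: uvicorn example:app --host 0.0.0.0 --port 8080

# Build the OCI image with the pack CLI, using a Paketo builder.
$ pack build example --builder paketobuildpacks/builder:base

# The result runs like any other image.
$ docker run --rm -p 8080:8080 example
```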
In this demo's case, that means installing the pip dependencies that were listed in the requirements.txt file. And then it's going to apply some configuration settings. In this case, I've used a Heroku-style Procfile, so it's going to configure an entry point into my image using that Procfile. If you don't know what a Procfile is, you can talk to me later or ask a question about it.

So, the sales pitch for buildpacks (that was a build, done and dusted; I don't need that demo slide, it was only a fallback) is that we tend to produce, or we try to produce, small and targeted production images. The builds are reproducible, meaning that if you rebuild an image using the same application source with the exact same dependencies, you get a byte-for-byte identical image. This is important in some regulated industries. Buildpacks also have the nice property of being rebasable: the ability to switch out our run image with a small registry update. If you want to talk about rebase, or see it in a bit more depth, visit the Buildpacks booth later. I'm not going to go into it in this talk, but it is a really neat operation.

So Buildpacks, more properly known as Cloud Native Buildpacks, is a CNCF project. Go team CNCF. Come on, people. Don't you want to get it? Yay! Right. Cloud Native Buildpacks are the third generation of an approach that was originally pioneered by Heroku and folks at Pivotal about a decade ago, so the ideas here have been around for a long time. We do, though, have some really nice features emerging in 2022. We have released experimental support for configuring build and run images using a restricted Dockerfile syntax, which is quite neat. We've got better support this quarter for user profiles on an image, and there's hopefully some easier tooling for writing new buildpacks coming on stream soon. In addition to the individual open-source contributions, we have had contributions this quarter from 13 different companies, which shows you that it is a live project with a lot of different contributors from multiple vendors. It is precisely what we'd like to see in a CNCF project: multi-vendor.

But what do we actually do at buildpacks.io? We actively maintain a number of specifications. As an end user, and most of you are here to think about it from an end-user perspective, you should never really need to read these specifications. As a buildpack author, however, you might from time to time have to dip into the buildpack interface specification. That leads me to the kinds of people we consider when we're developing these specifications. We keep three target user groups in mind. The first, and frankly the most important, is application developers. Application developers use something we call a platform to build an image; you've already seen an example of a platform in the demo that I showed, since pack is an example of a buildpacks platform. Application developers, unsurprisingly, just want to build an image. The second audience is buildpack authors. These people are open-source contributors, or maybe they're developing some company-internal buildpacks. They tend to want to provide composable functionality to their application developers. For them, obviously, we provide the specifications, including the specification of the platform.
We also provide a library in Go, libcnb, which allows them to write buildpacks for their end users. libcnb-like bindings for Python and Rust are available from other sources, and you can also write buildpacks in Bash, or the technology of your choice. Finally, there's the use case of platform operators. These are the people who run the platforms that build the production images. Often, these users want to enforce project-wide or corporate-wide policies. For example, they might want to say: you can build images on our platform, but we're going to enforce the policy that you can only ever use an internal mirror to resolve pip dependencies, or npm dependencies, and so on. So those are the three user groups we keep in mind when we're developing the specifications.

And there are multiple implementations of these; there's not just one implementation of the buildpacks platform. It seems appropriate to open, given that we're at KubeCon, with kpack, which I'll say a little more about on the next slide. It's a Kubernetes controller, a Kubernetes operator, for building images. It's currently maintained by VMware as an open-source project, and I'm going to look, kind of questioningly, at Matt McNew here and say that Cloud Foundry's Korifi is derived from kpack, or based on kpack? Kind of, maybe, nodding. He's kind of nodding; I've got to take that as confirmation. Another platform is pack, the command-line tool provided by us on the buildpacks.io CNB project, and the pack tool is used by a lot of other projects: it's used quite often in GitHub Actions, it's used in CircleCI, it's used in a lot of places where buildpacks are used to build an image. Tekton, an open-source CI/CD platform maintained by the Continuous Delivery Foundation, is another. Spring Boot, interestingly, is also a buildpacks platform: it uses the buildpacks steps underneath to output an OCI image for your Spring application. And there are other platforms out there; Salesforce Functions uses buildpacks to power their functions-as-a-service platform. So there are a lot of implementations and a lot of different uses out there. And, kind of a newsflash: as of two weeks ago, there is an open proposal to donate kpack to the buildpacks.io CNCF project. So thank you very much, Matt, and all the other people at VMware. This is kind of cool. We're really excited about it, and we hope to be able to accept the proposal soon, modulo all the work that has to be done. There's always work involved in these things.

So I've said that there are many platforms for buildpacks, and many platforms that use buildpacks to produce an output image, and there are also many implementations of buildpacks themselves from multiple different vendors. Paketo is an open-source and vendor-neutral project that implements a set of buildpacks. The Salesforce people have the Heroku buildpacks, which target the Heroku platform. Google has a set of buildpacks that largely target Google Cloud Run. And the VMware folks have a set of buildpacks, some derived from the Paketo buildpacks, which target VMware's Tanzu cloud platform. You can search for a buildpack to fit your needs at registry.buildpacks.io, but it's often the case that companies also have internal buildpacks. For example, at Bloomberg, we primarily use the Paketo buildpacks, and then we extend them with custom buildpacks where we need some kind of custom functionality.
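Since I said you can write buildpacks in Bash: to give a flavor of what writing a custom buildpack involves, here is a minimal sketch. The buildpack ID, the file check, and the layer contents are made up for illustration; the real contract is in the buildpack interface specification I mentioned.

```bash
# buildpack.toml (metadata, shown inline as comments for brevity):
#   api = "0.7"
#   [buildpack]
#   id = "examples/hello"      # made-up ID
#   version = "0.0.1"

# bin/detect: exit 0 to opt in to the build, 100 to pass on this app.
[[ -f requirements.txt ]] || exit 100

# bin/build: contribute layers. The lifecycle hands the buildpack a
# layers directory (here, the first positional argument); each
# subdirectory created there is a candidate layer.
layers_dir="$1"
mkdir -p "${layers_dir}/hello"
echo "hello from a custom buildpack" > "${layers_dir}/hello/greeting.txt"
# Mark the layer for inclusion in the final launch (run) image.
printf '[types]\nlaunch = true\n' > "${layers_dir}/hello.toml"
```

A buildpack really is just that pair of executables plus the buildpack.toml; everything else is convention layered on top.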
So it might be interesting at this stage, particularly given that most of us in the room are new to buildpacks, to have a look at how a platform would go about building an image. There are plenty of seats down the front, people, if you want to just filter your way in, or on this side as well. Interestingly, most end users will interact with a platform like pack or kpack rather than interacting with the buildpacks themselves. Suppose we have that hello-world Python application, and we like how the Paketo buildpacks, for example, build Python applications. Well, we tell pack to use and trust the Paketo builder, which is what you saw in the demo earlier: I ran pack build example, and I passed the Paketo builder in as a command-line flag.

Most builders contain four things. The builder is the build image in which our Python application is built, and I've used a grey box to represent the builder on the right-hand side of the diagram. The builder contains a collection of buildpacks; the blue boxes here represent that collection. It contains a reference to a run image; that's the green box on the diagram. And finally, builders tend to contain a copy of the buildpacks.io steps binary. The buildpacks steps I've denoted with these down arrows, the chevrons coloured purple and pink. We'll come to those in a minute.

So it's worth looking at how pack and the buildpacks steps interact to produce an image. In the most straightforward case, and that's the only case I'm going to consider today, the platform spins up the builder image as a container, mounts the application source code and does whatever other mounting it needs to do, and then invokes the buildpacks steps in a particular order, the order they're given here. We start with the analyze step, and the key role of the analyze step is pretty much just to bail out early if you don't have access to read from and write to a registry. For example, we don't want to perform a full build if we can't access the run image on your registry. Now, I've coloured the detect step in pink, like the build step, because these are generally of more interest to buildpacks end users. In the detect phase, each individual buildpack provides its own detect binary; the detect step runs the detect binary of each buildpack and finds out which buildpacks contribute to the build. We saw this in the demo earlier. If I scroll back up, you can see that one of the first phases is detect, and that this builder contains a number of buildpacks that I can't quite read from here. What does it say over there on the left-hand side: how many buildpacks does this builder contain? Nine, but only six of them contribute to the build. So the detect phase of all nine has run, but only six of them have recognized that this is a Python project and that they need to contribute part of the build. The other three are Node.js buildpacks, and obviously a Node.js or related buildpack doesn't contribute to a Python project. Fantastic. Cool. So detect finds out which buildpacks contribute to the build, and at the end of this stage it outputs a build plan and a build order for running the buildpacks in.
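To make the build plan slightly more concrete before we move on: it's a small TOML document assembled from what each detect binary declares it provides and requires. Here's a sketch; the dependency name and version constraint are invented for illustration.

```toml
# What a Python distribution buildpack's detect might declare:
[[provides]]
name = "cpython"

# What a pip buildpack's detect might declare it needs:
[[requires]]
name = "cpython"

  [requires.metadata]
  version = "3.10.*"   # illustrative constraint
```

The lifecycle resolves all of those provides/requires declarations across the buildpacks into the final plan and the build order.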
Next comes the restore phase. Buildpacks support caching at many levels, and I'm not going to go into the depths of it in this talk, particularly because of the variety of caching techniques that can be implemented in buildpacks, but the restore phase restores previously cached layers from volumes or from a registry.

The next phase is the build phase, and this is a particularly interesting one for end users. It takes the build order computed by the detect phase; each individual buildpack contributes a build binary, and the build process executes the build binary of each buildpack in the order given in the build order. So some details here are probably appropriate. You can see the build order that I've put in the middle of this diagram: each buildpack is executed in the build order given, and the input to each individual buildpack is the subset of the build plan that is useful to that particular buildpack. We call this a buildpack plan. It took me about three months to figure out that those were different things. In this particular case, there's a Python distribution buildpack that contributes a CPython runtime as a layer to the image. There's a pip install buildpack that contributes at least a layer containing the application dependencies, probably some caching stuff as well, which I'm completely ignoring right now. And then there's the Procfile buildpack, which contributes a layer containing the entry point of the application. In all cases, a software bill of materials is provided for each layer.

Finally, we've got the export phase. Given all the layers produced by the build phase, the export phase produces an OCI image on top of the run image that was part of the builder. Not all the layers are exported as part of the image; I have ignored some details. There may be caching layers, there may be build-only layers. Those are not exported as part of the image, but they may be used to speed up rebuilds in future. And the export phase does ensure that the cached layers are correctly cached.

Now I get to talk about something really exciting, but I've only got time to touch on it, and the person you really need to bug about this is Natalie at the buildpacks.io booth. I've given you one slide here, but this feature has taken many months to implement, so I'm really underselling her work. We released a new experimental feature this month, and it adds new build steps which allow us to extend build and run images. The input to the extend phase is a very restricted Dockerfile. We certainly do not allow, and don't intend to allow, the full syntax and expressiveness of Dockerfiles, but the current restriction already allows you to change the run image, the way we've currently implemented it. We expect to increase the subset of Dockerfile syntax that we do support, and going forward we will have to support a mechanism for using native packages, meaning that if you're extending a Debian or Ubuntu image, you should be able to apt-get install some packages, and if you're extending a RHEL-based image, you'll be able to dnf install some RPM packages.

And that brings me to thinking about things in terms of platforms. The pack platform itself is... oh, timing... released on multiple platforms. The build steps, which I'll actually give a name to now, are called the lifecycle; the lifecycle is released on multiple platforms. And the build and run images are available on multiple platforms.
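Circling back to that extension feature for a second: the restricted Dockerfile input is roughly this shape. This is a sketch of the experimental contract as I understand it (the package being installed is just an example), so check the current docs, or ask Natalie, for the exact rules.

```Dockerfile
# run.Dockerfile: extends the run image (experimental).
# The platform supplies the base image via a build argument.
ARG base_image
FROM ${base_image}

USER root
# Native packages, e.g. on a Debian/Ubuntu-based run image:
RUN apt-get update && apt-get install -y --no-install-recommends libpq5 \
    && rm -rf /var/lib/apt/lists/*
# Drop back to the unprivileged image user (cnb in many stacks).
USER cnb
```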
The buildpacks themselves, though, unlike pack, the lifecycle, and the images, are primarily designed, implemented, and tested to run on AMD64. I'm going to use the Go names for these platforms, so AMD64 is what we might know as x86_64 in other terminology. The Paketo, the Salesforce, the Google, the Tanzu, and our buildpacks.io buildpacks all support Linux on AMD64 and are all tested for that. It is unfortunately not the same story for Linux on ARM64. We find that pack, the lifecycle, and the build and run images are available on ARM64 from many providers, but in general there's only partial support for Linux on ARM64 from pretty much any of the providers of buildpacks.

And the question is, why would we be interested in this in the first place? Why are we interested in AMD64 and ARM64? Well, as I said, the platforms, pack and kpack, are released on multiple architectures. But in this talk we're particularly interested in the output image, and many of us are interested in deploying output applications on both AMD64 and ARM64. This is largely because the cloud providers currently support both of these hardware platforms: you can get a server from a cloud provider that is an ARM64 server. And if we have support for two platforms, AMD64 and ARM64, then in future it should be easier to support other platforms, like some kind of MIPS64 platform. It's also the case that developers at the moment have AMD64 or ARM64 laptops. This nice shiny new Mac M1 that I have is an ARM64 machine, and we'd like to make the development experience as neat and efficient as possible for developers using non-AMD64 hardware. So I'm scoping the questions around multi-architecture support, for the moment, to Linux on AMD64 and Linux on ARM64.

So, how do we go about supporting multi-architecture in Buildpacks? Is it simply the case that if we provide ARM64 buildpacks, then the problem is solved? Well, I want to think about the problem from the perspective of each of our three user groups. From the perspective of an application developer, who, as we figured out before, just wants to build a production image, it seems reasonable to provide some way to choose an output image architecture. In this case, I've given them a --platform linux/arm64 flag, and if we continue to use pack as an example, you can see how this might be presented. The output would be a collection of layers represented by a single image manifest. It might be useful at this point, or at least that's what I'm intending to do, to take a quick look at multi-architecture OCI manifests, just to convince ourselves that the user experience could be as straightforward as I'm claiming. So here, if you've not seen this kind of thing before, I'm using crane, which is a really nice tool, to view the manifest of the official busybox image. I've filtered this down, because the actual manifest that crane shows is hundreds of lines long. What we can see is an image index which contains multiple manifests, and each manifest in the image index contains a platform property. So, from the perspective of our end user, they can run podman run busybox or docker run busybox, and the container engine chooses the most appropriate image for their particular platform.
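For reference, the filtered output of crane manifest busybox looks something like this. It's heavily trimmed here, with the digests and several fields elided:

```json
{
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    { "digest": "sha256:…", "platform": { "architecture": "amd64", "os": "linux" } },
    { "digest": "sha256:…", "platform": { "architecture": "arm64", "os": "linux", "variant": "v8" } }
  ]
}
```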
So, in general, we can produce multi-architecture images by providing manifest lists. The manifest list on the left-hand side of this diagram is a list of manifests with platform metadata, and from the manifest list you can find the image for your platform. This is all part of the OCI image specification that we all know and love, and that we've all been running for probably many years at this stage.

From the perspective of buildpack authors, though, the question becomes a little more complex. I'm aware that there aren't many of us in the room who are buildpack authors or want to be buildpack authors, but buildpack authors do need to support language-specific technology stacks. For example, buildpack authors might want to provide a set of buildpacks to support applications written in Go. Now, the Go ecosystem is designed to support this multi-architecture use case really smoothly, which is great. That's not the case if you're providing a set of buildpacks to support Python or Node.js or Ruby or even Java or other technology stacks. For example, if you're providing Python buildpacks that provide a Python runtime, you need a Python runtime that's available for both your AMD64 and your ARM64 platforms. It's also the case that buildpacks that contribute dependencies may need to be aware of architectural differences; for example, many pip dependencies use GCC or similar under the hood to compile native components for each platform. So our approach to supporting multi-architecture buildpacks also needs to support buildpack authors in maintaining the current high quality of their code bases, which means we need to support them in testing on multiple architectures.

And finally, let's look at multi-architecture images from the perspective of a platform operator. A platform operator probably wants to pick and choose the architectures that they support for production builds. And a question arises: do all operators have access to the actual target hardware at build time? That is to say, would an operator want to build all images on the AMD64 hardware they have available, but actually allow deployment on both AMD64 and ARM64?

So, of the three classes of buildpacks users: platform operators, the people who run a kpack instance, for example, need to figure out how to support multi-architecture buildpacks to build multi-architecture images. Buildpack authors probably want a mechanism to distinguish architectures, and then to actually test their buildpacks on multiple architectures. And finally, the last but most important class of user: application developers need only be aware of the architectures that their application is intended to support.

I'm going to argue that there are three main approaches to generating multi-architecture images: a cross-compilation approach, an emulation approach, and a bare-metal approach. Again, if you're new to buildpacks, maybe the next four or five minutes of the talk won't particularly apply to you, but hopefully we'll learn something fun anyway. Ten minutes? Thank you very much. At the end of this entire process, whatever implementation technique we use, we want something that I'm going to diagram like this: we want an image manifest list, and we want that image manifest list to point at architecture-specific images.
In this instance, we want architecture-specific images built on architecture-specific run images, the run image for AMD64 and the run image for ARM64, with architecture-specific layers on top. But we also might want to be able to share some non-architecture-specific layers between the two images.

So, what would happen if we used the cross-compilation approach? In this diagram, I've drawn a representation of the builder on top, which I introduced a bit earlier. Below that, in the bottom left-hand corner, is a representation of how pack (I've chosen one platform for this example) executes the build steps, or some of the build steps. And I'm using that stacked-image-layers diagram in the bottom right-hand corner to represent the output image. Assuming we have an AMD64 host machine, we might only need to provide a single AMD64 builder, which points to multiple run images, one for each output architecture. The buildpacks on the host could cross-compile native dependencies for the target run architecture. So internally, a platform like pack could run a single detect process and then run a build and export process per target architecture. In short, though... ah, click, yay... the cross-compilation approach is simplest for platform operators, which is to say it's simplest for the people we probably least need to simplify the process for. And it probably makes life difficult for end users, who may find that their native dependencies, or their Python dependencies with native components, simply don't cross-compile, putting them in a situation where they have to talk to the upstream developers to make things cross-compile cleanly. So, you know, pros and cons, but I'm leaning towards the cons for this approach.

What happens if we use emulation in place of cross-compilation? Suppose, again, we're on an AMD64 machine. The builder image then requires a run image per target architecture and a set of buildpacks per target architecture, so the builder is now effectively composed of two different builders, plus a lifecycle binary per target architecture. Again, the platform could perform a single detect phase, but it has to spin up virtual or emulated instances of the other steps for the other target platforms. Now, again, pros and cons, and I'm keeping an eye on time. While we have to provide buildpacks for each target platform, I would argue that we gain platform parity for end users. By that I mean that if your dependency is supported on the target platform, an ARM64 platform say, it's highly likely to compile under an emulator for that target platform. Of course, the problem with emulation is that it incurs the emulator overhead, and we're restricted to the architectures that emulators such as QEMU support.

Which brings us to the final of the three approaches I want to present today: a bare-metal approach, where we assume that the platform operator has the hardware available for all target platforms. As in the emulated approach, we need a single builder image that effectively provides two different builders. Where the bare-metal approach differs is that the platform needs to coordinate building between two different hosts.
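So that the coordination step doesn't sound too mysterious: the final stitching could, in principle, be as plain as the standard registry tooling. Here's a sketch with made-up image names and placeholder builders; this is not something pack automates today.

```bash
# On each bare-metal host, build and push an architecture-specific image.
#   On the amd64 host:
pack build registry.example.com/app:amd64 --builder <amd64-builder> --publish
#   On the arm64 host:
pack build registry.example.com/app:arm64 --builder <arm64-builder> --publish

# Then, from anywhere, combine the two into one manifest list and push it.
docker manifest create registry.example.com/app:latest \
  registry.example.com/app:amd64 \
  registry.example.com/app:arm64
docker manifest push registry.example.com/app:latest
```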
Now, what's clear in this case is that using multiple bare-metal hosts will introduce new failure modes that we may or may not want to put into a platform. That is to say, the platform now has to deal with networking failures when talking to multiple bare-metal hosts, and this can impact both end users and platform operators. However, end users in this case would benefit from native compilation speeds on all platforms, which is certainly a bonus.

So, to summarize all this... oh, I need to look at this slide. What I've done so far is outline the buildpacks process, so hopefully those of you who are new to buildpacks have learned a little about it. I've briefly, all too briefly, covered the new extension functionality, which has taken months to develop and is a really cool piece of functionality. And I've presented three approaches to implementing multi-architecture image builds: cross-compilation, hardware emulation, and the bare-metal approach of having multiple hosts. Why have I done this? Because I want to accelerate the discussion around multi-architecture images. As I said, I have no actual solutions in this talk; we only have questions.

So what are the next steps? Well, platforms, I think, need to be able to create OCI manifest indices, or OCI manifest indexes. This is independent of the approach we take to generate the multi-architecture images: no matter which of the three approaches we choose, we still have to generate OCI manifest indexes. And depending on the approach we choose, platforms may need to consume multi-architecture builders. Nicely, after a conversation yesterday with Steve, I think we're closer to this goal than I originally thought when I started writing this talk.

So where does that leave us? Well, we want your opinions, people. There are links here to GitHub issues that discuss different aspects of multi-architecture images. Please visit these; give us your opinions, give us some feedback. That's where we can talk. If GitHub isn't your communication tool of choice, find us at the buildpacks.io booth, where we can see you face-to-face. Finally, thank you very much for your time. I hope this is the start of a conversation. And if you can't visit the Buildpacks booth, please catch up with us async, on the Buildpacks community Slack, on the CNCF Slack, on Twitter, or, you know, contribute some code on GitHub. Thank you very much. My timekeeping friend is around somewhere; do I have time to take a question or two? Yes, we have time. Anyone got any questions? Hi. I'm going to give you a mic. Oh, just shout it out, and I'll repeat it.

"Take a second to talk about why you want to use buildpacks as opposed to just Dockerfiles."

Right. The question is, why would I want to use something like buildpacks as opposed to Dockerfiles? It's a common question that we get, and I'm going to give you the one-minute answer. Dockerfiles are great; I really like what they do. What many of us have found, though, is that if we've got tens or hundreds of projects, it becomes difficult to maintain the Dockerfiles, particularly as things change over time. So, for example, Python 3.6 has recently fallen out of support, I think, and what we'd like to do is upgrade all the applications that use Python 3.6 to at least Python 3.7.
Now, in the Dockerfiles approach, you write a bot or something that goes along, scans each Dockerfile, does the update in place, and maybe submits a PR. In the buildpacks approach, particularly when I use a centralized build farm like kpack, what I basically say is that Python 3.7 now becomes the default in the builder. And effectively, because we've got SBOMs, because we've got knowledge of what goes into all of our images, we can say: let's rebuild all those images that used Python 3.6 to use Python 3.7 instead. So there are advantages to the buildpacks approach. It's faster in a lot of respects. It's easier, from my perspective, to centralize a lot of the policy that I need to roll out to the rest of the developers in my company. And it takes cognitive load off my developers. Some of the developers that we have... well, all of them are brilliant, but a lot of them are very specialized. If I ask them to write a Dockerfile, of course they can, because they're brilliant, but it's really taking them away from their bread and butter, which might be some kind of AI and machine learning work, or financial analytics, or that kind of thing. If I can take that cognitive load off them and put it onto a team of one or two people who can centralize all of this policy, well, we make a lot of problems go away. Does that answer your question? Fantastic. And talk to me at the booth if you want some more in-depth answers. He's also the timekeeper, so if he gives you the mic, he definitely has time to answer the question. Thank you.

"I was wondering, how do you allow users to inject their own dependencies? You made the Python example; let's say one of the 100 projects has some C library that they install in their Docker container. How does that work if they all have a buildpack that's centrally managed?"

Yeah, that's a really good question. I probably don't have time to get to the depths of it right now, but (three minutes, thanks) what you end up doing is writing your applications like you normally would. So in the Python case, because many of us are familiar with that, it's common to have a requirements.txt, or you put your dependencies in your pyproject.toml file under the requirements section, that new thing that I haven't quite figured out yet. Right, a C library. That's a more difficult question. Generally speaking, for Python dependencies that compile their own C libraries, that just works. If you want to install a C library yourself, well, the best way to answer this is that I've done it in two or three different ways. One way's a hack, which I'm not going to tell you about. The other way is that you might install the C library on the run image to start with, and now, with the new approach of allowing Dockerfiles to switch the run image, you can switch between run images depending on what's being built on top of them. But previously, if there was a common C library that a lot of our application developers used, I would install it using RPM or apt-get on the run image that we shipped with everything. Fantastic. Thank you very much. One more question, please.

"Going back to that example of running natively, where you've got multiple hosts and you're potentially running those builds in parallel: is the manifest something that you have to manually orchestrate together?"
"Or is that something that each build stream could potentially do, with the manifest created dynamically?"

No, that's it; I think that's exactly the question that needs to be asked. The way I'm currently thinking about it is that, yes, they can run in parallel. Each of the output images might get pushed to a registry, and then at the end of all that you might have some kind of process that kicks in and computes the manifest list for all the output images from the different target architectures. But it does lead to questions like: if I want to build for two architectures and one of the hosts fails, that's a failure mode we previously hadn't had to consider in buildpacks platforms. So how do we handle that? How do we report that to the end user? Is it something that we want to support? Are platform operators comfortable with this approach? I think you're asking the right questions, and I don't have any answers at this stage.

Brilliant. Thank you very much for being a wonderful audience. The buildpacks.io booth is downstairs, along with all the other CNCF projects. Come down and find us and get some nice swag.