Hello, everyone. Welcome to our talk. As you've seen from the title, we're going to talk about something pretty exciting: consuming macOS compute inside of Kubernetes. Today you'll be hearing from both myself and my co-presenter, Madri, and we're respectively working at the companies you see up here. So, a little bit about myself and Flare to kick things off. You'll have to forgive me for looking up at the screen; there's no mirroring here. Flare is part of the Bazel Experts program, a community program organized by Google to connect folks in the Bazel ecosystem with expert help. So we do a lot of Bazel consultancy, working with a lot of large organizations. We also offer some value-added services, in a SaaS or infrastructure-as-a-service model, to folks using Bazel to build and test applications at scale. We'll dive a little into Bazel. I won't go too deep; I don't want to bore you too much. Suffice it to say Bazel is a great build tool that connects to remote clusters to do distributed builds, and that's the focus of my company, Flare. One of the challenges is that as we build these big distributed systems to do distributed builds, we also have to deal with unifying lots of different compute types, for example macOS when we're doing iOS builds. So there's a lot of underlying complexity to the product we're building, and we're thrilled to have partnered up with Elotl to help us with some of the underlying infrastructure so we can really focus on building out our product. And with that, Madri, do you want to introduce yourself? I'm Madri. I'm the founder of Elotl, makers of Nodeless Kubernetes.
The vision for Nodeless Kubernetes: we have transitioned from treating applications as pets to treating applications as cattle, and the vision for Nodeless is to do the same for compute. So not pre-provisioned, always-on compute that's hand curated, but compute that comes up and disappears according to application lifecycle. It works at two levels. The first level is a single cluster, where you just have a Kubernetes control plane that is deployed and no compute that is pre-provisioned at all. The right-size compute for the pod comes up when the pod starts. So if an iOS build is scheduled, a Mac 1 Metal instance comes up, and if you want an ARM compute shape for your app, ARM compute comes up. And it could come in the shape of an on-demand, spot, or Fargate launch type on AWS, and similarly for other cloud providers. It also works at commoditizing workload clusters in a federation of workload clusters, so you can have policy-driven application scheduling across multiple workload clusters and you're not treating individual workload clusters as pets. Having said that, I'll hand it back to Zach to go more into the problem statement. Great, thanks. So, a quick TL;DR on Bazel and distributed builds. Like I said, Bazel is Google's open-source build system; some of you may be aware of it. It was used briefly, I think, in the Kubernetes project, although I think they hit the eject button on that at some point. This build system is universal, extensible, and fast if the parameters are right, and that's where Flare comes in. Something that is a little lesser known about Bazel is that it has great iOS support and broad adoption through the Mobile Native Foundation, and there's a lot of great work in the community to make Bazel the de facto standard for building large iOS apps at scale these days.
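The "right-size compute for the pod" idea Madri described can be sketched in a few lines. This is a toy illustration only: the catalog entries and matching logic are assumptions for the example, not Elotl's actual scheduler.

```python
# Hypothetical sketch of per-pod compute selection: given a pod's
# requirements, choose an instance shape to launch. The catalog entries
# here are illustrative, not a real Elotl or AWS catalog.
CATALOG = [
    {"type": "mac1.metal", "os": "macos", "arch": "x86_64", "cpus": 12},
    {"type": "a1.xlarge",  "os": "linux", "arch": "arm64",  "cpus": 4},
    {"type": "m5.large",   "os": "linux", "arch": "x86_64", "cpus": 2},
]

def pick_instance(pod):
    """Return the first catalog shape satisfying the pod's OS/arch/CPU needs."""
    for shape in CATALOG:
        if (shape["os"] == pod.get("os", "linux")
                and shape["arch"] == pod.get("arch", "x86_64")
                and shape["cpus"] >= pod.get("cpus", 1)):
            return shape["type"]
    return None  # nothing fits; the pod stays pending

# An iOS build pod needs macOS, so a Mac 1 Metal instance comes up:
print(pick_instance({"os": "macos", "arch": "x86_64", "cpus": 8}))  # mac1.metal
# An ARM workload gets an ARM shape instead:
print(pick_instance({"arch": "arm64", "cpus": 2}))                  # a1.xlarge
```

The point of the sketch is the inversion: instead of pods being fitted onto pre-provisioned nodes, the node shape is derived from the pod and provisioned on demand.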
Another feature that I already touched on is that Bazel has what we call remote build execution, which refers to a set of APIs baked into Bazel that allow the Bazel client running on a local machine to work as a scheduler and schedule actions across a big distributed worker farm. And again, that's what Flare is working on. So actions executed by the Bazel client might run locally, on a CI agent or a local developer's MacBook, for example, or these actions might run remotely, where Bazel is sending those actions into the cloud to execute them. And then, most importantly, maybe some actions are not executed at all, because cached results are heavily reused. Like I said, this requires a server implementing a few of the remote API protocols, and that's what we offer. One really important note here is that these macOS actions really need to be running on Apple hardware. It might be tempting to try to run them on some Linux hosts, and there's some emulation and things that folks are looking into, but long story short, these actions are meant to be run on Apple hardware, from a licensing perspective if nothing else. So that's really important for us, and that's the centerpiece of the talk today. We launched this company just a few years ago, right around the time that these new AWS Mac 1 Metal instances were coming out, so of course we were super excited to see this. Especially when we look at folks building iOS apps at scale, this means potentially no more big messy closets jammed full of Mac minis. There's fast provisioning now, in theory without limits beyond AWS resource quotas. And this is great, because provisioning new Mac minis can take quite a while if you have to set them up in your own data center. Of course, AWS is well known for auto scaling.
We have that in scare quotes here because that's up for debate, but in theory this is a feature that's provided. And then, of course, standardizing the DevSecOps on AWS is awesome, rather than dealing with other infrastructure providers or bringing your own hardware. So like I said, these instances are great, and we were super excited to have launched our company requiring Mac compute right when these became available. But there are unfortunately still a few shortcomings that we had to overcome, especially in the earlier days. I think the first is the pricing model: even with committed-use discounts, the pricing, as you can see here, is a bit extravagant, especially when you're dealing with a large number of nodes. This is one of the biggest sticking points a lot of people have when it comes to adopting these Mac minis: they're expensive. Another issue is that there are actually some hard limitations on auto scaling. One of these is the 24-hour minimum allocation. If you squint at that, really what it's saying is that this isn't auto scaling at all, right? When we turn on a machine, we're stuck billing in 24-hour increments, so we don't want to turn on ten machines just to turn nine of them back off and still be billed for the 24-hour period. As a result, what we see is that these instances are often underutilized after we scale them up; they might hang around for another 23 hours after being turned on. One of the biggest issues we see with out-of-the-box tooling is that the existing auto scalers are really meant for Linux, and they're just not a good fit for the shape of the Mac compute. Oh, I should explain the graphic here. This is actually a production screen grab from a macOS metal instance running as a CI worker, running CI jobs.
So we can see here the jobs coming in; we're spiking up close to 90, 100% CPU. But unfortunately there are just these big areas of the graph that are totally underutilized, which is one of the big challenges we'd like to address, because we don't want to just be burning CPU cycles, kind of like they mentioned in the keynote. So with these Mac 1 Metal instances, management became a whole lot easier, but it's still not easy. Some of the issues folks working with these instances will have already run into: even the smallest changes to the AMI might take hours to bake, deploy, and roll out to all of your machines. Also, configuring the auto scaling, for example for the CI and remote build workers we're working with, is still fairly complex. Even with the AWS primitives that we have in Terraform, it can still be a bit of a challenge. We've got this big Terraform template, we need to get all that stuff just right, set up all the auto scaling groups, and so on. Still not super straightforward. So to recap the problems that we've seen at our company while adopting the Mac compute: one, configuration management, while easier, is still a challenge; two, auto-scaled instances are underutilized and also expensive; and three, what about Bazel? Some solutions that we've come across working in this space: as far as configuration and management, why don't we just use Kubernetes and cloud-native best practices to manage these AWS Macs? That's where Elotl's solution comes in. And secondly, auto-scaled instances are expensive. Well, one of the hacks we have here is to utilize the same workers both as the CI runners, the CI agents hooked up to a CI solution, and for executing the Bazel remote actions.
That way the underutilized CPUs can be joined in and work as part of the remote build farm when they're not actively running CI jobs. And then, what about Bazel? Well, now that we've got this unified compute, we don't have to have one pool of workers running over here for our Bazel agents and another one for CI agents. It all just runs on the same infrastructure, all managed under Kubernetes. So, a quick look at how the solution evolved for us. If you're doing iOS builds, you should probably have a CI system, so we start with that as the bare minimum. Running CI/CD on Mac 1 Metal instances is really awesome: these instances are fast. They're a little costly, but they're great. Moving up a tier, using Kubernetes to manage these CI/CD workers on top of AWS Mac 1 Metal is pretty great. And for us, the total galaxy-brain move is Kubernetes-managed CI/CD workers running in conjunction with those Bazel remote agents. At the bottom here we've got a quick look at the whole stack, where we're running the CI agents and the remote build execution workers, all of that on these great new Mac instances, all managed by Kubernetes. And that's where we are today. With that, I'll hand over to Madri to talk about some of the benefits of the solution. Thanks, Zach. So when Zach and others wanted to consume Mac 1 Metal on AWS, we wanted to do an objective evaluation of: hey, do we really need Kubernetes here? We are all here for KubeCon, so we all love Kubernetes, but for an end user we wanted to do a thorough evaluation: is Kubernetes really required for managing these Mac 1 Metal instances, or is manual management good enough? So we did benchmarking along four dimensions.
The first one is graceful termination of both the build agent and the compute node. The second is build agent configuration and ongoing operations. The third is auto scaling of both the build agent and the underlying compute. And the fourth axis is how you reclaim bad nodes, which happens to be a more frequent scenario for Mac 1 Metal instances compared to ARM nodes, GPU devices, or other kinds of compute. So let's go into the details of each of these, starting with graceful termination. For the build agent: if you want to gracefully terminate it while managing your Mac 1 Metal nodes manually, you'd have to configure EC2 lifecycle hooks, and you have to maintain, constantly monitor, and edit those lifecycle hooks, which is a huge amount of overhead for operations teams. Whereas with Kubernetes, we all know that if you're running a build agent as a Kubernetes pod, even if you get a termination notification, the workload running inside the pod can complete before the pod is gracefully terminated. You get that out of the box in a Kubernetes-managed system. Let's move on to graceful termination of the nodes themselves. Again, if you're managing your Mac 1 Metal compute manually, you'd have to create, configure, manage, and update your EC2 lifecycle hooks, and it's tightly coupled to your build agent. Whereas with Kubernetes, you're able to cordon off the node, drain it, and gracefully terminate it. So you get a lot of these graceful termination requirements, for both the build agent workload and the underlying compute node, out of the box with a Kubernetes-based system. Next, let's talk about build agent config. Build agents need certain config information, like where your log files are located and so on.
If you're manually managing your build agent on a Mac 1 Metal, you'd have to bake this config information into the AMI at creation time. And during ongoing operations, if you have to update the config, you'd have to provision a brand-new node, update the config on that node, create a new AMI out of it, and then churn all of your existing nodes onto the new AMI, which is a huge amount of overhead for operations teams. Whereas if you're managing it with Kubernetes, you get much more ease of operations for config management, because you can use ConfigMaps and Secrets to pass in the config information. You simply update your ConfigMap or Secret and roll out a new deployment for your build agent, so there's real operational simplicity to the build agent config management. The third dimension is auto scaling. We also wanted to evaluate how auto scaling of the build agent pods works in a manually managed Mac 1 Metal scenario, and how auto scaling of the underlying compute works, manually managed versus Kubernetes-managed. Let's look at auto scaling of the build agent first. Say you want to scale the build agents up based on pending jobs in your build queue and scale them down when pending jobs drop, or during nights and weekends when your build workload is minimal. Manually, you'd have to create auto scaling groups, expose the metrics from the build agents through CloudWatch, and wire those exposed metrics through a Lambda function or something into your auto scaling group. So there's quite a bit of operational overhead involved in setting up all of that plumbing.
Whereas with Kubernetes, you can have an HPA configured to scale your build agent pods up and down based on your build workload. You already have the infrastructure in place; you simply create a new HPA. And let's look at auto scaling of Mac 1 Metal instances. Again, in the manual scenario it's tightly coupled to the build agent, so you'll have a one-to-one correspondence between build agent and Mac 1 Metal if you're managing them manually. Whereas with Kubernetes, and Nodeless Kubernetes in particular, the auto scaling comes out of the box: based on pending pods, the Nodeless Kubernetes cluster will spin nodes up and scale them down, and it's cost aware, aware of the 24-hour billing cycle of the underlying compute. So you get auto scaling out of the box as well. The last dimension is bad instances. We have noticed anecdotally in most of our customer deployments that Mac 1 Metals have a non-trivial number of bad nodes in the fleet made available by the cloud provider. At one point in September of 2021, we noticed up to 10% of the nodes spun up weren't really functional. If you were managing your Mac 1 Metal instances manually, someone would have to notice that, hey, my build failed, triage the failed build, and then figure out that the node was never really able to come up. Whereas with Kubernetes, we all know that if a node is unhealthy, the Kubernetes control plane will never schedule a pod onto it, so it's taken care of automatically. So, having evaluated manually managed versus Kubernetes-managed, we decided to build the following stack for managing Mac 1 Metal instances for build workloads orchestrated by a CI orchestrator, using a Kubernetes-based framework. At the top, we have Build Scaler, which is an HPA.
What Build Scaler does is look at pending builds in your build queue and at what percentage of your build agents are busy; using the metrics collected from the build agents, it scales the number of build agent pods up and down based on pending jobs in the build queue. So if there's a spike in the number of build jobs submitted, it increases the number of replicas for the build agent pods, and it scales the build agent pod count down if not that many build agents are busy. Going down one level, the next level is Nodeless Kubernetes. Nodeless Kubernetes takes care of auto-scaling the underlying compute based on pending pods in the control plane. So if there's a spike in pending pods, based on what the HPA has advised the Kubernetes control plane, Nodeless Kubernetes will auto-scale the Mac 1 Metal nodes up when there's an increase in pending build jobs. The third level is the Kubernetes control plane itself, which gives us a lot of the graceful termination, bad node management, monitoring, and all of that nice Kubernetes goodness out of the box. And at the fourth level is the Mac 1 Metal node itself. When it's running in Nodeless Kubernetes mode, it runs a kubelet stack: a kubelet and a CRI for Mac 1 Metal. That CRI runs your build agent pod, and the build agent pod runs both your build agent and the Flare build executor. Now let's look at each of these four layers in action, starting with the HPA, Build Scaler. Build Scaler is an open source project from Elotl; the links are provided in the final slide. So here you see that the running job count is listed in red.
You see that the running job count and the total agent count are tracking pretty synchronously, which means that Build Scaler is spinning up new agents, creating new agent pods, based on pending jobs in the build queue. If there's a spike in the number of builds being submitted, new build agents are created pretty much in sync, so you don't see any lag between the build agent pods and the pending builds in the build queue. The second layer is Nodeless Kubernetes, which provisions just-in-time compute when a pod starts up and terminates the compute when the pod terminates. The stack is comprised of the compute auto scaler that auto-scales Mac 1 Metal compute, and on the node you have a kubelet plus the Mac-specific CRI. And there's a free tier where you can provision and manage up to two Mac 1 Metal instances, the entire Kubernetes stack, for free. So now let's look at Nodeless Kubernetes, aka Luna, in action. What we have here is a screenshot of a production Nodeless-Kubernetes-managed Mac 1 Metal environment, and you see that the cost, the price of the Mac 1 Metals, is trending up and down based on time of day, day of week, and day of year as well. Let's try to superimpose what we saw with the HPA, the pending build jobs and pending build pods, against the cost. You see that when there are more builds you're paying more, and when there are fewer builds you're paying less; the lulls in the builds right here correspond to the low amount being spent on your Mac 1 Metals. The Nodeless Kubernetes component is smart enough to understand that your Mac 1 Metals are charged on a 24-hour cadence. So it predicts: okay, I have finished my build, I just provisioned my Mac 1 Metal, but there are a lot of pending jobs, so I'm going to keep this Mac 1 Metal instance on to schedule future builds.
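The keep-the-node-on decision just described can be sketched roughly as follows. This is an assumed illustration of the idea, not Elotl's actual code; the function names and inputs are hypothetical.

```python
# Sketch of a 24-hour-billing-aware scale-down decision: a node already
# paid for through its billing window is kept if more work is waiting or
# predicted before the window ends.
from datetime import datetime, timedelta

BILLING_WINDOW = timedelta(hours=24)  # Mac 1 Metal's minimum allocation

def should_terminate(launched_at, now, predicted_next_job_at, pending_pods):
    """Terminate only if the paid-for window is over and no work is expected."""
    window_end = launched_at + BILLING_WINDOW
    if pending_pods > 0:
        return False        # work is waiting right now
    if predicted_next_job_at is not None and predicted_next_job_at < window_end:
        return False        # a build is expected while the node is still paid for
    return now >= window_end  # otherwise, release it once the window ends

launched = datetime(2022, 5, 18, 9, 0)
# One hour in, idle, but a build is predicted this afternoon: keep the node.
print(should_terminate(launched, launched + timedelta(hours=1),
                       launched + timedelta(hours=6), pending_pods=0))  # False
```

A Linux VM billed per second would skip the window check entirely, which is why, as noted next, Mac 1 Metals are churned less aggressively than ARM or Intel VMs.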
So it doesn't churn the Mac 1 Metals as often as it would churn an ARM-based or Intel-based VM. Now let's look at the entire thing in action. We're going to start by looking at Build Scaler and Nodeless Kubernetes, and then Zach is going to talk about the Flare components, the Flare build agent, and how we're optimally utilizing the compute that's provisioned for the Mac. In the top window, we're going to execute the operations in this architecture. The bottom left window is where we're going to watch the nodes in the system, and in the bottom right window we're going to watch the pods. We currently have one Mac 1 Metal running in our environment, and we have the Mac auto scaler, which is the Nodeless Kubernetes component, plus the HPA that's configured in the environment. If we look at the HPA, it has a target of 90, the min number of pods is one, the max number of pods is three, and the current replicas is one. What that means is that we want the HPA to scale from one pod up to three pods. Currently it's at one pod, and we want it to perform a scale-up operation if the mean utilization of the build agents, across the build pods, hits 90%. So let's go ahead and schedule a build. If we kick off a build, what we'd want to see is that the HPA realizes a new build came into the build queue and scales the number of build pods from one to two, because the average utilization of the current build agent should go above 90. So let's look at the new pod being scheduled: if you look at the bottom right window, the pod is in Pending state because it doesn't have a Mac 1 Metal compute to run on.
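The scale-up the demo is waiting for follows the standard Kubernetes HPA arithmetic. A small sketch using the demo's numbers (target 90, min 1, max 3); the assumption here is an average-utilization metric, as in the stock HPA:

```python
# The stock HPA formula: desiredReplicas =
#   ceil(currentReplicas * observedMetric / targetMetric), clamped to [min, max].
import math

def desired_replicas(current, observed_utilization, target=90, lo=1, hi=3):
    want = math.ceil(current * observed_utilization / target)
    return max(lo, min(hi, want))

# One busy agent at 100% utilization against the 90% target: scale to 2 pods.
print(desired_replicas(1, 100))  # 2
# Agents mostly idle: scale back down to the minimum of 1.
print(desired_replicas(2, 10))   # 1
```

So as soon as the single build agent's utilization crosses the 90% target, the HPA asks for a second pod, which is the pending pod seen in the demo.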
So here's where Nodeless Kubernetes kicks in: it sees that there's a pending pod in the Kubernetes environment and no compute nodes available, so it goes and creates a brand-new Mac 1 Metal instance. That Mac 1 Metal instance you see in the bottom left window is ready, and that's where the pod is scheduled. Zach? Yeah, cool. So here's a quick look at some of the metadata coming out of Bazel. We're looking at the CI output here, and I'll explain a little more about what's going on under the hood. What actually happened here is that the build ran in the CI environment, but, if you look closely, 11 actions were executed remotely. We had a lot of cache hits because this was a pre-built example, but 11 actions still executed remotely. This is a build of Bazel itself, the open source project, against one of our environments. And so we clicked a link there to go into some of our UI that lets us look a little deeper into what's going on behind the scenes in that Bazel build. We can see a bunch of metadata about the build: the build logs, some information about the cache hits, and all that. This is powered by an API built into Bazel that exposes that data to us. So that was an example of a success case. This is an example of when there are failures in Bazel. Again, in the CI system we've hooked into some lifecycle events in Bazel, and we've extracted and parsed some of the errors that were dumped out during this build and bubbled them up right in front of the user. In this case, the build failed because we tried to execute a macOS build of Bazel without any macOS workers available at all, so of course the build failed. And again, clicking that link, we go back in and see the build details in our platform.
And we actually see that we extracted this error, so we'll dive in there to look at some of the information we're gathering about errors. When errors occur in the system, like I said, we're extracting them with a proprietary algorithm, and while the stack trace doesn't look great, this is the stack trace that was dumped out of Bazel. We're cataloging this error, so we can see the last time it was encountered in the system. You can think of it like Sentry, but for build-time errors rather than runtime. So that's a quick snapshot of what the UI is able to do and some of the metadata coming out of Bazel. I'll hand it back over to Madri to talk a little more about what's going on here. Yeah, so you see that once the build finished, in the bottom right window, the build agent is terminating. But you might wonder: hey, why is the compute node not being terminated? The compute node, the Mac 1 Metal instance provisioned in the bottom left window, is still in Ready state. Nodeless Kubernetes is smart enough to know that this Mac 1 Metal that was provisioned is going to cost you a dollar an hour for 24 hours, irrespective of whether you terminate it now or 23 hours later. So based on the pending pods in the environment and past trends, it tries to keep the Mac 1 Metal on for a little longer if it predicts that another new build job is going to come in, and instead of provisioning a third Mac 1 Metal, it reuses this existing one. We had to extend the monitor instead of mirroring it, so that's why we're trying to figure out what's going on by looking at the screen and not at our monitors. Any questions about the whole HPA and Mac 1 Metal management while we figure this out?
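The "terminate now or 23 hours later, same cost" point is easy to see as arithmetic. A back-of-the-envelope sketch, using the dollar-an-hour figure from the talk and assuming per-hour billing beyond the 24-hour minimum:

```python
# With a 24-hour minimum allocation, an allocation bills at least 24 hours
# regardless of how briefly the instance actually ran.
import math

HOURLY_RATE = 1.00  # $/hour, the figure quoted above
MIN_HOURS = 24      # the 24-hour minimum allocation

def allocation_cost(hours_used):
    """You pay for at least 24 hours, then per-hour beyond that (assumed)."""
    return max(MIN_HOURS, math.ceil(hours_used)) * HOURLY_RATE

print(allocation_cost(1))   # 24.0 — one hour of builds still bills a full day
print(allocation_cost(23))  # 24.0 — so keeping the node 23 more hours is free
print(allocation_cost(30))  # 30.0 — past the minimum, billing grows hourly
```

That's why reusing the already-provisioned node for the next build is strictly better than terminating it early and provisioning a fresh one, which would start a new 24-hour window.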
So, we are working on supporting the M1 instance types that have come onto the market; that's a work in progress currently. We want to add that support pretty quickly, so that should be coming out pretty soon as well. The whole idea of Nodeless Kubernetes is that the end user of the Kubernetes platform shouldn't have to worry about what newer, better compute shapes are coming onto the market. Zach, do you want to talk about what's coming up in the Bazel environment at Flare? Yeah, sure. So on the future roadmap, we've got some items we want to get to at some point. One of those would be some Bazel-specific optimizations for our use case. One of the things we want to do is find a way to intelligently share some of the Bazel agent outputs directly with the CI job and short-circuit some of the round-trip network calls; that could potentially make an impact there. The other big item that we're really excited about is that Elotl open sourced the Build Scaler framework itself, which is super awesome. There's a link here, and we'll share the slides, so definitely check it out. And then some other stuff: the upcoming M1 support, which we're super excited about. From a Bazel perspective, these M1 chips are really great for Bazel builds; things are quite a lot faster. There are underlying complications there, but we're glad we've got Elotl here to help us through some of those issues. I think that's really it as far as the future roadmap from our end. Is there anything you want to add? Yeah, that's about it. So Build Scaler is designed in such a way that it can pull in external metrics from any CI orchestrator. Currently it supports Buildkite, CircleCI, and Flare Build, but it can be extended to any CI metrics provider.
It's basically taking in the metrics from the CI agent and converting them into metrics that can be ingested by the HPA. Cool. Yeah, so a few references: you can find more of our product info on our website. And then, Madri, run us through some of the links here. Yeah, that's about it. So there's a free tier for Mac 1 Metal Nodeless Kubernetes, like I mentioned earlier; you can run it for up to two nodes. Build Scaler, which is the HPA, is open sourced, and the free tier for the Mac 1 Metal compute is free, obviously. The only thing that's not open sourced yet is the Mac auto scaler component, and that is TBD. Cool. Yeah, reach out to us with any questions; we've got our emails here. And I know there are some online folks; I don't know if there was meant to be an online Q&A, or if anyone's monitoring that. But I guess that's pretty much it for our talk. Come find us after and we can chat. Do we have time for a quick Q&A here? Go ahead. Oh, yeah. Go to the mic, please. Yeah, I can repeat your question. So I think the question is: this is primarily for macOS builds, is that right? Yeah. And the follow-up question: in my use case, I've already built my macOS software, but I just want to test the configuration, so my use case is to install and reinstall and reinstall the same software. Does this system allow that, to keep reinstalling the software that I have on the Mac hardware? Yeah, that's a really good question. So this system is basically an application of Kubernetes and Nodeless Kubernetes for builds. One of the earlier slides talks about the build agent config, where you want to be able to inject different build agent config data into your pod.
That becomes much easier with this system than doing it manually, because with Kubernetes you're passing in all the config data as ConfigMaps, Secrets, things like that, so you'll be able to inject varying configurations dynamically at runtime. So I can use the macOS hardware for anything I want to? Yes, exactly. Sure. Any other questions? Yeah, that's a really good question. The question is: what CRI are we using? This is a CRI that we built at Elotl for Mac 1 Metal. It's a CRI that implements the CRI's image service and runtime service, which talk to the kubelet. So the CRI we built is a custom CRI for Mac 1 Metal; that's the part that's closed source, with the open source strategy TBD. Any other questions? Yeah, so: are there any plans on your end to support physical Kubernetes nodes running on local hardware, or hardware in the data center? Because if I have a continuous base load, that will be much, much cheaper than any AWS instance. Yeah. So the Nodeless Kubernetes stack, which is comprised of the CRI on the kubelet node and the autoscaler, is applicable to on-prem as well; the stack would work as is in an on-prem data center. One additional comment I'd have on that: from a Bazel perspective, it's important for us to allow folks that have big existing fleets of Macs to bring that hardware to our solution, so that's definitely functionality that's core to our offering. That's one of the original reasons we reached out to Elotl: we were looking for hybrid-cloud setup support out of the box. Do you have AMIs for AWS Mac with a pre-built environment for this stack? Yeah.
So the nice thing about Kubernetes in general, and Nodeless in particular, is that since we can declare, hey, I want Xcode version ABC and these other build packages or any other dependency packages, there are pre-built AMIs already available. If you mention the dependencies needed for running the workload in the pod manifest, the right AMI is picked at runtime based on that information. So is it managed by the Flare system, or? Yeah, that's a good question. If the AMI is being used for Flare Build plus other kinds of build agents, then it's managed by Flare and us. If you're using it for non-Flare-Build-related use cases, then it would be managed by us. So it's basically an AMI that's already in our fleet, and we whitelist your AWS account to consume it. Thanks. Sure. Any other questions? Awesome. Thanks so much for joining our talk. Zach and I are super happy to share. Thanks a lot, everyone.