Hi, everyone. Today we're going to talk about building a managed API using Kubicorn, and the journey that Robert and I went on. If you want to go to the next slide.

So to get started, I'm going to tell everybody about myself. I co-authored this book called Cloud Native Infrastructure. We're doing a book signing later today. Justin, my co-author, is a really awesome dude if you get a chance to go talk to him. He wrote a lot of the stuff in the book about containers and orchestration, and he's really awesome. Also, I'm a gopher, so I contribute to Go. I just got my name in the Go AUTHORS file earlier this year with some contributions I made to Go. I'm a Kubernetes contributor, so I've worked in a couple of repos. Probably my most notable work was last year and earlier this year, when I did a lot of work in kops with Amazon doing some private topology stuff that you can now see in Amazon's EKS solution that they released last week at re:Invent. I wrote this program called Kubicorn one weekend, and it's turned into a fairly successful open source project. We're going to look at it and talk more about it later on. And I already mentioned kops. And then the most exciting part is that I just started a new role as a senior developer advocate at Heptio. So my job for the foreseeable future is going to be doing this: talking to the Kubernetes community about awesome things that we're working on in Kubernetes and helping you understand how and when to use new and exciting things in Kubernetes.

Do you want me to, I can do you too if you want. Yeah, okay. This is exciting. I'm just going to keep talking. So this is Robert Bailey. How long have we been working together? Like a year and a half now, probably? He works at Google, up in Seattle. He's a co-lead of SIG Cluster Lifecycle. So, show of hands: who here knows what SIG Cluster Lifecycle is? Who here has been on a SIG Cluster Lifecycle call before? Awesome. Yeah, I recognize most of those people. What we do is manage this idea of a Kubernetes cluster's beginning, middle, and end. I like to think of it as CRUD operations for Kubernetes clusters: how do they get created, how do we manage them over time and upgrade them, et cetera, and then ultimately, how do we delete our sad and dying Kubernetes cluster? He's also a founding member of Google Kubernetes Engine, which is pretty cool. This is one of the people who helped write GKE. That's pretty amazing. And he's a reluctant owner of kube-up.sh, which is a very necessary evil in the lifecycle of how we got to where we are today, and in a lot of what we're going to be talking about with this API that we're going to propose.

So, you want to install Kubernetes. Who here has installed Kubernetes before? And we all saw Kelsey's talk earlier today; it's very easy now. But who here has used one or more of these deployment tools? Who has used one that's not on here? Yeah, so there's even more out there. I've worked on a couple of these. I've used most of them. kops and Kubicorn are the big two. But there is one underlying weird problem with all of these, which is that as a user, when you walk up to one of these tools for the first time, it always looks and feels a little bit different. You really have to go through this exercise of learning: OK, for this specific implementation, what are the expectations for me as a user? What do you need from me? What do I have to give you? And what do I have to configure?
Maybe I have to write a file, or change some configuration values, or edit some YAML. We all love doing that. But the point is that it was a different experience for every one of these tools, and they're all trying to solve the same problem. And that's what we're hoping to solve with this cluster API.

So, infrastructure should be boring. This is a word we hear a lot; I feel like "boring" is a magic word in Kubernetes right now. And if infrastructure should be boring, Chris, why did you just write a whole book on it? Well, the whole book is about how to do infrastructure in a cloud-native way. And just like Kelsey said this morning, when it's all done, it should be boring. You should kind of just watch some logs, and when something happens, it happens, and then you don't really touch anything and you're kind of done. So that's the desired state of where we would like to get with infrastructure. And we think the cluster API (again, we're going to define what it is and what it looks like a little bit later, when Robert starts talking) is a really great way to get there. Again, looking at boring: if the infrastructure is boring, that implicitly means that the cluster lifecycle should be boring. So creating a Kubernetes cluster, upgrading a cluster, updating a cluster, mutating a cluster in some way, whether that's upgrading the control plane, or changing some configuration, or even adjusting the number of nodes that are running all of my lovely application containers that are doing all those things that make me so much money as a software engineer.

So, reinventing the wheel. Oh, also, real quick: all of these pictures we're looking at are pictures that I took from the top of mountains that I've climbed. We've gotten some feedback that this is hard to read right here. Like, sorry, not sorry. Look at my mountains. So just deal with it. Anyway, this is Mount Democrat. It was one of my favorite ones to climb. If we're looking at reinventing the wheel, you can see all of these similar things that these cluster tools usually end up converging on as they mature as a piece of software. The first one is installation: what does it look like to stand up the control plane? What does it look like to stand up nodes? Then the configuration of these components: we usually try to make it easy for humans to define something that's going to mutate how the cluster is going to look and feel after it gets up and running. The problem, again, is that those shortcuts we're writing are all different based on the installation tool. Then we're looking at cluster upgrades and component upgrades. Maybe that's rolling your app; maybe that's rolling resources in your cluster and the Kubernetes components themselves. Or even adding a node. This is actually a pretty challenging thing to do, if you've ever looked at what needs to happen in order to bring up a new virtual machine that works as a worker node. You have to get TLS configuration in there somewhere, if you're using TLS. You've got to get a kubelet up and running. Usually that's watched with systemd, and we all know how much fun systemd is. So there's a lot of noise that goes into that. And then auto-scaling, which is, again, kind of a fancier rebranding of adding a node to your cluster. It just says: I have some logic, I really don't care what it is, and it's going to change and mutate the number of nodes we have in our cluster.
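To make that concrete, here is roughly what "just add a node" involves if you do it by hand. This is a minimal sketch, assuming a kubeadm-style join onto a GCP VM; the instance name, token placeholders, and package setup are illustrative and will vary by installer and operating system.

```bash
# Provision a VM with the cloud's CLI (example: GCP; the name is illustrative).
gcloud compute instances create worker-1 --machine-type=n1-standard-2

# On that VM: install a container runtime plus the kubelet and kubeadm
# (this assumes the Kubernetes apt repository is already configured).
apt-get install -y docker.io kubelet kubeadm

# Join the cluster; kubeadm handles the TLS bootstrap for the kubelet.
kubeadm join --token <token> 10.0.0.2:6443 \
  --discovery-token-ca-cert-hash sha256:<hash>

# And make sure systemd keeps the kubelet running.
systemctl enable --now kubelet
```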
But again, these are all different, and it's a different user experience based on your installation. And what if I want to move to another cloud? So right now, say we walked down this journey of: I'm going to use kops, and I'm going to go install a cluster in Amazon. kops has an API. You define what you want. It's declarative, just like Kubernetes. And you go and you get this really awesome cluster in Amazon. And it's great, and it works, and it scales, and it's production ready. It runs in HA. It's baller. But then, what if you want to move that to DigitalOcean? Or what if you want to move that to Google? Or what if you want to move half of it to Google or DigitalOcean and the other half to bare metal? Or maybe you want to move one of your nodes to a Raspberry Pi on top of a mountain, and the rest can go in Microsoft's Azure. The point is, we don't really have a good way of doing that. And last I checked, the whole point of Kubernetes was to be cloud agnostic and liberate us from these decisions of which cloud I should be running in right now. So, next.

So, the birth of kubeadm. Who here has used kubeadm, or even knows what it is? OK, so most of you. kubeadm is this tool that we started about a year ago, and it was designed to do the same thing, but for a different part of the stack. Right now we're talking about how we would represent infrastructure in Kubernetes. kubeadm solves the problem of: first, let's unify bootstrapping. We don't care if you're on Ubuntu. We don't care if you're on CentOS. We don't care what operating system you're running on; I was even successful in getting kubeadm to run on my Arch Linux server at home. And it allows us to get all the Kubernetes components up and running with Docker, and it does a really great job at that. So again, we created one bootstrapper to rule them all. But it did not do machine provisioning.

On our next slide we're going to look at this diagram, and it's really important to have this understanding of the two fundamental layers of Kubernetes. One is the virtual machine and underlying resource stuff; these are the fake things that you could imagine touching. And the other one is the actual Kubernetes components themselves. Who knows what the Kubernetes components are off the top of their head? This is your quiz for the day; I'm going to make everybody rattle them off. It's the API server. It's etcd. It's the scheduler, the controller manager. It's all of the bits of software you run that make Kubernetes do what it needs to do. kubeadm gets those up and running, but it does not put in place the infrastructure that those run on top of.

So we're going to look at this slide here, and that was a really great transition. You can see that we have these three layers over here on the right: layer three, layer two, and layer one. The kubeadm layer, if you look, is kind of right here (I'm trying to point; I wish I had a laser), and you can see that it runs on top of masters and nodes. But either way, underneath all of these logos at the bottom, that's our infrastructure layer. This is the stuff that I, as a DevOps user and as an infrastructure engineer, care about. A lot of software engineers kind of don't really want to deal with that, and that's why I make the money I do. So anyway, kubeadm's up there. But what solves this layer? How do we unify it? How do we say, infrastructure, you're the same? Want to hit the next slide, Robert? So here's an example of Kubicorn.
This is a tool I wrote, and we're going to use it as an example of one tool that solves provisioning clusters. Then we're going to compare it to another tool that effectively does the same thing, but thinks about it differently. So in this example, we're going to take Kubicorn, and using the Kubicorn API, which only matters to Kubicorn, we're going to create some resources in Amazon. These resources could be virtual machines, load balancers, subnets, security groups, whatever. Also, we want to go and create some stuff in GCP. This is another exercise of learning Kubicorn and creating stuff in GCP using the Kubicorn API that only matters to this one piece of software. And we're going to look at the autoscaler; it's going to do effectively the same thing. And one more: then we're going to create resources in GCP and AWS. But here's the problem: everything over here is fragmented and segmented into its own independent way of creating these resources. The autoscaler is just another way of creating a resource, because if you think about it, going from 0 to 1 shouldn't really be any different from going from 1 to many. So an autoscaler effectively is another reimplementation of Kubernetes infrastructure management in general. This is the whole point of SIG Cluster Lifecycle; we're not called SIG Cluster Creation, right? Let's go ahead and hit the next one.

So, what if we had a way to create a declarative representation of a cluster and apply it consistently across clouds? We would let Kubernetes provision this infrastructure, which is always an interesting problem for us to solve; we'll get to that in a sec. Oh, thanks. Because in order for us to get Kubernetes up and running, we need infrastructure in place. But if we're using Kubernetes to create infrastructure, which one comes first? How do we solve that bootstrap problem? So as software engineers, we get to start looking at how we're going to create infrastructure and then simultaneously bootstrap it. And we get to re-walk the steps of our forefathers who tried to compile the first compiler for the first time without a compiler. Like, how did that happen? So anyway, number three: let Kubernetes handle changes to the cluster in a Kube-native way. When I say "the Kube-native way," what it means to me is: I as a user go declare something. I say, go make this so, and then there are these little loops chugging away behind the scenes that go and enforce and reconcile whatever it is I said I want to make so. And in this case, we are proposing to do this with infrastructure. So I've introduced the problem space, I've told you a little bit about what the cluster API should be, and Robert here is going to take over and tell you what the journey was like for him as somebody who wrote this thing. Awesome. Excellent.

All right. Thank you, Chris. So what's next? The cluster management API. The cluster management API, as Chris set up at the beginning of this talk, is a declarative way to create, configure, and manage your cluster. I think Chris mentioned this, but it's not called SIG Cluster Creation. Creating a cluster is relatively easy. There are more than 40 getting started guides linked from the official Kubernetes documentation, and a number of them were mentioned on an earlier slide. And a lot of those guides get you up and running.
Even Kelsey Hightower's Kubernetes The Hard Way gets you up and running, but it doesn't really talk about the rest of the lifecycle of your cluster: what happens when a new Kubernetes release comes out, or when there's a security patch or a software fix. You need to keep using your cluster over time. And so the cluster management API is really designed not just to create your cluster, but to manage your cluster over time.

So if you look at a very high level view of what a Kubernetes cluster is, most people think about a cluster as having some masters and some nodes. But really what we have is a control plane and some machines. The machines become nodes by virtue of the fact that they're running kubelets, but they're really just machines. And in addition to the machines and the control plane, we have some cluster-wide properties. There are some generic things that need to be spread across the whole cluster that aren't really part of the control plane and aren't really part of the machines themselves, but are properties that need to be distributed across the different parts of Kubernetes so that they all work together. So when we think about configuration, there are these three different pieces that we need to configure to have a declarative way to describe what a Kubernetes cluster is.

For the control plane, we're going to lean pretty heavily on something called component configuration, which is using Kubernetes primitives, like ConfigMaps, to configure Kubernetes components. All of the control plane components are themselves pods that run inside of Kubernetes, so we can use Kubernetes primitives to configure them the same way we configure any application in Kubernetes. But the machines and the cluster properties are outside of what's running inside of Kubernetes, and we need a different way to describe those.

So the first thing we need to describe is the cluster-wide properties. We took a look at many of the different installers in the ecosystem and tried to figure out what the generic things are that need to be filtered into the entire cluster, and it turns out there aren't really very many. The ones that we came down to are really about how you define networking: you need to define what your pod network looks like, what your service network looks like, and how you want to describe DNS, because those parameters get distributed out to different parts of your cluster. They go to the kubelet, they go to CoreDNS or kube-dns, they go into the API server. So those need to be defined at a high level so they can be distributed. When we looked at this, there were many other properties we could have elevated to this level, but we decided to go as minimal as possible, because it's really easy to add things later and it's really difficult to take things away. If we put things in here that don't really belong at this high level, it's going to become hard for us to move forward and take them out in a future version.

The other thing you need to define is what your machines look like. Machines are the underlying physical or virtual machines that will turn into nodes in your cluster. If you're familiar with Kubernetes, we already have this notion of a node in Kubernetes, and a node has a spec and a status. But unlike other objects in Kubernetes, the spec of a node is not actually declarative.
There's no way to create a node object in Kubernetes, put a spec in there of what the node should look like, and then have a node show up. So nodes are always reactive: nodes show up and they give you the status of the machine that the node is running on. One of the principles we started with on this project was that we didn't think we could modify core Kubernetes. A couple of years ago it was decided that cluster infrastructure management was outside of the core of Kubernetes. We've taken that to heart, and as we define this API, we've kept it outside of Kubernetes. So what we've defined is a machine. A machine is basically the declared specification for what a node should look like when it shows up. But since it's a different object, it itself has a spec and a status. The status of the machine just points to a node when that node shows up, and the spec tells you what type of node you want.

Again, we looked at the common things that we'd like to define across the different installers that we have today, and we decided there are some really basic things that we know we want to have. We know we want to have names; machines need to be named. So we've used the typical Kubernetes metadata mechanism for naming machines. You can see in this example here, we're actually using a generate name, so we're letting the Kubernetes API machinery pick the name for us. We don't really care what the name is, but we want it to follow this pattern. You can also specify labels and other sorts of things in the normal Kubernetes fashion. Then you can define desired versions: we know we want to run this version of the kubelet, and we want to run this version of the container runtime. It doesn't show here, but you can also specify a specific version of your control plane if you set the role to be a master. And then finally we have roles: right now you can be a node, you can be a master, or you can actually be both. This can be extended in the future if you want to have different types of machines in your cluster. You can imagine wanting to have a machine that just runs etcd, because you want to have a separate etcd cluster outside of the machines that run your API server, your controller manager, and your scheduler.

And then lastly, you'll see this section in the middle, which is the provider config. It turns out that trying to make an API abstraction across lots of clouds is very difficult. Many people have tried this, and many people have failed in the past. So rather than try to reinvent that wheel, what we decided to do was to allow a generic string to be placed in here. In this example, that string itself is actually a Kubernetes API type, so you can see the common fields of API version and kind, and then some GCP-specific fields underneath that that allow us to actually provision the underlying machine. This allows us to harness the full power of the underlying cloud. If you're running on Amazon, you could do things like use spot instances; even though spot instances aren't a common thing you can get across all infrastructure providers, you can still take advantage of the power of that particular cloud.

So what does this look like in practice? We have this cluster management API, which abstracts away the basic infrastructure, and you update your installer tools. So you use Kubicorn, you pass it these YAML definitions of what my cluster should look like, it talks to the underlying infrastructure, and it provisions your cluster for you.
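As a rough illustration of the machine object being described here, a minimal sketch is below. The API group, field names, and provider config values are illustrative of an early cluster-api-style schema and may differ from what the real project uses.

```bash
# Create a machine declaratively; a controller turns it into a real VM and,
# eventually, a node. Field names follow the description above and are
# illustrative, not the project's exact schema.
cat <<EOF | kubectl create -f -
apiVersion: cluster.k8s.io/v1alpha1      # illustrative API group/version
kind: Machine
metadata:
  generateName: gce-node-                # let the API server pick the name
  labels:
    set: node
spec:
  roles:
  - Node                                 # could also be Master, or both
  versions:
    kubelet: 1.7.4
    containerRuntime:
      name: docker
      version: 1.12.0
  providerConfig:                        # opaque, cloud-specific config
    apiVersion: gceproviderconfig/v1alpha1
    kind: GCEMachineProviderConfig
    zone: us-central1-f
    machineType: n1-standard-1
    image: projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts
EOF
```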
Likewise with kops: you take that same YAML configuration, you can give it to kops, and you end up with a cluster. Or you can give it to GKE, or you can give it to AKS, or you can give it to EKS. And in the end, you end up with a cluster that exposes this same, consistent API. What you can do with that API is write generic tools that will work across a cluster created by any of these different cluster creation tools. So for things like cluster autoscaling, the cluster autoscaler can target the cluster management API instead of the underlying cloud infrastructure APIs, which makes your cluster autoscaler portable across clusters and across environments. Likewise, you can take other functionality that deals with machines, like checking the health of your machines and fixing them when they break, or running upgrades across your machines, and you can build that as common tooling that works across multiple environments rather than re-implementing it. Right now, if you want to do upgrades with kops versus upgrades with Kubespray versus upgrades in GKE, they're all completely different APIs. This allows us to have a single mechanism for upgrades. You can still write different upgrade policies and different upgrade tools, but they can all be reused, and they can be shared across a much larger community of Kubernetes operators.

So with that, we're going to switch into a demo and show you what this actually looks like in practice. I've pre-created a cluster using the cluster API. It takes about seven and a half minutes to come up, and we have a relatively short talk here, so I wanted to get that out of the way. Can you not see that at all? OK, let's see. Legible? Not so much. All right. I thought green and black would look better up there. Is white better? Can you see white? Can you read that in the back? Better? Bigger? Bigger? Is that legible? More? I'm seeing thumbs up in the back. Excellent. All right.

OK, so I've created a cluster, and we've installed machines here as a CRD. There was a lot of talk this morning during the keynotes about CRDs and how they allow you to have a lot of extensibility in your Kubernetes cluster, and we're taking advantage of that for our demo here. You can just reuse the same Kubernetes tooling that you're used to. So you can run kubectl, and we have a new resource type in kubectl called machines, and you can say "get machines," and it says, great, here are the two machines you've created as part of your cluster. You can then inspect a single machine; let's see what this machine looks like. This is very similar to the machine that I showed on the slide, except in this case we have extra metadata (I'm hoping I'm not scrolling too fast), because the Kubernetes API machinery adds additional metadata to all Kubernetes objects. So you get a creation timestamp. We have the generate name, but we also get an actual name generated by the server. We added some labels in this case, and we get a couple of extra fields. As I mentioned, we have the spec, and this is what was on the slide, where we've got where this machine should live, what role it has, and what versions we want on the machine. And finally, since it's a Kubernetes object, we also have a status. In this case, the status is a reference to a node. So again, machines and nodes are two different types.
But they're linked through this field called nodeRef, where you can look at a machine (especially if you're writing a tool), find the node reference, and then go describe that node. So in this case, it's just as easy as running kubectl describe node, pasting in that node name, and then Kubernetes says, great, here's your node. So this was the node that was created from that machine definition.

And if I want to create a new node, I can have a new YAML file. So I'm going to create a new machine that's going to be called gce-singleton. And all I have to do is kubectl create -f my machine.yaml file, and it says it created a new machine. And then if I get machines, I will see that there's now a new machine. So what does that actually mean? What happens when I create a new machine is that there's a controller, in this case running inside of the cluster, that sees that a new machine got created and goes and actually starts provisioning a VM on the underlying cloud, which in this case is Google. So if I look at the VMs that are running in my project, you can see the two VMs that were already there, and then there's a new VM that showed up. That VM gets provisioned, starts up, it runs kubeadm join, and after about two minutes you'll have a new node running in your system. So if we run kubectl get nodes, right now we still just have two, and if we wait another minute or so, another node will show up in the system.

So this is great. Now I can easily manually scale my cluster, right? If I'm a cluster operator and I say I wanna add more nodes to my system, I can say, great: kubectl create -f, kubectl create -f. That gets tiring pretty quickly, creating nodes in that fashion. And if you think about Kubernetes, what we like to do is build higher-level primitives on top of our lower-level primitives. So the obvious thing you wanna build on top of a machine is a set of machines, right? In GKE, we call these node pools, where you can have a bunch of nodes that have the same sort of template; they all look the same, and you can scale them up and down. kops introduced a very similar notion. I don't think they call it node pools; it's node sets or node groups. Instance sets? That's not it, but okay. Chris says instance sets. Something like that. This is a pattern that we've seen promulgate throughout the community. Instance groups, that's what it is, instance groups.

So what we've done, rather than make this a first-class API, is we've actually written a client-side tool to sort of pretend to be a set of machines. And so with my client-side tool, I can say: I wanna get all of my machines that are in my set "node." And set=node is a label we've applied to machines, so we're just doing label matching here. And we see that there is one machine that's in a set. That's great. So when I did a kubectl get machines, there were three machines: there was the master, there was this one machine in a set, and then there was a third machine that I created separately. And if I want more, I can say scale. So let's say I want five. And it says, great, I found one machine, which means I need to create four. It's gonna tell you the names of the four machines that it creates, and then those will get provisioned. Let's say I accidentally do that again. That's fine. It says, great, there are already five, right? It's an idempotent request. It says, we've already done that work for you.
We're not gonna churn the system. And then if I look at my machines, now I see five machines, right? I've got the original machines that I got when I created the cluster, I have the machine that I created a couple of minutes ago, and then the four new machines that were created through scaling the cluster. This client-side tooling is very, very simple to write. It's effectively doing what I did originally, where I was saying kubectl create with a YAML file, except in this case it's inspecting machines that have this label and just making extra copies of them. And so this works great. Now you've got a larger set of machines. We can see this machine has come up, the one that I created individually, and next we will get four more machines to join the cluster as well.

Okay, so we talked about manually scaling your cluster. We talked about scaling your cluster with a little bit of automation, through client-side tooling that allows you to scale up and down really easily. Some other things that we mentioned that would be really useful to do are upgrades. Upgrades are sort of everybody's favorite topic. It's something that's very difficult to get right in Kubernetes. We spent a lot of time on the Google Container Engine system trying to implement better upgrades that are less disruptive, that aren't gonna bring your application down, that are gonna do things like respect pod disruption budgets. But it's something that has to be re-implemented in every cluster management system. And what we'd like to do is consolidate a lot of the knowledge around how to do really good upgrades in Kubernetes and make that available to everyone.

And so here what we've done is we've written a tool that will upgrade your system. So we're gonna wait for all these nodes to show up. It looks like they're almost there; we've got four that are just joining, and it takes maybe 15 more seconds for them to become ready. And then we're gonna upgrade. All of our nodes right now are running 1.7.4. That was the initial version; that's what was specified in our YAML file when you looked at the version of the kubelets. If we actually look at the control plane, we also specified 1.7.4. So this is our master node; you can see that it's got the role master. So in addition to setting the desired kubelet version, we're also setting the desired control plane version. So we'll see if everybody's ready. All right.

So now we're gonna upgrade. We're gonna upgrade our cluster from 1.7.4 up to 1.8.3. What the upgrader does is it inspects the current version of the cluster and decides what to change it to. It first upgrades the control plane for the cluster: it's gonna go and declaratively modify the control plane version in the machine that represents the master, in this case, and set that to be 1.8.3. And then it's gonna wait for the controller to come in and actually upgrade the master from 1.7.4 to 1.8.3. In this case, we're doing that using kubeadm upgrade, which is a new feature that was added with Kubernetes 1.8. So on the master machine, we're running a kubeadm upgrade that will upgrade all the control plane components using kubeadm, which is great, because that means we don't have to go and figure out ourselves what it means to upgrade from 1.7 to 1.8. We figure that out once for kubeadm, and we're reusing all of the work we've done there and not reinventing that wheel.
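To make the declarative flow concrete: the upgrade tool only edits the desired versions on the machine objects, and the controllers do the actual work. A minimal sketch of what that edit could look like is below; the machine name, the set=node label, and the field names are illustrative, not the exact tool being run in the demo.

```bash
# Bump the desired control plane (and kubelet) version on the master machine;
# the in-cluster controller notices the drift and runs kubeadm upgrade there.
kubectl patch machine gce-master-1 --type merge \
  -p '{"spec":{"versions":{"controlPlane":"1.8.3","kubelet":"1.8.3"}}}'

# Then bump the desired kubelet version on each worker machine; the
# controller reconciles each one to the new version.
for m in $(kubectl get machines -l set=node -o name); do
  kubectl patch "$m" --type merge \
    -p '{"spec":{"versions":{"kubelet":"1.8.3"}}}'
done
```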
And it turns out that a lot of times when you're moving between minor versions of Kubernetes, it's not as simple as just installing the new software version. You often have to do things like add new RBAC roles or change what system components you're running to actually be successfully running the new version. So now we can see that we've finished upgrading the control plane. If I can manage to create a new shell that you guys can actually see: if we look at the version, we see that the server version is now 1.8.3. So the control plane has been upgraded.

What it's doing next is upgrading all of the nodes. There were six nodes in my cluster: the one I created, the set I scaled up to five, plus the singleton. And it's gonna just upgrade all of those in parallel. I guess I should have left that window open so we could watch what's going on. So all of our nodes have gone into the NotReady state. This is because, for demo purposes, we're trying to upgrade as fast as possible, because we have a very short time segment. What we're doing is, when we see that the desired version is different, we immediately kill all of the VMs and create new VMs underneath them. So in this case, all of the nodes are not ready. If we watch our VMs, we should see that we have new VMs provisioned for all of those nodes. So we immediately kill VMs, we create new VMs, the VMs come up, they run their startup script, the nodes join the cluster, and then we'll have new nodes that, instead of running 1.7, will be running 1.8.

If you were watching closely and you looked at the output of the first tool that we ran, the scaling one, all it did was very quickly modify the declared state of the cluster and exit, and then we were going back and looking to see what had changed. In this case, we decided to make the tooling a little bit smarter: the tooling is actually watching and waiting, giving us output, and telling us when it's done. It's watching the machine objects (the machine objects are recreated very quickly), following the links in the status of the machine objects to the corresponding nodes, and checking to see when those nodes are ready and running the desired version. So this shows you different ways that you can write tooling around the cluster API. You can write tooling that is fire and forget, and just expect the system to reconcile to the desired state. Or you can write tooling that's actually following along and watching what's happening, which in some cases can be better if you want to actually see progress; in other cases you want to fire and forget and move on to your next task.

I think it's time for questions. Are we out of time? We have two minutes left. Excellent. So we had one more demo, which was repairing nodes, which we apparently don't have time to run. If anybody wants to see that, please come see me afterwards and I can show it to you. Yeah, absolutely, and we have an example of this running in Kubicorn as well, if anybody wants to see that. Your hand went up first, so let's go with you.

So in your example, you just basically killed all your nodes and created new ones, but could you use the tooling to do a rolling upgrade where you essentially drain? Yeah, so we had something originally that did things sequentially, and it was taking a long time, right? If you've ever tried to upgrade a cluster, it generally takes quite a while, and it's not very amenable to a demo.
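For reference, a gentler, drain-first rolling replacement against the machine objects might look roughly like this. It's a sketch only, assuming the nodeRef status field and set=node label from the demo above; it is not the tool that was shown on stage.

```bash
# Replace worker machines one at a time, draining the old node first.
for m in $(kubectl get machines -l set=node -o name); do
  node=$(kubectl get "$m" -o jsonpath='{.status.nodeRef.name}')

  # Evict workloads gracefully before the VM goes away.
  kubectl drain "$node" --ignore-daemonsets --delete-local-data

  # Declare the new desired version; the controller replaces the VM.
  kubectl patch "$m" --type merge \
    -p '{"spec":{"versions":{"kubelet":"1.8.3"}}}'

  # Wait until the machine points at a Ready node running the new version.
  until kubectl get node \
      "$(kubectl get "$m" -o jsonpath='{.status.nodeRef.name}')" \
      2>/dev/null | grep ' Ready' | grep -q 'v1.8.3'; do
    sleep 10
  done
done
```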
My second question is: with this tooling, is it possible to extend how machines get built so that you could add your own services? So that's something we haven't implemented yet, but I think it would certainly be possible. We'd have to extend the definition; in particular, the roles section at the end is very Kubernetes-specific right now. You can say it's a master or a node, and that influences what gets created, but we could allow you to say, I want etcd, or I want Spark, or I want something else. Yeah, say there was some OS-level security stuff or logging that you wanted to get done. Yeah. And I would say the one thing that's important to realize there is that, because this is a controller model, you as a software engineer are free to make that controller do virtually anything you want. You can make it read from other data sources, or do literally anything you want. Anyway.

So for clarification: you said you're using a CRD, and there's a controller that reconciles the new resources. Does that mean you have to have a Kubernetes cluster running already, and how do you bootstrap if that's the case? So the bootstrapping we're doing right now is client side: we're creating the master, we're creating a VM, we're running kubeadm init, and then we're installing a CRD and installing a controller from there. I think there are a couple of changes we're planning to make in the near future. One is to switch from using CRDs to using API aggregation, because we've heard from other people that have used CRDs that with API aggregation you get much better support for forward and backward versioning compatibility if you want to mutate your API. It also allows us to run the aggregated API server either inside or outside of the cluster. So you can stand up your aggregated API server and actually run the machine controller when you don't have a cluster running, or if your cluster's API server is broken, you can still run your machine controller, which is really nice. We've also talked about improving the bootstrapping process, because the code that instantiates that initial cluster with the master is very, very similar to the code that creates more nodes in your cluster, and it really should just be the same code. So you would effectively run the machine controller locally to bootstrap the cluster, and then promote that either into the cluster to self-manage, or next to the cluster to manage it from outside of the same failure domain. Awesome, and I think that's all the time we have, unfortunately.