Welcome to the SIG Cloud, wow, saying words is hard. Let's try that again. Welcome to the SIG Cloud Provider Update for KubeCon EU 2023. This is super exciting. We have a packed room, we have a packed agenda, we have a packed stage. Let's get into it. I'm Bridget Kromhout. I work at Microsoft Azure. I've been working in the Kubernetes space for, it seems like, 800,000 years, but I think some of those are pandemic years, which are either shorter or longer, not really sure. What did I promise? I promised something, probably. Yes. I am very interested in hearing what all of you are doing with your hybrid cloud, which someone just reminded me this morning actually exists. I'm Michael McCune. I'm a software engineer at Red Hat, mainly working on cloud infrastructure and auto-scaling and all those fun things. Yeah, I've been working on Kubernetes for more than a few years, and yeah, just here to have some fun. I'm Joel Speed. I work for Red Hat as part of OpenShift, working on the cloud infrastructure stuff. I started with Kube about six years ago as an SRE, and then at some point decided to go into software. Yeah, that's how I ended up in cloud. I was on call until 2015, and at that point I was like, I probably should let someone else be on call. Like Joel, why not? Yeah, I mean, that's one of the reasons I moved into software: to avoid on-call.

All right, where are, why can I not, we didn't assign, who was going to set the foundation? All right, cloud provider, it's important. Clearly you think so, let's get into it. The problem is I'm having difficulty seeing, so I'm going to be that person who looks like this and says, ah, yes. So "cloud provider" might mean different things to different people. I need to actually be able to see something, unfortunately, to see which slide I'm on, I'm sorry. Yes, it mostly refers to the infrastructure Kubernetes is deployed on, but it actually also refers to very specific software bits. And there's a lot of disambiguation that we kind of have to do in this space, so hopefully that helps. Yeah, so the important thing to remember here is that, you know, "cloud provider" can mean a couple of different things, right? People use it generically, but what we're going to dive into here are some of the terms that you're going to want to know when you're looking at the cloud provider that we call the component inside of Kubernetes.

So, just a couple of definitions. KCM: many of you are probably familiar with this, the kube-controller-manager. This runs some of the core controller loops inside Kubernetes, for things like namespaces, replication controllers, those kinds of things. And in kind of the, I don't want to say legacy, but currently, there are also what are called cloud controllers inside of that kube-controller-manager. And so part of what we created is the cloud provider interface, which is the CPI, and you might have seen this around. And if you're thinking, wait a minute, is that the same as CPI or CAPI? No, that's Cluster API, Cluster API providers, or, wow, words are hard. That's Cluster API, which is in fact used to manage Kubernetes via Kubernetes. It's not this, it's lovely, it's not this, there are talks about it at this conference. Right, totally. So Cluster API is a separate project. When people are talking about CPI, what they're talking about is the programmatic interface that was created so that we can make the cloud controllers a little more abstracted.
And part of the reason we did that is so that we could create the CCMs, which are the cloud controller managers. Now, there are a lot of acronyms going on here, but the key differentiation between the KCM and the CCMs in Kubernetes going forward is that we're removing the cloud controllers from the KCM so that they can be deployed separately as CCMs on their own. And if you're wondering why you should care about this: this probably applies to you especially if you're using hybrid, or if you're using a provider, or maybe not using a provider. Just looking to see this. Yeah, we've been refactoring these CCMs to make everything out of tree, and that's a change that is coming now, that is in progress with certain providers. Yeah, absolutely. In fact, that journey kind of started back in 2019 with a KEP by Ben the Elder. I'm sure you've all heard of him. The idea was to try and move all of the cloud provider specific code into their own repos so that they can have their own release cadences. The dependency management should be much easier in k/k, kubernetes/kubernetes, going forward. But yeah, we've kind of called this process moving "out of tree". So in-tree providers are the cloud loops in the kube-controller-manager, and out of tree refers to the cloud loops in the CCM. And there are trade-offs here. We're not gonna say that this process is going to be easy or perfect, because the trade-off is, of course, that you can then get inconsistencies between the cloud providers. We'll talk more about that later. But you also have the benefit that when your cloud provider wants to release something, they don't have to wait for the next Kubernetes release. Yeah, and if you wanna know more about those trade-offs, Justin Santa Barbara did a talk yesterday about that, so look that up, because he's got a whole thing about breaking this stuff out.

Okay, so what are the cloud controller managers? At a very basic level, if you look at them, these are containers that are running within your Kubernetes cluster. They generally have four controller loops in them, although a given CCM may not express all of them: the node controller, the node lifecycle controller, the route controller, and the service controller. We'll dive deeper into what these are, but just loosely: the node and node lifecycle controllers help to observe instances in the underlying infrastructure, so that Kubernetes can see information about their presence or absence. The route controller helps to make sure that networking is consistent for containers that are across different nodes, so when nodes come and go, the routing is updated. And the service controller is responsible for looking at Service objects of type LoadBalancer that are created, so that it can create the underlying infrastructure for them. And this is just a quick diagram to show you conceptually how it fits into the Kubernetes cluster. I want you to note here that what you're looking at on the left-hand side is an abstraction of the pods that you'll be running in your control plane. Ideally you'll want multiples of each one of these, especially if you're running in a highly available scenario. So although you see one CCM and one KCM, in production that might be three, or it might be five, whatever your needs are for that kind of redundancy.
So what we're looking at here is, you can see the kubelets talk to the Kubernetes API, and the Kubernetes API is then watched by the CCM, right, so it's optional how some of those things are wired, and the CCM talks to the cloud provider and then kind of updates information in the system. And we're gonna dive into what each of these components does in the CCM.

So the node controller is the first one today. This is responsible for going and talking to the cloud provider, getting various bits of information about the instance, and then applying that information to the node itself. So you may have seen region and zone labels, you may have seen the IP addresses, hostnames, the instance type; all of that information on your node object is applied by the node controller. Now, one thing I did wanna mention as well is that there are two versions of a lot of these labels. There are the deprecated old beta topology labels for region and zone, and then there are the new ones (there's a sketch of both below). And sometimes they aren't implemented the same across different providers. I mean, that is actually a really important point, and that's why we're talking about a lot of stuff in generalities; when we dive into the specifics, different cloud providers have different nuances in how they implement these, and different exciting bugs that you can find, report, and talk about. Yeah.

The second is the node lifecycle controller. So what happens when you go onto a cloud provider and you delete, say, the EC2 instance? There's something in Kube, that's the node lifecycle controller, that recognizes that and deletes the node object, so you don't schedule anything on there anymore. It also picks up if you go and shut the instance down, so that it will take the node and make sure nothing new is scheduled there. So that brings us to the route controller. The route controller is, I find it interesting and complicated, but what it does is it helps to make sure that pods and containers across different nodes can communicate with each other. You can imagine Kubernetes runs a container network inside of it, on top of the physical network that it's running on. These things might be different on different cloud providers, depending on how the nodes, or the instances behind the nodes, are deployed. And so what the route controller is doing is making sure that when a new node comes into the system and a new network CIDR is seen, the route tables are updated so that those pods can communicate with each other. And then the service controller is another networking controller that, again, I find fascinating. It interfaces with the underlying cloud provider that provides services like elastic load balancers, right? So when a user creates a Service object in Kubernetes of type LoadBalancer and the cloud provider implements that, what's actually happening under the covers is the cloud provider is putting a load balancer in place for you. And that'll be different depending on the cloud provider's implementation of it.
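For reference, the two generations of labels just mentioned look roughly like this on a Node object. This is a sketch: the values are made up, and exactly which labels appear (and whether the deprecated beta forms are still set) varies by provider and version.

```go
package main

import "fmt"

func main() {
	// Labels the node controller typically applies to a Node object.
	// Values are illustrative; behavior differs across providers.
	nodeLabels := map[string]string{
		// Current (GA) topology labels.
		"topology.kubernetes.io/region": "eu-west-1",
		"topology.kubernetes.io/zone":   "eu-west-1a",
		// Deprecated beta forms of the same information, which some
		// providers still set during the transition.
		"failure-domain.beta.kubernetes.io/region": "eu-west-1",
		"failure-domain.beta.kubernetes.io/zone":   "eu-west-1a",
		// Instance type, also applied by the node controller.
		"node.kubernetes.io/instance-type": "m5.large",
	}
	for key, value := range nodeLabels {
		fmt.Printf("%s=%s\n", key, value)
	}
}
```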
Okay, so deprecation status. Yeah, so the in-tree cloud providers for AWS and OpenStack have actually already been deprecated out of core Kubernetes so far. The other cloud providers are not far behind. What that means is that in newer versions going forward, you will not have in-tree available to you; you'll only have out of tree available to you. And so if you're thinking, well, I use one of those cloud providers and nothing has changed: maybe nothing has changed because you haven't necessarily moved to the versions of Kubernetes in which they were deprecated. But this is important to pay attention to, as we mentioned earlier, just 'cause this is changing. The in-tree providers are planned for removal. That's not scheduled yet, but it is going to happen sometime in the next several Kubernetes releases. I don't wanna give a specific release number 'cause we haven't set that in stone yet. It will be very public and available for you to find out when that does happen. It's probably important to take a look at the provider that you're using and make sure you try the out-of-tree provider. The other reason to try that, too, is that the in-tree providers, as I'm very well aware from trying to get bug fixes merged, are only merging bug fixes at this point. So any new features that you want are only going into the out-of-tree providers. Yeah, I'll just add one note on kind of the complexity of removal and the deprecation status. This is something that SIG Cloud Provider is looking at very closely. We have a lot of end-to-end tests for the cloud providers, right? Especially the cloud providers that we consider in-tree, that were created as part of core Kubernetes. And so as we're removing them, you can imagine we're finding things that maybe we didn't realize, like, oh, that was a load-bearing wall, oops. So that's a little bit of why the deprecation status might seem a little rocky and we can't say, oh yeah, it'll be out next release. There's a little bit of complexity here. We don't want to break anyone, because it turns out people use their clouds for things that matter to them. Like doing work and stuff.

Cool, so now we're going to move on to the operational aspect of this. One of the things that my team at OpenShift have been doing is integrating these cloud providers into OpenShift, and so we've been working with a lot of them, and we found a bunch of issues along the way. So when do you want to run these? You want to start these as soon as you can in your cluster. You kind of need to think about them as part of the bootstrap process. The same way that you might run the KCM, these need to be run at the same time. When you're using an external CCM, the kubelet will actually taint the node with an uninitialized taint, and it's the CCM's job to remove that (there's a sketch of this taint below). If you don't have the CCM running at that point, nothing will schedule on your node. Even your CNI might not schedule. So it's really important to get that running as early as you can. And presumably if people are using a managed service, they don't have to manually change this, do they? No, normally in a managed service this would be handled for you transparently, and you probably won't notice. So EKS, AKS, all of those: you probably don't need to worry about this particular bit. But in a hybrid or self-managed scenario, this is a detail. Indeed. Where do you want to run these? Well, as I said, this is kind of like the KCM, and really you could run it anywhere, but if it were me, I'd run it the same way as the kube-controller-manager. You can use a deployment if you want to, you can use static pods, but you probably want to run this on the control plane, because then you isolate all of the cloud interactions to the control plane. One caveat to that: some providers, for example Azure, have a DaemonSet that runs across all the nodes. So there are some slight differences there, again, between different providers.
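To pin down that uninitialized taint, here's a sketch of it using the k8s.io/api types. The taint key is the real one used with --cloud-provider=external; the surrounding program is just for illustration.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The taint the kubelet applies at registration when started with
	// --cloud-provider=external. The CCM's node controller removes it once
	// the node is initialized; until then, pods without a matching
	// toleration will not schedule onto the node.
	uninitialized := corev1.Taint{
		Key:    "node.cloudprovider.kubernetes.io/uninitialized",
		Value:  "true",
		Effect: corev1.TaintEffectNoSchedule,
	}
	fmt.Println(uninitialized.ToString())
}
```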
So how do you want to run your CCMs? This is kind of tailing on what Joel just said about DaemonSets, right? There are a lot of different ways that you could deploy these. You could use a Deployment, you could use a DaemonSet. Sorry, I'm just noticing you're holding the mic a tiny bit away from your mouth. Oh, sorry. Let's just make sure people can hear you. All right, cool. So, probably, sorry, I'm a little lost here. Deployments, DaemonSets. Yeah, so when you're deploying these, what you want to think about is, first of all, they'll need to tolerate the not-ready taint, and the not-ready taint is a networking taint that gets removed by the container network interface. So when you think about deploying the CCMs, there are a lot of tolerations that they're gonna have, and likewise, your CNI might have to have tolerations as well. The CCM needs to be deployed very early, like Joel was saying, but it also needs to have the CNI available in some cases. And so you'll have to be aware of this: the CCM might need to be deployed in such a way that it can use the host network until the CNI is ready. And because the CCM removes the uninitialized taint, and the CNI is also gonna be dependent on some of these things, you're gonna have to be careful about making sure the CCM tolerates this, and that your CNI also tolerates the uninitialized taint. So it's like the stack of turtles on your Cluster API hoodie, only the turtles are very active and interested in everything and moving. Yeah, totally. And I mean, I'm just trying to look at the notes here we left for ourselves. Optionally, you can use the service account credentials flag to help with some of the setup for these things. So you either wanna create a special service account for these when you're creating them, or you wanna use the one that comes out of the service account from Kubernetes when you create it. And I think really the key here, since I'm really fumbling my words, is that the Kubernetes CCM docs at the bottom do a much better job of explaining this than I just did, so please look at those after the talk. You're not fumbling your words at all. You probably actually drank caffeine this morning. And this is actually really complex stuff. So that's the takeaway there: if this seems like a detail that you need to care about, these are docs you should read. Yeah, totally. Go back one slide. Just setting up the CCMs is where we run into a lot of issues, because again, the way that the nodes need to be initialized, there are all these really low-level NoSchedule kinds of taints that you need to be observant of. And that's where we've gotten tripped up with this. And that's why you're at this talk, so you can figure out which stuff you really need to know. Okay.

I do. So you want high availability, right? We've talked about being on call. You want to avoid downtime. And in this particular case, yeah, you might want multiple replicas of the CCM, because redundancy is great. You can use leader election to avoid having, like, conflicting control loops. And a pod disruption budget is probably also useful, because you don't want a scaling event to lead to you having absolutely nothing controlling your cluster; that seems risky. And there are a number of warnings here that you can see on the slide, of things that could happen if you have no CCM running. If you're thinking, wait, can you have no CCM running? Yeah, I mean, it's possible, but you probably will have a lot of delays in initializing nodes. I can tell who wrote the slide because of how you spelled initializing. Delays in removing nodes, delays in configuring your load balancer, depending on your load balancer. Right, yeah, exactly.
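Pulling those scheduling and availability details together, here's a minimal sketch of the pod spec fields that tend to matter for a CCM, using the k8s.io/api types. The image, the provider name, and the exact toleration set are assumptions for illustration; your provider's own manifests and the docs mentioned above are the real reference.

```go
package ccmdeploy

import corev1 "k8s.io/api/core/v1"

// CCMPodSpec sketches the fields that usually matter when deploying a CCM.
// The image and the --cloud-provider value are hypothetical placeholders.
var CCMPodSpec = corev1.PodSpec{
	// Host networking lets the CCM start before the CNI is ready.
	HostNetwork:       true,
	PriorityClassName: "system-cluster-critical",
	// Keep cloud interactions isolated to the control plane.
	NodeSelector: map[string]string{"node-role.kubernetes.io/control-plane": ""},
	Tolerations: []corev1.Toleration{
		// The taint the CCM itself is responsible for removing.
		{Key: "node.cloudprovider.kubernetes.io/uninitialized",
			Operator: corev1.TolerationOpExists, Effect: corev1.TaintEffectNoSchedule},
		// Present until the CNI reports node networking as ready.
		{Key: "node.kubernetes.io/not-ready",
			Operator: corev1.TolerationOpExists, Effect: corev1.TaintEffectNoSchedule},
		// Allow scheduling onto control plane nodes.
		{Key: "node-role.kubernetes.io/control-plane",
			Operator: corev1.TolerationOpExists, Effect: corev1.TaintEffectNoSchedule},
	},
	Containers: []corev1.Container{{
		Name:  "cloud-controller-manager",
		Image: "registry.example.com/mycloud-ccm:v1.0.0", // hypothetical image
		Args: []string{
			"--cloud-provider=mycloud", // hypothetical provider name
			"--leader-elect=true",      // avoid conflicting loops across replicas
			"--use-service-account-credentials=true",
		},
	}},
}
```

A PodDisruptionBudget over the CCM replicas pairs naturally with this, for exactly the scaling-event scenario just described.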
And so the point of this, when you're talking about a highly available situation, right, is that you want to have more than one CCM running. So that if something happens where the kube-scheduler needs to move one to another node, or the node it was on gets deleted, maybe you're replacing a control plane node or something, you have an extra one available to continue, because if you don't, like Bridget said, you might experience some of these types of disruptions.

Okay, so migrating from in-tree to out of tree. Now, thankfully, SIG Cloud Provider has put together a really nice way to do this. There are some command-line flags you can use in the KCM to initialize this process. But what you really need to remember here is that you're gonna have to be restarting the KCM processes, if you're doing this in a live cluster, as you start your new CCMs. So, like I said, the easiest way to do this is to follow the docs link at the bottom here and use the command-line flags that enable automatic migration (sketched roughly below). But it's also possible to do this without using those flags. In order to do that, what you're gonna have to do is really get into switching the leader migration over from the KCM, so that the new CCMs can take over that leadership, and then you'll have to restart the KCMs, removing one of the flags they were started with, so that the CCMs can then take over that job. Like we talked about at the beginning, the KCMs have the cloud controller loops in them now, and we're extracting those to the CCMs. And so when you're doing a live upgrade from one version of Kubernetes to another, what you wanna do is migrate from the in-tree to the out of tree. If you're not using the mechanisms described in the documentation and you're managing this yourself, which is what we do on OpenShift through an operator that we've created, then you'll have to be aware of the replica counts of each, how the leader migration moves over, and then, when you bring down the KCMs and bring up the CCMs, you'll have to watch this process yourself.
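As a rough sketch of the flag changes involved in that migration; the documentation linked on the slide is the authoritative, step-by-step reference, and the provider name here is just an example:

```go
package leadermigration

// An illustrative summary of the Leader Migration steps; see the Kubernetes
// docs for the authoritative procedure and exact ordering.
var (
	// Step 1: restart the KCM with Leader Migration enabled while it is
	// still running the in-tree cloud loops (AWS used as an example).
	kcmDuringMigration = []string{
		"--cloud-provider=aws",
		"--enable-leader-migration=true",
	}

	// Step 2: start the CCM with Leader Migration enabled so it can take
	// over the cloud controllers' leases from the KCM.
	ccmDuringMigration = []string{
		"--cloud-provider=aws",
		"--enable-leader-migration=true",
	}

	// Step 3: once the CCM holds the migrated leases, restart the KCM with
	// its cloud loops disabled; after the rollout settles, the
	// --enable-leader-migration flag can be dropped from both components.
	kcmAfterMigration = []string{
		"--cloud-provider=external",
	}
)
```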
Oh, and I think that operator is open source as well, right? Yeah, absolutely correct. We didn't put a link in here, but there is an operator that we have, and this is gonna be a mouthful, but it's called the Cluster Cloud Controller Manager Operator. Say that three times fast. Yeah, I will not, but internally we like to call it the 3CMO, which I kinda love, because I'm a Star Wars fan and it sounds like C-3PO. So if you're thinking to yourself either, oh, I need that one, or, I wanna look at that one to get ideas to write my own: right, so especially as we're talking about operationalizing the CCMs, looking at that operator would give you an example of how we're doing it on OpenShift, and how we're using automation to make sure that users never really have to worry about the complexities of this.

And as I said earlier, we've sorted a lot of problems along the way, so I wanted to give you some ideas about where you might wanna start if you do start seeing issues when you switch to a CCM. So, first thing, just check whether the node is still uninitialized. If it is, then is your CCM actually running? Is there an issue in the logs? Did you get the node information on there? Is the topology information there? If it isn't, your scheduling of pods is probably not gonna work correctly. Is the node ready? As we mentioned earlier, there can be conflicts, clear, I cannot talk today, conflicts between the CNI and the CCM sometimes, so go and check on each of those. And here's the kinda weird one: is your node registered with the load balancer? So, we mentioned that load balancers are managed by the service controller, and there are differences in behavior depending on whether you have external traffic policy Local or Cluster (there's a sketch of that field below). And in certain scenarios, you might see that, actually, if your node isn't connected to the load balancer, you might not have any outbound connectivity. Which we have run into. Which we have run into. Now, this was a bit of a surprise for us back in February when we were looking at this. Exciting surprise. Yeah, we could call it that. So in certain scenarios on Azure, your instance relies on the public load balancer attachment to have outbound connectivity to the internet. And if you wanna read an essay on this, I did write one, it's linked there. It's actually fascinating, has diagrams. Yeah, true. But the point there was that the CCM was removing the instance from the load balancer before the CNI came up, and so we lost internet connectivity and then couldn't pull the image. Which was kind of frustrating, because it just meant that the machines got wedged. And what seemed more interesting is that this only affects control plane machines and not regular worker nodes. So, a very confusing issue. Have a read if you wanna laugh. But I think that's a great example of how, across the ecosystem, we're all working together, and Red Hat and Azure people are jumping in to figure out, how do we solve this? 'Cause of course, if it were easy, it would have already been done. Yeah, exactly. Yeah, so the short message here is: go read the issues, right? The real point here is that this is an example of one category of problem that you might see when you're working with your cloud provider and your CCM. So it's really important to understand that although we have a common interface, the CPI, for all the cloud providers to use, the cloud providers are expressed in very different ways, and they have different features that they make available. And they're talking to different underlying infrastructure that's implemented differently. Exactly. And so you will have to look at the specific cloud provider that you're working with, be aware of their changelog, be aware of the issues they're reporting, because these things can really trip you up when you get into some low-level networking issue where the CCM is getting hairpinned talking to the metadata server or something; it gets really gnarly. If you're troubleshooting that in production and you think it's just you, it probably isn't; go look. Definitely.
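One aside on the external traffic policy point from that checklist: this is the Service shape involved, sketched with the k8s.io/api types (the name, selector, and ports are made up). Creating an object like this is what drives the CCM's service controller.

```go
package lbservice

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// A Service of type LoadBalancer; the CCM's service controller provisions
// the cloud load balancer behind it.
var exampleService = corev1.Service{
	ObjectMeta: metav1.ObjectMeta{Name: "my-app"},
	Spec: corev1.ServiceSpec{
		Type:     corev1.ServiceTypeLoadBalancer,
		Selector: map[string]string{"app": "my-app"},
		Ports: []corev1.ServicePort{
			{Port: 80, TargetPort: intstr.FromInt(8080)},
		},
		// Local routes traffic only through nodes holding a local endpoint
		// and preserves client source IPs; Cluster lets any node forward.
		// Which nodes get registered with the cloud load balancer differs
		// between the two, which is why it shows up while troubleshooting.
		ExternalTrafficPolicy: corev1.ServiceExternalTrafficPolicyTypeLocal,
	},
}
```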
All right, so let's talk a little bit about building your own CCM. How many people here are interested in creating a cloud provider for their infrastructure? Oh, cool. Wow, we've got like 10%, 15% of the room, which is more than I assumed. So let's get into it. Yeah, I'm optimistic about this. I think it's really cool. So SIG Cloud Provider maintains a common library that we've created to make this easier for new cloud providers who wanna make these things, and this is the link to go check it out. That's where you really wanna start your journey, by looking at that code. And that code is a library; it doesn't build a binary for you. But what you can see in the root of that repository is a file called cloud.go. That file defines the interface, the CPI that we were talking about earlier, that you need to satisfy in order to make your CCM work. And it's a short enough interface that we're able to just print it here.
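That interface, as it appears in cloud.go in the k8s.io/cloud-provider repository, looks approximately like this (comments abridged; check the repository for the current, authoritative version):

```go
// Interface is the cloud provider interface (CPI) that a CCM
// implementation satisfies. Abridged from cloud.go in k8s.io/cloud-provider.
type Interface interface {
	// Initialize provides the cloud with a Kubernetes client builder and
	// a stop channel for any background work it starts.
	Initialize(clientBuilder ControllerClientBuilder, stop <-chan struct{})
	// Each getter returns an implementation plus a boolean indicating
	// whether the provider supports that feature at all.
	LoadBalancer() (LoadBalancer, bool)
	Instances() (Instances, bool)
	InstancesV2() (InstancesV2, bool)
	Zones() (Zones, bool)
	Clusters() (Clusters, bool)
	Routes() (Routes, bool)
	// ProviderName returns the cloud provider ID.
	ProviderName() string
	// HasClusterID reports whether a ClusterID is configured.
	HasClusterID() bool
}
```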
I like this. It's, like, relatively brief. Relative to what? Yeah. Okay, okay. So the important thing here is not necessarily that I'm gonna go through each one of these and talk about what they do; they're well commented in the code. But just to give you an example of, really, the work that you'll have to take on as you're implementing this for your cloud provider. So you might have noticed on the previous slide that many of those interfaces return some sort of structure, and then they also return a Boolean value with them. This is by intention, so that cloud providers can signal which interfaces are optional for them, right? So, for example, the load balancer interface: if your cloud doesn't have any sort of load balancing service that customers or users might be able to use, that function would return false. And then a user would not be able to create Services of type LoadBalancer that affect the infrastructure, right? You may still have another controller that creates, maybe, HAProxy pods in your cluster or something, but it wouldn't be using the infrastructure's load balancing services. And I don't know, when we were talking about this, we talked about, for example, when it became possible to create Services of type LoadBalancer with mixed protocols, and some cloud providers didn't support it yet. We actually have some bug fixes that are getting backported right now, that I think just got merged, that are in there to fix exactly that. Right, yeah, exactly: dual-protocol load balancers are something that I think currently is not handled; I think it's turned off in the code that I was looking at. Yeah, exactly.

The other thing to keep in mind here, too, on the optional interfaces: this is something we talked about yesterday, and Bridget posed this question as we were practicing our talk, and so I went and looked last night. You might be wondering, if I'm running a CCM, how do I know if it implements the load balancer interface or the instances interface, right? And I thought it was a great question, and I was like, wow, off the top of my head, I don't know how you would check that, because you're looking at a really niche component. Use the source, Luke. Yeah, but I did use the source, and I went to Dagobah and looked at the code. And what I found is that every single one of these interfaces will emit a log message when it's not implemented. So if you look at the logs coming out of your CCM when it boots up, you will see which interfaces it does not implement.

So, instances or instances V2. You might have noticed on the previous slide that there were two of these that looked very similar to each other, and I wanna highlight this, because the instances interface is the interface that was created to satisfy the in-tree KCMs, and when you use that, there is certain other information you don't get. So if you're using the instances interface, you will use the zones interface to return information about where those instances are. Conversely, if you're using the instances V2 interface, which has been designed to give the CCMs an easier interface, that zones interface goes away, and it is deprecated at that point. The instances V2 interface is what will return all your zone and region information. It's also worthwhile to note that the instances V2 interface has been designed in such a way that you can make the communications with the metadata service in the cloud provider more efficient. And I feel like, I'm not 100% sure how somebody would know which one they're supposed to use. Yeah, so if you are making a CCM now, use instances V2. Do not use instances. Instances is still there to support the legacy migration. It is marked as deprecated in the code, although there's no firm date on when it will disappear. But you should look at instances V2. It has the same functionality and better performance with the underlying cloud. And to wrap up that point as well, you don't need to do zones anymore. If you're building a CCM from scratch, use instances V2, just ignore the zones thing (return nil and false), you don't need it. But if you do wanna start a new provider, there is actually an example. If you go to the cloud provider repo, there's a sample directory in there, and that will show you how to get set up. It will give you the bootstrap, kind of like a bare controller, for you to plug your interface into. So if you did want to do this and you haven't started yet, I'd recommend going and having a look at that sample.
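To pull registration, the optional booleans, and InstancesV2 together, here is a minimal, entirely hypothetical skeleton in the shape that sample encourages; every optional interface except InstancesV2 is declined:

```go
// Package mycloud is a hypothetical skeleton of an out-of-tree provider,
// sketched against k8s.io/cloud-provider.
package mycloud

import (
	"context"
	"io"

	v1 "k8s.io/api/core/v1"
	cloudprovider "k8s.io/cloud-provider"
)

const providerName = "mycloud" // must match the CCM's --cloud-provider flag

type provider struct{} // a real provider would hold its cloud SDK client here

func init() {
	// Register a factory so the CCM binary can construct this provider by name.
	cloudprovider.RegisterCloudProvider(providerName,
		func(config io.Reader) (cloudprovider.Interface, error) {
			return &provider{}, nil
		})
}

func (p *provider) Initialize(cb cloudprovider.ControllerClientBuilder, stop <-chan struct{}) {}
func (p *provider) ProviderName() string { return providerName }
func (p *provider) HasClusterID() bool   { return true }

// The booleans tell the CCM which controller loops it can run.
func (p *provider) InstancesV2() (cloudprovider.InstancesV2, bool)   { return p, true }
func (p *provider) Instances() (cloudprovider.Instances, bool)       { return nil, false }
func (p *provider) Zones() (cloudprovider.Zones, bool)               { return nil, false } // superseded by InstancesV2
func (p *provider) LoadBalancer() (cloudprovider.LoadBalancer, bool) { return nil, false }
func (p *provider) Clusters() (cloudprovider.Clusters, bool)         { return nil, false }
func (p *provider) Routes() (cloudprovider.Routes, bool)             { return nil, false }

// InstancesV2: what the node and node lifecycle controllers need. A real
// implementation would query the cloud API instead of returning stubs.
func (p *provider) InstanceExists(ctx context.Context, node *v1.Node) (bool, error) {
	return true, nil
}

func (p *provider) InstanceShutdown(ctx context.Context, node *v1.Node) (bool, error) {
	return false, nil
}

func (p *provider) InstanceMetadata(ctx context.Context, node *v1.Node) (*cloudprovider.InstanceMetadata, error) {
	return &cloudprovider.InstanceMetadata{
		ProviderID:   "mycloud://" + node.Name, // hypothetical ID scheme
		InstanceType: "example-type",
		Zone:         "zone-a",
		Region:       "region-1",
	}, nil
}
```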
And the other great thing, too, is to look at the examples of other cloud providers that have been created. Now, this slide is kind of dense, because there's a lot of information I wanna expose. The Kubernetes and Kubernetes SIGs organizations on GitHub contain code repositories, and if you search in both those locations for "cloud-provider-", you will get a bunch of returns. In the Kubernetes org, you might see that the AWS cloud provider is there, and then if you look in Kubernetes SIGs, you would see that the cloud provider Azure code is there. And so you might be wondering: why the difference? What's the split here? This is kind of just organic growth from history, right? We put them in the Kubernetes org, and then someone came along and said, hey, we should probably be putting these in Kubernetes SIGs. So if you're making a new one and you're thinking about contributing it, Kubernetes SIGs is where you wanna be looking. When you're doing research, looking in both places is great. They're both equally valid, they're all tested in similar manners, so they're both equal; one is not greater than the other. There is one thing you need to look out for here, and this is the trap, right? There's something called cloud-provider-sample that purports to be an example of how to set up everything, but it's just documentation. And I've opened an issue to kind of keep it going. So anyways.

All right. So if you're thinking at this point, wow, I kind of wanna get more involved with this: congratulations, welcome, and there is a path. So the SIG itself, the cloud provider special interest group, we have meetings. The meetings are at 1800 CET. Yeah, though I will point out, as this is a European conference, that meeting's actually pinned to a specific US time, and at some point in the mid-2000s, the Americans decided it was a bad idea to have universally consistent calendaring. So at daylight saving time, that meeting jumps around a bit. So sorry. But the SIG meetings are fun. We are there. It's a pretty small meeting, and it's a great place to come and bring your questions, bring your bugs, you know. Yeah, yeah. I mean, you folks who are raising your hands about wanting to create a cloud provider and everything: come show up, ask us questions, bring us your problems, share your issues. We'll help out. And yeah, oh, and the link there, SIG Cloud Provider, has all of the details.

Yeah, so there is some active development going on as well that you can get involved with. There are two really big things coming up here. We've just merged a KEP and kind of the initial code for webhook hosting capability for the CCMs. You might want to use this for things like persistent volume labeling and whatnot: so when the admission comes in for, like, a new storage object or whatever, you make sure to add the proper labels for your zonal awareness in your cloud provider. The other big thing that's coming up as well is that we're trying to make the end-to-end testing even better. What I mean by that is, right now our end-to-end testing is very focused around the cloud providers that we have available today. But what we would like to do is make it even better in the future, so that every cloud provider could take advantage of the core end-to-end tests. And so we're looking at ways to abstract the interface there, to make it even easier for new cloud providers to do that. And this is relevant to your interests, because if you would like to do work that counts towards Kubernetes org membership, you can come to the contributor summit in Paris next year, jump in, and start working on this. Yeah, absolutely. And if you want to get in touch, there's a mailing list that you can join. We've got the regular SIG meetings, as we mentioned. There's also an extraction meeting that happens on the odd weeks. And yeah, I've been starting to go to these recently. They don't seem to shout at me when I bring in lots of issues, so they seem like nice people. You open bugs and we say, this is fantastic, and then we start talking through the nuances. Yeah, so it's a little fun. Come along, have a chat if you've got any cloud issues. I think that's all we've got, right? Yeah? How do you pronounce it correctly? Bedant. Bedant? Bedant.

Do you have questions? I think we have like one or two minutes for questions. So maybe one or two questions, if people have any. Any questions? I have a conceptual question. What is the conceptual difference between implementing a CCM versus, say, my own operator? That's an interesting point. So the CCM obviously has a certain set of functionality that it's implementing, and you don't have to use the cloud provider implementations that are provided. In fact, there are lots of providers that don't. I know, for example, Alibaba had some different requirements; they went off and did their own implementation. They still call it a CCM, but if you look at the code, it is very different to how others are doing it. Some of the providers share the library, Google and Amazon for example; Azure's is slightly different because of some of their architectural concerns. So there's nothing to stop you going off and doing it yourself, as long as you implement the functionality. There's prior art in that case. I would add a note of caution there, because I've worked with partners who have wanted to engage certain components that require a CCM, right?
So in some clouds, the CSI driver, the storage driver, wants to have the zonal awareness labels that are usually added by the CCM. And I've seen cases where we've had partners who say, well, I don't have a CCM yet, could I just write some script that goes and adds these labels every time it sees a new node or whatever come up? You can do that. You definitely can. It becomes very complicated. So my advice would be to study the code that's there and make sure that you're following what it's doing, because there are a lot of pitfalls that you could hit, just because if you try to roll your own, you might have missed a detail that was in the cloud provider. And then this can end up causing ripple effects, where you're debugging the types of issues we were talking about before. Awesome, thank you so much. We're out of time, but we really appreciate you all coming, and we'll be here for a few minutes for some questions. Yeah, absolutely. Thanks.