Hello, and welcome to the Cloud Multiplier. This is episode five. I am your host, Gernie Buchanan. Unfortunately, Joydeep wasn't able to join us today. He is feeling a little under the weather, and I encouraged him to take some time to get better, so hopefully Joydeep will be back next time in two weeks. But I have the pleasure of being joined by Mike and Makayla from the Open Cluster Management community. So today we're gonna be talking all things Open Cluster Management. I'm really excited to have them here. It's a really exciting CNCF Sandbox project that I've had the opportunity to work with, and work alongside these folks, for a good little while. So welcome to the show, folks. We'll do a little round table. Mike, you wanna kick us off? Introduce yourself a little bit. Yep, I'm Mike Yang. I'm a developer from Red Hat. I work on the Red Hat Advanced Cluster Management project. I'm also the community organizer for Open Cluster Management, as well as an active participant in open source projects around the Kubernetes ecosystem. Awesome. Makayla, you wanna introduce yourself a little bit as well? Sure. Yep, my name is Makayla Jackson. I am a writer with Red Hat Advanced Cluster Management. I'm also a participant in the Open Cluster Management project and have been enjoying myself, at least just learning the ins and outs of open repositories and what it means to actually be a contributor. So yeah, thanks for having me. Awesome, thanks for joining, folks. Like I tossed up there, GitHubs and socials are there if anyone's interested in reaching out. We'll get more into the project here in a little bit, some of the interesting links and where people can get involved. But first, as always, I have some top-of-mind topics, a format I think I've still shamelessly stolen from Ask an OpenShift Admin. I'm still workshopping names. Hopefully we'll give Andrew his back soon.
But that being said, the one interesting thing I have been kicking around this week, and I know Mike probably has this near and dear to his heart as well: I have been once again in awe of, and hacking around, OpenShift GitOps, basically Argo CD. So I've been doing a ton of work in Argo, and recently it was brought to my attention, and this might be new to Mike as well, wanna see what you think: they're potentially adding a Terraform integration to Argo CD. So it'll allow you to provision infrastructure and do other Terraform applies and runs and actions through Argo CD, so you can do configuration of infrastructure declaratively through GitOps. I don't know if you'd seen that at all. I can hunt down the link for that if you're interested. Yes, I am interested. That sounds really exciting. Thank you for sharing. Yeah, let me see, it looks like it's an Argo CD Terraform controller. Yeah, here we are. I'll drop a link in the chat for everyone. Really interesting project. It is just in a pull request, but I'm pretty excited to see that as someone who's used Terraform in the past. And I think we'll get a lot of good insights, Makayla as well, from the open source docs standpoint. This is one of the best-written PRs I've seen as well. So I've enjoyed that. I know I'm bad about it, so. No, never did. So yeah, Mike, Makayla, any cool, interesting, or fun open source developments you guys have seen in the past couple weeks? That's the usual, what we shoot for. Yeah, we've been active participants in the multicluster SIG, which is a part of the Kubernetes SIGs. We work on the Work API, which we can say is sort of a project that Open Cluster Management also originated. So that is something that we're trying to contribute to the community to set a standard API, so we can deliver a workload from a centralized location to multiple locations. So that's something that we've been discussing and actively participating in on the Kubernetes SIG forums.
Do we have to mention only projects within Open Cluster Management, or are you asking about just any projects? All of the above. Okay, okay, so something, a project that I've been aware of for a little bit of time now, it comes from the Linux Foundation, and specifically it's called Ag Recommendations. Well, excuse me, the entire project is called AgStack, but there's one project underneath it and that's called Ag Recommendations. And basically it's a framework designed to help enable people to create applications as well, but it's related to agriculture. And basically they're looking for contributors to help take information from the Cooperative Extension Guide. And that's a guide that farmers use, or anybody that might be interested in, like, pest control management or water management, things of that nature. But basically there's an effort to digitize this information. And so personally I'm looking to figure out how Open Cluster Management can be used, how that framework can be used to create applications and then be able to just add data, pull data, whatever, for AgStack or the Ag Recommendations console. Whatever is to come is still in its early stages, but that's something that I've looked into in terms of open resources, excuse me, open repos. Wow, I had never seen this. I had never seen AgStack. I know a lot of open source work was happening in the agricultural space. If there's ever a tinkerer community that's going to solve their problems on their own, agriculture's got some really cool makers, and that's amazing. I dropped a link, I hope it's the correct link, to AgStack.org. Yeah, that should take anybody to the original page, of course, and then they have their GitHub link attached to that too. That's incredible. Okay, the side conversation is, in college, one of the projects that I worked on was studying honey bees and monitoring honey bees with little embedded systems on the side of a honey beehive.
And we did machine learning to count bees and identify them. So it's really interesting to see a whole open source Linux Foundation project dedicated to biological monitoring and agricultural monitoring, that's amazing. Yeah, I'm excited about it. I'll bring it up next week. We gotta bring it up to Joydeep because I know there's a lot of data science to be had there and that is his favorite thing. Sounds good. Yeah, thanks for bringing that up, folks. Well, without further ado, I think it's time for us, that's a good segue, to dive straight into why Open Cluster Management is relevant, especially when maybe you have a few embedded systems that might be monitoring rain and soil composition and growth, and something maybe on a tractor at a cell site. And now you have five, 10, 15, 100,000 different compute devices that you really need to connect and coordinate. And I think that stems directly into the Open Cluster Management world. So what is Open Cluster Management? What's its foundation? Where do we find it? So one, you can find it, of course, on GitHub. That's something pretty easy. But let me start off with a summary of what Open Cluster Management is. Again, my name is Makayla. And so Open Cluster Management is an open source project. Oh yeah, you go to the next slide. It's an open source project from the CNCF, which stands for Cloud Native Computing Foundation. And it's designed to simplify and unify the management of Kubernetes clusters. And so today, Mike is gonna also go into the architecture of Open Cluster Management, which is the hub and managed cluster architecture. And with the framework, you're able to specify the distribution of Kubernetes manifests just from your hub cluster. So Mike, I'll toss it to you now to describe the architecture and for the demo. Yup, thank you, Makayla. So I'm gonna talk a little bit about the Open Cluster Management architecture.
I'm actually gonna head over to our website where we have most of the explanation in docs format. So Open Cluster Management comes out of the need to control and manage a fleet of clusters. So with the premise of: you wanna manage a single cluster and then grow that into a fleet of clusters, then you're able to maybe deliver a workload, then eventually you'll have application lifecycle, and then later on you can apply governance and configuration policy across the entire fleet. So it just keeps going up the stack as we build on top of Open Cluster Management. So in terms of the overall architecture, it's a hub and spoke, or hub and managed, model. So it's the same model that Kubernetes uses for the API pattern. We're very fortunate to have several key members from the Kubernetes project as Open Cluster Management maintainers. So the Open Cluster Management design is heavily influenced by the existing Kubernetes design. So the key is, if you're familiar with Kubernetes on a single cluster, some of the terms and some of the design philosophy that we use in Open Cluster Management should sound similar and work similarly as well. So going back to the hub and spoke model, we have a cluster that can act as a hub by running some cluster management controllers on it. Then the spoke, or managed, clusters, with agents running in them, will join the hub cluster. And this is the key point I want to emphasize: the connectivity is actually initiated from the spoke cluster to the hub cluster. This is done intentionally because, based on community and user feedback, we didn't want the hub having to go through the firewall and reach out to clusters in a push-like model. So Open Cluster Management is more of a pull model, where the spoke or managed clusters look to the hub for work, or the manifests that need to be deployed, bring them down to the managed or spoke cluster side, and apply them.
As well, from a scalability point of view, it's much easier to have individual, similar agents running on those remote clusters doing most of the heavy lifting, instead of a push model where the controllers on the hub are trying to push down to 1,000 and maybe 2,000 clusters. So again, our hub agent is a pull model. So the agent is pulling from the hub once they're connected. But that being said, there are obviously cases where you want to do push, and we actually have add-ons that enable the push model as well. So it really depends on your use case. You can have the default decentralized pull model or you can have a centralized push model. And that is one of the benefits of Open Cluster Management. It's really modular and extensible. So we provide a foundation where you can build your multi-cluster solution on top, and you can pick and choose the solution that you want. So I'm, yep. Oh, there you go. Oh, I was gonna say that we have a good question that lines up right here. So you said that by default, it uses a pull model. So rather than having your hub component push down content to 1,000 or 2,000 different clusters, you're gonna have all of those one or 2,000 managed clusters, however many really, pull work to do from that hub. And then there's a concept of add-ons, where add-ons can utilize their own interaction framework that might be more of a push, and some add-ons use that push model. So there's an ability through Open Cluster Management, it sounds like, to do both a pull and a push, but you need to understand the constraints and costs and benefits. Does that sound right? Yep, exactly. Awesome. I hope that answers, we had one question in chat, so that might answer your question already. Yep. So are there any other questions before I show it in action with a demo? I don't think so. I'm ready to see it, Mike. I'm excited. Sweet, okay. So I got three terminals here on the left side here.
I have a kind cluster that I set up to sort of imitate a hub cluster, and then on the right side here, I have a newly created kind cluster that represents remote cluster one, and another kind cluster that represents remote cluster two. So what I'm gonna be using is the clusteradm CLI tool. So this clusteradm CLI tool, we based on our experience with the kubeadm CLI tool. So some of the commands might be similar and do similar things. With the kubeadm CLI, you bootstrap a Kubernetes cluster with the control plane and then you join worker nodes to that control plane. And in Open Cluster Management, with the clusteradm command, you initiate the multi-cluster hub control plane and then you have the remote clusters join the hub via the join command. So in the interest of time, I already ran clusteradm init, which essentially lays down the controllers for the hub on the hub cluster side. And after that is done, it also creates the service tokens, so that we can use a service token to do the initial join from the remote cluster to the hub cluster. And it'll also spit out the commands that you can use to request a join, request the registration, from the remote cluster to the hub cluster. So I'm gonna run one of those, both of those commands. So I have a copy. So this will deploy the agents on the remote cluster side, and then once everything's done, that will generate a registration request to the hub cluster. And then on the hub cluster, you will have to approve it, because with the service token, anyone can initiate the request, but that doesn't mean that this remote cluster is automatically joined. The hub cluster admin still has to approve it as well. So I'm gonna do that on cluster two as well. So while we wait, we can talk a little bit about the security. And then I'll go into more depth later when we have more slides to show regarding the registration process.
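For anyone following along at home, the registration flow being walked through here boils down to roughly these commands. This is a sketch, not the exact demo session: flag names may vary slightly by clusteradm version, and the token, API server URL, and context names are placeholders.

```shell
# On the hub cluster: bootstrap the OCM hub controllers.
# This prints a ready-made join command containing a bootstrap token.
clusteradm init --wait --context kind-hub

# On each remote cluster: deploy the agent and request registration.
# <token> and the API server URL come from the init output above.
clusteradm join --hub-token <token> \
  --hub-apiserver https://hub.example.com:6443 \
  --cluster-name cluster1 \
  --context kind-cluster1

# Back on the hub: the admin explicitly approves the registration,
# completing the handshake.
clusteradm accept --clusters cluster1 --context kind-hub
```

The same join/accept pair is repeated for cluster two, which is what Mike is running while the security discussion happens.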
So in this case, the remote or worker cluster admin can list the managed clusters and create a managed cluster request on the hub cluster side. And we also create a CSR request, but those CSRs cannot be impersonated, due to the fact that the CSR only contains the certificate, and client authentication would require both the key and the certificate. And the key is actually stored on the remote cluster side, and it's actually never transmitted over the network. And the worker clusters, or the remote clusters, cannot approve their own cluster registration by default, because we have RBAC set up so that approving a cluster registration is only bound to the cluster admin on the hub. So when you generate a join registration request, you create the certificate signing request, and you also create the managed cluster as well. So these managed clusters appear on the hub, but you can see they're not joined and not available yet, because the hub admin has not accepted the joining request. So now I'm going to accept one of the clusters, and then I'm gonna accept the other cluster as well. And we can go back and check the CSRs, and now they've been approved for both clusters. So that means we've now set up the authentication between the remote clusters and the hub cluster. And the certificate will be renewed automatically by the controller on the remote cluster side. And also in terms of authorization, we can look on the managed cluster side. So now the managed clusters are joined as well. So after you set up this connectivity, one of the basic things that you can do is deploy some workload from the hub to the remote cluster. So this is done in Open Cluster Management through the ManifestWork API. With the ManifestWork API, after the registration is completed, each remote cluster has its own designated namespace on the hub cluster side.
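On the hub, the checks being described look roughly like the following. The resource names are standard OCM/Kubernetes ones, but treat this as a sketch of what you'd run after an accept, not a capture of the demo terminal.

```shell
# Cluster-scoped ManagedCluster resources: joined/available conditions
# show up here once the hub admin has accepted each cluster.
kubectl get managedclusters --context kind-hub

# The per-cluster certificate signing requests, now approved.
kubectl get csr --context kind-hub

# Each registered cluster gets its own dedicated hub-side namespace.
kubectl get namespace cluster1 cluster2 --context kind-hub
```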
So if we look on the hub, we can see these two namespaces are created. So remote cluster one only has access to the hub cluster's cluster1 namespace, and same for cluster two: remote cluster two only has access to the hub cluster's cluster2 namespace. So to deploy a workload, we use the ManifestWork API: you wrap your workload or manifests in this API, then you put it in the namespace which represents the remote cluster that you want, and then you apply it on the hub. So now we can see that the ManifestWork has been applied on the hub side, and then we can check on the remote cluster: it's being deployed on the remote cluster. And if we check on cluster two, obviously there's no deployment yet, because we only deployed in the cluster1 namespace, and cluster one doesn't have permission to access cluster two, for security reasons. So we'll do the same for cluster two, change the namespace here to cluster2, and we'll apply the same ManifestWork, and then we should be able to see the pod running on that remote cluster as well. So that gives the most basic foundational block of Open Cluster Management: registering your remote cluster to a hub cluster, and then you can deploy some workload towards your remote cluster. So before I continue on, are there any questions? Yeah, so functionally what you just showed us is an entity that has access to a cluster that they want to manage, and a cluster that they've created and configured as a hub using clusteradm.
They've created a request to start managing a cluster; they've used that request and certificate to create artifacts on the managed cluster, the cluster they want to manage, to initiate that handshake; and then they've approved that handshake on the hub side so that that communication channel can be opened. And once you've done that, their certificate is renewed automatically by the hub and managed cluster pair before it expires, so that that link isn't broken. And then you used ManifestWork, defined on the hub, to have the managed cluster pull that configuration, not push, but pull in that case. Okay, we have a couple of questions. The first one is a note: next time we go back to the slides, or sorry, the Open Cluster Management webpage, we need to bump the font size. The link should be in chat as well. But the other question is to make sure: by default, push isn't mandatory, just pull. Push is an optional add-on, something that you can enable and utilize separately, but pull is the default out of the box. That is correct. So I'm gonna explain add-ons in a bit, but we do have add-ons available that allow you to push Kubernetes API requests to the managed cluster. Can you bump the font size here? Awesome. So you would need to set up a cluster proxy add-on and a managed service account add-on, and then you follow through the setup, and at the end you can actually use the hub kubeconfig to proxy through to your remote cluster. So with that ability you can push whatever workload you want using this cluster proxy. That is awesome. And it looks like, thanks to Patrick in chat, someone else was curious about giving this a try in a home lab. So behind the scenes, pulling back the curtain, Mike told me ahead of time that all of this demo is actually running off of kind clusters on his local machine. So kind and clusteradm can actually get you basically what Mike's doing for the demo today, is what he told me, right?
Exactly, so all three of my clusters were actually freshly created today, kind clusters. And we have instructions about setting up a local kind environment to do the initial bootstrap and testing as well. Awesome. This is pretty great. I'll be honest, I've definitely been to plenty of these community meetings, but I have never actually played around with the ManifestWork API, which, that's pretty incredible. That's a great, easy way to push different Kubernetes manifests directly between clusters. What's the overhead look like? I know we have a lot of scale-sensitive folks in chat. What does the overhead look like for the footprint on a managed cluster? So it should be pretty lightweight, because our agents are quite small and we're not using a polling system: we're just watching from the remote cluster side, watching on the hub cluster side. So on ManifestWork, you can actually configure status sync. So say, for example, you look at a pod and you see certain statuses, and you wanna sync those statuses back to the hub side, because that's where you manage your entire fleet. There's an option in the ManifestWork where you can configure that, but we definitely don't recommend syncing the entire status. There's a JSON path configuration that you can set on ManifestWork, and then you can cherry-pick the status you want, and that'll be quite lightweight as well. So yeah, we ran testing before, and we were able to support a thousand, like 2,000 clusters, no problem. There's some configuration tweaking needed in terms of the heartbeats going from the remote clusters to the hub cluster. I think the default is maybe a little bit too aggressive. But yep, in terms of scalability, because we're using the pull model we're definitely able to support a large fleet of clusters. That's awesome.
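Putting the demo and the status-sync discussion together, a ManifestWork is shaped roughly like this. The Deployment, image, and JSON path are illustrative placeholders; the field names follow the ManifestWork API as documented upstream, but verify them against the OCM docs for your version.

```yaml
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  name: hello-work
  namespace: cluster1            # hub-side namespace of the target cluster
spec:
  workload:
    manifests:
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: hello
          namespace: default
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: hello
          template:
            metadata:
              labels:
                app: hello
            spec:
              containers:
                - name: hello
                  image: quay.io/example/hello:latest   # placeholder image
  manifestConfigs:               # cherry-pick which status fields sync back
    - resourceIdentifier:
        group: apps
        resource: deployments
        namespace: default
        name: hello
      feedbackRules:
        - type: JSONPaths
          jsonPaths:
            - name: ready-replicas
              path: .status.readyReplicas
```

Applied on the hub in the cluster1 namespace, the agent on cluster one pulls this down, creates the Deployment, and reports back only the cherry-picked `.status.readyReplicas` field rather than the whole object status.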
So basically what you're saying is the user gets a lot of dials to tune. Basically, if you have 10 clusters or 20 clusters to monitor and to roll out 10 manifests to, maybe you can actually send back a lot more rich detail. But if you're running a thousand manifests to 10 different clusters, maybe you just want the status item that says ready or not ready, ready true or false, and that's what you care about, so you can specify that. That's a really cool amount of configuration, and this definitely feels like a foundational tool that a lot of people can build on top of. I know, spoilers for the community, I think some folks are building on top of this, this is the foundation, but that's incredible. So you've mentioned add-ons a lot. I think this is probably where you're going next, but is there any way for folks to contribute and build and extend add-ons themselves, and pick up pre-existing add-ons? I know also Makayla may have some comments on the docs side of things, because I know Makayla has been crafting a lot of these docs to make sure that you can actually understand how to build an add-on. So I'm always curious about that. Yeah, I'll go into the more technical details about registration as well as add-ons later, but are there any other questions on the demo before I pass it over to Makayla to talk about where you can contribute, or where you can participate in the Open Cluster Management community and possibly develop an add-on, for example? If not, I'll pass it to Makayla right now to talk a little bit about the community, so how to get involved and how to engage with our Open Cluster Management community. Cool, thanks Mike, and great job on that demo. Do you mind selecting to present the presentation so it could come up a little bit larger? A doc? Yep, yep, there we go, thanks. Okay, so yeah, like I mentioned earlier in the video, make sure that you all start with going to GitHub first. That's where you'll see a lot of resources.
You'll get a view of the README and you can even contribute in that way. So that's something that's amazing about open source and learning how to become a contributor, because despite any knowledge that you don't have right now, with whatever knowledge you do have, you can find the value that you bring and go ahead and maybe just make a suggestion or anything like that. So, like I mentioned before, I'm a technical writer for the product team, and one way that I know that I can contribute is through documentation. And so if you see something wrong, or a misspelling, or even maybe a wrong API being described, you could put in an issue, create an issue, describe what the problem is, and then that will eventually be solved, or you can solve it on your own and create a pull request and get a review by one of the maintainers, things of that nature. So you can contribute to different projects within Open Cluster Management. So, like Mike was speaking about add-ons, you can contribute to the governance policy add-on project, or you could contribute to the API project, whichever project piques your interest. Feel free to go in, dive in, and see what issues are already created, what PRs are in progress right now. You can also join the Slack channel and see the conversations there, join the community meetings, which, off the top of my head, I believe are at like 10:30 on Thursdays usually, but Mike can correct me on that information if I'm wrong. And then you can also tune in on YouTube, like you're doing now, or on Twitch, and just get involved with the community. That's how the community moves forward: through contributions from outside or external contributors, and it helps build collaboration within the organization. So, yep, back to you, Mike, on explaining add-ons. Thank you, Makayla. Yeah, I just want to touch on our community meetings.
So because we have contributors from the West Coast, East Coast, Canada, Israel, Europe, India, China, et cetera, we actually have two community meeting time slots: one is 10:30 AM Eastern time, in the morning, and the other one is 9:30 PM Eastern time. We alternate those on a bi-weekly basis. So feel free to, it's a completely open community meeting, so feel free to join in, and we can talk about whatever problems or whatever suggestions you have for the community. So thank you, Makayla, for that quick plug on our community side. So I want to go a little bit deeper into the cluster registration and how it actually works. In the demo, I glossed over some of the details around who does what. And since Open Cluster Management is really about building a foundational block that you then build on top of, the most important foundational piece is the act of registering, or joining, a managed cluster to the hub cluster. So we get a lot of questions when an organization or a team wants to come participate in Open Cluster Management. They want to know that it is secure, that it is done and designed properly from the base up. So I'm going to be using some of the slides that were provided by David Eads. David is one of the key contributors to the Kubernetes project on the GitHub side. He was probably participating in the project since the beginning, from when Google open sourced the Kubernetes project. So my explanation is probably not going to be on par with his, but I'm going to give it a try. So when we first started, you know, we have three actors: the hub, the managed cluster, and the hub admin. And the bootstrap identity is what we start with, which is the service account.
So the service account that the hub admin shares with the remote or spoke cluster, and the managed cluster uses that bootstrap identity to basically go like, hey, I want to create this custom resource, ManagedCluster. I want to provide some information about what I am. So do you accept me here? And here's my current lifecycle of states. And it contains other information, like labels, and then cluster claims, which sort of describe what the managed cluster is. And then we can build more functionality on top, as well as the creation of the CSR. So the certificate signing request is named in a shape like this, so it's in a format that the hub admin can recognize. So after it's created, we have a chance to approve it. And after the approval, we actually won't use the bootstrap identity anymore. We actually create a more persistent kubeconfig on the managed cluster side, and then it can renew itself and access the hub cluster again. And this is how the ManifestWork actually functions. This is how we keep the different clusters distinct from each other, and what access each of them has. So once you've accepted the managed cluster, the hub cluster actually creates a namespace, as I mentioned in the demo. Because of the way the ManifestWork works, we want the managed cluster to reach out to the hub, so we need a way for the hub to distinguish managed clusters efficiently and only grant access to a subset of resources. So we wanna avoid the situation where managed cluster one is able to get information from another cluster, like managed cluster two, or even modify the status of managed cluster two. Because if you allow managed cluster one to modify managed cluster two's status, you can sort of steal all the workload. You say, managed cluster two is really busy now.
Give me all the workload, and that way you can sort of steal content. So in Open Cluster Management, we're using Kubernetes RBAC on a namespace level, so it's efficient to evaluate, because we know precisely which user we need to grant access to. Then we can have them matched inside the namespace, and that really gives us the ability to segregate the workload delivery. So when we look at after registration, we can see how the agent is associated with its cluster, and with that identity it contacts the hub and reads the custom resource called ManifestWork, which we showed. And inside the ManifestWork, it contains the workload, and the agent controllers deploy the resources on the remote cluster side. So it's also important to know that the ManifestWork can create almost anything, from a namespace to a custom resource or an add-on, whatever it needs. And because there's RBAC associated with it, if there's an evil managed cluster trying to access another cluster's namespace, it'll get its access rejected. So, yep, go ahead. So not only do we have a secure connection that has to have a hub and a managed cluster agree to become connected, but you're also saying that the mechanisms baked into that foundational handshake protect from cross-pollution and kind of cross-access. So you can give role-based access control, you can RBAC access, to a single managed cluster, and know that if someone can access rolling out manifests to that managed cluster, which might be owned by app team A, then app team B can't access the other managed cluster unless you give them access. And those managed clusters can't cross-access each other either. That's really interesting, that's really good. Yeah, and it also helps with cleanup as well. We wanna segregate everything in its own isolated namespace so that it's easy to clean up a managed cluster if the managed cluster wants to deregister from the hub.
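The namespace-scoped isolation being described can be spot-checked on the hub with `kubectl auth can-i` and impersonation. The `--as` identity string below is a made-up example purely for illustration; real OCM agent identities are certificate-based and differ by version, so check your own cluster's role bindings for the actual subject names.

```shell
# Illustrative check: a managed cluster's agent identity should be able
# to read ManifestWorks in its own hub-side namespace...
kubectl auth can-i get manifestworks.work.open-cluster-management.io \
  -n cluster1 --as=system:open-cluster-management:cluster1:agent

# ...but not in another cluster's namespace, which is what prevents
# cluster one from reading or tampering with cluster two's workload.
kubectl auth can-i get manifestworks.work.open-cluster-management.io \
  -n cluster2 --as=system:open-cluster-management:cluster1:agent
```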
Are there any questions before I go into more details about the add-ons? Okay, I'm gonna go into what an add-on is, because what if I wanna build something like the work agent? So what naturally comes out of this is: I wanna have a controller that runs on the managed cluster, that communicates back to the hub, that is uniquely identified, and that performs whatever operation you want. So we call those add-ons. So there are a couple of criteria to determine if you need an add-on. For example, do you need to read data from the hub so you can use it to figure out what information to feed your add-on? Also, do you need a different configuration for different managed clusters? That would be a good use case for an add-on as well. If you need the same configuration across all the different managed clusters, you can consider using other mechanisms; for example, you can install GitOps on the hub cluster and then just distribute the same configuration across multiple clusters. So to support add-ons, we define a couple of concepts. We have the add-on manager, which runs on the hub and distributes the resources, which is the add-on, to the individual clusters. And then we also have the add-on agent, which runs on the spoke cluster. That works almost similarly to the work agent: it reads from the cluster, acts on the spoke, and pushes a small amount of data back. So one of the things that was asked is regarding scalability, which was brought up. What if I want to push a lot of data back, or I want to send something back that is not a normal REST resource? So then we get into the access pattern that we sort of don't encourage for add-ons. We don't encourage sending a large amount of data back to the API server for the add-on, which essentially makes the add-on sort of like a proxy. If you want that, one way to do it, as David suggested, is you create an endpoint on the hub side, and then you can expose that endpoint to handle that better.
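For context, once an add-on is available on a hub, enabling it for one managed cluster is typically just a small custom resource in that cluster's hub-side namespace, something along these lines. The add-on name here is an example, and `installNamespace` defaults vary, so treat this as a sketch and check the add-on's own docs.

```yaml
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: application-manager    # example: name of an available add-on
  namespace: cluster1          # hub-side namespace of the target cluster
spec:
  # Namespace on the managed cluster where the add-on agent is installed.
  installNamespace: open-cluster-management-agent-addon
```

The add-on manager on the hub sees this resource and rolls the corresponding agent out to that one cluster, which is how per-cluster configuration of add-ons works.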
And Open Cluster Management actually provides a library that lets you perform the CSR dance, which is creating CSR requests and then accepting them, which we talked about and demoed earlier. So that's another use case for Open Cluster Management: you can use that library to build your, as David called it, awesome data recorder, and have the awesome data recorder send a big amount of data, even non-REST data, back to the hub. So that's basically it in terms of our presentation. We talked about how to join a cluster, what happens during the joining process, what happens afterwards, what the information flow looks like, what the permissions look like, how to keep the data segregated, and how add-ons work. There are still a lot of other APIs in Open Cluster Management. For example, we haven't touched the placement API, which is a really powerful API where you can determine which clusters have allocatable resources: you can use the placement API to determine that cluster one actually has more resources than cluster two, and push the workload towards cluster one. And there are add-ons: we have native add-ons like application lifecycle, we have governance and policy add-ons, we also have the cluster proxy add-on, et cetera. So that really allows you to build functionality on top of Open Cluster Management as well. So yeah, please give it a try, and we welcome all contributions. We're very active on Slack, in the community meetings, and on the mailing list. Please feel free to give it a try, and we look forward to your feedback. Yeah, that's awesome, Mike. So I'm looking at this, and this is amazing. I see there are add-ons for policy, I really like this ManifestWork thing, and I see there's a proxy and a managed service account that look really interesting.
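The placement decision Mike describes, pushing the workload toward the cluster with more headroom, can be mimicked with a toy scorer. In the real project this is expressed declaratively through the Placement API's prioritizers rather than imperative code; this pure-Python version is just a sketch of the decision logic, and the capacity numbers are made up.

```python
def pick_clusters(allocatable_cpu: dict, count: int = 1) -> list:
    """Rank candidate clusters by allocatable CPU, descending, and keep
    the top `count`, a toy stand-in for a resource-based prioritizer."""
    ranked = sorted(allocatable_cpu, key=allocatable_cpu.get, reverse=True)
    return ranked[:count]

# Cluster one has more free CPU than cluster two, so it wins the placement.
capacity = {"cluster1": 12.0, "cluster2": 4.0}
print(pick_clusters(capacity))  # prints ['cluster1']
```

The declarative version also lets you ask for several clusters at once, which here is just `count=2`, returning the two best-scoring candidates.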
Where would someone get started on figuring out whether they should build their own add-on, and then building it? Say they want to leverage this, they have some specific use case, and they want to extend what's out there. Where do they start? Is it this page with what an add-on is and some of the details, and taking a look at what's already out there? Exactly. So head over to Open Cluster Management, click Documentation, and we have all the topics we talked about. We also have a page specifically for add-ons, and you can visit the add-on framework GitHub, where there are more examples of what an add-on is and how to build one. And from community feedback, we're actually trying to enhance our add-on development guide. So kudos to Gue here for creating this add-on developer doc to further enhance the experience of creating an add-on, and anyone can provide feedback on the doc and see if that helps with the experience of developing an add-on. That is awesome. Someone already linked it in chat. Feel free to ask any questions, folks; I surely have a few in the interim. My favorite question to ask anyone in an open source community, especially anyone who's been as involved for as long as I know Mike has: what is your favorite part of Open Cluster Management? Is there a part that you helped write, that you helped architect, that there was a heated discussion about, that is your absolute favorite? Mike, Makayla, I don't know if you have any favorites. You don't have to pick your favorite child, but what have you enjoyed the most? Makayla, you want to go first? Sure, I can go ahead. So I would say my favorite part, when I do have the opportunity, is to review the documentation from the Open Cluster Management folks.
Like I said, when I have the time to do it; usually I'm already doing regular product docs, but when that opportunity presents itself, it's definitely awesome to see how the collaboration comes together and gets presented as one website, just this shared effort that we all want. That's awesome. And my favorite moment is when we get community feedback saying, hey, we're using a certain project, a certain tool, but it no longer scales as we increase the number of clusters. That's when I get involved in presenting Open Cluster Management, saying here's a really modular solution where you can plug and play and choose the pieces you want. Let's see if Open Cluster Management can solve your scalability problem, your multi-cluster management, multi-cloud management problem. And I really get a joy out of being able to provide a solution for the community where they ran into a certain roadblock in this multi-cloud, multi-cluster space. That is awesome. Any other highlights from the community, anything else we might have missed? If not, we can wrap things up if there aren't any more questions, and we're always here for questions afterward. I'll drop the show contact, and also the email, in chat. But yeah, any shout outs, Mike or Makayla, to the community? I know we have a rather large participating community on and off, so any shout outs? Yep, shout out to all the participants in Open Cluster Management, whether you're from Canada, the US, China, India, or Israel; everyone is welcome to join.
And shout out to the organizations that have really helped build Open Cluster Management: Red Hat, Ant, Tencent, Microsoft, Expedia, Alibaba Cloud, and other smaller companies as well that really help us develop and foster this community, as well as some of the key contributors from the Kubernetes space who really guide us in terms of the design: David Eads, Cam, Paul Morie, Q-Chang, and if I missed anyone, no disrespect; it's just the names that come off the top of my head. So shout out to them. And I will echo Mike's shout out. So everybody that's listening, and shout out to the future contributors: just go ahead and give it a go. And shout out, of course, to the documentation team from Red Hat Advanced Cluster Management; I've got to shout the team out. Shout out to everybody that has contributed and who continues to do so. So thank you. And thanks, Gernie. Awesome. Thanks for organizing this and hosting it consistently and with great support. You do a great job. Thanks, guys. Yes, thank you, Gernie. Yeah, thanks for coming on today, folks. So I dropped a link in chat for the Open Cluster Management IO project. Go take a look, maybe write an add-on if you have a use case. Come to some of the community meetings; I know I make it from time to time, normally the North America ones. I'm a bit of an early-to-bed person, so I tend to miss the other ones. But thanks, everyone, for coming and joining today, and we'll see everyone in two weeks. I don't have the topic finalized for that one yet; it's one of two, so I'm not going to tease it. But as always, you can find announcements on Reddit and Twitter to find out what we'll be talking about. So we'll see everyone back here in two weeks, and maybe Mike and Makayla will come back another day, or we'll do a cross-stream with an Open Cluster Management community meeting. Who knows what is to come. So thanks, folks.
I'll roll the intro as the outro, as always, and we'll see everyone in two weeks. Thanks.