Good day, everyone. Thank you for attending this session, Intro to Open Cluster Management. My name is Mike Ng. I'm a developer on the Red Hat Advanced Cluster Management for Kubernetes team, and I'm joined today by my other Red Hat colleagues as well. This is a pre-recorded video. We do encourage you to ask questions, but please hold them until after the presentation is done.

So what is Open Cluster Management? It's a new open-source, Kubernetes multi-cluster-centric project that has been launched by Red Hat and Ant Group. We created this project as a way to engage the open community in an effort to simplify the problem of managing many Kubernetes clusters. This project is not trying to reinvent the wheel and solve problems that are already well addressed. For example, for provisioning, upgrading, and deprovisioning clusters across many different cloud providers, there are already projects like the Kubernetes Cluster API and the OpenShift Hive project. For health metrics across many clusters, there are projects like Thanos. For the compliance aspect of those clusters, there are projects like Open Policy Agent. So what we're focusing on is how Open Cluster Management, as a community project, builds additional capabilities that help bring all these separate tools together, so that you have a central location with a unified view of the cluster inventory and a unified way of delivering management agents. Those management agents can help configure projects like Thanos that need certain configuration across many clusters to collect health metric data. We also have a native way of delivering applications across multiple clusters that can support community projects like Argo and KubeVela. We also have a native governance policy framework that can be integrated with other compliance projects like Falco and Open Policy Agent. So Open Cluster Management is really about bringing together these many different parts of the community and delivering a whole solution that simplifies fleet management across the open hybrid cloud at scale.

Before any community project can become multi-cluster aware, what aspects does it need to address? Let's talk about some of the requirements. There are projects like Open Policy Agent or Falco that don't have multi-cluster concepts built in, so they need to become multi-cluster aware in order to understand which clusters to enforce compliance on. There are also projects that do understand the concept of multiple clusters, like Argo and Thanos, but it's not a unified experience: different projects have different notions of what it means to be multi-cluster. So let's consider the requirements that a service within the Kubernetes ecosystem will need in order to become multi-cluster aware. First of all, the service must have an API to determine the inventory of available clusters. For this awareness of the cluster inventory, Open Cluster Management provides the ManagedCluster API to represent a cluster under management. Next, the service must have a way to determine where to schedule and assign Kubernetes API manifests across a selected set of clusters. For this, we provide an API called Placement: the user describes the desired clusters, and the Placement controller samples the available clusters and dynamically matches a list of clusters that satisfies the Placement. Another requirement is that a multi-cluster aware service must be able to deliver Kubernetes API manifests to that selected set of clusters. For delivery of configuration, Open Cluster Management has the ManifestWork API, which provides a simple way to specify one or more Kubernetes manifests that should be delivered to, and applied against, the managed clusters.
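As a concrete, if simplified, illustration of these APIs (the names, labels, and namespaces below are invented for the example, and the API versions may differ between OCM releases), a Placement and a ManifestWork might look like this:

```shell
# List the cluster inventory on the hub; each entry is a ManagedCluster.
kubectl get managedclusters

# A minimal Placement sketch: select up to two clusters labeled
# environment=dev. Note that in practice the Placement's namespace must
# also be bound to a ManagedClusterSet (covered shortly) before any
# clusters can be matched.
kubectl apply -f - <<EOF
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: dev-placement
  namespace: default
spec:
  numberOfClusters: 2
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchLabels:
            environment: dev
EOF

# The Placement controller records its choices in PlacementDecision objects.
kubectl get placementdecisions -n default

# A minimal ManifestWork sketch: deliver a ConfigMap to the cluster
# registered as "cluster1" by creating the work in that cluster's
# namespace on the hub; the agent on the cluster pulls and applies it.
kubectl apply -f - <<EOF
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  name: example-work
  namespace: cluster1
spec:
  workload:
    manifests:
      - apiVersion: v1
        kind: ConfigMap
        metadata:
          name: example-config
          namespace: default
        data:
          greeting: hello
EOF
```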
Lastly, the service must have a way to govern how users access the available clusters, or groups of clusters, in a fleet. For this security requirement, Open Cluster Management defines a consistent access-control boundary: users or groups may be assigned to specific managed clusters, or to collections of managed clusters known as ManagedClusterSets. For example, we might have three teams that each have access to the clusters related to their day-to-day activity. If a particular team wants to perform an application, integration, or policy change, we only want them to change their own subset of the fleet and not impact the other teams' clusters.

Another core concept, which is optional, is that a service may need to extend the management agent with additional built-in controllers that should run on the managed clusters. To install a built-in controller or operator on managed clusters, we provide the ManagedClusterAddOn API, which allows additional behavior to be injected remotely into the management agent, supporting abstractions built around ManifestWork and the other core APIs.

At the bottom here, we see a list of projects that are potential consumers of these multi-cluster APIs. For example, we have created a Submariner add-on operator that helps you simplify provisioning Submariner across a set of clusters, so that their networks can interconnect more easily. We have also created a multi-cluster operator that deploys a Thanos data store on a central hub and links back to the clusters in the fleet, so that each cluster's Prometheus can share its collected data. We also built examples that syndicate desired policies for Falco or Open Policy Agent, so that policy is managed at a fleet-wide level as opposed to cluster by cluster. And with Argo, not only have we orchestrated the distribution of Argo into your fleet, we have also been working with the Argo community to help it adopt concepts like the Placement API generically, so that the Argo project can leverage concepts like the cluster inventory, role-based access control, the Placement API, and so on. Once again, these are the core principles that make up Open Cluster Management.
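To make the access-control and add-on pieces concrete before we move on to the architecture, here is a hedged sketch (all names are illustrative, and the API versions shown may differ between releases): a ManagedClusterSet bound into a team's namespace, and a ManagedClusterAddOn enabling an agent extension on one cluster:

```shell
# Group clusters into a set, then bind that set into a team's namespace so
# users working there can only target their own clusters. Clusters join a
# set via the cluster.open-cluster-management.io/clusterset label.
kubectl apply -f - <<EOF
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: ManagedClusterSet
metadata:
  name: team-a-clusters
---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: ManagedClusterSetBinding
metadata:
  name: team-a-clusters    # must match the ManagedClusterSet name
  namespace: team-a-apps
spec:
  clusterSet: team-a-clusters
EOF

# Enable an add-on (here the Submariner add-on mentioned above) on the
# cluster registered as "cluster1" by creating a ManagedClusterAddOn in
# that cluster's namespace on the hub.
kubectl apply -f - <<EOF
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: submariner
  namespace: cluster1
spec:
  installNamespace: submariner-operator
EOF
```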
So we've been describing the problem of multi-cluster management and some of its requirements, and we've talked about the concepts and APIs that help simplify fleet management. But what is the actual architecture underneath Open Cluster Management? As you can see from the diagram, Open Cluster Management uses a hub-agent architecture. The hub cluster acts like a multi-cluster control plane. It hosts the cluster manager operator, which contains the registration controller, and it communicates with the registration agent on the remote clusters for the registration and lifecycle of managed clusters and cluster add-ons. The Placement controller on the hub cluster is responsible for workload scheduling across the managed clusters. On the managed cluster side, there is a klusterlet operator, which contains the work agent; the work agent pulls the placement decisions from the hub cluster and applies the workloads to its managed cluster. It also pushes the status of the managed cluster back to the hub cluster.

With this architecture, we avoid vendor lock-in, because the APIs are not tied to any cloud provider or proprietary platform. You can have your hub cluster initialized on any Kubernetes provider or platform you like, and you can have many different clusters from different providers join that hub cluster with no functional limitation. Overall, Open Cluster Management is like a micro-kernel operating system: it has foundational parts that provide the core functions we described, and it has various add-ons that extend it with different capabilities. Some of the built-in add-ons are the policy add-ons we described earlier, as well as the application delivery add-ons. These pieces are also highly modular and can be deployed separately, which makes it easier for projects to adopt the Open Cluster Management APIs and become multi-cluster aware.

Let's do a demo and showcase the foundation layer of Open Cluster Management. First, we're going to show you how to bootstrap a multi-cluster control plane, which we call the hub cluster. Then we're going to register another cluster to the hub cluster so that it can be managed by it. There are many ways to bootstrap this process. We can use the available operators from OperatorHub, or, as I'm about to show you, we can use our in-house command-line tool, clusteradm. It is a tool we created, inspired by the Kubernetes kubeadm tool, so if you're already familiar with kubeadm, the workflow we're about to show you should feel quite similar.

On my screen right now, I have two terminals. On the left side, I have a newly created kind cluster that I'm planning to use as the multi-cluster control plane, which we refer to as the hub cluster. On the right side, I have another newly created kind cluster that I'm planning to join to the hub cluster. To start off, we run the clusteradm init command on the left side, which represents the hub cluster. This deploys the cluster manager operator, which contains controllers such as registration and placement. It then prints out a join command with a temporary token that another cluster can use to join the hub cluster. So let me copy and paste this command. Using my terminal on the right side, which is the to-be managed cluster, I will initiate the join process.

While this process takes place, let's go back to the slides and talk about what's actually happening underneath after running the join command. In Open Cluster Management, cluster registration follows a double opt-in mechanism. The agent opts in to register the cluster with the hub by creating a ManagedCluster resource on the hub. It also creates a certificate signing request, which means the agent needs a kubeconfig with appropriate permissions to initiate the registration request; the temporary token that we just copied and pasted forms that bootstrap hub kubeconfig, and the agent on the managed cluster side generates the kubeconfig secret. When the hub receives the registration request, the admin user on the hub needs to approve the certificate signing request initiated by the managed cluster, so that the managed cluster agent is authenticated to the hub. The admin user on the hub cluster also needs to set the hubAcceptsClient field in the ManagedCluster custom resource to true, so that the agent is authorized to call the hub. With this authentication and authorization approval in place, this is what we call the double opt-in.
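In case you want to follow along, the demo flow boils down to a few commands, roughly like the following (the token, API server URL, and cluster name are placeholders; check `clusteradm --help` for the exact flags in your version):

```shell
# On the hub cluster: bootstrap the multi-cluster control plane. This
# deploys the cluster manager operator and prints a join command
# containing a temporary bootstrap token.
clusteradm init

# On the to-be managed cluster: paste the printed join command, e.g.
# (token, URL, and cluster name below are placeholders):
clusteradm join \
  --hub-token <token-from-init> \
  --hub-apiserver https://hub.example.com:6443 \
  --cluster-name cluster1

# Back on the hub: complete the double opt-in by accepting the cluster,
# which approves the pending CSR and sets hubAcceptsClient to true.
clusteradm accept --clusters cluster1
```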
So some time has passed; let's look at the hub cluster. We see that the ManagedCluster resource has been created, but as we discussed earlier, hubAcceptsClient is currently false: it's waiting for the admin's approval. There is also a certificate signing request, and as we mentioned, its condition is currently pending because the admin hasn't approved it yet. To continue this workflow, we go to the hub cluster and accept the managed cluster's join request. If we check the managed clusters again, we can see that the hub now accepts the managed cluster's request: it is authorized, and it is also authenticated, because the certificate signing request has now been approved.

After the approval of the registration on the hub, a namespace with the same name as the cluster is created. As I mentioned before, when I joined the cluster I gave it the name cluster1, so I should expect a namespace here called cluster1. This namespace can be regarded as a container for the resources that the agent on the managed cluster can access. By default, the agent on a cluster is only authorized to access certain resource kinds in this namespace on the hub, to ensure security isolation. So with these few commands, we were able to bootstrap a hub cluster and then have another cluster join that hub cluster.

After the registration process is done, we can leverage the APIs provided by Open Cluster Management. These are some of the APIs I mentioned earlier: we have the ManagedCluster API that defines the managed clusters, the ManagedClusterSet API for role-based access control, the Placement API to select from those managed cluster sets, and the ManifestWork API to deliver workloads from the hub cluster to the managed clusters. With these APIs, projects can develop new solutions with multi-cluster capabilities, or any existing project can use them to become multi-cluster aware.
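A rough sketch of the verification steps from this part of the demo, assuming the cluster was registered as cluster1:

```shell
# Inspect the pending registration on the hub: hubAcceptsClient starts
# out false, and the cluster's CSR is still pending.
kubectl get managedcluster cluster1 -o yaml
kubectl get csr

# Accept the cluster (clusteradm approves the CSR and flips
# hubAcceptsClient to true in one step).
clusteradm accept --clusters cluster1

# Verify the result: the cluster should report Joined/Available
# conditions, and a per-cluster namespace now exists on the hub.
kubectl get managedcluster cluster1
kubectl get namespace cluster1
```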
So that wraps up the demo, as well as this quick introduction to Open Cluster Management. For more information, please visit our website at open-cluster-management.io. We're currently in the process of joining the CNCF. We have a bi-weekly community meeting that we really encourage you to attend; it's open to everyone, every other Thursday at 10:30 AM Eastern time, and you can find the details on the website. You can also find us on the Kubernetes Slack; the link is on the website as well. Please feel free to download and play with the clusteradm CLI that we demoed. Or, if you're only interested in a certain API, please feel free to check out any of the projects under the open-cluster-management-io GitHub organization. Thank you again for attending this session.

Okay, I have seen at least one question in the Q&A. I'm not sure why I can't type in it, but I'm just going to answer it in audio. OCM is not focusing on the provisioning of clusters. Even though we have integrations that allow that, we're focusing more on, I guess, day-two operations. Once you have your cluster created, you can bootstrap it and register any Kubernetes cluster without any issues. It's not tied to AWS or any other cloud provider. I also have with me my colleague Qiujian from Red Hat. If he's requesting audio share, please accept the request. Thank you. He can answer some of the other questions as well. Yes, Qiujian, we can see you. If you want to expand on that question or answer, feel free.

Yes, thank you. Yeah, that's right. It's not about provisioning the cluster; it's more about managing the cluster, more about the day-two operations.

It looks like we have another question in the chat: does OCM support other clusters besides K8s? Would I be able to hook up something like K3s and have it work with the K8s cluster? Yes, it works with K3s. We actually did some tests internally with K3s already. Basically, any cluster that is compatible with the Kubernetes API server can be supported by OCM.

All right, thank you for that response. Okay, we have another question: how does it relate to efforts to run cluster management outside of the cluster, like HyperShift? I think it's a little bit different. HyperShift's architecture runs the clusters' control planes on a centralized management plane, and then you have the worker nodes linking back to those control planes. OCM is a bit different in that you don't need the clusters' control planes to run centralized. You can have different clusters, not necessarily OpenShift; you can have native Kubernetes clusters and other Kubernetes clusters like GKE and AKS. The idea is that we just have a set of agents providing different kinds of functions, and these agents connect back to the OCM control plane. So there are some differences between this and HyperShift.

Okay, thank you so much. And then I think someone asked: what about KCP? Yes, so KCP is still at quite an early stage, and we are also working with the KCP team, having some discussions on how KCP can provide a high-level user interface while using OCM as a backplane to support it. I think KCP is more about defining different logical clusters, where each logical cluster provides a transparent API for the user to deploy a workload to multiple clusters. OCM has a different focus: OCM focuses more on how to make things multi-cluster aware. You can use OCM as a foundation to make other tools multi-cluster aware, like Submariner or Argo CD, and other projects can integrate with OCM to manage multiple clusters more easily.

All right, thank you so much for that response. Next question: how easy would it be to have a database in one cluster referenced by another cluster, or by an application in another cluster? So I think the question here is: if you have a database on one cluster and the front end on another cluster, how do we reference the database from the other cluster? In OCM we have an integration with Submariner, and we can use OCM to deploy Submariner, which provides network connectivity among the clusters. Submariner has a feature called the MCS API, with which you can do cross-cluster service discovery. In that way, a front-end application in one cluster can access the database on another cluster using a certain service name, based on the MCS API. Would this functionality be provided out of the box? Yes. Okay, cool. Thank you.
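For readers following along, here is a minimal sketch of the MCS-based discovery described above (the service name and namespace are invented for the example, and this assumes Submariner's implementation of the MCS API):

```shell
# In the cluster hosting the database: export the Service across the
# cluster set using the Multi-Cluster Services (MCS) API.
kubectl apply -f - <<EOF
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: postgres
  namespace: db
EOF

# In the cluster hosting the front end, the exported service resolves via
# the clusterset DNS domain, e.g.:
#   postgres.db.svc.clusterset.local
```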
We only have a minute left, so I just want to say thank you all for attending. Please check out our website at open-cluster-management.io. Thank you. And thank you to Qiujian for joining and answering some tough Q&As. Thanks. Thank you. Thank you so much, Qiujian and Mike, for this awesome talk.