fields, and I'm a developer advocate at Google Cloud, a CNCF ambassador, and a member of the Kubernetes Special Interest Group for contributor experience. My work revolves around advocating for users by understanding the real-world challenges they face. Today, I'm going to tell you all about the challenges organizations face when scaling out their Kubernetes clusters, and what the Kubernetes multi-cluster and networking special interest groups, or SIGs, are doing to solve them. Fun fact: all the illustrations in this presentation are my own. Hold on, I'm going to grab the other clicker.

A single Kubernetes cluster can scale upwards of 10,000 nodes, and Kubernetes has a variety of useful tools for enabling multi-tenant architectures. So why would an organization ever need more than one cluster? Let's take a look at just a few of the common reasons I see when I talk to customers and users about their multi-cluster environments.

First, geography and hybrid environments: latency, compliance, and resiliency or high-availability requirements all factor into this. Whatever your reasons, you'll generally need to create at least one Kubernetes cluster in each region or environment where you want your apps to run. Second, billing: multi-tenant Kubernetes clusters are great for using resources efficiently, but when tracking costs is key, many users create clusters to better match their billing model. Third, isolation: while there are some useful tools in Kubernetes for isolating multi-tenant workloads, sometimes it makes more sense to use the cluster boundary to isolate a team, application, or service for security and compliance reasons, which means you'll end up with multiple clusters to meet those needs.

This is just a quick look at a few of the reasons I see customers and users cite for their multi-cluster architectures, and most organizations have a combination of these constraints.
So what does running a multi-cluster Kubernetes architecture mean for you? Let's imagine you have applications running in a cluster on-prem and one in the cloud. Maybe you're running a website on-prem, and in the cloud maybe you have a mobile app. Your first challenge in working with this multi-cluster architecture will be networking, and that challenge comes in two dimensions. First, the vertical dimension: how are you or your users going to access the apps running in each of these separate clusters? Second, the horizontal dimension: what if your clusters are running applications that need to communicate with each other?

One way we could do this is to use DNS to reach the applications running in your clusters, which won't introduce any problems, right? I joke, but really, DNS is a fragile tool that causes a lot of problems, as we've seen. On the cluster end, we'll need to use Kubernetes Ingress objects to manage traffic coming into our apps, though of course the networking details of how we reach those clusters and apps will vary per environment, and we're also going to need some load balancers. In the cloud, you can use the cloud provider's load balancer or make your own, and on-prem you have a variety of options. That's not to mention any automation you want to write to make use of these connections, and I've barely touched on anything you'd need to know about Kubernetes Ingress itself. All this is getting pretty complicated.

Two Kubernetes SIGs have been hard at work creating API standards to make solutions to these challenges simpler and more consistent across environments. SIG Multicluster has created the new Multi-Cluster Services API standard. Multi-Cluster Services, or MCS, creates a concept in your Kubernetes cluster that is very much what it sounds like: it enables you to export and import services across clusters.
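To give a flavor of what that looks like, here's a minimal sketch of the two MCS API objects. The service name, namespace, and port here are hypothetical, and the exact API version depends on the MCS implementation in your environment:

```yaml
# In the exporting cluster: a ServiceExport marks the existing "checkout"
# Service in the "store" namespace for export to the rest of the cluster set.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: checkout        # must match the name of an existing Service
  namespace: store
---
# In the other clusters: the MCS controller creates a ServiceImport like this
# automatically; workloads can then reach the service via multi-cluster DNS
# (e.g. checkout.store.svc.clusterset.local).
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: checkout
  namespace: store
spec:
  type: ClusterSetIP
  ports:
  - port: 8080
    protocol: TCP
```

Note that you only author the ServiceExport; the ServiceImport is derived for you, which is what lets each cluster "know about" services running elsewhere.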
This doesn't change where your apps are running, but it does make it so that each cluster knows about the services running on your other clusters. For example, you could log into one cluster and access all the services that cluster knows about, even if they're actually running somewhere else.

Now, about that DNS: there has to be a better way to manage incoming traffic for our applications than combining DNS with the Kubernetes Ingress object. So SIG Network has been hard at work on the Gateway API, which I commonly hear referred to as Kubernetes Ingress v2. The Gateway API is a new implementation of Kubernetes capabilities for managing that vertical, or ingress, traffic to your applications. It includes a variety of improvements to make managing ingress easier, and it aims to provide a consistent way to manage your Kubernetes cluster's interactions with networking infrastructure. The Gateway API can be used to implement a concept of multi-cluster ingress, where a centralized Kubernetes API server is used to deploy ingress configuration across multiple clusters. Basically, if a single Kubernetes cluster can know about the services in another cluster and how to make use of the networking infrastructure in between, then we can use the consistent tooling of the Gateway API to manage ingress for all of our apps, even across Kubernetes clusters.

Both the Gateway and MCS API standards come from the open-source Kubernetes project. Implementations of these tools, though, will depend on your environment, so check the documentation for details on tools and environments that support these APIs. If you want to get more hands-on with Multi-Cluster Services and the Gateway API, check out the Gateway API's documentation, and also a tutorial on GitHub that's useful for learning more about Multi-Cluster Services. This work is still in its early stages, and there's so much left to do.
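As a taste of the Gateway API before you go: here's a minimal sketch of a Gateway plus an HTTPRoute, roughly what you'd express with an Ingress object today. The class name, hostnames, namespaces, and backend are assumptions for illustration, and the API version depends on the Gateway API release installed in your cluster:

```yaml
# An infrastructure team owns the Gateway: it asks the provider (via its
# GatewayClass) to provision a load balancer listening on port 80.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: external-gateway
  namespace: infra
spec:
  gatewayClassName: example-lb   # implementation-specific; hypothetical here
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All       # let app teams attach routes from their own namespaces
---
# An app team owns the HTTPRoute: it attaches to that Gateway and routes
# matching traffic to a backend Service.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: website
  namespace: store
spec:
  parentRefs:
  - name: external-gateway
    namespace: infra
  hostnames:
  - "shop.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: website
      port: 8080
```

That split of ownership between Gateway and route objects is one of the API's main improvements over Ingress, and it's what makes the centralized, multi-cluster ingress pattern above manageable.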
So join SIG Multicluster or SIG Network at their regular meetings, which you can find on the Kubernetes contributor calendar, and reach out on slack.k8s.io to get involved. As for me, I'll be hanging out around Google's virtual booth on Slack, in the Google Cloud channel, in case you want to ask any questions. I hope you all have a great KubeCon!