Good morning. Those of you who've been around the community for a while will remember the early days of CNCF, when there was a lot of discussion about the right scheduling approach and which scheduler was going to win overall. On one hand, of course, we had Kubernetes, with CNCF advocating a very strong cloud-native approach. But we also had this world coming from the distributed computing side. Some of you will remember the HPC schedulers people were using, things like LSF, HTCondor, and Grid Engine. We had things like Mesos. And there was a lot of discussion about which scheduler was going to rise up and take over this overall architecture as we moved toward the cloud. Well, of course, we know how that played out: Kubernetes won. We certainly see here today the very vibrant community that's adopted it. But that didn't mean the use cases around distributed computing disappeared; it just meant the focus shifted toward cloud-native. Over time, and especially more recently, we've started to see a few trends driving convergence between what was coming from the distributed computing side and what's been happening around cloud-native, especially when you look at machine learning and AI workloads, or at environments that push out beyond traditional cloud data centers, like the edge. This has been a really big focus for us: how do we start to bring these projects and these different areas together? A lot of what I want to explore with you today are the new open source developments we've been driving toward this new area we're starting to call distributed cloud-native computing. There are three areas I want to focus on in terms of where we see cloud-native technologies expanding.
Now, the first area I want to talk about is that we've been seeing cloud-native go into physically extreme new industries, places that look nothing like a traditional cloud data center. In an earlier keynote on Wednesday, you heard about how we've been taking AI frameworks, such as MindSpore, and putting them into space, on satellites running Kubernetes. We've been doing a lot of other things at the edge in extreme environments: offshore oil rigs, for example, and toll stations around the world. But the use case I want to focus on is the Internet of Vehicles, or IoV. Basically, we want to put Kubernetes inside all these different cars as they travel around, and network them together into clusters for things like better traffic management. The problem, though, is that Kubernetes depends on state synchronization. When you have a lot of vehicles moving around, entering tunnels or areas with bad network connections, what are you going to do, evict the node or the pod? That doesn't work so well when the vehicle is driving down the road. Traditionally, the way this has been handled is to fall back to simple data synchronization. That works OK, but it's not really cloud native. So what we've been focused on is a project called KubeEdge, which brings Kubernetes out to the edge. We can do full state sync, and then we can start to bring in lots and lots of vehicles. Today, over 200,000 vehicles a year are already being manufactured with KubeEdge installed, bringing Kubernetes into these different environments, and we can manage 100,000 vehicles in a single cluster. So it's a really big advancement in pushing Kubernetes into areas that look nothing like a traditional cloud data center.
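To make the disconnection problem concrete, here is a toy sketch of the edge-autonomy idea: instead of the control plane evicting workloads when a node misses heartbeats, the edge agent keeps reconciling against a locally cached copy of the desired state until connectivity returns. This is an illustration of the general pattern, not KubeEdge's actual code or API; every name and timeout value here (`EdgeAgent`, `should_evict`, the 300-second default) is an assumption for the example.

```python
# Toy model of edge autonomy: keeping vehicle workloads alive through
# tunnels and network dropouts. Illustrative only; not KubeEdge's real API.

DEFAULT_EVICTION_TIMEOUT_S = 300  # a typical control-plane eviction timeout


def should_evict(seconds_offline: float, edge_autonomy: bool,
                 eviction_timeout: float = DEFAULT_EVICTION_TIMEOUT_S) -> bool:
    """Cloud-side decision: with edge autonomy, a disconnected node is
    never treated as dead just because heartbeats stopped arriving."""
    if edge_autonomy:
        return False
    return seconds_offline > eviction_timeout


class EdgeAgent:
    """Node-side agent that caches desired state so reconciliation can
    continue with no connection to the control plane."""

    def __init__(self):
        self.cached_desired = {}  # last desired state synced from the cloud
        self.online = True

    def sync_from_cloud(self, desired: dict) -> None:
        """Called only while connected; refreshes the local cache."""
        self.cached_desired = dict(desired)

    def reconcile(self) -> list:
        """Runs continuously, online or offline, from the local cache."""
        return sorted(self.cached_desired)


agent = EdgeAgent()
agent.sync_from_cloud({"traffic-agent": "v1", "telemetry": "v2"})
agent.online = False  # the car enters a tunnel

# Workloads keep running from cached state while offline...
print(agent.reconcile())                       # → ['telemetry', 'traffic-agent']
# ...and the node's pods are not evicted, unlike the default behavior.
print(should_evict(900, edge_autonomy=True))   # → False
print(should_evict(900, edge_autonomy=False))  # → True
```

The key design point is that the reconcile loop reads only local state, so a network partition changes nothing about what runs on the node; reconnection becomes a cache refresh rather than a recovery event.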
Another area where we've been pushing the frontier is the scalability and breadth of what we can do with scheduling, and this really comes back to the distributed computing side of the world. Now, a lot of the big workloads running in cloud data centers today are around machine learning training. The problem is that these rely on very sophisticated scheduling algorithms that don't work so well on cloud native: you've got to understand things like fair share, you've got to understand resource scheduling over time, you've got to handle very dedicated resource affinity, and so on. That's very challenging to bring into cloud native. So one of the things we've introduced, a CNCF project from a couple of years ago, is Volcano. Volcano adds a batch system on top of Kubernetes so we can start to bring these two worlds together: extreme scale and a lot of these batch primitives, but in a cloud-native environment. And we've seen massive improvements in bringing these very sophisticated workloads into Kubernetes with Volcano, with huge advances in throughput, scalability, and resource efficiency.

The third area we've been focused on is multi-cluster: bringing Kubernetes across clouds so that it becomes really ubiquitous. Everyone wants to do multi-cloud today. But the problem is that when you go to different clouds, everyone's Kubernetes is a little bit different from the others. So what a lot of people do is take one Kubernetes distribution and install it on cloud after cloud. That still has a couple of limitations, though. First, you're tying yourself to one particular vendor. Second, you're still dealing with the constraints of scheduling within a single cluster boundary; you don't really get cross-cluster, cross-cloud scheduling in a single pool.
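To make the fair-share primitive concrete: one classic way to divide a resource pool among competing queues is max-min fair sharing, sketched below as a toy Python function. This illustrates the general idea only; it is not Volcano's actual algorithm or API, and the queue names and capacities are made up for the example.

```python
# Max-min fair sharing: a toy version of the "fair share" primitive that
# batch schedulers provide. Each queue gets at most an equal share of the
# pool; whatever a queue doesn't need is redistributed to the others.
# Illustrative only; not Volcano's actual algorithm.

def max_min_fair(capacity: float, demands: dict) -> dict:
    """Split `capacity` across queues with the given resource demands."""
    alloc = {}
    remaining = capacity
    # Serve the smallest demands first so leftovers can be redistributed.
    pending = sorted(demands.items(), key=lambda kv: kv[1])
    while pending:
        share = remaining / len(pending)
        name, demand = pending[0]
        if demand <= share:
            # Fully satisfiable within the fair share; free the rest.
            alloc[name] = demand
            remaining -= demand
            pending.pop(0)
        else:
            # Everyone left wants more than the fair share: split equally.
            for name, _ in pending:
                alloc[name] = share
            pending = []
    return alloc


# Three queues competing for 12 GPUs: the small demand is fully met,
# and the freed-up capacity is split evenly between the larger two.
print(max_min_fair(12, {"a": 2, "b": 5, "c": 8}))
# → {'a': 2, 'b': 5, 'c': 5.0}
```

A real batch scheduler layers much more on top of this, gang scheduling, queues, preemption, and time-based policies among them, which is exactly the gap between plain Kubernetes scheduling and the batch world described above.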
That single-cluster boundary is why we're so excited about the new Karmada project, the first CNCF project focused on cross-cluster, cross-cloud scheduling. It's in CNCF, so it's open for everyone, and it's very much focused on combining these different clusters into one pool that works across different environments: a new approach that uses standard Kubernetes APIs to tie everything together.

And the final thing, just as I wrap up: we're also very excited to expand the community of who's involved with cloud native. One of the things we've been focused on, especially in China, is that there are a lot of end users who are still not cloud native. How do we bring them into the community? So we've been partnering with the CNCF to build out what we call the Cloud Native Elite Club, onboarding many new end users into the overall CNCF community to become adopters of all the technologies we're talking about here. One of the things we're really excited about is that through this initiative we've also been focusing on how to bring more women into technology. So we're very excited about the growth we're seeing in where cloud native is going, as well as what the community's been doing. Thank you very much.