I'd like to kick off our evening keynotes with an overview and update on Kubernetes. Last month we saw the fifth birthday of Kubernetes, and in the last five years we've seen seemingly unstoppable growth from the project. There are now over 2,100 contributors and growing, and there are many reasons why it has attracted all of us here today. I think it's safe to say one of the main reasons is the community. The community is amazing, and with that I'd like to talk about some of the work our contributors have been doing for the newer releases.

So let me do a couple of highlights from Kubernetes 1.14. Kubernetes 1.14 came with 31 enhancements, with a big focus on supporting new workloads and making Kubernetes extensible. There's also a huge focus on moving features to stability.

The big one, I think, is the one everyone's been talking about: Windows support. Windows support graduated to stable with 1.14, so now you can have your Linux and Windows workloads running in the same cluster. The possibilities are limitless. You can provide the same developer experience to your Linux and Windows developers; they can run on the same infrastructure. With this release we have Windows Server 2019 support for the worker nodes and containers. There's also out-of-tree networking support for a number of networking options. So there's no longer an excuse for not running it in Kubernetes.

Another exciting thing is Pod Ready++. It allows custom external feedback on pod readiness. This addresses a very practical case: when your container is up and running, that doesn't mean it's ready to take traffic. You might have a dependency on something external, like creating a load balancer or waiting for the network stack to come up, so your container is running but you can't actually send traffic to it yet.
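The slide showed a pod spec along these lines. This is only a hedged sketch: the condition type names are invented for illustration, and an external controller (say, one that programs the load balancer) would have to set the matching conditions in the pod's status before the pod counts as Ready:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  readinessGates:
  # Hypothetical condition types; an external controller must set each
  # of these to "True" in the pod's status.conditions before the pod
  # is considered Ready.
  - conditionType: "example.com/load-balancer-attached"
  - conditionType: "example.com/network-ready"
  containers:
  - name: app
    image: my-app:1.0
```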
So you want to give the cluster that feedback, so your rolling updates won't take down the old pods until the new pods are truly ready. This is an example of how you might add two extra conditions for defining that the pod is ready.

Next, pod priority and preemption. This is one I'm personally super excited about, because I've been using it since it came out in alpha. It's a way for you to improve your cluster's resource utilization while ensuring that your critical workloads remain schedulable. This is a great option if you're running in your own data center with a fixed-size cluster, or even in the cloud, where you might run into a situation where you run out of cloud, which I promise you can happen. The cloud has capacity limits, too. Another case where I see people using this is when you have very latency-sensitive workloads: you want to evict lower-priority pods so you can schedule the sensitive workloads right away, while you're waiting for new nodes to provision. This has now graduated to stable.

Another one for supporting new workloads is supporting more stateful services: persistent local volumes have graduated to stable. You use local storage through the same persistent volume API. The scheduler awareness means that if you have a local volume and you recreate a pod that uses it, the scheduler now knows to schedule the pod onto the node that actually has that local volume. It will also automatically format and mount the file system. This is perfect for running things like Kafka or any other stateful workloads that need high-performance storage.

Lastly, I want to talk about the enhancements that came with kubectl. kubectl plugins are graduating to stable. If you haven't looked at kubectl plugins, they're a super cool way to introduce your org to Kubernetes.
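A kubectl plugin is just an executable on your PATH whose filename starts with `kubectl-`. Here's a minimal sketch in Python; the plugin name and the default namespace are hypothetical examples, not commands from the talk:

```python
#!/usr/bin/env python3
"""kubectl-teampods: a minimal kubectl plugin sketch.

Save this as an executable file named "kubectl-teampods" anywhere on
PATH, and `kubectl teampods` will invoke it. The plugin name and the
default namespace below are made up for illustration.
"""
import shutil
import subprocess
import sys


def build_command(args):
    # Default to the team's namespace so newcomers don't need -n every time.
    namespace = args[0] if args else "team-platform"
    return ["kubectl", "get", "pods", "-n", namespace]


def main():
    cmd = build_command(sys.argv[1:])
    if shutil.which("kubectl") is None:
        # No kubectl on this machine: just show what would have run.
        print("kubectl not found; would run: " + " ".join(cmd))
        return
    # Delegate the real work to kubectl itself.
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    main()
```

The idea is exactly what's described here: wrap the incantations your team types all the time behind one short command, so newcomers don't need to learn every kubectl flag on day one.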
It's kind of been a soft problem: we're infrastructure engineers and we use kubectl, but how do we introduce the rest of our engineering team to it? kubectl has tons of options and it's very intimidating. So kubectl plugins are a great way to package whatever custom commands or shortcuts you might have and distribute them to your team, so they don't have to face the steep learning curve of onboarding onto kubectl. We also released the Kustomize integration. Kustomize is a super convenient way of patching Kubernetes manifests without creating templates. Here's an example of a time when you might use a kubectl plugin. This one is straight from the ingress-nginx project, and again, it's an example of when you might want to package some convenient commands for your users.

We also recently saw the release of Kubernetes 1.15. Kubernetes 1.15 came with 25 enhancements, the focus being extensibility and continued improvement in stability and maturity. Very quickly, I want to talk about CRDs. There are new features for CRDs in 1.15, including pruning and defaulting, and there's also OpenAPI support. In the cluster lifecycle department, there's very exciting development coming from kubeadm: high-availability cluster provisioning is moving to beta, so out of the box you can provision highly available clusters with kubeadm. You can also have kubeadm manage your certificate rotations, which is always a pain, so now you don't have to do that yourself anymore. There's also a callout for CSI: you can now clone another volume by specifying a data source to clone from.

And with that, I want to pass it to Brian. Brian is a senior staff engineer at VMware, and he's my fellow co-chair at KubeCon. It feels like we've already had a lot of VMware people on the stage today. So what I'm going to talk about is the CNCF project update.
There are 30-plus projects in the CNCF, whether they are graduated, incubating, or in the Sandbox. I was told that I only have 10 minutes, so I had to pick a few projects to look at. I thought about it, and what I realized is that I flew 26 hours to come here to China to talk with you all today. So we're going to talk about some Chinese projects, but we're also going to talk about some of my favorites as well. So let's get started.

The first one I want to talk about is TiKV. TiKV is a distributed transactional key-value database. That's a lot of words to basically say that TiKV makes key-value storage easy, whether it's local or distributed. So what are the great features? Geo-replication: you can have a key-value database that is here in Beijing, in Japan, across the Pacific on the West Coast of the United States, and all the way across to the East Coast, and the software actually supports that. Another important piece is that it scales to a large size: 100-plus terabytes of data. And the neat reason you might use TiKV is that you get key-value storage with transactions, and those transactions can be distributed. If anyone has worked with a key-value store like, say, Redis, you understand why you might want a distributed transaction.

And just because I wanted to give a little shout-out to all the people who put together the release notes, I put all the release notes for 2.1 on this slide. But really the most important piece here, from what I'm seeing, is that TiKV is getting closer to supporting more of the Raft protocol, which means that adding and removing nodes from your clusters becomes easier and faster, with less contention. And then there's hotspot scheduling, which is another interesting thing.
If you have a set of values or a set of nodes that are receiving lots of data, TiKV can now actually move things around for you. I like that.

The next thing I'm going to talk about is Harbor. Steven, my co-worker, talked about this earlier, but I want to highlight another Chinese project. What Harbor really comes down to is this: there are lots of container registries you can choose, but the problem we find is, do I choose a registry with the features I need, do I use someone else's hosted registry, or do I choose a good registry? Harbor brings all these things together. It's software that you can easily install locally, whether in Docker, just running locally, or in Kubernetes. It has a great user interface, and it has all the things businesses might want, such as role-based access control, robot accounts, APIs and an API explorer, and auditing, because you know how businesses like to audit things.

Just a month ago, actually a month and a week ago, Harbor released 1.8. The biggest feature in Harbor 1.8 is OpenID Connect support, which means that people using SSO in their organizations can now more easily integrate Harbor with their existing authentication. I mentioned robot accounts before, and this is very important: whether you're using GitHub or GitLab, you don't want a bunch of automation users that look like people. You can now properly define them as robot accounts. And another thing: replication has been enhanced, so you can have multiple servers, maybe local to wherever your users are, and replicate between them.

So like I said, I wanted to talk about Chinese projects and I wanted to talk about my favorite projects. One of my favorite projects in the CNCF right now is Jaeger. Jaeger is a distributed tracing platform.
One of the lightning talks, from Tencent I believe, was talking about tracing and testing in production. Let me take a step back: with a lot of modern monitoring or observability stacks, we have logging, tracing, and metrics. Jaeger is the tracing bit of this. What Jaeger allows you to do is take traces from inside of your application and then put them in a really good-looking user interface. Fifteen years ago, you would have paid many, many thousands of dollars for the features that Jaeger provides, but now we don't have to. This came from Uber, and Uber has done a lot of great work.

So why would you use something like Jaeger? Well, really what it comes down to is that you have software, say microservices: microservice one, microservice two, microservice three, and something is slow. What you can do is define in your software these things called spans, which are basically some bit of computation, and you can figure out how long each span took. You can attach logs to a span, and you can see errors. And it's even better than that, because when we're talking about microservices, or even just big complex apps, we can now trace across applications. That's pretty important as well. Just recently, before I move on to Vitess, Jaeger released 1.12. You should go visit their site and try it out.

The second project I wanted to include isn't a Chinese project, but it's one of my favorites: Vitess. Lots of people are still using MySQL; it's a super popular open source database. But the problem with MySQL is, how do you scale it horizontally? You can scale it vertically by getting a bigger box, but at a certain point, getting a bigger box is not a good solution.
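Scaling horizontally here means sharding: splitting rows across many MySQL servers by some key. A toy sketch of the routing logic an application would otherwise have to hand-roll (the shard endpoints are made up) shows why you'd rather have a system do it for you:

```python
import hashlib

# Hypothetical shard endpoints; in reality each would be a MySQL server.
SHARDS = ["mysql-shard-0:3306", "mysql-shard-1:3306", "mysql-shard-2:3306"]


def shard_for(key: str, num_shards: int) -> int:
    # Use a stable hash so every app instance routes the same key
    # to the same shard, no matter which process computes it.
    digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards


def host_for(user_id: str) -> str:
    # Pick the MySQL server that owns this user's rows.
    return SHARDS[shard_for(user_id, len(SHARDS))]
```

The catch is resharding: change the number of shards and almost every key maps to a different server, forcing a mass data migration, and cross-shard queries and transactions need their own machinery. That routing and resharding work is the kind of thing Vitess manages for you behind the plain MySQL interface.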
So what Vitess does is give you that great MySQL API but let you scale it horizontally, say, running on Kubernetes. These are the biggest features, and the first one I want to call out is scaling: you can now start thinking about your databases in shards, and you don't have to think about how big this thing is going to become. Whenever it gets bigger, you just add another node. So I encourage everyone here to look at Vitess. It's still at incubating status, but there's a lot of good work going on there.

Last month, Helm released 3.0 alpha. Helm is a package manager for Kubernetes; it's actually the package manager for Kubernetes. It allows you to define your configurations using YAML and templates, and then install that complex configuration on your dev cluster, your staging cluster, or your production cluster. The biggest feature of 3.0 alpha is that Helm has listened to everyone, whether it be the users, the security experts, or just the guy walking down the street, who all said: we don't like Tiller. So what they've done is they're working to remove Tiller. Helm now does all of its processing client-side, so there's no longer an open security vector inside your cluster that might allow bad things to happen, and you don't have to work around it anymore.

My last project here is kind of a new project. Normally we don't talk about Sandbox projects, but we're in China, and I'm going to talk about our Chinese projects. There's a project called Dragonfly. Dragonfly is an open source, P2P-based image and file distribution system.
We remember P2P from years and years ago, back in the '90s, when we were all trading our music. But really what it does is let us distribute our files so that no single machine has to serve all of them; we can use multiple computers to send files to our users. And I love all of our projects: when I go to their pages and talk to the people who run these projects, they always give me a long list of features, and I promise every single time that I will include the long list. But I will pick the ones I like best. One of the features I really like about Dragonfly is that you don't have to worry about CDN setup: it has a passive CDN built in, so you can cache data that is served frequently, reduce processing load, and in many cases keep cached data closer to the user.

Dragonfly is currently at 0.4. One of the biggest changes is that they've done a lot of work moving the Supernode process inside of Dragonfly to Go, for performance and stability.

So I'm going to end there, because I'm actually over my time. Thank you for going through these CNCF projects with me, and I just want to end with one thing. These projects are created by companies, and sometimes started by end users. All of you out there can participate in these projects, whether it's Kubernetes, or newer projects like Dragonfly, or other Sandbox projects like SPIFFE and SPIRE. We are looking for lots of people, especially people we don't get to interact with all the time, to contribute to these projects. And if you have any questions, come find me. For once, I'm not very hard to spot, and I will definitely steer you in the right direction. Thank you.