Good morning, everybody. Are you having fun? I'm definitely having fun. I wish I had capes to give all of you, because by the end of this talk, you will know that you all possess a superpower. It has been a privilege to be part of this team and this project and to see how quickly it is growing and how many people are adopting it. But like Kelsey said before, this is just the beginning, and there is so much work ahead of us. So today, I want to talk about the future. We are all obsessed with how quickly we can innovate, building more tools, automation, and processes, trying to meet our customers' demands as quickly as possible. Our time is expensive. Think of a platform that allows you to quickly build cloud-native applications. It has all the primitives and the patterns you need, like a declarative-style API, built-in automation, and a policy enforcement mechanism. And not only that, you can take all of that, reuse it, generalize it, and specialize it for your own needs. And don't worry, it won't create any friction for the developer, the user, or the operator. They won't need to learn anything new. And on top of it, it runs everywhere. Doesn't that sound like magic? Everything we would like to have in a platform. I've been focusing my entire career on building tools to increase productivity for engineers. When I started my career some years ago, I led a team with a similar charter. Our goal was to build a new platform to increase engineers' velocity. Yes, it was a different technology stack, but we did build that. And we did everything for those developers, from rendering the UI to database optimization, monitoring, logging, everything. And we were successful. It was super easy to build new applications. Customers were very happy. We even enabled mobility between teams of developers. In retrospect, when I look at that platform we built, I think it had two main flaws. The first one: it wasn't extensible.
This means that every time there was a new UI technology, for example, or any new kind of technology or use case, it was sometimes impossible, or just very, very hard, to integrate it into the platform. And the other point was that we delivered it to our engineers as a black box. They didn't know how the platform worked until it broke. And as much as we think of ourselves as great engineers, it always breaks. And then they didn't know how to look into the system. They didn't know how to troubleshoot it. And my team became a bottleneck. Has anyone here in the audience worked on or built such a platform? I think you're lying. I think everybody has worked on, built, or used such a platform. Let's see again: have you ever worked on, built, or used such a platform? Yes, be honest. That's important. None of this is new. We have been solving very similar problems for a very long time. I think Kubernetes and its ecosystem can be that platform I started with. Kubernetes already has many of the primitives and much of the support for cloud-native applications, like scale. It can run all types of workloads. But Kubernetes is also open. We designed it with flexibility and extensibility in mind, to allow new functionality to be built in a seamless way. That's why there is such a large ecosystem around the Kubernetes project. What's beautiful about the extension mechanisms in Kubernetes is that we handle with care the tension between extensibility and consistency. Many times when you add new functionality, you have to compromise the usability of that feature or just introduce a different new experience. But when extending the system in a native way, you actually get to use the same API machinery, kubectl, the UI, the API. And when you look into the system, you feel like you're in familiar territory. And of course, it runs everywhere, making it an open hybrid platform. Now let's take a deeper look into Kubernetes.
Brian Grant created this diagram to capture the scope of this project and also to help lead the discussion of what Kubernetes is and what should be on the roadmap. And he's going to talk in his session tomorrow exactly about that, so make sure you don't miss it. As you can see, it is much broader than container orchestration. You know, I just saw the website that Kelsey showed of what Kubernetes is; we have to update that website. It is a platform with the scope of a cloud provider. The team that is building and investing in this platform has also been investing in building extension points throughout the different layers. We came very far with that effort. We have over a dozen extension mechanisms in Kubernetes, to name only a few: cloud provider, the container runtime interface, webhooks, and many, many more. It started with the goal of increasing our own stability and sustainability. But as we advanced and the project matured, I think the project is now ready for all of you to use those extensions and build on top of them. I wish I could cover everything here in this talk, but it would take us days. There will be many, many great talks in the next couple of days discussing different aspects of extensibility. Today I want to talk about automation. There are different approaches to automation. There is edge-triggered automation. This is when you have events that you need to anticipate and decide how you're going to act upon each event. The downside of this approach is that you actually need to be familiar with all the events, and you also need to be prepared to handle the scale and the volume of events, because every event matters. The Kubernetes approach is the reconciliation control loop. This is when we look at the current state, understand the difference between the current state and the desired state, and continuously work towards that desired state. That's what makes it a self-healing system, and it's very powerful.
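The reconciliation pattern just described can be sketched in a few lines. This is a minimal illustration, not Kubernetes' actual controller code, written in JavaScript to match the demo later in this talk; all names are hypothetical:

```javascript
// Minimal sketch of a reconciliation ("level-triggered") control loop.
// It compares desired state with observed state and computes corrective
// actions; a real controller would apply these via the Kubernetes API
// and then run the same comparison again, forever.
function reconcile(desired, observed) {
  const actions = [];
  // Create anything that should exist but doesn't.
  for (const name of desired) {
    if (!observed.includes(name)) actions.push({ op: 'create', name });
  }
  // Delete anything that exists but shouldn't.
  for (const name of observed) {
    if (!desired.includes(name)) actions.push({ op: 'delete', name });
  }
  return actions;
}
```

Because the full comparison runs on every pass, a missed or duplicated event doesn't matter: the next iteration converges on the desired state anyway. That is the difference from edge-triggered automation, where losing a single event can leave the system permanently wrong.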
Unlike edge-triggered automation, it is built for chaos. It is built for dynamic environments, which is exactly what we think about when we talk about cloud-native applications. In order to build automation, you need to match those controllers with resources. We have different ways to extend the Kubernetes API. The first way is the kube-aggregator, which allows you to build your own API server using the Kubernetes API machinery. You can build, package, and install it separately from Kubernetes. But from the user's perspective, it feels like an extension of the Kubernetes APIs, served in the same manner. This is the most flexible and the most powerful way to extend Kubernetes. People before me talked about how we want to lower the barrier to entry for Kubernetes. So we've been working along that spectrum, trying to think how we can make it easier for people to contribute, extend, and build on top of it. The team has been building an API server builder, for example, which is a set of libraries and tools that guide you through the process of building new API servers, because that can be non-trivial coding. But if you want to make it really, really easy, with no coding, this is where we introduce the custom resource definition, which makes it really easy to extend and create new types within Kubernetes. And it does help accelerate innovation. It makes things simpler for you because you don't need to build all of those building blocks on your own. This is just a few. I don't have enough space on the slide for more, but I did want to show a variety of things that are built on top of Kubernetes. Starting with the Open Service Broker, which makes it easy to consume services in any cloud-native platform; it is built on top of the kube-aggregator, implementing its own API server. We have Prometheus, Rook, Kubeless, which is a serverless framework, and the etcd Operator. All of them are using CRDs.
All of them have been extending Kubernetes in what we call a Kubernetes-native way. And from the user experience, it looks the same. You can use the same UI. You can, of course, use kubectl and the same API. So it makes it simple, and when you look into the system, you recognize everything. But we didn't stop there. We talked about the controller before. How can we make it even easier to code the behavior, the controller, so that you can do it in any language you want to write in? So we continue to invest in that. I am thrilled to invite Anthony Yeh from my team to join me for a demo. Anthony has been working on this for a couple of months, and not only is he a member of the Kubernetes engine team, he has also been leading the 1.9 release team, and he got an award for that. Over to you, Anthony. I'm just gonna make sure this is logged in first. All right, thank you, Chen. Again, my name is Anthony. Chen explained how Kubernetes is powered by these reconciliation loops that we call controllers. My goal here is to make writing a controller so simple that you can just go to our webpage, copy a sample, make some changes, and then run it immediately. Custom resource definitions achieved this level of simplicity for the storage step of adding an API: it's where you put the objects. But defining the behavior of those objects, that's kind of the most important part. And Metacontroller is now the missing piece that makes that step also as simple as you want it to be, if you're starting with a simple use case. So you can go to this GitHub repo if you wanna see the full details. I only have time to go through a quick example. So let's say that I wanna add a feature to StatefulSet. For those who aren't familiar, StatefulSet is an API that is useful for running apps that need these kinds of properties: unique identity, persistent storage, ordered deployment and scaling, and so on.
The first thing I'm gonna do is just install Metacontroller itself. Metacontroller itself is just a CRD and a custom controller, so it's really an extension mechanism built on other extension mechanisms. And that means that it lives totally outside of Kubernetes. It's not part of the release process, which I, as the 1.9 release lead, am very thankful for. And so you can install it on any cluster. The next thing I'm gonna do is install this example that I call CatSet. What this is is a re-implementation of StatefulSet using Metacontroller, in about a hundred lines of JavaScript. And because it's so small, I'm just gonna treat it like a script, almost like configuration, and just put it into a ConfigMap. Then I'll go and just launch a Node.js image that reads the ConfigMap and runs it. And lastly, I will create an actual CatSet. This is just the StatefulSet example from the docs, but I changed the type from StatefulSet to CatSet, so that we're now using this replacement. So let's take a quick look at what's in this controller code. As I said, it's JavaScript, but I only use JavaScript to drive home the point that this could be any language. You can use Python, Ruby, whatever you want. And I want to point out that there are no imports here, no dependencies. This is how you can make it truly any language: we don't have to write anything to allow you to use your language. And nevertheless, it's still only about a hundred lines. It was 101 lines without the copyright header; unfortunately, we had to add that. So contrast this with the original PR that added StatefulSet. It was called PetSet at the time. It was 24 files, almost 3,000 lines. The author commented that it took longer than he expected. And of course, it only builds if you have the rest of the entire Kubernetes codebase as your dependency. So let's just go and see that this is behaving like StatefulSet.
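The shape of a controller like CatSet can be sketched as a pure function: Metacontroller calls a webhook with the observed parent object and its children, and the hook returns the desired children; Metacontroller does all the diffing and applying. This is an illustrative sketch, not the actual CatSet source; the field names follow the general request/response convention, and the object details are assumptions:

```javascript
// Sketch of a Metacontroller-style sync hook for a CatSet-like resource.
// Input: { parent, children } observed in the cluster.
// Output: { status, children } describing the desired state.
function sync(request) {
  const parent = request.parent; // the CatSet object the user created
  const replicas = parent.spec.replicas || 1;
  const desiredPods = [];
  for (let i = 0; i < replicas; i++) {
    // Stable, ordinal identity: <name>-0, <name>-1, ... like StatefulSet.
    desiredPods.push({
      apiVersion: 'v1',
      kind: 'Pod',
      metadata: {
        name: `${parent.metadata.name}-${i}`,
        labels: Object.assign({}, parent.spec.template.metadata.labels),
      },
      spec: parent.spec.template.spec,
    });
  }
  return { status: { replicas: desiredPods.length }, children: desiredPods };
}
```

In the demo, a function like this is served over plain HTTP by a stock Node.js image, which is why the whole controller can stay at about a hundred lines with no dependencies: the reconciliation machinery lives in Metacontroller, and the hook only declares what should exist.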
All right, so we have this nginx backend, and we have these stable-identity pods: zero, one, two. Each one also has its own volume, so we have the zero, one, two volumes for that StatefulSet. But if we look at the Service, we've only created one Service for this whole thing, which selects all three pods. So the feature I wanna add is: in addition to a volume for every pod, I wanna have a Service for every pod. You might wanna do this, for example, if you wanna make sure that as a pod moves from node to node, it keeps a consistent cluster IP. Or you might wanna do this if you wanna give out a public IP and put a load balancer in front of each individual pod in your StatefulSet. Let's see what those changes look like. So first, up here, all we do is tell Metacontroller that in addition to pods and volumes, we are now also managing Services. That's just adding this one line. Then in our CatSet spec, we say this is our Service template, the thing we're gonna fill in, just like a pod template. Then we have a little helper function in the controller itself that just fills in the fields of the Service template. And finally, of course, we just have a for loop that creates these Services. So this is all stuff that you could probably put together by making a copy and adding this in. And then to deploy this change, all we really do is go back through the same process that we used to deploy it in the first place, except this time I'm gonna rename the ConfigMap so we can keep the old one around. Then I need to update the reference, and then I'll just apply those changes. So what we should see is, if we look at the controller itself, we've now posted a new revision. So here you see we have two revisions. It's now in the process of rolling out that new controller code that we posted. And because this is a regular Deployment, that means you can also roll back to the old configuration.
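The helper-plus-loop change just described might look something like this. It is a hedged sketch of the service-per-pod feature, not the demo's exact code; the label key and the `serviceTemplate` field name are illustrative assumptions:

```javascript
// Sketch of the per-pod Service helper: given the parent CatSet and one pod
// name, fill in a copy of the Service template so it selects only that pod.
function serviceForPod(parent, podName) {
  // Deep-copy the user-provided template so each pod gets its own Service.
  const service = JSON.parse(JSON.stringify(parent.spec.serviceTemplate));
  service.apiVersion = 'v1';
  service.kind = 'Service';
  service.metadata = service.metadata || {};
  service.metadata.name = podName;
  // Select exactly one pod via a pod-name label the controller stamps on it
  // (key name is hypothetical).
  service.spec.selector = { 'catset.example.com/pod-name': podName };
  return service;
}

// The controller's only other change is a for loop over the desired pods:
//   for (const pod of desiredPods) {
//     children.push(serviceForPod(parent, pod.metadata.name));
//   }
```

Each generated Service gets its own stable cluster IP, which is exactly the property the demo wanted: the IP stays consistent even as its one selected pod moves between nodes.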
So now let's see if we can observe our new feature. All right, so now in addition to the one Service that selects all the pods, we also have a Service per pod. You see it's only selecting one of the pods, and it's doing that because we added a pod-name label to every one of the pods, so that we can identify each one individually. So that's all I had time to go over, but I wanna show that CatSet is just one example that I made in an afternoon. This is a list of some other examples that are in the repo, but the point is that you can have any automation that you want. And most importantly, this proves that you no longer have to go through Kubernetes core and into a release in order to get this kind of automation. The next StatefulSet-type feature is not gonna come from the Kubernetes maintainers. It's gonna come from some user who just says, "I need this," goes and does it, and then shares it. And I think this pattern is gonna be very powerful. So I look forward to seeing what all of you come up with. Thank you. The first time I saw this demo, I thought it was mind-blowing. The idea that you can do something like StatefulSet, and instead of investing so many quarters of engineering effort into it, just build it in a hundred lines of code, can be a game changer. It can be a game changer for everybody, in how quickly we can innovate. But the question is, can you really rely on that? I started by talking about the maturity of this project. Kelsey said it's boring. I think one of the strongest single signals of the maturity of this community and project is the certification program. We just announced it last month, and there are already over 30 logos on this slide, showing that the community and the vendors are committed to making sure that those APIs are available everywhere, and we are committed to supporting them. So what's next? Extensibility is about empowering. Anyone can write an app, a feature, that will change the world. What is missing in the cloud-native space?
In the next couple of days, my colleagues will demonstrate some of the things that we've been working on. All of them are leveraging these extensions and the power of Kubernetes. Google, together with IBM, for example, is building Istio, an open platform to connect, manage, and secure microservices. It is using CRDs. It is using webhooks. Actually, it was one of the incentives to bring webhooks to beta in 1.9. David Aronchick and Vishnu will be presenting today how they bring machine learning to Kubernetes. And tomorrow there is a great demo by Eric Brewer and Aparna Sinha showing how they are using the Open Service Broker and consuming services in a hybrid setting. All of them are using those extensions. We are creating a general-purpose platform for developers, for running any application, easy to use. Today I covered one area that I think is an important building block and a milestone that we've been investing in towards building that platform. We will continue to apply everything we have learned from our Google experience of managing microservices and services at Google scale to Kubernetes. We are working towards that future platform. We will continue investing in extensibility, in an application definition format, and of course in extending the conformance tests. But we are not only building the platform in the core of Kubernetes; our plan is to continue making the Kubernetes ecosystem richer, and to make more of the pieces available in the developer experience within Google available to everybody as open source in the Kubernetes ecosystem. These are exciting times. Remember, we all possess this Kubernetes secret superpower. I hope you take advantage of that. Thank you, and enjoy the rest of the day.