Thanks for staying until Friday through all the keynotes. Yeah, a lot of people have talked about how much the project has changed, how much the community has grown. Sorry, but I'm going to talk about that again. It is just incredible, for the people who have been on the project for a long time, how much has changed since the beginning, when it was just a handful of engineers from Google and from Red Hat working together. I first met Clayton at the first DockerCon, when Kubernetes was open sourced and released to the world. And immediately, we had a conversation at the OpenShift table. It was just clear that we shared a lot of the same common vision and operational experiences. And we're really lucky to have had them as a partner from very early on. And I met Kelsey at the first OSCON where we launched Kubernetes 1.0. And again, it was just clear that he really got it from the very beginning. Kelsey mentioned my experience on Borg. I've been doing this for a while. I've been at Google more than 10 years, where I'm co-lead of Google Kubernetes Engine. And I have to say, I've been at Google a long time, and there have been a lot of great projects. Borg was a great project. But Kubernetes has definitely been the most fun out of all the projects I've done at Google. Now, I co-lead the Architecture special interest group, which is trying to document and communicate the design of Kubernetes to all the contributors. And I'm a member of the Kubernetes Steering Committee. I'm also on the Technical Oversight Committee of the CNCF. And actually, we're looking at a number of the projects that Clayton mentioned that would complement Kubernetes, like Open Policy Agent, for example. And I'm going to be talking to you a little bit about a simple topic here: what is Kubernetes? This is actually a page that, I think, someone else showed in their talk. I wrote this a while ago. It's due for an update. I'll try to do that in January.
But I'm going to talk to you about different ways Kubernetes has been described. It seems like it's obvious, but there are different features and properties of Kubernetes that are important to different use cases. And that leads to different perceptions of how people think about Kubernetes, depending on their use case. There are other, non-technical ways you can think about Kubernetes, too. Sarah talked about the social-experiment aspect of the Kubernetes project. But I'm going to focus on the more technical ones. I'm also going to mention a few cases where lessons we learned from Borg motivated different design decisions in Kubernetes. And hopefully, this will give you some idea about why Kubernetes is the way it is and where it's headed in the future. I actually agree with most of Clayton's talk about the things that are going to be super important in 2018: bringing all the cloud-native technologies together in a seamless way. So these are 10; 10 was a nice round number, so I could just pick 10 ways of thinking about Kubernetes that are relevant to its design. A recurring theme is that Kubernetes can do a lot, but it was always intended to be a platform that could be built upon and extended, and even a toolkit that can be used by other systems, such as service meshes and machine learning platforms. So the first way most people think about Kubernetes is as a container platform. That's pretty straightforward; that's more or less how it started. The Kubernetes node agent, the kubelet, executes and manages groups of containers. This functionality is exposed through the Kubernetes API server with the Pod and Node APIs. By the way, there's a lot of detail on these slides. This is kind of a SIG-style presentation, just to keep it real. I'm going to upload these slides after the talk and post a link on Twitter. So you don't have to read the details; it is kind of an eye chart.
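To make the pod and node model concrete, here is a minimal, hypothetical Python sketch. This is not real Kubernetes code or the actual API schema; the container image names are made up. It just illustrates the two bedrock objects: a pod is a group of containers managed as a unit, and a node is a machine that pods get bound to.

```python
# Hypothetical, highly simplified view of the two bedrock API objects.
# A pod groups containers that are scheduled and managed together.
pod = {
    "kind": "Pod",
    "metadata": {"name": "web-1"},
    "spec": {"containers": [
        {"name": "app", "image": "nginx:1.13"},
        {"name": "sidecar", "image": "log-shipper:0.1"},  # made-up image
    ]},
}

# A node runs pods; here it just tracks which pods are bound to it.
node = {"kind": "Node", "metadata": {"name": "node-a"}, "pods": []}

def bind(pod, node):
    """What scheduling amounts to at the API level: binding a pod to a node."""
    node["pods"].append(pod["metadata"]["name"])

bind(pod, node)
print(node["pods"])  # ['web-1']
```

The kubelet on each node then takes the pods bound to that node and actually runs their containers.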
But the container execution functionality is at the lowest level of the Kubernetes domain model. It's the bedrock of Kubernetes. I call that layer the nucleus. And you can see that I left a lot of space for other APIs. We knew from our experience with Borg that users would need a platform that could do a lot more than just run containers. In Borg, most of the other functions that users need are actually implemented as completely independent systems with very different APIs and configuration mechanisms, and then all those things need to be glued together. In Kubernetes, we wanted to provide a platform that could be expanded to meet the needs of users. So as I introduce additional functionality, I'll show how it plugs into the Kubernetes components and also show where it falls into this logical, architectural layer cake. This layering is going to be important to contributors, and I hope we still have a number of contributors here, in order to reinforce the structure of the system as it continues to grow and evolve. I always say we have about 10 years of work left to do on the system, so having a vision of how it should be organized is going to be really important as the community grows. It's also important to ecosystem developers, to make sure that we don't introduce accidental coupling and that we preserve the flexibility ecosystem developers need to use the pieces that they need, à la carte. And it's going to be important to users as well, to make it clear what parts of Kubernetes you can depend on everywhere as the system becomes more and more customizable. So the other part of the bedrock of Kubernetes, which is really one of the things that distinguishes it from most other systems, is the control plane. At the center of the control plane is the API server, which implements the common functionality for all the system's APIs. Users interact with Kubernetes only through the API server, either through kubectl, pronounced "kube control", and that's the canonical pronunciation.
By the way, I came up with the name, so I get to say. Or through a UI or some other API client. All the components also are driven by and interact through the API. The controllers don't access the state store, etcd, directly, nor do they use private APIs, a message bus, or any other kind of communication channel. Everything goes through the API server. These controllers continuously strive to make the observed state match the desired state, and they report their status back to the API server asynchronously. All the state, desired and observed, is made visible through the API, both to users and to other controllers. Borg used a different model. Borg had a state machine that was hard to change and extend. New states could not be added, and even introducing new reasons why objects could remain in an existing state broke client assumptions. So we really wanted a design that was resilient and composable, to make it easy to add new automated processes, whether implemented as part of the system or even by users. This control plane infrastructure is also part of the nucleus that the rest of Kubernetes is built on. Way number three: Kubernetes can be thought of as a configuration distribution system. It shares some common features with key-value stores like ZooKeeper and etcd, the store where Kubernetes keeps its state. It's also been compared to configuration management systems like Ansible, Chef, and Puppet, because the combination of state distribution and the ability to execute programs across multiple nodes is at the core of many automation systems. About 30% of Borg configuration is application configuration, even though other systems exist for distributing application configuration. That's because many users want to use the same configuration mechanism and tools to configure their applications as they do to deploy and upgrade them.
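The controller pattern described above, where all desired and observed state lives behind the API server and controllers asynchronously drive the world toward the desired state, can be sketched in a few lines. This is a hypothetical toy, not actual Kubernetes code; `FakeAPIServer` and its fields are made up to illustrate the observe/act/report loop.

```python
# Hypothetical sketch of the Kubernetes controller pattern: all state flows
# through a central API server; controllers never talk to each other directly.

class FakeAPIServer:
    """Stands in for the real API server: holds desired and observed state."""
    def __init__(self):
        self.desired = {}   # name -> desired spec
        self.observed = {}  # name -> last reported status

    def report_status(self, name, status):
        self.observed[name] = status

def reconcile(api, actuate):
    """One pass of a controller: make observed state match desired state."""
    for name, spec in api.desired.items():
        if api.observed.get(name) != spec:
            actuate(name, spec)            # act on the world...
            api.report_status(name, spec)  # ...then report back asynchronously

api = FakeAPIServer()
api.desired["web"] = {"replicas": 3}
reconcile(api, actuate=lambda name, spec: print(f"actuating {name}: {spec}"))
print(api.observed["web"])  # {'replicas': 3}
```

Because controllers only level-trigger off state in the API server, new controllers (including user-written ones) can be added without changing any existing component, which is exactly the composability the talk contrasts with Borg's rigid state machine.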
The Kubernetes ConfigMap and Secret APIs deliver configuration information to containers as environment variables or as files, facilitating the configuration of 12-factor and non-12-factor applications, and even Kubernetes components: we have something called ComponentConfig, where we're consuming ConfigMaps to dynamically configure Kubernetes itself. And with the new custom resource definition and API aggregation features of Kubernetes, other automation systems can be dynamically configured using the same patterns, libraries, and tools as built-in Kubernetes APIs. One example is the Prometheus Operator, which enables monitoring targets to be specified similarly to how service endpoints are selected. These mechanisms further expand the set of API and execution primitives in the nucleus layer. Way number four: container infrastructure as a service is a term that's been used to distinguish Kubernetes from simpler container platforms. Distributed applications are composed of multiple containers that need to be discovered and load balanced across. This is why Kubernetes, from the beginning, has supported primitives similar to those of infrastructure-as-a-service platforms, such as instance groups, load balancers, and, shortly after the initial version, storage volumes, but in terms of container- and service-oriented abstractions instead of virtual hardware. These flexible primitives can be used to craft application topologies as simple or as complex as you like. For example, I'm sure you've seen a number of presentations showing different rollout schemes with canary deployments, like in the GitHub presentation. These primitives are exposed as APIs in the API server, like the other APIs, and implemented as controllers and additional components, such as the scheduler, kube-proxy, kube-dns, and ingress controllers. Most of these components even run on Kubernetes themselves. And all of them are pluggable and can be swapped out, as long as the controllers implement the API.
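Under the hood, the grouping primitives just mentioned (service endpoints, instance groups of pods) find their members by matching label selectors against pod labels, the same selection mechanism the Prometheus Operator example borrows. The talk doesn't spell that detail out, so here is a rough, hypothetical Python sketch of equality-based selection, not the real implementation:

```python
# Hypothetical sketch of equality-based label selection, the mechanism
# Services, ReplicaSets, and similar primitives use to find their pods.
def matches(selector, labels):
    """True if every key/value in the selector appears in the pod's labels."""
    return all(labels.get(k) == v for k, v in selector.items())

pods = [
    {"name": "web-1", "labels": {"app": "web", "track": "stable"}},
    {"name": "web-2", "labels": {"app": "web", "track": "canary"}},
    {"name": "db-1",  "labels": {"app": "db"}},
]

# e.g. a Service selecting its endpoints across stable and canary pods
selector = {"app": "web"}
endpoints = [p["name"] for p in pods if matches(selector, p["labels"])]
print(endpoints)  # ['web-1', 'web-2']
```

Canary rollout schemes like the ones mentioned above work by layering selectors: the service selects all `app: web` pods, while separate controllers manage the `stable` and `canary` subsets independently.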
Clayton just mentioned kube-router, for example, which is a replacement for kube-proxy. CoreDNS has been used to replace kube-dns. There are many ingress controllers that are based on different load balancers. So this pluggability gives a lot of flexibility for operators to adapt Kubernetes to their environment. ReplicationController, ReplicaSet, Service, and Ingress are at a higher logical layer than the nucleus layer, and they're built upon those APIs. For example, a ReplicaSet just creates and deletes pods. Way number five: I have described Kubernetes as a platform designed to automate deployment, scaling, and management of containerized applications. That's what it says on that "What is Kubernetes?" page. That's not to say it's a general-purpose event-driven automation framework or a generic policy engine, but its APIs do facilitate building higher-level automation, such as autoscaling, and Kubernetes supports scoped enforcement of configurable policies, such as resource quota. These mechanisms are configured using Kubernetes APIs at the highest logical layer of the Kubernetes architectural layer cake. Similar mechanisms could have been built entirely outside of Kubernetes, and we're working to make that easier with mechanisms like admission control extension. But we built in these features because we felt they would address a lot of common operational concerns of applications running on Kubernetes. Number six: services as a platform. Easy consumption of a wide variety of services is an important way cloud platforms improve developer productivity. The Kubernetes service catalog APIs make it easier to consistently consume a wide variety of services through the Open Service Broker API. In addition to simplifying service discovery, authentication, and configuration, binding to services through the service catalog makes it easier to use different implementations in different environments.
For example, an application could use MySQL during local development and a hosted SQL database service on a public cloud. As with the other higher-level automation features, the service catalog APIs are at the highest logical layer. Because Kubernetes now runs essentially everywhere, on pretty much every public and private cloud, users can treat Kubernetes as a portable cloud abstraction. Portability was always a goal, and we're making Kubernetes even more portable with new plug-in mechanisms such as the Container Runtime Interface, Windows support, and the kubeadm portable bootstrapping tool. We introduced a test-based certification program so users can be confident that the distribution of Kubernetes they use behaves as they expect. And we're making applications more portable, too, with mechanisms like the service catalog, and also by making more kinds of workloads run better on Kubernetes, including stateful applications and data-processing applications like Spark. Users tell us that they value the control, observability, and consistency that come with managing all their applications on Kubernetes. We found that Borg's one-size-fits-all Job API made it harder to specialize the behavior of the system for different types of workloads, so Kubernetes has different APIs for different classes of workloads. The APIs used to run those workloads, Deployment, StatefulSet, and DaemonSet, achieved stable v1 status as of the 1.9 release. So I just want to give a shout-out to everybody who helped make that happen. The base certification profile will cover the stable APIs at the lower two layers of this stack, and other profiles will cover other APIs and features. The APIs at the bottom two layers of this stack are important because those are the APIs used to deploy, configure, and monitor your applications. So obviously Kubernetes has grown a lot, from just four APIs to more than 50. It's also grown from one GitHub repository to more than 90.
These new subprojects span many areas, such as our documentation, the dashboard UI, Minikube, container runtimes, and lots more. Three quarters of Kubernetes development is now in these new repositories, and we're working on moving more code out of the original repositories to further increase velocity. All of these projects provide a lot of functionality, but Kubernetes doesn't, and shouldn't, do everything that you would want it to do. Some of those needs can be satisfied by building on top of Kubernetes as a platform. To facilitate the integration of other tools and services, we developed client libraries for seven languages in 2017. We also added several new mechanisms to enable extension of Kubernetes, such as admission control webhooks, API aggregation, and custom resource definitions. But Kubernetes is open source, so we expect it to be used in parts as well as in whole. You may have seen Kelsey's standalone kubelet tutorial, which enables you to use the kubelet without the rest of Kubernetes if you just want to manage containers on individual VMs. And SDKs for Kubernetes-style APIs are under development, such as the API server builder mentioned in an earlier talk. Already, dozens of projects, such as the Kubeless serverless platform and the Rook storage orchestrator, express their own APIs using Kubernetes API extensions, and that enables you to manage all your applications using the same tools. So with Kubernetes, you aren't limited by what the project itself can deliver. You can also take advantage of the ecosystem built around it. Hundreds of products and projects have official Kubernetes support or have been integrated by the open source community, and more than 10,000 GitHub repositories mention Kubernetes. So Kubernetes can be hard to describe, because it's all these things and more.
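The reason custom resource definitions let projects like Kubeless and Rook plug in so cleanly is that a new API type is just more declarative data handled by the same generic machinery as built-in kinds. Here is a toy Python sketch of that idea; the `PrometheusTarget` kind and its fields are invented for illustration, and none of this is real Kubernetes code:

```python
# Hypothetical sketch: a custom resource is just another declarative object,
# dispatched by the same generic machinery as built-in kinds.
registry = {}  # kind -> reconcile function, like controllers watching the API

def register(kind, reconcile):
    registry[kind] = reconcile

def handle(obj):
    """Dispatch any object, built-in or custom, to its controller."""
    return registry[obj["kind"]](obj["spec"])

# A built-in kind and a made-up custom kind share the same code path:
register("ReplicaSet", lambda spec: f"ensure {spec['replicas']} pods")
register("PrometheusTarget", lambda spec: f"scrape {spec['endpoint']}")  # invented kind

print(handle({"kind": "ReplicaSet", "spec": {"replicas": 3}}))          # ensure 3 pods
print(handle({"kind": "PrometheusTarget", "spec": {"endpoint": "web:9090"}}))  # scrape web:9090
```

Because custom kinds ride on the same API server, they inherit storage, watch, authentication, and tooling like kubectl for free, which is why extensions feel like built-in APIs.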
How I think about Kubernetes is that it is a portable, extensible, open-source platform for managing both containerized applications and services, one that facilitates both declarative configuration and automation and has a large, rapidly growing ecosystem. But one great thing about the conference is that I have the opportunity to talk to other people and find out what you're interested in using Kubernetes for, or what you're already using Kubernetes for. So reach out to me; I'm bgrant0607 on Twitter and GitHub. Let me know how you think about Kubernetes. So thanks.