All right, good morning. My name is Aparna Sinha, and I lead product management for the Kubernetes open source project at Google, as well as the hosted version of that in Google Cloud. And this is my Twitter handle, @apbhatnagar, in case you want to follow me or reach out to me there. So Diane asked me to talk about Kubernetes 1.8 and beyond: what is happening in 1.9, and what's happening next year? I wanted to start with a question that, actually, my husband asked me just the other day. He said, you lead Kubernetes for Google. It sounds like it's kind of popular. I saw it on Hacker News. Why is it popular? Do you understand why users like it and why they choose Kubernetes? It sounds like a complicated name and maybe a complicated technology. So I started thinking about that, and I wrote down a few reasons that I've heard in other talks. Well, it's open source, so what you see is what you get. It's got a great community. Red Hat is part of it, and there are lots of other companies, so you know that it's a project that's going to go on. There are very frequent releases. This can be a pro and a con, but we have a release every three months, and that means that you're getting new features. Also, the technology makes it efficient to use your underlying hardware, so some people use it for that reason. And then it runs anywhere. And then lastly, it's fast deployment. So these are all reasons that I've heard and you've heard, and you've probably experienced some of these. But I think that there's a real reason here, which is behind all of this and more important than the others. So what is that reason? I think to understand that, and we had a great talk by TELUS that was a good example, let's take a look at what an Enterprise IT environment looks like.
And the more I talk to users, the more I understand this: particularly in large Enterprise IT environments, there are a lot of different types of applications, a lot of different versions of operating systems, and not a lot of upgrades. It's a fairly complicated environment. And I've worked in that environment for the early part of my career, and it's not easy. But what I've seen is that every Enterprise IT organization is interested in the latest. They want to be able to do things quickly. They want to latch on to the most compelling technology. And it's important for their business, so they do that. But often it takes two to three years. I've heard customers tell me it takes two to three years to introduce that new technology, and at the end of that, it doesn't actually give you the benefits that you sought. So that's the status quo that I've seen, and I think we heard that a little bit from TELUS as well. So I think the reason that people choose Kubernetes, and my husband actually kind of educated me on this, is because of this last thing, the fast deployments. In an Enterprise environment that is pretty complicated, you can get in there, and you can do kubectl apply -f, and everything deploys. You can deploy many, many times a day. And the TELUS folks told us that they went from deploying once a week to deploying 400 times a day. That's a huge change. And I've seen many customers tell me, we did that demo, my CIO got up, and he showed that to the board. And for the first time, we had something that worked really quickly and gave us the benefits. So I think when you can do that, you look like a superhero. That's why I have that superhero costume here. So that's great. That's wonderful. The next thing my husband asked me: OK, so you can deploy faster. Great. So one day, that's hopefully going to become ubiquitous, and then you can retire. Hopefully, there'll be other things on top of Kubernetes. And I do hope that it becomes ubiquitous and disappears.
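To make that `kubectl apply -f` moment concrete, here is a minimal sketch of the kind of manifest being applied. It is not from the talk; the names and image are hypothetical, and `apps/v1beta2` is the Workloads API level that Kubernetes 1.8 shipped with:

```yaml
# nginx-deployment.yaml -- illustrative example only
apiVersion: apps/v1beta2        # Workloads API group as of Kubernetes 1.8
kind: Deployment
metadata:
  name: hello-web               # hypothetical application name
spec:
  replicas: 3                   # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.13       # any container image
# Deploy (and redeploy, as many times a day as you like) with:
#   kubectl apply -f nginx-deployment.yaml
```

Because `kubectl apply` is declarative, rerunning the same command after editing the file rolls out only the changes, which is what makes many-deploys-a-day practical.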
That's my hope for the future of Kubernetes. But the reality is that Kubernetes is a small section of the enterprise today. The present reality is a very hybrid, mixed reality. There is a lot of traditional enterprise IT. There are a lot of virtualized, non-containerized systems. And then there's a small portion that's fully managed and optimized for containers that's running Kubernetes. And that exists because of that great deployment benefit. But in fact, our system has to live in this reality. So back to my topic, which Diane asked me to talk about: 1.8, 1.9, what are we doing? This has to be based on this reality, and I think it is. Our community is looking at what matters to customers, and how can we build a project that actually caters to those needs? So I think a lot of customers want to move to the cloud, whether it's a private cloud or it's a public cloud. They want to move as and when they can. There are a lot of workloads on premise that want to have that developer productivity, that want to have that benefit on premise. And OpenShift provides that. They also want a consistent environment. They want to have the same environment in the cloud as they do on premise. Why? Because then they don't have to train their teams twice or thrice for multiple environments, both their operations teams and their developer teams. And Kubernetes is a good base for that. But then at the end of the day, every enterprise needs something that's secure, that's private, and auditable. So these are actually the principles that also drive the Kubernetes roadmap as it matures as a product. How can we make sure that users can use it in any environment, so that they can move where they want to at their own pace? How can we make sure that their services run everywhere? There's a huge effort there in the community around Kubernetes conformance, and I'll talk a little bit about that. And then how can we make sure that it's private and it's secure and it's auditable?
And OpenShift is quite ahead in those areas. So the themes for the last two releases of upstream Kubernetes, 1.8 and 1.9, which is coming out in a week, were stability and conformance, security, and extensibility. And as you can see, these play to those needs. We're really trying hard as a community to make sure that Kubernetes fits in with the environment and can enable its users to expand that footprint. So it can enable TELUS to go from 200 applications to all of their applications. So I'm gonna go through some of the features. In 1.8, really along the lines of the theme of stability, we really matured the security framework. So role-based access control moved to GA, or stable, which means that it has a long-term API stability guarantee, not a lot of changes, a lot of maturity that happened there, and it becomes on by default. Also network policy. Clayton, and Matt or Mike, I think, covered a lot of this in their talk. Network policy is a great building block for security at the pod level, at the L3/L4 level, to be able to say which pods can talk to which pods. And then as we look forward, combining that with Istio, you also get the east-west security and the ability to set policies at the L7 level. And that really truly gets to a compelling enterprise product that has end-to-end security. So that's one, maturing the security, and I think that's gonna continue through at least the first half of 2018. The second piece under stability is the graduation of the Workloads API. So the Workloads API is actually broken into two: there's the Apps API and the Batch API. Apps consists of Deployments, which has been one of the longest-standing objects in Kubernetes. It's finally graduating to stable as part of the overall Apps Workloads API. So Deployments, DaemonSets, ReplicaSets, and StatefulSets. All four of those are graduating to stable in a couple of weeks, which is a huge accomplishment.
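As an illustration of that pod-level, L3/L4 building block, here is a hypothetical NetworkPolicy (the labels, namespace, and port are made up for the example): it says that only pods labeled `app=frontend` may reach pods labeled `app=backend`, and only on TCP 8080.

```yaml
# Illustrative NetworkPolicy -- names are hypothetical, not from the talk
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: prod
spec:
  podSelector:              # the pods this policy protects
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:          # which pods may talk to them
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080            # and on which port
```

Note that the policy is only enforced when the cluster runs a network plugin that implements it, such as Calico.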
And what the community has done there is to really rationalize those, make sure that they're consistent with each other, and they'll be available by default in 1.9. And I think what's really cool is that, again, Deployments and ReplicaSets are the oldest and very broadly used. People are always asking me, when are you moving that to stable? Why is that not stable already? But StatefulSets and DaemonSets are newer, and that the community has brought those forward together I think really shows our commitment to allowing you to run all of these workloads side by side, together in one environment. And that's the whole consistency and resource efficiency benefit. Batch workloads have also moved forward, so CronJobs finally moved to beta, and we expect that to move to stable over the course of next year. So that's stability. The second piece here is extensibility. So CRDs replace TPRs. We have a lot of three-letter acronyms; I'm not even sure that I can keep them straight. CRDs are custom resource definitions, and those have now moved to beta. That's a great way of extending Kubernetes, and it replaces third-party resources (TPRs), which was the previous mechanism. But the real thrust behind this is to allow you to extend Kubernetes, to add a custom API or a custom resource or a custom controller to Kubernetes. And I think this is particularly important in consuming the rest of the Kubernetes ecosystem. So for example, if you want to run service catalog or you want to run Istio on top of Kubernetes, along with Kubernetes, you use CRDs or some of these extension mechanisms to run those pieces on top of Kubernetes, and these are ways to extend your Kubernetes environment to non-Kubernetes environments. So those are the three, I would say, major things in 1.8: maturing security, maturing applications and workloads, and extensibility. There are also experimental features, experimental features being alpha features.
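To show what a CRD looks like in practice, here is a sketch using the stock "CronTab" example; the group and kind names are illustrative, and `apiextensions.k8s.io/v1beta1` is the beta API version referred to above:

```yaml
# Illustrative CustomResourceDefinition (beta as of Kubernetes 1.8)
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # must be <plural>.<group>
spec:
  group: stable.example.com           # hypothetical API group
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
# Once created, `kubectl get crontabs` works like any built-in resource,
# and a custom controller can watch and act on CronTab objects.
```

This is the mechanism that lets projects like service catalog define their own object types without any change to the Kubernetes core.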
So particularly in scheduling, priority and preemption moved to alpha, and I think that will continue to graduate. This really gives you more sophisticated scheduling capabilities, and we've been working on that over the course of this year. I think where that really takes us is towards multi-tenancy, so that you will be able to schedule multiple different types of workloads and types of users inside the same cluster, hopefully a larger cluster, and get a lot more resource efficiency from that cluster. And then storage: the Storage SIG has done a lot of maturation. I think Clayton talked about local storage, which is important, of course, for stateful workloads, but also the definition of CSI, the Container Storage Interface standard, which allows many enterprise storage vendors to plug in to Kubernetes and develop those plugins out of tree, so that they don't have to be part of the Kubernetes core, giving it greater extensibility and stability. And then lastly, really in the area of workloads, there's a lot of expanding work in Kubernetes on supporting big data through Spark, and also on supporting GPUs and ML. So I think that's gonna be a huge theme in the coming year. That's actually just an overview. I have a little bit more detail, but I won't go through all of it. I think everyone is aware of what role-based access controls are, and it's a GA feature. It allows admins to really dynamically define roles and enforce them at a very granular level on resources, on pods, on specific namespaces within Kubernetes. It's a very rich security feature. And then network policy. I talked about network policy already; the combination of network policy and Istio really provides that end-to-end security. And a lot of these network policy implementations, like Calico, can be used not only for Kubernetes, but also for the rest of your infrastructure.
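The granular, namespaced roles described above can be sketched with a pair of hypothetical RBAC objects (the user, namespace, and role names are made up): a Role granting read-only access to pods in one namespace, and a RoleBinding attaching it to a user.

```yaml
# Illustrative RBAC objects (rbac.authorization.k8s.io/v1, GA in 1.8):
# grant user "jane" read-only access to pods in the "dev" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev            # the role is scoped to this namespace only
rules:
- apiGroups: [""]           # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: jane                # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

A ClusterRole and ClusterRoleBinding work the same way when the permission needs to span all namespaces.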
So again, talking about hybrid, you wanna set network policy on services that may be running on bare metal. Istio as well can be used beyond Kubernetes, to secure east-west communication for services that are not running in Kubernetes. And the same thing with the Open Service Broker: again, that allows you to bring in services that are potentially legacy services or services that are running in VMs. So you can see how Kubernetes is adopting these open source pieces that allow you to have that hybrid environment. This is the Workloads API; I talked about this. So this has kind of been the road to GA. In 1.8, everything moved to v1beta2, and then in 1.9, we're moving to stable. And I think the key piece here is, Kubernetes is an open source project. It really moves at a frenetic pace, and a lot of times users don't have visibility into what's coming in the future. We're trying to provide a sort of format for how we graduate APIs, and the Workloads API is an example of that. So we move the objects into a group, we go to beta 1, and then we go to beta 2, and then we go to stable, which is the first version, v1. And then it has a long-term deprecation policy. So we're trying, again, to provide a framework that we intend to use for other APIs as well. So this is a good example of how we're bringing stability. Extensibility: this is an example of how extensions of the API server can be used, for example, for service catalog. So you have an excellent API in the form of the Kubernetes API. You can extend that with your own custom APIs, or bring in things like the service catalog, which is an API server in itself, and it provides you the ability to essentially bring in a whole other API, in this case to create services and to bind to services. And Clayton talked about this as well. This is very powerful.
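The create-and-bind flow mentioned above can be sketched with two hypothetical service catalog objects; the class and plan names are invented stand-ins for whatever a broker actually offers:

```yaml
# Illustrative Service Catalog objects (servicecatalog.k8s.io/v1beta1):
# provision an external service through a broker, then bind to it so
# that its credentials land in a Secret that pods can consume.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: my-db
  namespace: dev
spec:
  clusterServiceClassExternalName: example-database   # hypothetical class
  clusterServicePlanExternalName: small               # hypothetical plan
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: my-db-binding
  namespace: dev
spec:
  instanceRef:
    name: my-db
  secretName: my-db-credentials   # broker-returned credentials go here
```

The service behind the broker can live anywhere, in VMs, on bare metal, or in a public cloud, which is exactly the hybrid story being described.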
You can bring in any kind of third-party API or your own APIs, and they look just like Kubernetes APIs, and they're accessible from kubectl, which is really fantastic. So I think when we think about all of the roadmap, the stability, the security, and the extensibility pieces, you start to see the blueprint of how you would run in this hybrid enterprise environment with Kubernetes. So you'd have Kubernetes for the cloud-native portion, where developers can really develop fast, and they can deploy 400-plus times a day. But ultimately, that hopefully becomes something that's invisible. You can use Istio to connect both the services that you have in your Kubernetes cluster and services that are running outside your Kubernetes cluster, and both visualize them and secure them. And then the Open Service Broker hopefully becomes that common catalog, where you can consume services that are developed on Kubernetes, outside Kubernetes, and potentially in public clouds. So the benefit here is really that, with all of these pieces, the team has essentially raised the level of abstraction and really decoupled deployment from development. It has also decoupled the management of traffic and security from the deployment, and then ultimately decoupled the people who are consuming the services from the people who are developing the services. So it essentially becomes like a SaaS-like infrastructure inside an IT environment. And this is very similar to how Google operates. And so we are very excited about seeing this happen in the rest of the world. I think it's quite differentiated from the way people have been thinking about hybrid cloud and enterprise IT in the past, because it's very much developer focused. It's services led and not infrastructure led. It incorporates legacy and modern on the same infrastructure. And then ultimately, it's open, so it can run anywhere and it's multi-cloud. So that's pretty much it.
I think, beyond that, as I look at 2018, we are gonna continue the focus on stability, graduating more of the APIs to stable. We're gonna focus heavily on conformance. The conformance program actually launched between 1.8 and 1.9, and OpenShift was part of that. There were 30-plus vendors that announced conformance. What this really means is that you can now run your applications on Kubernetes, and whether you go to Google or you go to some other cloud or you go to another vendor, those applications should run well across those different Kubernetes distributions. And that's extremely important to the mission and the purpose of Kubernetes, which is that you can run these services anywhere. I think there's a huge amount of work on continuing and enhancing security in Kubernetes, building multi-tenancy into Kubernetes at the pod level, at the node level, at the namespace level, and of course augmenting that with Istio. So I think 2018 will be a huge year for that. You'll see really enterprise readiness on the security side. And then extensibility, so that you can build on top and you can add customization. I think also what you'll start to see is a focus on applications, so that the underlying infrastructure can start to disappear a bit and there can be a much greater focus, even in the upstream code, on applications. Clayton talked a little bit about the definition of what an application is. That's a huge piece of what the community is going to be working on in 2018. So that's what I see beyond. Looking forward, I hope that in the future, once we've progressed further, everyone will be able to start writing code immediately, with no need to file a ticket, and they'll be able to reuse services out of the box and secure them easily. If there's a fault, they'll recover quickly, and they'll only pay for the resources they consumed, and the infrastructure will essentially take care of all of that.
So, very much in line with the vision that the TELUS folks described, the applications will hopefully write themselves. That's it, thank you.