Well, hello, and welcome, everybody, to a new technical talk series that I'm kicking off here on OpenShift Commons, one with more of a conversational tone, bringing together some of the thinkers, creators, and coders to discuss some of the interesting ideas and innovations happening in technology today. Not necessarily the things we normally talk about in OpenShift, but some of the underpinnings and other technologies that are coming out. Today I'm really pleased to have Aparna Sinha with us, who is going to be talking about Kubernetes 1.7, the release that just came out last week. If you're in our space, it would have been hard to miss all of the announcements. There was a lot that went into this one. A lot of Red Hatters worked on it, and a lot of people from across the entire cloud-native ecosystem worked on this release. I think I've listened to at least three or four updates on it, and each time I find out something else that's in 1.7 that I didn't realize got in, because there was a lot of alpha stuff in previous releases that is now beta, plus a few more alpha things that got squeezed into 1.7. Aparna is the group product manager for Kubernetes at Google, so she's one of the people most well-versed in what's gone in and what's coming in upcoming releases. So I'm going to let her introduce herself and talk for about half an hour, and then we'll have some Q&A. Go ahead.

Great. Thank you, Diane. Thank you for hosting me on your show. As you mentioned, this is a very exciting release, Kubernetes 1.7. It launched just before the July 4th holiday, on Thursday, June 29th. And it is quite a featureful release. If you look at the release notes, they are actually multiple pages long. So one of the things that we tried to do as the product management group within the Kubernetes community is to synthesize what user-facing functionality is the highlight of this release.
So we synthesized that, and in this discussion I'd also like to describe some of the value that users can get out of these features. Before I do that, I want to reiterate, as you said, that there's a broad community consisting of multiple companies that have contributed to this release, and in general contribute to the Kubernetes releases, Red Hat and Google being very prominent participants. There are many features in this release on which Google and Red Hat collaborated extensively, which I'm very proud of, and I think our users will find those features very useful and impressive. So with that, I'll get into 1.7, and I'll be scrolling down as I walk through this blog post. It's really a milestone release for Kubernetes, for the community, and for all of the commercial distributions of Kubernetes and all the tools built on top of it. The first main theme that we would like to highlight is extended security. This is particularly important because there is now much more adoption of Kubernetes in large enterprises, both on premises and on public cloud. What that means is that there is a lot more need for robust security, and in fact for secure multi-tenancy, to be supported fully in Kubernetes. We're very happy that users are using Kubernetes in this fashion and that we are able to build towards a strong roadmap for security and multi-tenancy, and 1.7 has several new features in that regard. The second theme is around workloads, in particular stateful workloads, which we've been working on as a community for quite some time. Red Hat and Google, I think, have been foremost in stateful workload technology and adding it to Kubernetes, and this release brings in support for upgrading stateful applications, stateful applications being things like databases and batch processing. And then the last big theme of the release is extensibility.
Again, I think the extensibility features are driven by demand from large enterprises adopting Kubernetes in production. The extensibility needs they have are related to bringing in custom business logic that they've written and want included in Kubernetes. It's also driven by additional third-party and internally developed APIs that users would like to manage in the same way they manage other APIs in Kubernetes. So the extensibility features really broaden the scope of what Kubernetes can deliver, and they make it much more usable and customizable by our power users in large enterprise environments. So those are the three themes for this release. Again, there are more than 30 features, but the major themes are security hardening, stateful application support, and extensibility. I'm going to go through each of them, and in the process I'll walk through the detailed features. Then I'll also cover some of the additional highlight features. Again, it's a very long list; I won't be able to get through everything, but there is a multi-page release notes document where you can find everything that's new. So first, speaking about the security enhancements, there are at least five that I would like to call out. The first, and perhaps most exciting, one is the network policy API. The network policy API was beta in the last release, so it's not new. However, it has been promoted to stable, and with that there are some changes to the network policy API that improve its performance. It's a much-awaited feature, and as a community we have been focused on stabilization, so moving this API to stable means that it is ready for production use in large enterprises. What network policy enables is essentially a basic foundation for multi-tenancy.
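As a concrete sketch of what that stable API looks like (the namespace and labels here are invented for illustration, not taken from the talk), a NetworkPolicy that only lets frontend pods reach backend pods on one port might look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: team-a
spec:
  podSelector:          # the pods this policy protects
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:      # only traffic from pods with this label is allowed
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```

Once a pod is selected by any policy, traffic that doesn't match an allow rule is dropped, which is the governance behavior described below.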
If you want to have multiple users and multiple tenants share the same cluster, you need some way for pods, or applications, to communicate with each other, and you may want to set policies on which pods can speak to which other pods. That is what network policy enables you to do. That's a simplification, but it's essentially the use case. We find that increasingly users are running multiple services for multiple different teams, all in the same cluster and sometimes across multiple clusters. So network policy allows you to control the network traffic and set up governance rules around which apps can talk to which other apps. The second feature here is the Node Authorizer. The Node Authorizer, together with an accompanying admission control plugin, essentially restricts the Kubelet's access to only the objects it should have access to. The Kubelet is the Kubernetes agent that runs on the node, the node usually being a VM or, in a bare metal configuration, just a machine. There's one Kubelet per node, and before the Node Authorizer, that agent had access to Kubernetes objects, such as secrets, belonging to other nodes. What we didn't want to happen is, should a node get compromised, for the Kubelet on that node to have access to other nodes and be able to compromise them as well. So with the Node Authorizer, the Kubelet's access is restricted to just the objects that are scheduled on its node. This is obviously very important for isolation and for limiting the blast radius of a node compromise. The third feature in this release, again related to the security theme, is encryption for secrets. This is one where Red Hat had a huge role in developing the feature, as of course did the security experts at Google. It was a very important feature from an overall security perspective for Kubernetes.
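For reference, the alpha encryption-at-rest feature is configured by pointing the API server at a config file (in 1.7 via the `--experimental-encryption-provider-config` flag; the flag and kind names were renamed as the feature matured, so treat this as a hedged sketch with a placeholder key, not the final form):

```yaml
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets            # encrypt Secret objects before they are written to etcd
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      - identity: {}       # fallback provider, reads data that is still unencrypted
```

Providers are tried in order, so listing `identity` last allows a rolling migration of existing plaintext secrets.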
Secrets were previously not encrypted in etcd; at rest, secrets were not encrypted. Now in Google Cloud, for example, that's less of an issue because the etcd data store is not actually accessible by users, since in Container Engine Google manages the masters and the store is backed up to Google Cloud Storage. But for users running the open source, unmanaged version of Kubernetes, the fact that secrets were not encrypted at rest could have posed a risk. So we're very excited that encryption at rest is now available as alpha for all users of the open source software at large, and over the next couple of releases it will move to beta and then stable. The next feature is Kubelet TLS bootstrapping. Kubelet TLS bootstrapping actually already existed; what is new is that it now supports certificate rotation. Client and server certificates are both rotated, and that's important because certificates can get stale, and if a certificate is never rotated, it can be abused long after the user holding it should have lost access. So certificate rotation is an important feature for maintaining the security of the cluster. And the last one that we chose to highlight is audit logs. Audit logs are, again, very important for large enterprises: they want an audit trail of exactly what happened in the Kubernetes cluster, what all of the actions were, and who took them. Audit logs are now stored by the API server, they're more extensible, and they support filtering, so you can look at specific events. They also support webhooks, so you can take the audit logs and surface them in a different UI or front end. And there's a richer data set available for system audit.
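The filtering mentioned above is driven by an audit policy file. In 1.7 this was still alpha and the exact API version differed, but the shape it eventually settled into looks roughly like this sketch (the rules are illustrative, not from the release notes):

```yaml
apiVersion: audit.k8s.io/v1   # went through v1alpha1/v1beta1 before stabilizing
kind: Policy
rules:
# Log access to secrets at metadata level only, so secret values never reach the log
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
# Record full request and response bodies for mutating calls
- level: RequestResponse
  verbs: ["create", "update", "patch", "delete"]
# Everything else: just record who did what
- level: Metadata
```

Rules are matched top to bottom, so the most specific rules go first.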
So net-net, I would boil down the security features to continued progression towards multi-tenancy: node isolation and pod-to-pod communication policy are foundational building blocks for multi-tenancy. In the future, in 1.8 and 1.9 and so on, you'll find pod identity will augment these features and provide even more granular multi-tenant capabilities. And the other pieces, encryption, certificate rotation, and audit logging, really help bring more defense in depth to the Kubernetes cluster. So that's a high-level overview of the security features in this release, security being a major theme for Kubernetes 1.7, driven very much by our enterprise users and their requirements. The second major theme, as I mentioned, is stateful workloads. Stateful workloads are things like databases. This is particularly important for Kubernetes because in the early days of containers, containers were supposedly only good for stateless applications, and many users did not have the functionality to use containers for stateful applications. The problem that creates is that you have, say, your web front end, which is stateless, running in Kubernetes, or running in containers, while your database and any other stateful part of your application has to run in a VM. So now, from an operations team perspective, you have to manage two different types of infrastructure for the same application. This is doable, but it's not ideal, and many users have wanted to use Kubernetes as the standard for all of their workloads. So we as a community started working on stateful application support a little over a year ago, actually more than a year ago. There are several foundational pieces that support stateful applications. One is, of course, storage management.
There's a whole host of storage features that were released over the course of the last year, including the big launch, around the Berlin release, of dynamic storage provisioning, which provisions storage for all applications but is particularly important for stateful ones. So we have a pretty robust set of storage management features in Kubernetes. But then how do you roll out a clustered database, say Elasticsearch, ZooKeeper, or CockroachDB, one of these more modern scale-out databases? How do you provision them in a Kubernetes cluster? That was a problem our community solved, I would say, in the middle of last year with the introduction of stateful sets. Just like there are daemon sets and replica sets, constructs in Kubernetes, there are also stateful sets. Stateful sets essentially allow you to provision a stateful workload, such as a scale-out database, on your Kubernetes cluster. That's all well and good; stateful sets went beta in November of last year, and many users have been adopting them. There are a couple of problems, though, and that's why this is a key release: it addresses those outstanding issues, two problems in particular. One is: now that I've rolled out my database, how do I update it? As you can imagine, it's very important to be able to update your database, and if things don't go well, to roll it back as well. So with this release, the team has introduced stateful set updates, and in fact the update process for stateful applications can be automated through the stateful set update feature. There is a range of different update strategies that users can choose from. One of the more popular strategies is rolling updates; there are also other strategies, such as update on delete. This is particularly useful for things like Kafka, ZooKeeper, etc.
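A hedged sketch of what that beta update support looks like in a manifest (the ZooKeeper name and image are illustrative; the API group was `apps/v1beta1` at the time and is `apps/v1` in later releases, which also requires a `spec.selector`):

```yaml
apiVersion: apps/v1beta1     # apps/v1 in later releases
kind: StatefulSet
metadata:
  name: zk
spec:
  serviceName: zk-headless
  replicas: 3
  updateStrategy:
    type: RollingUpdate      # or OnDelete for the update-on-delete strategy
  template:
    metadata:
      labels:
        app: zk
    spec:
      containers:
      - name: zookeeper
        image: zookeeper:3.4
        ports:
        - containerPort: 2181
```

With `RollingUpdate`, changing the pod template (for example the image tag) causes the controller to replace pods one at a time, respecting their ordinals.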
You know, Elasticsearch, potentially Redis, MySQL, CockroachDB: these are some of the common databases that our users have started using stateful sets for. And now with 1.7 there's a beta feature to update these databases automatically. The second problem that we sought to solve is that sometimes when you roll out a stateful database, it needs to be rolled out in order; the provisioning of the pods requires a certain ordering. But for other databases, ordering is not required, and you can actually provision the pods in parallel. Previously, we only had ordered execution, and that led to performance bottlenecks, because for databases that didn't require ordered execution, we still did it. We received feedback that for some of these databases, rollout was taking too long. So in this release we have pod management policies, which allow you to choose different types of rollout, parallel or ordered, and that can be a major performance improvement, in particular for ZooKeeper and for Kafka, which depends on ZooKeeper; we have a demo that shows the performance improvement. So that's the big announcement with regard to stateful workloads. The other supporting piece is alpha support for local storage. This was one of the most frequently requested features associated with stateful applications. Users deploying stateful applications on Kubernetes wanted to be able to use local storage, mostly for performance: if you require high performance from your database, you may not want to use clustered or networked storage with it. You may want to use the local storage available on that VM or machine. This was previously not supported out of the box, so we're very excited that with 1.7 we're able to announce alpha support for this feature.
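As a rough sketch of what a local volume looks like (this is the shape the feature later stabilized into; in 1.7's alpha the node affinity was expressed through an annotation, and the names and paths below are invented for illustration):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-ssd1
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-ssd
  local:
    path: /mnt/disks/ssd1    # a disk physically attached to one node
  nodeAffinity:              # pin the volume to the node that owns the disk
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1
```

The node affinity is what lets the scheduler place a pod on the same node as the disk it claims.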
So now local storage volumes on the same VM can be accessed through the standard PersistentVolume and PersistentVolumeClaim interface, and they're also available through storage classes, making the dynamic storage provisioning piece available for local storage with stateful sets. You can see how those things come together to create what I think is a really compelling, industry-leading capability in terms of stateful application support. At this point Kubernetes is starting to bust some of those myths about whether containers are only good for stateless workloads. With the number of users already using these features and the additions we've made in this release, it's going to become much more common to run the modern scale-out databases alongside your stateless applications, all on the same underlying infrastructure. So that's the major announcement there. A couple of other new things. Daemon sets are for when you want to have a pod deployed on every node. For example, if you want to monitor every node and install an agent on each one for that monitoring, you would use a daemon set; daemon sets essentially create one pod per node. Another use case would be a networking plugin: a monitoring plugin, a networking plugin, or anything you want exactly one of on each node. Daemon sets have existed for a while, and daemon sets can also be updated; the upgrade capability has existed for a while too. What is new in 1.7 is the ability to roll back an upgrade. In fact, it's a smart rollback, and we also keep more history of the rollout and rollback. So that capability is now added to daemon sets. The last piece is the StorageOS volume plugin, which provides highly available, cluster-wide persistent storage.
I'm not going to go deep into that, but it's also a new feature here, and it can be particularly useful in on-premises environments. So that's the set of highlight features in 1.7 for stateful application support: updates, a major capability; the performance improvement; and local storage. The last but not least theme of the release is extensibility, again one on which Red Hat and Google collaborated extensively. It's very important because we have a lot of large enterprise users, and as I mentioned at the beginning, those users are interested in extending the capabilities of Kubernetes, either with custom business logic specific to their enterprise or by adding third-party or user-created APIs to their clusters. The first of those features is API aggregation. API aggregation is, I would say, the most powerful extensibility feature in this release, and it is beta in this release. It basically allows users to add a Kubernetes-style API to their cluster. The Kubernetes API comes built in with lots of different capabilities: you can roll out new deployments, you can set up replicas, and so on. But if you, for example, wanted to write a new object, something that combines existing pieces of Kubernetes into a new object, or a brand-new object altogether, and you want to specify that object for your enterprise and have it be available and manageable in the same way as the rest of the objects in the Kubernetes API, then you can use the API aggregation feature to add your custom API to your Kubernetes clusters. And this can be done without recompiling or rebooting your cluster; that's kind of the major thing, since you don't really want to have to restart everything. API aggregation works at runtime. There are some good examples of this.
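To make that concrete, an aggregated API is registered with an APIService object, something like this hedged sketch (the namespace and service name are invented; the service catalog group is used only because it comes up as an example below):

```yaml
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1alpha1.servicecatalog.k8s.io
spec:
  group: servicecatalog.k8s.io
  version: v1alpha1
  service:
    namespace: kube-service-catalog   # where the extension API server runs
    name: apiserver
  insecureSkipTLSVerify: true         # sketch only; use caBundle in real deployments
  groupPriorityMinimum: 1000
  versionPriority: 15
```

The main API server then proxies requests for that group/version to the extension server at runtime, with no recompile or restart.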
We've seen users that have built PaaSes on top of Kubernetes, and for those PaaSes they have created many new sorts of objects, and those can be added via API aggregation. Another use case is adding the service catalog. The service catalog is a relatively new concept in the Kubernetes open source community. It has an API that allows you to essentially list a set of services that you want to make available to your team or your company in a catalog, for service discovery and ease of service deployment. So the service catalog can be added as a third-party API through this API aggregation mechanism. Those are just two examples; there are many, many more things that can be done with API aggregation, so we're very excited about that feature. There's actually another feature, listed here under additional features, that is also an extensibility feature: support for external admission controllers. This currently has alpha support, and it's something that was worked on extensively, again, by Red Hat and Google. I believe Clayton Coleman has talked about both API aggregation and external admission controllers, and those are definitely talks worth watching. External admission controllers basically allow users to add custom business logic to the API server, for modifying objects as they are created and/or for validating policy. What's an example of this? There's a lot of recent interest in a network proxy called Istio. Istio is essentially a sidecar proxy: a new container that gets deployed in every pod and functions as a network proxy. It can monitor network traffic, but it can also do other things a proxy does, such as authentication, so it enables service-to-service authentication, which is something users have wanted in Kubernetes for quite some time, and it also does things like load balancing.
So in a Kubernetes cluster, if you want to run Istio, you have to inject this sidecar proxy into every pod, and the main mechanism that enables doing this in an easy way is external admission control plugins. You can think of Istio as business logic that you want to add to your cluster: it modifies the existing pods. So you can use the external admission controller mechanism to add that Istio sidecar to your cluster without disruption. Another thing you could do is validation: a policy around what your pods are or what your pods do can be validated through this mechanism as well. So those are the two main extensibility features that I think our users will find very valuable as they deploy and expand their use of Kubernetes for enterprise use cases. The other extensibility features we want to highlight are around the container runtime interface, and this is work that's been ongoing over the course of several releases. In this particular release, the CRI has been enhanced with new RPC calls to retrieve metrics, and a new set of validation tests has been written for the interface. The validation tests are important because other runtimes can then validate themselves in a more self-sufficient manner to provide support for CRI. Backing up a little bit: the container runtime interface was developed in the interest of code health and of extensibility, to allow not just the Docker runtime but also things like the rkt runtime, the CRI-O runtime, hypervisor-based runtimes, or whatever variety of runtimes our users might be interested in using. Docker, since those early days, has now split out its core runtime and donated it to the CNCF; that runtime is called containerd, and it's in active development.
There is alpha integration with containerd in Kubernetes 1.7, using the container runtime interface, and there's also an in-depth post on the CRI that you can follow here. So that's the summary of our extensibility theme for this release. Again, those are the three major themes: the enterprise security features, stateful workload support, and extensibility. Those are the high-level themes I would suggest users keep in mind as they try out 1.7. As I mentioned, there's a longer list of other things we highlighted in this blog post. One that's important is third party resources, because third party resources, also an extensibility mechanism, have been deprecated with this release. Our deprecation policy is such that third party resources are not removed in this release, so you can continue to use them, but you should plan to move all of your TPR objects to the new API, which is called CRD, or custom resource definitions. It's a cleaner API; it resolves a number of use cases and corner cases that were sticking points with TPR, on which we got a lot of feedback. So if you are using the TPR beta feature, there is a migration guide linked here for migrating to CRD, and hopefully you will all see great improvements with that migration. Ultimately, in the next release, Kubernetes 1.8, TPR will no longer be supported; it will be removed. So it's important to make sure that all TPR usage is moved to CRDs prior to the 1.8 upgrade. That's, in a nutshell, the set of features we highlighted; again, the complete list is in the release notes. I would like to say a little bit more about the growth of the community: the open source community continues to grow and is very healthy.
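Stepping back to the TPR-to-CRD migration mentioned a moment ago: a CustomResourceDefinition is declared roughly like this (the CronTab example is the standard illustrative one, not something from this release's notes):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1  # apiextensions.k8s.io/v1 in later releases
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com       # must be <plural>.<group>
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
    shortNames:
    - ct
```

Once applied, `kubectl get crontabs` works like any built-in resource, which is what "manageable in the same way as the rest of the Kubernetes API" means in practice.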
There are a number of different companies and many, many independent contributors that made this release possible and are continuing to work together on future releases. We are already in deep planning stages for 1.8. Together, I guess, we've pushed more than 50,000 commits since the beginning of the project, and that's just in the main Kubernetes repo. It is worth pointing out that, in the interest of code health and stability, the main Kubernetes repo is being restricted more and more to just the core components, which is what almost all users use, while many of the umbrella projects and extensions to Kubernetes are being moved off into associated repos. This move and cleanup of the repositories is bringing overall stability to the project, as well as making it much faster for contributors to contribute and develop. Again, this is important because Kubernetes is one of the fastest growing open source projects ever, and we are figuring out how to manage the project as it grows. It's been a very interesting community development process; we've seen lots of other projects, like OpenStack and even Linux, go through this. It's an evolving model of how to get involved in the project and how to manage it. So far the Kubernetes community has done a pretty awesome job. One of the things that I like to point out is the special interest groups and how they drive the features, the functions, and the process of what gets incorporated into the next releases. The coordination of that has been pretty awesome to date, and hopefully we'll continue to do that. And the other thing, as I said in the beginning, is that every time I listen to another 1.7 release update, I hear about something else that I hadn't noticed before.
This time, what was interesting to me was the audit logs, which I hadn't really thought about a lot in the past. Now that we're moving into enterprise, mass adoption of Kubernetes out there in the world, this becomes more important than ever, and it's something I think we could explore a little more; I'm pleased to see that it made it in. One of the questions that I have for you: maybe you can explain a little about what it means when you say something is an alpha bit, or a beta bit, or something that's stable and ready to incorporate into production. If you could explain that a little, because I think there are probably some people who are very new to OpenShift. Not OpenShift, Kubernetes, my bad.

Right, right. So alpha features are subject to complete change. Alpha is essentially an early, experimental version. When the community first comes up with a solution for a problem, say, let's see, what's alpha in this release, external admission controllers: we recognized that it's important to have external admission controllers, that this would be a really key addition and would make it possible for our enterprise users to bring in so much other custom business logic, and so many other tools, that they've been dying to use. So when we do that, there's a design proposal in the community, everyone coalesces around a particular solution, and then the solution is developed and launched as alpha. That means: hey, we've got the rudimentary building blocks here; please try it, and tell us if this is what you're looking for. We try to get feedback from early adopters about whether alpha features actually hit the mark for what they're supposed to do, but often alpha features are incomplete. It's the first installment towards what's needed in the future.
So when you look at the issue on GitHub, you'll see: okay, we consider this to be shippable as an alpha, but here's a longer roadmap of what we need to add to this feature. And in many cases, in making those additions, we change how the alpha functionality works. So there's no guarantee with an alpha feature that the way it works now is going to continue to be how it works, and that's extremely important for our users to understand. Many Kubernetes users are at the bleeding edge, and Kubernetes being such a rapidly developing project, people are waiting for these new features. It's great that users adopt alpha features and give feedback to the community, but it's also important not to use alpha features in production, and certainly to keep in mind that there is no guarantee; most often, alpha features change completely in terms of implementation. When a feature moves to beta, there is more of a guarantee: now we've figured out exactly how this API is going to work, and the way the API works and is specified is likely to stay the same. But it might still be incomplete. For example, stateful sets are beta, but core pieces of functionality were missing, such as support for updates. They went beta in November of last year, but we didn't have support for upgrades, and you can't really take stateful sets to GA until you do. Even now, while we have support for upgrades, there's not really good support for rollbacks. So we will be adding additional features to stateful sets over the next couple of releases, with the goal of moving them to GA. And when we move a feature to GA, or stable, such as network policy, that means the API is now stable.
And we think that, as a community, all of the basic functionality that is needed to run this feature, and this set of features, in production is complete. So there really shouldn't be any major gaps at that point, and the feature as a whole should be very usable in an enterprise-ready, production fashion. Even beta features you can use, but you have to recognize that while they're less likely to change, there will be additions, and the functionality may not be complete. So that's the difference between alpha, beta, and stable. So what state is the service catalog at the moment? The service catalog is actually, I believe, not part of the main Kubernetes repo. It's one of those extensions. So we don't track its state as part of the product management group, but per my recollection, it is alpha. Yeah, I think so too, if that helps at all. I think there are a number of things coming to the service catalog that a lot of people have been waiting for, but I'm pretty sure it's at an alpha state at the moment. That's one of the things with Kubernetes, and I think the service catalog is a good example. There are a number of ways the community has managed to keep the big tent from growing too big, by putting these ancillary projects outside of the Kubernetes repo so that Kubernetes itself can grow and go through these stages. And a lot of the extensibility pieces that we're putting in place are part of what makes that possible. So like the service catalog, some of the stuff with Prometheus and other monitoring tools, just keeping them outside of the tent. And I think that's one of the reasons why this community is potentially not only the fastest growing, but probably one of the better projects at getting to a stable, enterprise-ready state as quickly as it has, and at getting so many companies betting on it the way that they have. So 1.8 is in flux right now. We're all kind of debating about what should be in 1.8. What do you have for a secret wish list for 1.8 that you're looking forward to getting into the next release? Yes, that's a great segue, particularly your comments around stability. So I am hoping that 1.8 will be a stability release. As a community, we've been moving on kind of a tick-tock cadence, where you have a stability release, a feature release, another stability release, another feature release. So 1.6 was a stability release, where we moved features from beta to stable and graduated alpha to beta, and we'd like 1.8 to be a similar release. And from a PM group perspective, we are pushing for there to be no new alpha features. It's hard to hold back and say no new alpha features. Good luck with that. Yeah. It's an open source community; we don't have any hammer, and we can't really constrain people. But by and large, there is agreement in the community that we need stability, and stability is actually a feature in itself from a user perspective. So I'm hoping that 1.8 will be largely a stability release. Some of the things we want to move to beta: for example, local storage support for stateful workloads was alpha in 1.7, and that's such a high-demand feature that we hope we can move it to beta. Also, encryption of secrets. I mentioned that was an alpha feature, and again, that's such an important feature for security that I do hope we'll be able to move it to beta. We'll see about external admission controllers. Again, these are such critical and important features that we hope to move them to beta in 1.8. But overall, I think the part you mentioned about extensibility being a driver for stability has become much more fleshed out within the Kubernetes community.
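As a concrete illustration of the encryption-of-secrets alpha mentioned above: in 1.7 it is driven by a configuration file passed to the API server via an experimental encryption provider flag. A minimal sketch, assuming the 1.7 alpha `EncryptionConfig` format and using placeholder key material, might look like this:

```yaml
# Sketch of a 1.7-alpha configuration for encrypting secrets at rest.
# The aescbc key below is a placeholder; a real deployment would
# generate a random 32-byte key and base64-encode it.
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      # The first provider is used for writes; the identity provider
      # allows reading secrets stored before encryption was enabled.
      - aescbc:
          keys:
            - name: key1
              secret: c2VjcmV0LWtleS1wbGFjZWhvbGRlcg==
      - identity: {}
```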
There is a new SIG that has been formed, called the Kubernetes architecture SIG. And the architecture SIG has laid out a diagram for how all of the different parts of Kubernetes are organized. What's at the core? What's client-facing? What's infrastructure-facing? And they've said, here is the group of layers that are part of the main Kubernetes repo, and then a lot of other things have been broken out as extensions. So what I expect to happen in 1.8 is this: some of the pieces that are broken out as extensions already have interfaces; there's a network interface, there's a storage interface, there's a runtime interface. But there are other pieces that don't have good extension interfaces, like identity, so that you can plug in different identity providers, and the cloud provider. The cloud provider interface has sort of been puttering along. There's been a little bit of work, even in this release, but it really needs some attention to make it a whole, proper interface so that different cloud providers can plug into it. So I think there'll be a lot of work at that extensibility layer to make sure that we can write clean interfaces for the other pieces that need to plug into Kubernetes. And that's something we will certainly prioritize as a PM group in 1.8. We'll look forward to that stability and to some of those other extensibility things. There have been a couple of folks who popped up questions, but I think they've all been answered by your talk, so I really appreciate the thoroughness with which you've done this. A couple of things in terms of other talks that are coming up: there is a service catalog deep-dive talk on the 26th. And there's an Istio (I'm going to say it wrong; you're the first person I've heard say it out loud) talk coming up in about another month or so. So there are all these ancillary things.
There'll be lots of things, and they'll be on the commons.openshift.org events calendar, if you want to take a look there. So, this is a question I'm going to ask at the end of every one of these sessions: if there is one other person you'd suggest we talk to, or one other topic that you think we should dive into, just to keep us up to speed or to tickle our fancy with what's coming, what would you suggest we look into next? You know, I think an interesting topic would be machine learning and GPU support. That's something that, certainly from a Google Cloud perspective, there's a lot of interest in, and I think in general there's a lot of interest. GPUs have started to be supported on Kubernetes. In fact, we've just launched alpha support for GPUs in Google Cloud's Kubernetes offering. I think that would be an interesting topic for all of your users. In terms of people, I would suggest Han Goldberg. She's the eng director here. I think she would be phenomenal. Obviously Clayton, but I'm presuming that you've had discussions with Clayton before. Great suggestion. That would be a great topic; I know that there's a lot of interest across the board. The other thing I wanted to say is that it's been amazing to see Google's support for Kubernetes and your continued efforts at Google to build this community, to support it, and to drive it through the CNCF. This has been a wonderful experience in terms of community development and open source efforts, and in large part Google has had a big hand in making sure that the handoff over to the CNCF was smooth and effective. Thank you for all of your work there. This has been a great update. There'll be another one next week as we dive into other tech talks, but we'll try and set up that machine learning talk. That sounds like a great idea. Thank you very much. This blog should be up, and the video will be embedded in it, probably in the next day or so.
We'll put the links to your Kubernetes blog post here, along with a couple of other references, so that people can find all the 1.7 release material. Again, thanks for not using slides and for just talking us through it. Thanks very much, and we look forward to hearing more from you on some of the other Kubernetes calls. Again, we're having you up in Austin as one of our guest speakers for the OpenShift Commons Gathering, so we appreciate that greatly. We appreciate everything you do, so thanks very much. Thank you so much.