Well, hello and welcome everybody to an OpenShift Commons briefing. We're repeating what we do every Kubernetes release, which is to coerce Clayton Coleman, who's our lead architect for OpenShift and lots of other things Kubernetes-related at Red Hat. And we've just gotten Kubernetes 1.7 out the door; I think sometime late last night the SIG Release team posted it. So it is up and out and ready for consumption, so it's very timely that we get this update from Clayton. The format we're going to use is to let Clayton do his presentation. You can ask questions in the chat; there's lots of people on the call, so some of them may get answered during the chat, and we'll collect the good questions, open it up for Q&A at the end, and read those out. This recording should be up by the end of the day today on blog.openshift.com, and we'll be making it available as quickly as we can because we know you're all interested in this. So without any further ado, Clayton, why don't you introduce yourself and take it away. Great, thank you. So my name is Clayton Coleman. I'm a Kubernetes and OpenShift architect at Red Hat. I have been working on Kubernetes since the very beginning, and this is a very exciting release for us. There's a ton of both great new features as well as changes coming to Kubernetes that make it even more of a platform. The features I'll talk about today are just a taste of what's coming in Kubernetes 1.7; there's also a huge number of bug fixes, performance improvements, and fixes for practical real-world problems that people have hit on all of the different platforms Kubernetes runs on. I wanted to talk about kind of four top-level themes.
These are part of the ongoing evolution of Kubernetes, making sure that as a platform and as a place to run containerized applications, Kubernetes keeps getting better and better, that the Kubernetes community becomes stronger, and that other people in the world can not just run Kubernetes but can also extend and enhance Kubernetes to solve their own problems and the challenges that they face that not everybody else may have. So the four arcs are runtime, which typically deals with the core of the system, security, extensibility, and running applications. And as we go through, I'm going to start by talking about extensibility, because I think when it comes down to it, that's the key change in Kubernetes 1.7. If you walk away today with one thought, it's that our goal, both at Red Hat and in the broader Kubernetes community, is to make Kubernetes a platform that is extensible, that can be extended in all sorts of ways to add new capabilities, to be easier to develop against, and to be more modular, allowing people to disable, disassemble, and reassemble Kubernetes in the ways they'd like, to enable powerful integrations like OpenShift to run on top of Kubernetes, as well as to make it easy for administrators and integrators to plug their own policy into Kubernetes. And so there are four features that I wanted to call out here in Kubernetes 1.7. They're all in alpha or beta, and they deal with how we extend Kubernetes as a platform. So first, many people who've worked with Kubernetes may have heard about third-party resources. This is an alpha feature that has existed since at least Kubernetes 1.2. And in this release, we took the feedback from a large number of people in the community, worked with people who had built extensions using third-party resources, and tried to get all of the requirements captured and all of our lessons learned with third-party resources boiled down into something that we believe we can support going forward.
And so in Kubernetes 1.7, third-party resources are being transitioned to beta under the new name Custom Resource Definitions. So as you can see on the right, the name has changed and a few of the fields have changed. There's a ton of underlying work in Kubernetes that makes third-party resources more stable and that deals with challenges like upgrading third-party resources over time. And all of these improvements are specifically designed so that when people build extensions to Kubernetes, and when people run extensions on top of Kubernetes, they can support those over the long term. For those who aren't familiar with third-party resources, in a sense they're just like adding a new API to Kubernetes with the minimum possible amount of work. You give a name for a resource, an administrator installs it, and once it's enabled, an end user who has permission to create those resources can go and request them. There's not a lot of control or structure in those; they're designed to be ad hoc, easy to set up and install. And there are a number of community projects that have latched on to third-party resources to store the API definitions that make up their extension. And so we have left room in third-party resources for future enhancement. We do think that over time third-party resources, under their new name of custom resource definitions, will become more powerful; we'd like to add more features. But this is typically the place where extensibility to Kubernetes starts. It's the easiest way to get started. But it doesn't necessarily capture all the use cases. And so a second feature that we've added in Kubernetes 1.7, building on work that was in a very alpha form in Kubernetes 1.6, is the ability to register and define new APIs that actually have a backend implemented in code.
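To make the custom resource shape concrete, here's a minimal sketch of a definition against the 1.7 beta API (this example isn't from the talk; the group and kind names are hypothetical):

```yaml
# Registers a new "CronTab" API type with the cluster (hypothetical example).
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # The name must be <plural>.<group>.
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
```

Once created, users with permission can create and list CronTab objects through the API server much like built-in resources.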
And so the difference between API registration and the custom resource is that the content of a custom resource is entirely provided by its end users: there's no validation, no external logic preventing changes to these APIs. On the flip side, as you begin to develop an API, you may find that you want to impose more policy; you may want to define validation rules in source code. And so API registration allows you to implement those APIs in code and then plug them into a Kubernetes server. The API is registered with the Kubernetes API server, and the Kubernetes API server will act as a proxy to that newly defined API. And in a sense, if a custom resource is about an end user not having to bring code but being able to quickly iterate on their new policy object, the API registration path is more for when someone has written a fairly powerful API service in code. So we think of custom resources as the place that you start, and in many cases, as Kubernetes grows, as more people build APIs that extend Kubernetes or offer new features alongside Kubernetes, the second mechanism, registration, will be how the vast majority of them end up implemented. In the diagram, as a concrete example, if I registered a new API called "my API", I would define where that server is. The expectation is that it's something that could run on Kubernetes as a service, but it could also run off Kubernetes if necessary. When a client talks to Kubernetes to find out the list of APIs that Kubernetes supports, Kubernetes will return not just, for instance, the apps/v1beta1 group, which is part of the core Kubernetes project today, but also that custom API group. A client would then talk to the kube-apiserver, and the request would be proxied through the server to the extension.
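As a sketch of the registration side (hedged: the group name and backing service are hypothetical, using the apiregistration.k8s.io API that backs this feature), registering an aggregated API might look like:

```yaml
# Tells the kube-apiserver to proxy requests for myapi.example.com/v1alpha1
# to a backing service running in the cluster (hypothetical names).
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1alpha1.myapi.example.com
spec:
  group: myapi.example.com
  version: v1alpha1
  service:
    namespace: my-namespace
    name: my-api-server
  # For a real deployment, supply a caBundle instead of skipping TLS checks.
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 1000
  versionPriority: 15
```

With this in place, the custom group shows up in API discovery alongside the built-in groups, and clients talk to the one kube-apiserver endpoint for both.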
When we think about how this will evolve, we've looked at much of the work that we've done in OpenShift to enable Kubernetes to be a true platform through the lens of how OpenShift resources might plug in to Kubernetes. And so OpenShift offers a number of APIs above and beyond Kubernetes. Some of those are parallels to features that are starting to reach GA in Kubernetes, like deployments and ingress, which parallel OpenShift's deployment configs and routes. But others of those, like builds, image streams, templates, and a number of the OpenShift policy objects, have no equivalent in Kubernetes. And so we think about this custom API extension as a way for someone like OpenShift, who has built an API that works very well natively with Kube, to simply and easily run that both on top of Kube and plugged into Kube. So over the next few releases of Kubernetes and OpenShift, we expect to see this used by more production-grade extensions. And custom resources will still be available for end users to use when they're prototyping. And there will always be a trade-off in terms of whether you want something that's simpler and easier to install, or whether you're looking to build more complex policy and things that aren't quite possible without writing code. Each of those will complement the other to give you both sides of how you add new capabilities to a Kubernetes server. And along those lines, in Kubernetes, APIs are kind of the heart of the system. Each of the Kubernetes APIs is about a simple declarative representation of some concept, like a service or a pod. Kubernetes also has declarative resources for things like policy: the RBAC engine in Kubernetes is a set of declarative resources, and quota is a set of declarative resources. One of our observations is that even when Kubernetes has simple, fairly declarative APIs, there are often rules and policies that need to be imposed on top of that.
A large chunk of how OpenShift extends Kubernetes today is to add a large set of policy that makes Kubernetes a fully multi-tenant system. And so when we talk about what it would take to turn Kubernetes into a true platform, a real key goal and a common request is to allow the policy decisions that are made when an end user submits an application to Kubernetes to be delegated to an integration or an extension that a team may run for specific use cases. In Kubernetes 1.7, there's a new alpha feature that is broadly characterized as extensible admission control. Every time you submit a resource to a Kubernetes server, whether you create a pod or a service or define an ingress or a deployment, there is an internal bit of code that runs, which is called the admission chain. This applies policy like quota and pod security policy, as well as a number of other things that have developed over time, like calling out to check whether images are allowed to be run on the cluster. And that is all code that is compiled into Kubernetes. Starting in Kubernetes 1.7, there is a new admission control plugin that can call out to additional servers in the process of accepting a request. In the example in the slide, I'm showing an end user creating a pod. The API server calls that internal chain, some of the initial built-in plugins run, like pod security policy or image policy, and then the generic external admission plugin takes a set of registered extension webhooks, which could be running on the cluster, could be running as a function-as-a-service endpoint, could be running as a microservice. It's really just a simple call-out that takes the information about the pod that's being created and sends it to each of the external hooks. Each of those external hooks gets an opportunity to accept or reject the request. And so in the example here, of the three example webhooks I might choose, webhook one is recording that user A requested creating this pod in an external system such as an audit chain.
Webhook two is verifying that there are no secrets in the environment variables. So it's looking for something that says underscore-password, and it's just going to say: by policy in my company, we don't allow people to set secrets in environment variables. Webhook two therefore rejects the request. And webhook three, which is called at the same time because we call these in parallel, might do a check against LDAP to say, does the user creating this pod have access to run this service account on this particular system, or to use this image? So, a more sophisticated policy engine than what is possible in Kubernetes out of the box. Because one of the three webhooks failed, the entire request fails; the admission chain stops and the user gets an error. Now, this is a somewhat contrived example. But if you've followed along with the development of this feature: in Kubernetes today, we have about 10 or 15 built-in admission controllers that can be optionally enabled. Those cover things like pod security policy, and they cover node placement policy, allowing you to set annotations on a namespace that control what node selectors are allowed on pods. OpenShift adds about 40 more admission controllers. Kind of our goal here is that on a regular Kubernetes cluster, it should be possible to add new admission control as easily as adding new APIs. This feature is alpha; it's still in its initial phases in Kube 1.7. But over time, we expect to build a set of standard webhook code and examples that make it easy, for instance, for individual operations teams who want to impose specific policies to build their plug-in hooks in a fairly simple manner. There's some performance cost that comes along with making these call-outs. But in general, we've found that for most use cases, this flexibility is one of the most important extension points for Kubernetes. Because it's not just about running applications.
It's about knowing what is running and being able to control some of the aspects of that. And as kind of the fourth part of this extension story: we talked about adding new APIs, we talked about adding new policy. A fourth part of extensibility coming in Kube 1.7 in an alpha form is the ability to add new CLI extensions. The kubectl command now supports a very simple plug-in extension mechanism. A plug-in can be registered via a file in the user's home directory or in a standard location on Linux systems. That plug-in identifies a name and a description as well as a command to run. And when a user invokes it in kubectl, for instance with a "my plug-in" command, kubectl delegates down and runs something like `echo hello plugins`, which outputs that back to the user. We envision this being a way for new capabilities, specifically around workflows, to be added to kubectl. So if there is a specific new API resource that gets added through API extension that needs some form of command-line tool to be useful, for instance a policy object that is fairly complex for an end user to create, a plug-in extension to kubectl might be created that offers a helpful flow for setting up that policy object the first time. And there's a lot of other work going on; I'm just scratching the surface of the extensibility of both the CLI and the Kube API. There's a lot of other work going on under the covers to make extensions even more powerful, and to make kubectl a great way to manage not just the resources that ship out of the box with Kubernetes, but extensions that people bring to the platform. And you'll see more of those in the Kube 1.8 and 1.9 releases. So, moving on from extensibility: security was another big arc in the 1.7 release. Red Hat takes security very seriously.
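Going back to the CLI plugin for a second, the descriptor file is small; a sketch of one, placed under the kubectl plugins directory in the user's home (the plugin name and command here are toy examples, not from the talk):

```yaml
# ~/.kube/plugins/myplugin/plugin.yaml
# Descriptor for a kubectl plugin (1.7 alpha mechanism).
name: "myplugin"
shortDesc: "Prints a greeting, as a trivial example"
command: "echo hello plugins"
```

Invoking the plugin through kubectl then runs the declared command and shows its output to the user.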
We've been working to ensure that people can run fully multi-tenant systems through the work that we've done with RBAC, authentication, and the pod security policies that control who can use what host-level resources. And one of the common requests that we've gotten is to make it easier to keep secret data in the platform encrypted. Kube 1.7 and OpenShift 3.6 will both include this feature: in its core form, the ability to encrypt secrets at rest in the etcd database, as well as other resources that you may want to encrypt. For instance, if you have an extension resource that also contains secret data, you may wish to provide this configuration for that as well. When a user submits a secret to the Kubernetes API, we'll take that, look at the encryption config, and then encrypt it using either the AES-CBC or the Secretbox standard for encryption. There's some documentation on the trade-offs. Our goal was, even though in general we still think that whole-disk encryption is the best possible way to ensure that your backups don't go walkabout, to offer an out-of-the-box way for administrators to, at least in the short term, isolate and encrypt secrets at an extra level. For these secrets, of course, all of the existing characteristics of a Kubernetes system are still preserved: secrets are always encrypted over the wire and are never stored on disk. This just adds an extra layer of protection to the secrets stored in the etcd servers. There are a number of recommendations that we've made in this release on how secrets can be more adequately controlled. Work coming in future releases is going to include better integration with external secret providers like Vault and other forms of distributed secret management.
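The encryption configuration is a file handed to the API server; a sketch of one (the key material is a placeholder, and in the 1.7 alpha the file is wired up via an experimental API server flag):

```yaml
# Encrypts secrets at rest in etcd using AES-CBC (1.7 alpha).
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      # The first listed provider is used for writes; all are tried on reads.
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      # identity is the fallback so existing unencrypted data stays readable.
      - identity: {}
```

Listing `identity` last is what allows a cluster to migrate gradually: new writes are encrypted, and old plaintext entries are still readable until they are rewritten.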
If you are interested in encryption at rest for secrets, I urge you to get involved in Kubernetes SIG Auth and follow along as we improve the integration of secrets with existing solutions for secret management, as well as the security of the overall platform. A second part of the Kube 1.7 security arc was tighter control over what nodes can access, including secrets. In Kubernetes systems today, there's a fairly flat security model for nodes: nodes are expected to be limited to a set of resources, but they have access to those resources in bulk rather than only the specific resources scheduled onto them. In Kubernetes 1.7, nodes are protected by a new authorization layer that restricts what a node can access, not just to the pods that are scheduled onto it, but, for instance, to only the secrets or persistent volumes that are actually referenced by the pods scheduled onto it. This is alpha in 1.7, and we expect to continue to evolve it. Folks who are running extensions or third-party network plugins on Kubernetes may find that some of the permissions here are a little aggressive, and so depending on how your network provider or your storage provider integrates with Kubernetes, there will be some additional things that we clean up in the next few releases. And of course, another key part of this that is coming to Kubernetes is that if nodes are given a unique set of permissions, we obviously want a standardized path in Kubernetes for nodes to be given their identity. This is called node bootstrapping; you may have also heard of it as self-join or as dynamic registration. It defines a standardized flow whereby, instead of having a unique credential distributed to and embedded on every machine, the kubelet, when it starts up, actually asks the cluster that it's going to join to receive its identity.
This happens through a new API that has been in beta since Kubernetes 1.5, called the Certificate API. It's very similar to systems that have existed in other config management solutions like Puppet, or to other key generation systems. The kubelet creates a certificate signing request on startup and asks the Kubernetes master for a client certificate that will identify it. A separate process, shown in this diagram as the signer, handles that request. The signer can be automated, which is available in an alpha form in Kubernetes, or manual, in the sense that it's available through the Kubernetes command line and APIs for administrators to script as needed. The signer sees the certificate signing request and can make a decision to grant access. That might involve gathering information about the node (the node, when it makes a request, will identify itself), an additional verification step that the IP address requesting the certificate corresponds, for instance, to an actual running server in the cloud environment, or an out-of-band process that might involve a unique secret delivered by the hardware when it creates the request. This is still fairly early stage. Our goal in Kubernetes is to provide a simple out-of-the-box path that works well on the cloud providers, and to allow people to extend or supplement this path with their own logic. So there is no expectation that this is a closed system, which is why the signer is split out the way it is. If the signer decides to accept the request, it will create and sign a client certificate and give that back to the kubelet. The kubelet will then use that client certificate, which uniquely identifies it as, say, node one, to make a request for a serving certificate. That serving certificate is used to fully encrypt the communication between the kubelet and the cluster. And then the kubelet will proceed with the rest of its normal bootstrapping process.
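For reference, the Certificate API object involved looks roughly like this (a sketch; the request payload is a placeholder for the kubelet's base64-encoded PKCS#10 certificate request, and the object name is hypothetical):

```yaml
# A certificate signing request as submitted by a bootstrapping kubelet.
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: node-csr-node1
spec:
  request: <base64-encoded PKCS#10 certificate request>
  usages:
    - digital signature
    - key encipherment
    - client auth
```

A manual signer can approve such a request from the command line with `kubectl certificate approve node-csr-node1`, or deny it with `kubectl certificate deny`.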
In combination with the previous feature, this means that when nodes join, not only are they able to easily receive a unique certificate that identifies them, but they are restricted to the things that their identity allows them to access. And so we see this evolving over the next several releases to provide better subdivision of Kubernetes clusters, as well as allowing additional work to be done to make the kubelet load its configuration dynamically. As well, because this is a fully automated process, or can be fully automated from the node side, it is being tied into certificate expiration. The kubelet, as those certificates get closer to expiration, will re-request new certificates, and the human or the automated signing process can then grant those again. We see this as a first step in Kubernetes 1.7 that will begin to make clusters more automated and autonomous in terms of how individual machines are set up, while still leaving administrators with control over who joins their cluster and what level of automation and verification they need as nodes join. The audit feature is also being improved in Kubernetes 1.7. This was first delivered in Kubernetes 1.5 after being upstreamed from OpenShift. The first iteration was a simple log-based mechanism; we then extended that to log to a separate file. In Kubernetes 1.7, we've added a large number of new filtering and logging capabilities that allow more API actions to be reported, allow filtering, and allow individual sinks for where that data will go. So over the next few releases we envision the ability to ship specific audit events to distinct systems as necessary, or to keep a full record of all actions locally and send only the most important events remotely. This will continue to evolve.
This is really just the beginning for audit, but we envision plugging in well to existing enterprise-level audit solutions, as well as integrating with the cloud providers out of the box, so that it's possible, for instance, to send your logs directly to Stackdriver or CloudWatch if you so desire. So again, as with extensibility, this is really just scratching the surface of the security improvements in Kubernetes 1.7. Our overall arc for security in 1.8 and 1.9 is going to be to continue to offer better subdivision and better integration with external secret stores. There's a lot of work going on in Kubernetes SIG Auth, and I highly recommend, if this is an area that interests you, that you join. We'll be talking about topics over the next few months like container identity: giving pods and services unique certificates that allow them to talk to other systems, with a chain of attestation all the way back to the node that launches them. So we see some of the work that's going into Kube 1.7 as really foundational for the next year of evolution of Kubernetes. In 1.7, for running apps, at the heart of it is the focus on making stateful applications work well on Kubernetes. We think one of the most important changes in computing is not just allowing this high level of automation for twelve-factor applications or simple web services, but to really take the stateful application and make it a first-class concept on top of Kubernetes, to solve the common challenges of stateful applications in a way that makes them easier to run, easier to develop, and easier to operate. So, moving into beta with 1.7 is the ability to update stateful sets and to have rolling updates performed automatically when those stateful sets are deployed. This feature is very similar to the existing Kubernetes deployment and the OpenShift deployment config.
When you perform a configuration change on a stateful set, the stateful set controller will begin replacing members of the stateful set one at a time. It will follow the default rules for stateful sets, which is a very predictable order. A new capability that's being added is partition, and we see this as a key part of enabling more complicated orchestration on top of Kubernetes in the future. The partition capability lets the person doing the initial update say how far the rolling update will go before stopping: when I update a stateful set to pick a new v2 image, I can also set a partition number. And so in the example you can see I've got seven instances in my stateful set. I perform the update, and the stateful set controller starts from one end of the ordinals and works its way across. So it's going to update the first pod, which would be, you know, pod zero of the stateful set. Once that's ready, it will then update the second one. And the key point (looks like I didn't color that second v2 correctly) is that with the partition limiter in place, the stateful set will stop its update at two members. And so when we think about canary deployments, or the ability to script more complex deployment checks to ensure that an update really is going to be successful: starting with Kubernetes 1.7 and stateful sets, you'll have the ability to pause that deployment and then continue it as it goes, so you can update the partition number from one to two to five to seven.
If you clear it, that rolling deployment will run to full completion and continue for the rest of the set. We anticipate higher-level primitives, plugins, and tools that orchestrate Kubernetes deployments, like Jenkins or Ansible or some other capabilities that are out there, starting to leverage the partition on stateful sets, and we plan on bringing it to deployments as well, to give this fine-grained control over the act of deploying and to make stateful sets even more predictable and reliable in their rollout. In 1.7 this only updates pods, so the pod definition will be changed but the volume definition will not be. Some of the work that's been discussed in the Kubernetes storage SIG about adding resizable volumes may show up at some future point in stateful sets, but it is not part of Kube 1.7. A second highly requested feature for stateful sets is the ability to perform parallel scaling. Today, if you run a very large stateful set, the stateful set controller in Kube 1.6 would progress one at a time: if you have a thousand pods, it's going to do each of those thousand pods individually. In 1.7 we added a new alpha capability for stateful sets that allows you to do parallel scale-up. So if you have a stateful set with a thousand Cassandra nodes, you can set the new pod management policy field to Parallel, and all 1000 would be created as quickly as possible. That applies both to scale-up and scale-down; it does not apply to update, so a rolling update of a stateful set will still follow the rules of the rolling update. You still get predictable, controlled rollouts, but, for instance, if a large chunk of your pods are deleted, a new set of pods will be created to replace them very quickly. We don't anticipate this feature changing much, but it is alpha in 1.7 while we get some feedback from people deploying very large stateful sets.
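Both of the stateful set controls just described can be sketched in one spec (hedged: the workload names are hypothetical, and this uses the apps/v1beta1 shape that carries the 1.7 beta update strategy and the alpha pod management policy):

```yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra
  replicas: 7
  # Alpha in 1.7: create and delete pods in parallel instead of one at a time.
  podManagementPolicy: Parallel
  # Beta in 1.7: rolling updates; only pods at ordinal >= partition are moved
  # to the new revision, giving a canary-style pause point.
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
        - name: cassandra
          image: cassandra:3
```

Raising or clearing the partition value lets an operator, or a higher-level tool, resume the paused rollout step by step.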
And then, finally for the running-apps section, daemon sets also received the ability to have rolling updates. Because daemon sets tend to be more administrator-focused, or focused on actions that take place across the entire cluster, the rollout options are different. Unlike a stateful set or a deployment, daemon sets define a max unavailable percentage, and the daemon set controller will then guarantee that no more than that percentage of pods are unavailable at a time. So in the example you can see that when a daemon set is updated and max unavailable is set to a third, the daemon set will choose to keep all but two pods stable at any one time. In that second line you can see it spins up two new v2 pods; when those become ready and are fully running the second version, it'll pick two more pods. In that fourth row you can see that one of the v1 pods has died. The daemon set controller will only spin up one new one until that v1 pod is replaced. And then you'll see in that last line that both the v1 pod that died and the previous v2 pod have completed, and the rest of the rollout will be carried out. This policy is different from both deployments and stateful sets simply because daemon sets tend to apply to a cluster in bulk. If you would like to do controlled rollouts of daemon sets across smaller subsets, generally you would just create several daemon sets, one for each subset of nodes, and update those at separate times. Both daemon sets and stateful sets track the previous revision through a new resource called controller history. This is technically an alpha resource that we depend on, even though the update process of daemon sets and stateful sets is itself beta.
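A sketch of the daemon set rollout policy just described (the agent name and image are hypothetical; this uses the extensions/v1beta1 shape from the 1.7 era, where rolling update is opt-in):

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  updateStrategy:
    # Beta in 1.7: replace pods node by node, keeping at least two thirds
    # of them available at any one time.
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: "33%"
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
        - name: agent
          image: example/node-agent:v2
```

`maxUnavailable` also accepts an absolute pod count, which can be easier to reason about on small clusters.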
And so what you can expect as a user is that when another person performs an update on a stateful set or a daemon set, you'll be able to go and view the history, just like you can view the history of a deployment or a deployment config through replica sets and replication controllers. On the runtime side, we haven't talked a ton about this, but the container runtime interface in Kubernetes is getting close to moving into a beta state. The container runtime interface, for those who aren't aware, is the abstraction in Kubernetes that separates the container runtime from the kubelet. It's a standard set of APIs that the kubelet invokes to ensure that containers are running for pods, that logs and exec behavior are supported, that statistics flow, and that images are managed. Over the last year or so, Red Hat has been working on kind of a standard container runtime that uses OCI and systemd. As we get closer to having CRI move into the beta state in Kubernetes, Red Hat has been pushing forward on ensuring that CRI-O also reaches an alpha milestone. It really is focused on supporting exactly the subset of features that are necessary for Kubernetes, and it adds significantly less overhead on a Kubernetes system. We envision this in the future being both a proving point for CRI as well as offering some operational and production advantages when running containers on Kubernetes. And so you'll see more of this in the future as the CRI and the underlying kubelet become more amenable to plugins. There are also a lot of exciting things that may come over the next year or so around running non-container runtimes, like VM-based systems, underneath Kubernetes for better isolation of containers. And so we view the evolution of CRI and the work that's being done in CRI-O as a necessary part of how we build more secure containerized platforms. Network policy is one of the more popular features in Kubernetes, at least so I've heard.
In Kube 1.7, it will now move to GA, and a large number of the network policy plugins, both in Kubernetes and OpenShift, are moving to support network policy as a fundamental resource. This allows fine-grained control over which containers and applications can talk to each other across namespaces. On the OpenShift side, we view this as a really huge development because it allows end users to collaborate and specify which other tenants of the system they may wish to receive traffic from, and it allows finer-grained control over the network resources that they expose. So for people who are running not just web applications, but who may also be running databases or other as-a-service type capabilities on the platform, this fine-grained control allows people to make services available, but then to segregate who can talk to which parts of that application: administrators, other applications on the platform, or even applications off the platform. And so (looks like the example here is wrong) over the next few releases we'll expect most of the network plugin providers to enable network policy out of the box and to fit some of the other concepts around network policy and multi-tenancy more deeply into Kubernetes. And there's a huge set of other things that I would run out of time trying to dive into, but I did want to call out four very important areas that we won't go into an extreme amount of detail on. Local storage volumes was a proposal for Kubernetes 1.7, and a very early alpha version of it is in Kube 1.7. This is a really exciting and very commonly requested feature: the ability for a pod to request local storage, and then, if the pod gets rescheduled, to have it be rescheduled back onto that machine if that storage is still available.
This allows applications that don't quite fit into the network-attached storage model, that need much higher I/O, or that only need storage for a short amount of time, both to request that and for administrators to be able to apply quota control. We expect in Kubernetes 1.8 to see more of these pieces land, and this will be a big enabler both for local development and for high-performance applications. There were a ton of CLI improvements in Kubernetes 1.7 to the overall usability of the command line: extensions that make it easier to manage deployments and to get better information from the server, as well as a number of performance improvements. kube-proxy, which is kind of the current heart of how applications talk to each other on Kubernetes today, got a massive set of performance improvements that we expect will continue into the Kubernetes 1.8 release. And then service catalog, which many of the folks on this call are probably familiar with: the service catalog work continues apace. We expect over the Kubernetes 1.8 and 1.9 timeframes to really bring that to a first-class behavior in Kubernetes. Service catalog is actually one of the first consumers of the extension points I mentioned before, and so we expect a fairly rich experience, and for that to be the proving grounds for Kubernetes being able to add new capabilities in a seamless way, without having to necessarily compile those into the Kubernetes project itself. And with that, I'd love to take some questions, if there are any.

Well, there are a couple, Clayton, and I've also posted in the chat the release notes for the Kubernetes 1.7 release. And we have a couple of deep dives coming, one on CRI-O and another on service catalog, in the upcoming OpenShift Commons briefing schedule. The first question — or the most recent question, we'll go that way: do you have the user context in the webhook? This is going back to the section about extensibility.
Yeah, a fundamental part of policy extension is knowing who the acting user is. And so that is a part of the extension mechanism for the webhook extensions for admission policy, but it is also available to extension API servers. And so you will continue to see that propagated through the rest of the system.

And there was one early one which sort of related to OpenShift: Chuck is asking if we are going to create a client library for OpenShift similar to the client-go of Kubernetes. Yes, that is a goal. There were a number of really important changes that continued to occur in Kubernetes 1.5, 1.6, and 1.7 to clean up the internal structure of those libraries to make them really consumable. So it is definitely a short-term or near-term goal for us to have a very powerful Go client library, and also to be able to reuse the generation mechanisms that we've been working on in Kubernetes for the OpenShift API as well. A key part of being able to do API extension is to have client extension. And so even though I didn't cover that in extreme detail, we want all of the languages to have very good Kubernetes clients, and for any extension of Kubernetes to also be able to use those clients.

So, Mattias from Getup Cloud is asking about backporting: are any of the features that I described being brought to older versions? There's no explicit plan to backport those to older Kubernetes versions. Speaking for OpenShift, a large chunk of the API aggregation and API extensibility and the service catalog work is specifically something that Red Hat has driven, because we believe really strongly in extensibility. Some of those capabilities will be in Kubernetes and OpenShift 3.6.
The encryption at rest is targeted for 3.6.1 in an early access form, and there are a couple of other items in this list that may make it into the OpenShift 3.6 release. In general, we were focused more on the security and extensibility aspects in terms of backports for OpenShift 3.6.

Let's see, somebody's asking: it would be great if we could have a README on kubelet bootstrapping on OpenShift. Yeah, kubelet bootstrapping has evolved in Kubernetes 1.7, and there's been an early version of it in OpenShift and Kubernetes since the 1.5 release. We really do want to invest in bootstrapping, because it's so fundamental to how, in the future, we'll do certificate regeneration. It allows an external CA to be used to sign certificates, it allows us to delegate trust from that initial kubelet bootstrap to the containers that run on the platform, and it reduces the amount of overhead necessary to stand up new nodes. There will be a very strong focus for us on the OpenShift side; initially we will do a better job of documenting what's already there, but we intend to use it more fully to make it easier to stand up nodes. So while in Kubernetes 1.7 and OpenShift 3.7 you may not see it be fully supported, over the long term bootstrapping will become the de facto way that we stand up new nodes.

There's one more coming: are there any Federation updates coming in 1.7? There was a lot of work on Federation in the 1.7 timeframe. Federation is still alpha; there are a number of things that we still think need to be closed out, such as upgrading, and a lot of getting things into final working shape. So there was a ton of work done in the 1.7 timeframe, but in terms of functional changes, not a terribly large number. Let's see, any other questions?
I'm going to pop over and just remind everybody that the OpenShift Commons Gathering is coming up soon, and we have limited space again. It will be at KubeCon, and you'll be able to hear Clayton give the same kind of update, which by then should hopefully be on Kube 1.8. You can register for that there. And as I mentioned, there are other OpenShift Commons briefings: the Commons events webpage has all of the other Kube-related calendar items. I think we have a big week coming, maybe not over the Fourth of July week, but Aparna Sinha from Google is going to give a technical roadmap, a "road ahead for 1.8" talk, on July 11th. There will be one on distributed tracing with Jaeger, and then a couple of other ones. Coming up soon there'll be one on service catalog, and we haven't found a date yet for another one on CRI-O, but it should probably be sometime in July. So it's a very busy calendar. Please do keep an eye out, and we'll try to keep you all up to date on everything that's coming down the pike with Kubernetes and OpenShift. There's a lot of interest in this, and so we hope that you'll stay tuned. And if you haven't joined the OpenShift Commons yet, please do so, and get on the Slack channel and onto our mailing list so that you don't miss any of these events.

And I still don't see any other questions. So Clayton, I think you've done an awesome job, and we can really just say thank you very much. Hopefully we'll see you in Austin. Feel better, and have a great Fourth of July weekend, everyone, wherever you are. Thank you, Diane.