Hello and welcome everybody to the OpenShift Commons briefing this week. We're really pleased to have with us today Alan Nam from Google, the product manager for both Kubernetes and Container Engine, as well as Clayton Coleman, the lead architect for OpenShift, which, as many of you know, builds on upstream Kubernetes. We do these weekly briefings for the OpenShift Commons community and then post the recordings afterwards for viewing on YouTube and elsewhere. So we're really hoping to have a very interesting presentation on the present and the future of Kubernetes. As we all know, a recent Kubernetes release went out the door with lots of good pieces and parts added in, and there's a new roadmap coming up for the next release, so we're going to try to dive into those aspects. We'll do Q&A through the chat in BlueJeans, so if you type your questions in there (everybody is on mute except the speakers), we'll try to answer them in the chat, and if they don't get answered we'll bring them up again after the presentation and do Q&A verbally. So without further ado, Alan, thank you for joining us; I'll let you introduce yourself and take it away.

Thank you. So I'm Alan Nam, product manager here at Google focused on Kubernetes and Container Engine. I'm going to be focusing on the present state of Kubernetes, and then I'll be handing it over to Clayton to talk about the future. Before I get started on the present, I always like to spend a few minutes talking about how we got here, because I think it's important context for the conversation. The path to modern containers really started at Google back in 2004, when Google engineers contributed much of the code in the Linux kernel that supports cgroups and namespaces. You've probably heard this over and over again, but everything at Google runs in containers; we spin up 2 billion containers per week. Through that experience with containers, and with developing our internal system called Borg, we took ten years of best practices around managing containers and developed a framework that we open sourced as Kubernetes. Kubernetes, at the end of the day, lets you take a cluster-first approach to application development, where the focus is on the application and the resources it needs. As I mentioned, it was really inspired by Google's experience. It supports multiple clouds and bare metal environments, and it's 100 percent open source, written in Go. At the end of the day, it enables application developers to focus on the application, not the machines.

Kubernetes went GA back in July 2015, when we announced our 1.0 version, and since then the adoption rate has been phenomenal. Looking at this graph, you see the number of commits since January 2015. Our latest version, 1.2, which I'll be talking about in a few minutes, had 5,000 commits associated with it and a 50 percent increase in unique contributors, which is just phenomenal. All of this is really driven by the community. We are in the top 0.01 percent of all GitHub projects, there are 800-plus unique contributors, and there are over 1,200 external projects today that are based on Kubernetes. This slide shows you just a small subset of the companies that are using Kubernetes today, as well as all of our great community partners who are contributing on a daily basis. Thank you for making this happen.
I'm going to provide a two-minute accelerated overview of Kubernetes before I get into the 1.2 features, just to make sure we're all level set. Kubernetes manages and orchestrates containers, and developers typically find three key benefits with containers: portability, repeatability, and abstraction. Portability and repeatability come from the fact that each container image includes everything that's needed to run the process it represents. But the big win really is around abstraction. By abstracting the app from the operating system and the machine, you get a clean separation of concerns between application development and maintenance on one side and infrastructure deployment and operations on the other, which really translates into agility for the developer. You start with containers and you containerize your application, but then you have challenges around how you scale, how you update, and how you manage the hard parts of building distributed applications, and that's where Kubernetes comes in.

Kubernetes takes an opinionated view of managing containers, and we introduce a set of primitives: pods, replication controllers, services, labels, and so on. A pod is really the unit of deployment. It's a collection of one or more co-located containers and volumes; containers in a pod share the network and inter-process communication namespaces. We like to recommend that when people run applications in containers, a container should do one thing and do it really well, but there are cases where containers have related behaviors and need to share things. For example, look at a web application and a log roller. Rather than put both of these in the same container, because you may have to scale them independently, why not put them in separate containers? But at the same time you want these containers living close to each other so they can share resources, and that's where a pod comes in. A web-serving container and a log roller can live within the same pod, share the same IP address, share the same volume, and ultimately act as one. And all the containers in a pod live and die together, which makes it really good for state management and so on.

Pods have labels. A label is a key-value pair used to identify the role of a pod, or what the pod is actually doing. So for example, I could have a label called role and say this pod is a web server or this pod is a front end. Labels are then used to select the various objects you create, through what we call label selectors. In this particular case, a replication controller is a spec that you create to tell Kubernetes the desired state of the pods that should be running. The replication controller ensures that a specified number of identical pods is running at any one time. It can be used for scaling, both manual and automatic, for health management, for rolling updates, et cetera. A replication controller takes a label selector, so for example I can say I'd like five replicas of the pod with role=frontend, and it will go off, create five replicas of that pod, and then ensure that the actual state is always equal to the desired state. And you can do that for many pods: you can have multiple replication controllers that represent pods that are front ends, pods that are mid-tier, and pods that are back ends.
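To make the pod, label, and replication controller ideas concrete, here is a minimal sketch of what those specs look like; the image names are hypothetical and exact fields can vary by release:

```yaml
# Hypothetical pod with a role label; two containers sharing a volume
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    role: frontend
spec:
  volumes:
  - name: logs
    emptyDir: {}
  containers:
  - name: web-server
    image: nginx                    # illustrative image
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-roller
    image: example/log-roller       # hypothetical sidecar image
    volumeMounts:
    - name: logs
      mountPath: /logs
---
# Replication controller asking for five replicas of pods with role=frontend
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
spec:
  replicas: 5
  selector:
    role: frontend
  template:
    metadata:
      labels:
        role: frontend
    spec:
      containers:
      - name: web-server
        image: nginx
```

You would submit these to the cluster with something like kubectl create -f frontend.yaml.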
You basically define that spec, that desired state, and you submit it to Kubernetes to ensure it's actually implemented. Now that you've created your replication controller and you have an application running a number of replicas, how do you expose it to the outside world? We introduced the concept of a service, which is an abstraction that defines a set of pods and how to access them. A stable IP address and a corresponding DNS name are generated and associated with that particular set of replicas, abstracted behind the service implementation. Requests coming in externally can now discover this application and be routed automatically to the replicas running across the cluster. And in this particular case, I can leverage my label selectors to specify which pods I'd like to expose. When I put all this together, I now have a microservice-oriented architecture where my application is composed of multiple tiers that scale independently, are exposed independently, and can communicate with each other.

So what's new in Kubernetes 1.2? The first thing we introduced is what's called multi-zone clusters. Previously with Kubernetes, your application was limited to a single zone. With multi-zone clusters, we've made it possible for your application to span multiple availability zones. Basically you create nodes in each of the zones, you submit a replication controller spec with the number of replicas your application needs, and those pods get distributed across the different zones. Now your application is highly available because it's distributed across multiple failure domains. Multi-zone clusters are GA as of Kubernetes 1.2, with fully automatic support for both Google Compute Engine and AWS, including on the Google Container Engine service. It requires some manual configuration on-prem and on other clouds, but we're working on that. Every node created within a zone gets a label that associates it with that particular zone, which makes it possible for the Kubernetes scheduler to schedule pods onto nodes in the appropriate zone.

The second feature, which is in beta in Kubernetes 1.2, is the concept of a Deployment object. Prior to Kubernetes 1.2, a rolling update was very much a client-side action, imperative in nature. And Kubernetes doesn't like imperative things; we believe in being declarative, in being able to create server-side objects that have a controller associated with them that ensures desired state and actual state are equal. So we introduced the concept of a Deployment, which lets you define an object that specifies how you want your replicas to be updated. Rather than saying I want to submit a new replication controller with a number of replicas using this particular pod, I can now define this in a spec called a Deployment object, submit it to Kubernetes, and Kubernetes will constantly maintain that object and ensure actual and desired state are equal. Then if you want to make changes, do a rolling update, or change your application, you can update that Deployment object, and Kubernetes will pick up the changes and roll the update through.
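As a rough illustration of the service and Deployment objects just described, here is a hedged sketch; the API group shown for the Deployment reflects the beta of that era and may differ in other releases, and the image names are hypothetical:

```yaml
# Service exposing the pods selected by role=frontend behind a stable IP and DNS name
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    role: frontend
  ports:
  - port: 80
    targetPort: 8080
---
# Deployment: a declarative, server-side description of the desired rollout
apiVersion: extensions/v1beta1      # beta API group in the 1.2 timeframe; may differ by release
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 5
  template:
    metadata:
      labels:
        role: frontend
    spec:
      containers:
      - name: web-server
        image: example/frontend:v2  # hypothetical image tag
```

Changing the image in the Deployment's pod template and re-submitting the object is what drives the server-side rolling update described above.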
The other feature that has been around but that we actually took to beta in Kubernetes 1.2, and which is also available now in Container Engine, is the notion of DaemonSets. Suppose you want to run a pod on every node. Prior to DaemonSets you could do it, but it required quite a bit of administrative overhead. Now you can deploy a DaemonSet and have pods deployed on every node in the cluster, or leverage a label selector to deploy those pods only on the specific nodes that match a particular label.

Horizontal pod autoscalers went GA in 1.2. For those of you who aren't familiar with the horizontal pod autoscaler, this is different from node autoscaling; this is leveraging metrics like CPU utilization to have your pods autoscale within your cluster. Today we only support CPU utilization, but we're looking at adding more metrics, including custom metrics, which are alpha in 1.2; in 1.3 you'll start seeing more metrics that you can do horizontal pod autoscaling on.

ConfigMaps are something we also introduced in 1.2. They enable you to bring configuration files, for example, into a pod's namespace, allow the pods to pick up configuration information in a deterministic fashion, and manage those configurations through the Kubernetes API. This improves the administration of the application in a big way, because you're not having to rely on other means of persisting data to manage the configuration for the application represented by your pods.

We also took Ingress to beta. Ingress is our object that abstracts away some of the complexity of routing incoming traffic to backends using a layer 7 load balancer. With Ingress you can define rules and say I'd like requests coming from here routed to this particular backend. You can use the HTTP Host header or define URL paths that map incoming requests to the appropriate backends within your application. We support HAProxy, NGINX, and the native load balancers on both AWS and Google Compute Engine; more implementations are in progress. In 1.2 we also announced support for SSL, which is great. This feature is beta in 1.2, and we're continuing to invest in abstracting away some of the complexity associated with URL path mapping and so on. (A rough sketch of a ConfigMap and an Ingress appears a little further below.)

Network isolation is another feature that went alpha in Kubernetes 1.2. This is the ability to restrict pod-to-pod traffic across namespaces. You can have pods living in particular namespaces and say: I don't want any traffic flowing between these two namespaces; these pods can't communicate with each other. This is something our customers have asked us for, and we've introduced it as alpha in 1.2.

Scalability has been a top ask for a lot of customers. Prior to Kubernetes 1.2 we tested internally with 250 nodes; with 1.2 we've now tested with 1,000 nodes. So we support that, and it represents up to 30,000 pods within that cluster. Again, this is what we've tested; there's nothing stopping someone from going in and spinning up 2,000 nodes, we just haven't tested against that. We're continuing to invest in this area: from 1,000 we want to go to 2,000 and higher and higher, and this is all based on customer feedback. Some of the other performance work has been to optimize the iptables kube-proxy path.
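Going back to ConfigMaps and Ingress for a moment, here is a minimal, hedged sketch of each; the host and service names are hypothetical, and the Ingress API group shown reflects the beta of that era:

```yaml
# ConfigMap holding a configuration file managed through the Kubernetes API
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app.properties: |
    log.level=info
---
# Ingress routing incoming requests by host and URL path to backend services
apiVersion: extensions/v1beta1      # beta API group in the 1.2 timeframe
kind: Ingress
metadata:
  name: example
spec:
  rules:
  - host: shop.example.com          # hypothetical host
    http:
      paths:
      - path: /cart
        backend:
          serviceName: cart         # hypothetical backend service
          servicePort: 80
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
```

The ConfigMap's contents can then be mounted into a pod as a volume, which is how the pods described above pick up their configuration files.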
There's really no measurable hit to CPU, throughput, or latency across that kube-proxy path. We've optimized the kubelet, with a 4x reduction in CPU and memory usage, and there are some other optimizations listed here, around a binary-encoded API, caching, and parallelization in the scheduler, that we're planning for the 1.3 timeframe. So those are the main areas we've invested in with 1.2, and at this point I'm going to hand it over to Clayton to talk about the investments we're making in 1.3.

Great, thank you, Alan. Can everyone hear me? Absolutely, yeah. All right. Okay, so as Alan covered, there was a lot of great work in Kubernetes 1.2, and I think you'll notice a theme, as Alan was alluding to: we're continuing a lot of that work into Kubernetes 1.3 and 1.4. One of the key goals for Kubernetes 1.3 was to have a very predictable release cycle, because a predictable release cycle helps ensure that we can deliver these features, get feedback, and then move on. For 1.3 there were a bunch of goals, but I boiled them down to the three that I think are most relevant for people interested in what's coming, as well as for people who are looking to decide whether to move applications onto Kubernetes.

The first of those short-term goals is to support more diverse application workloads. A lot of the buzz in the container space has been about being able to take pretty diverse applications and run them inside of containers. But the next step, which is really important, is: do those application workloads actually run well in clusters, and does Kubernetes have the tools to enable that? Performance is the second huge goal. The better the cluster performs, the less we have to worry about whether we can scale to meet the most demanding needs. There are just a lot of opportunities to improve overall responsiveness, and as quite a few people are famous for saying, performance is a feature. And finally, it's not just about running the workloads, and not just about supporting larger workloads; it's also about being able to understand what's actually happening in the cluster, and for the cluster to be even better able to react to changing conditions and keep those application workloads healthy.

There's a pretty massive roadmap of features we want to add to Kubernetes: capabilities that come from Google's long experience in the container and cluster management space, as well as contributions from the wider scheduling and cluster management communities like Mesos, which give us an opportunity to set a direction and keep improving how well those applications run. A feature that has received a lot of attention and that we're targeting for 1.3 is PetSet. The name is somewhat of a working name, but the basic idea is that it's pretty easy to run a scaled-out cluster of a bunch of independent containers that don't have to talk to each other. If the containers can come and go, you can make a lot of assumptions around load balancing and keeping those pods up, which makes it fairly easy to say: yep, we can run 30 of these, give or take five or ten. But then you flip that around and say: it's not just scale-out, horizontally scalable, stateless applications that we want to run.
There's a whole other class of applications out there in the world which are stateful, which have certain needs regarding network identity and the ability to join clusters, and we feel it's extremely important to support those at a fundamental level in Kubernetes. So the work coming in Kubernetes 1.3 will be to deliver an alpha version of PetSets. A PetSet is a lot like a replication controller or a DaemonSet in that you define a template for what you want your pod to look like. But unlike the replication controller, which is optimized to create multiple copies of your application where no individual copy is special, the PetSet is intended to ensure that each of the pods that gets created has a unique identity that's stable over time. An example would be if you're running a ZooKeeper cluster: you need at least three members to have a quorum and remain highly available even if one fails. You need persistent disks so that ZooKeeper can store the cluster state. And if one of those instances dies, you want to bring back an instance that actually matches not just the volume but also the network identity of that ZooKeeper instance. In the drawing down below you can see a very simple example: you've got member one, two, and then three. The PetSet makes it easy to ensure that you have three instances of ZooKeeper running, but it also does a little bit of extra work to ensure, as hard as it can, that there's only ever one of the first ZooKeeper instance, and that the first ZooKeeper instance actually knows it's ZooKeeper one as opposed to ZooKeeper two or ZooKeeper three.

There are a ton of edge cases in this space generally. Our goal for Kubernetes 1.3 is to set a foundation, and then enable much more sophisticated cluster management inside of the images, so that people can realize the goal of having a pre-canned ZooKeeper cluster that they can scale dynamically, that can recover from failure, and that is in general a low-touch operation for administrators. This applies not just to things like ZooKeeper but also to things like databases, which want to ensure that there's only ever a single MySQL master instance running, and that when it dies and comes back, the IP address or DNS name of that MySQL master doesn't change. For instance, if you have slaves or replicas built around MySQL, even though the individual pod comes and goes, there's still something representing that unique identity of the MySQL master. Right now there are examples going up; you can follow the link to the pull request on this slide, and we're working through a number of examples that we expect to continue throughout the 1.4 timeframe. (A rough sketch of what a PetSet spec might look like follows below.)

Along with PetSets, one of the scenarios we think is really important in Kubernetes is being able to take applications you've containerized and adapt them for the Kubernetes environment. For instance, you're a developer working on a container locally, or you're taking a container that comes from someone else, say from a vendor or from somebody else in your organization who's providing an image, and there's some initialization you need to do, or you want to wait for a number of other services to be available before you actually start your containers. We are adding the init container concept as a way of giving application developers a tool for separating out those responsibilities. The easiest example is waiting for a dependency to be ready.
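As that sketch, here is a rough, hedged illustration of what the ZooKeeper scenario might look like as a PetSet; the alpha API group and field names may well change as the feature matures, and the image name is hypothetical:

```yaml
# Hedged sketch of a PetSet for a three-member ZooKeeper ensemble
apiVersion: apps/v1alpha1           # alpha API group; likely to change as the feature matures
kind: PetSet
metadata:
  name: zk
spec:
  serviceName: zk                   # headless service intended to give each member a stable DNS name
  replicas: 3
  template:
    metadata:
      labels:
        app: zk
    spec:
      containers:
      - name: zookeeper
        image: example/zookeeper    # hypothetical image
        volumeMounts:
        - name: data
          mountPath: /var/lib/zookeeper
  volumeClaimTemplates:             # each member gets its own persistent volume claim
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
```

The idea is that member zk-0 keeps its own volume and its own network identity even as the underlying pod is replaced.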
So each init container gets a chance to run before the rest of the containers in the pod are started. If I have a pod with three application containers and two initialization containers, the first init container runs to completion; if it succeeds, the second initialization container runs; and if that succeeds, the rest of the containers are started. On the other hand, if the first init container fails, it blocks the startup of the second one; we won't ever start the second initialization container or the regular application containers until each one before them completes. This will give application authors and administrators tools for splitting out responsibility. You can have an image that has all of your tools for waiting for service dependencies, or for talking to a central service discovery registry and registering yourself, or for downloading config or secrets from external configuration sources, and do that before the application containers start. And because these are just regular containers in the pod, they can share volumes, so you can write data to disk that your application containers can then use. We think of this as an enabler for a number of types of features, including PetSets, because the initialization container can help take some of the load off of the etcd image or the ZooKeeper image, for example, by doing initialization on disk of what the desired configuration would be. This will also be alpha in Kube 1.3. Our goal is to get more feedback, look at the needs of both PetSets and regular applications, and use that to guide where the init container concept goes.

Another feature coming in Kube 1.3 is the ability to run recurring jobs on the cluster. Today there is a Job concept, which, as Alan mentioned, has graduated into a beta state, I believe. A Job is the idea of taking a pod and running it until at least one succeeds, or at least N succeed. The scheduled job is a job that is run at regular intervals. The scheduled job is actually a template for another job: just like a replication controller is a template for a pod, a scheduled job is a template for a job that is run. We're using the crontab syntax to start, for anybody who's familiar with that: you can specify minutes, hours, days, and so on. The goal is that you'll be able to easily set this up from the command line in both Kubernetes and OpenShift, to say I want to set up a container to run at this specific interval, and to customize what that container does. So as an operations shop, if you want to run recurring tasks nightly, or on an hourly basis, for instance to go clean things up on the cluster or run a number of checks, that should be much easier to accomplish once scheduled jobs land. (A rough sketch of both an init container and a scheduled job appears at the end of this segment.)

And finally, the fourth of the biggest application-focused features coming in Kubernetes 1.3: as we've talked about really from the beginning of Kubernetes, a goal is to be declarative about the system, which allows config management to be layered on top of the system, as well as making it easier to reason about how all the different components of the system work together.
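Picking up the init container and scheduled job ideas just described, here is a hedged sketch of both. Field names and API groups are illustrative: in particular, the 1.3 alpha expressed init containers through a pod annotation rather than the first-class field shown here, and the scheduled job kind and group may change. The images are hypothetical.

```yaml
# Pod whose init containers run to completion, in order, before the app container starts
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  initContainers:                   # in the 1.3 alpha this was expressed via a pod annotation
  - name: wait-for-db
    image: example/wait-for         # hypothetical image that blocks until a dependency is reachable
    command: ["sh", "-c", "until nslookup db; do sleep 2; done"]
  - name: fetch-config
    image: example/config-fetcher   # hypothetical; writes config to the shared volume
    volumeMounts:
    - name: config
      mountPath: /config
  containers:
  - name: app
    image: example/app
    volumeMounts:
    - name: config
      mountPath: /etc/app
  volumes:
  - name: config
    emptyDir: {}
---
# Scheduled job: a cron-style template for a Job run at a regular interval
apiVersion: batch/v2alpha1          # alpha API group in this timeframe; name and group may change
kind: ScheduledJob
metadata:
  name: nightly-cleanup
spec:
  schedule: "0 2 * * *"             # crontab syntax: 2:00 AM every day
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: cleanup
            image: example/cleanup  # hypothetical cleanup task
```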
A pretty important concept that underlies a cluster is that it's not enough just to have the ability to keep applications running; you want to minimize the impact to the cluster when applications are restarted, and minimize the impact to the application when the cluster needs to take action. The concept that's coming in alpha in Kube 1.3 (and the timing on this is a little uncertain; we've got some of the basic API objects, but many components of the system may not be taking advantage of it in 1.3) is to have a central place for components to check whether a particular action would impact an application in a way that would violate the admin's or the user's desired policy, and also a place for the cluster to record the impact of restarts and of nodes going down on the application, so that future interactions from that component take that into account. An example might be an application that defines that it needs three nines of uptime, and the time each month that it can be down is recorded and tabulated. If an administrator wants to drain a node, to take it out of operation and bring a new node in its place, and there happens to be a component of that application running on it, the admin will be able to check, through the tooling, whether this would violate the availability of the application. The goal, as we move forward, is to make more and more components of the system automatically take disruption budgets into account, which really is the first step in broadening policy. It's not just about saying I want five of these running; I want to ensure that, no matter what, across the course of a week or a month or a year, I'm meeting some sort of goal for the service, whether it's SLOs or SLAs, depending on who's providing the service, and the cluster is working with the application author to make sure that disruption is minimized. This also sets up a number of other elements of the cluster to react more intelligently, just by virtue of having a central place to aggregate how much disruption an application is willing to tolerate.

There are a ton more application features coming, but ultimately, if something's slow, it doesn't really matter how many features it has, so performance is a big focus for us. One of the biggest changes that will start showing up in the Kube 1.3 timeframe and into Kubernetes 1.4 is the next version of etcd, which is an evolution of the work in the current etcd2 into a much more powerful key-value store. The CoreOS etcd team and some of the folks within the Kubernetes community have worked closely together to take a number of requirements and pieces of feedback and use them to guide the evolution of etcd. Some of the easy wins: the underlying storage model of etcd will be much more efficient, keys and values will be treated slightly differently, and values will be stored on disk, so the memory requirements of etcd should in general drop. We'll be storing binary values on disk instead of text values, which opens up some other optimizations I'll talk about in a little bit for efficiency and for improving the amount of data we can store in a single etcd instance.
There's also a new feature coming that we don't plan to take advantage of in Kube 1.3, which is the ability to do very limited read-write transactions. There are a lot of performance optimizations in Kubernetes around scheduling, as well as around the efficiency of some of our core update operations, where this feature will allow us to get some very significant wins as we add new features in the future. The watch function is really a fundamental etcd concept: the ability to connect to an etcd server and see all of the changes that have happened since a very precise point in time. Anyone familiar with the Kubernetes API knows we depend on this feature a lot; watch is a fundamental property of the Kubernetes API as well, which means it's easy for administrators and developers to build integrations that react to things happening on the cluster in a very efficient, low-latency manner. The watch function in etcd3 has been turbocharged, and we're expecting to see some pretty significant efficiency gains, which will mean the cluster gets more reactive. We'll see a lot of wins in the latency between the time a pod is scheduled and when it starts, and in general we'll be able to increase the number of watches altogether, which will expand the range of integrations people can build against the Kubernetes cluster and allow us to support even more control and integration points with third-party software. The timing of the official etcd3 GA release is roughly congruent with the 1.3 timeframe, but I expect there'll be some soak time we want to put in to make sure we can recommend it as a production configuration for folks in the Kube 1.3 timeframe.

A second big optimization, as Alan alluded to, is being more efficient about how we store data in etcd, as well as how we transmit data back and forth between all the components in the Kubernetes cluster. Kubernetes talks to itself a lot: the nodes talk to the API servers, the scheduler talks to the API servers, the controllers talk to the API servers, and any win in how quickly we can take API objects and convert them onto the network and back off the network has a multiplying effect through the rest of the platform. So we're planning on introducing a serialization format that is optional alongside JSON for the majority of Kubernetes objects, for communication between components in the cluster. For people doing integrations, we'll probably publish a protobuf spec that you can write against if you want to use any of the existing protobuf implementations in the many different languages out there. Generally it's for efficiency. We're hedging our bets a little bit here: we're going to use this internally for storage in etcd as well as for communication between components, but we'll probably recommend that people be a little cautious about using it as clients, just so we have enough time to make sure we get the right model in place. We're actually generating our protobuf schema from the internal objects, and there are some fiddly details; we don't want people to be disrupted. So expect to see a lot more about this over the next couple of releases. And finally, on the performance front, there's a whole host of optimizations that lots of different people, from a lot of different companies, and individuals, have already made to the cluster.
There are some real big wins across the board, so we're very optimistic about Kubernetes 1.3. We're not going to give any early numbers, but the improvements that have been made are very promising, and even better, those performance improvements are going to allow us to keep adding features and keep improving the ability of the cluster to run itself.

I'm going to jump through a few of these remaining slides, but along with better applications and more performance, we really do want to make the cluster easier to run and manage. Federation, which is the idea of having an API that looks a lot like Kubernetes for talking to multiple clusters, with very specific use cases that let you replicate pods across a set of clusters in a consistent way, as well as services that allow you to access them globally, is targeted for alpha in Kubernetes 1.3, and there's a bunch of great work going on. It is a pretty big feature, so I'm going to hedge a little and say we expect to get all the foundational pieces in Kubernetes 1.3, but some of it will trail into Kubernetes 1.4, and we're going to take baby steps on this. The goal really is to enable people who have clusters in multiple geographies to easily replicate applications across them; it will not be the full set of Kubernetes features magically being replicated across every cluster, and that's where future work will focus.

Eviction work will help make it possible to overcommit the cluster and get better resource efficiency. For those familiar with how Kubernetes handles pods today: depending on what resources you request, you get put in a quality-of-service tier. The three quality-of-service tiers are best effort, which means you take what you can get; burstable, which is pretty much the normal scenario for anything that tends to want to stay on one machine and get a certain amount of CPU and memory; and guaranteed, which is the highest tier, for the most important services in your infrastructure, the ones that can tolerate the least disruption. The work being done in Kubernetes 1.3 will look for conditions on the node where, due to overcommit, too much memory or too much disk is being used, and then kick pods off the node, starting with the lowest quality-of-service tier, best effort, and moving up to higher tiers as necessary; I believe we have discussed evicting burstable, but not guaranteed, pods from a node. This sort of reaction will eventually hook into disruption budgets, but there are also other integrations into the scheduler, so that if you get booted off a node, the scheduler doesn't just put you back on the same one when the replication controller spins you back up. There are a number of iterative steps we'll take over the next couple of releases, but this will allow the cluster, and nodes, to be much more reactive about their health.

Cascading deletion is something that's been requested by quite a few people building user experiences around Kubernetes, as well as by users who want to easily manage the cleanup of the diverse set of resources that can compose a Kubernetes or OpenShift application. Cascading deletion will be a way, on a pod, replication controller, deployment, service, route, or any of the concepts in both Kubernetes and OpenShift, to declare who your parent or owner is, and when the parent gets deleted, you get deleted as well. For instance, in OpenShift we're looking forward to this feature; it'll allow us to define templates where your parent
service might be your front end, and if there's no point in your database sticking around once the front end gets deleted, you can define a relationship between the database service and the front-end service; when you delete your front-end service, it will go and ensure that all the rest of the resources are cleaned up. So we look forward to taking advantage of this to make that create-and-delete, iterative flow much more efficient for regular end users.

Network policy is something Alan described a little bit. Policy is getting more sophisticated, and we're hoping to get some of the first API objects that represent those more sophisticated policies into Kubernetes 1.3; a number of the network vendors, including OpenShift SDN, are hoping to take advantage of this as quickly as possible. In OpenShift SDN we have the multi-tenant SDN plugin today; we'd like to transition that so it uses the native Kubernetes policy object, and then layer on top of that a lot of the advances we've been discussing in the network SIG around improving your ability to isolate individual namespaces, or even components within those namespaces, to give you the security policies you want on your applications.

Again, there's so much coming and being worked on, across such a huge range of collaborators, that I couldn't even fit all of it in. The pod and node metrics API and Heapster becoming more official, so that it's something you can rely on and expect to be stable over time, is being worked on. There's the downward API for resource limits, so that pods can get access to the amount of memory or CPU the application has available. Seccomp support, which finally merged into Docker quite recently, will allow us even deeper control over what applications can do with regard to syscalls, which in OpenShift lets us deliver a much better multi-tenant experience and closes off another potential avenue of attack if you want to have multiple tenants on the same machine. There are a lot of usability improvements in kubectl, the AWS cloud provider has gotten a ton of work, and on the storage side we've been shuffling some things around to make them easier to maintain; I would expect in Kube 1.4 to see some of the really exciting stuff with storage classes start to come into play, so we'll have more on that when we start talking about Kubernetes 1.4. So that is all for me, Diane.

All right, well that was a lot, and it brings me to probably my first question: nobody's talked about dates or timelines for the 1.3 release. Do you have a guesstimate for when that's supposed to be out the door? Our goal is to start freezing the code in the next week and a half; there'll be the traditional slush period after that, where features are supposed to be in and we're trying to finalize the details. The rough schedule has it at about the end of June, and we're trying to get a little bit better about how rigidly we hit that date, so you might want to hedge your bets a little, but I think we're looking at the end of June or early July.

We did have one question which takes us back a little bit to the disruption budgets, asking how to measure the impact on an application: is it based on quality-of-service level, or just the number of restarts and uptime? That was Elon, and if you'd like to follow up, just unmute yourself. Here we go.
Hi, this is Elon, can you hear me? Yes, we can. I was just curious: when you talk about these disruption budgets, how do you measure the impact on applications? Are you talking about a certain quality-of-service level, or just the number of restarts or the uptime? The focus in the initial stages is on uptime and the availability of the application, so for a more available application the impact of downtime would be measured appropriately. Because it's very early, and while we have a number of discussions and long-term plans to greatly improve the number of things a disruption budget can take into account, the focus initially will be on the uptime of a particular pod, from the time it goes ready to the time it goes into graceful termination. I see, thank you.

All right, and Mark is asking: are any updates to identity and access management in Kube coming? So I did have one more slide. There are a number of features from OpenShift that we've been working in the Kubernetes community to get upstream, with participation from others, Google and CoreOS in some cases, who are helping to move those features upstream. Alan mentioned Deployments and Ingress, which actually have parallels to OpenShift's deployment configs and routes, and we think of those as places where we're going to continue to evolve. In Kube 1.3 there are a couple of core goals. Pod security policy, which is the equivalent of OpenShift's security context constraints, will go into Kubernetes as alpha. It will not be enabled by default, but our goal is, over the course of the 1.3 and 1.4 timeframe, to get it to a state where, by default, you can start up a Kube cluster and expect users to be constrained to whatever the default policy would be: for instance, not being able to access host paths, or not being able to access the host network or run privileged, et cetera. And then there's OpenShift authorization, which is the roles and role bindings; Eric from CoreOS is actually in the process of helping move that into Kubernetes. The goal is that, over time, out of the box in Kubernetes you'll gradually see the full OpenShift RBAC system. We're incrementally dripping it in so that we get soak time and testing, and we can take it as an opportunity to do a V2 of authorization and add in some of the things we've learned from running OpenShift RBAC for the last year. There are a couple of other authentication-related features, but I would say the progress on pod security policy and authorization are the biggest ones. Great.

There's another question, from Alec: what are the plans around HTTP/2? That is a great question. There is some enablement going on right now. Tim St. Clair from Red Hat and a few others from his team have been measuring the impact on the API server side of enabling it, so that clients can reuse connections between the masters and the servers. In terms of applications running on the cluster, anybody should be able to take advantage of that. There's been some discussion of it for Ingress, so any of the natively HTTP/2-capable Ingress controllers would be able to take advantage of it. And then the last holdout, really, is that the exec and port-forward APIs in Kubernetes leverage the SPDY protocol, and we've been working with the maintainers of the Go HTTP/2 library to ensure that we can reuse it in order to move those protocols forward.
So there's a lot of movement across the board. I would expect it to move into Kube 1.4 as well, or to happen over the Kube 1.3 and 1.4 timeframes. All right. Well, those are the questions that we have in the chat. Is there anything else you want to add, Alan or Clayton? No. Oops. I'm good, actually. Here comes one more question: any audit log improvements? In version 3.1 there's still no audit log. So one of the things happening in OpenShift is that we did a quick initial implementation of an audit log, which at least on paper is targeted for OpenShift 3.2.1. There's a discussion that we'd like to move that into Kubernetes, and there have been a number of discussions. We were actually somewhat waiting to get the OpenShift authorization work upstream, because that gives us a little bit more of a foundation to include information about what the user can do and what their effective role is. But I would expect to see something like that in Kubernetes and OpenShift very soon.

There's another follow-up from Alex: in the long term, will HAProxy be removed from OpenShift, he's asking? No. The plan, and this is more of an in-progress thing, is that as we add features to Kubernetes, we really want to do it in a way that minimizes impact to OpenShift users. So for instance, with routes, we worked upstream on Ingress to ensure that Ingresses were a little bit more flexible. There is still ongoing work to improve routes, including features coming in 3.3, like A/B routing. And our goal over time is to converge the API objects, so it's a seamless transition for users. For instance, if you're using routes or deployment configs today, our goal would be for that to be seamless for you, and for you to be able to take advantage of Ingress without any impact as those features are enabled. As another example, Kubernetes Deployments are simpler and implemented in a slightly different way than OpenShift deployments: they don't allow you to run hooks or custom deployment processes. Our goal is to just keep adding those features to Kubernetes until there's parity. And for end users, you can keep using deployment configs or you can keep using Deployments, and there's no impact one way or the other; they will be seamlessly interoperable.

Cool. So there's another question: what is the suggested way to install OpenShift Origin on top of an existing Kubernetes cluster? That might be a topic for another whole conversation. I think that's a good question, and it's a tough question. Of the features that OpenShift provides, there are features that run on top of Kubernetes, like deployments and builds, and then there are features that are deeply integrated into Kubernetes, like the security work around security context constraints, and the security controls around builds and services that, in a multi-tenant environment, prevent malicious parties from going in and doing things that are unsafe on the cluster. One of our goals over the 1.3 and 1.4 releases has been, where possible, to upstream those features. The sticky bit really is that it's totally possible to run OpenShift on top of Kubernetes today, but you do have to give up some features in order to do that. And we're looking to converge over the next couple of releases so that all of the core security capabilities can be layered on top of a Kube cluster, so that even if they're not in Kube directly, you'd be able to enable them without impact and without having to reinstall Kubernetes.
So it's on the roadmap, but it is an evolving target, really just to be able to ensure the level of security that we want. There are a lot of hook points that aren't quite totally flexible yet in Kube.

Chris John is asking: are there any improvements in A/B testing support? I think that's very... Yeah, from an OpenShift perspective we plan to add support in routes for A/B services. The first pull request on this was actually just opened against OpenShift. A route points to a service today; you'd be able to specify additional target services, give them percentages or weights, and have that be reflected by the HAProxy router. We have discussed a little bit of that around Ingresses, but I think it's not going to be a 1.3 goal, because we had already locked in Kube 1.3 before we started working on the feature, so I suspect it will eventually come to Ingress.

Well, we're almost at the end of the hour, and I hope everybody found this as helpful as I have. We'll repeat this, I think, after the next release; doing present and future as a format is actually quite good, so you can figure out where we're actually at and where we're going. This has been incredibly helpful, Clayton and Alan, so thank you both for coming. We'll be posting this recording up very shortly on blog.openshift.com and also on YouTube, so you can look for it, and I'll post it to the mailing list for OpenShift Commons. If you have further questions, you can post them to the OpenShift Commons mailing list and we'll send them along to Clayton and Alan and the other folks.

Clayton, maybe one last thing I would ask: if someone wants to get involved in the Kubernetes project, or to have another feature added in, what's the best way for them to connect? So what we've tried to do, and maybe Alan can speak to this as well, is be a little bit more formal at the beginning of each Kubernetes release: working with and soliciting feedback from the community, identifying people who might want to contribute features, and trying to shepherd them through the process, so that even if you aren't able to directly contribute a feature yourself, your voice is heard and you're able to get your input in. The Kubernetes SIGs are a big part of that. And for people who do wish to contribute features, we've been working a lot to try to streamline how easy and approachable it is to understand the process, and to make it more of a community-owned roadmap, not just a community-influenced roadmap. I'll second that as well. The SIGs are a great way to participate in the community. We have SIGs that meet on a weekly basis, so go ahead and check out the SIGs, and if there's one that interests you, join. And if there's a particular SIG or topic area that you feel isn't represented, please suggest it and we can look at starting up a new one.

Perfect, well, I think that's a great way to end. Again, I'll thank you both for taking part in this Commons briefing. We will have it uploaded shortly, and we'll grab the slide decks from both of you and have those available as well. So thanks again, and we look forward to doing this again after the next release.