Good morning, everyone. My name is Clayton Coleman. I'm the architect for OpenShift and Kubernetes at Red Hat. I've been working on OpenShift for almost five years now, and I've also been working on Kubernetes since the very beginning. As I put this slide up, it occurred to me that there are probably a lot of confused people who work at these conference centers, watching everyone going in and out of these buildings with all this container-related stuff. They must think the tech industry is suddenly part of the shipping industry, or that this is a longshoremen's conference or something. So I'm going to think of everybody in the audience as longshoremen as I talk to you about this. If you see me giving you strange looks, it's because you're big, burly people throwing around boxes on a dock somewhere. So next slide, please.

As I said, Diane asked me to come up and talk about the features that are in OpenShift 3.5, which is based on top of Kubernetes 1.5, and to talk a little bit about where we're going. Unfortunately, in 30 minutes it's almost impossible to cover more than a few topics. We have a massive list of features coming in OpenShift 3.5, which is a stabilized version of Kubernetes 1.5.2, dot, dot, dot, with a whole bunch of patches that we carry for things like multi-tenancy and security. So I wanted to focus on four main themes today that cover what we're doing in OpenShift 3.5 and Kubernetes 1.5, what's happening in Kubernetes 1.6 and what will happen in OpenShift 3.6, and a few things that might happen in the releases after that. Next slide, please.

So if there are four things that really matter from an OpenShift perspective, and from a Kubernetes perspective as well, they're these. Community is extremely important to Red Hat. It's extremely important to everyone who works on the OpenShift team, and to everyone who works on Kubernetes at Red Hat, at Google, at CoreOS, and at all of the companies that have come together in the Kubernetes community to build better tools for running applications. We all believe in that shared mission. It's been a real privilege to work with this community over the last two years and to see it grow and evolve, and I really hope to see it continue to grow over the next five, six, 10, 15 years. At some point, I hope to look back on Kubernetes, you know, 75.9 and say, wow, this is something that really changed computing and changed how we build and run applications at scale.

Security is one of the most important things when we think about the difference between out-of-the-box Kubernetes and OpenShift. It's incredibly important to us to provide something to that multi-tenant user base for applications, because when we think about what OpenShift has been focused on, it's about bringing organizations together to build applications. Obviously, to do that, you need a really fundamental approach to securing and managing how those application users interact. So, Aparna talked about RBAC. That's based on work that came out of OpenShift. Many of the same folks who are involved on the OpenShift side, along with many others in the community, helped bring it to Kubernetes, and I'll talk a little bit later about why that's so important.
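To make that concrete, here's a minimal sketch of the RBAC model: a role that grants read-only access to pods, bound to a single user in one namespace. The names here are illustrative, and the API group was still working its way through alpha and beta around the Kubernetes 1.5/1.6 timeframe:

```yaml
# Grant read-only access to pods in one namespace (illustrative names).
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: team-a
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# Bind that role to a single user, scoped to the same namespace.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: read-pods
  namespace: team-a
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

The point is that access is granted explicitly, per namespace, rather than everyone sharing one level of cluster access.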
But there are many other things in security that still need to be done. This is just the beginning of a journey. We want to build a platform that is as secure as anything that has ever existed, because we're going to be running all of these applications on top of it.

Reliability is fundamental to us. Part of the reason that OpenShift trails Kubernetes is to give it that extra soak time: to make sure that we're working in the upstream open-source community, working with Google and others, to take fixes that are coming out of production environments from people doing early upgrades, and to stabilize them, make sure that we've done reliability and performance and scale testing, build in the multi-tenancy, test the security, and verify. All of these things ensure that while we want to rush ahead with features, we actually get to a spot that is extremely stable and extremely reliable.

And finally, bringing new workloads. StatefulSets are another great feature that's just now getting out there and into a beta state where people can really try it. We expect to continue to push these new types of workloads and these new opportunities for different types of applications: not just your traditional three-tier web app, but really branching into other areas of computing, areas that are underserved in terms of the ability to rapidly iterate and deploy applications. Next slide.

So, this is the whole stack. I could talk for a day on any of the topics up here, so we're just going to selectively pick a few things out. If you have any questions, please find me later, or find some of our PMs, and they can talk to you for six or seven days about any of these items. So, next slide, please.

So, now I can talk about what's happened. These are just the high-level bullets. Every release, it seems like there are more and more features, so a key concern for us is ensuring that the stability of each release continues to increase, and that's actually becoming somewhat of a challenge. The more features and capabilities that exist in both Kubernetes and the ecosystem around Kubernetes, the more important it is for us to look out past the near horizon of the Kubernetes infrastructure to the other projects in the ecosystem, like the projects that are part of the CNCF that people are using today to build microservice applications, frameworks, new technologies, new backends. How can we bring those together in a holistic way? How do we, as Red Hat and as the OpenShift Origin community, bring together a wide and diverse range of tools and technologies and help them work really well together in a secure fashion? Next slide.

So, OpenShift 3.5 focused on platform reliability and iterating on the core experience. We tried to expand application support: StatefulSets are a tech preview in OpenShift 3.5, and we're trying to get them as stable as possible with a fairly long soak period, so that everybody gets a chance to really offer feedback on the direction. Again, it makes no sense for us to push something out there that doesn't match the needs of the applications people are building, so we expect that to be a dialogue.
And container security: there are a number of small items in this release around ensuring that we can articulate and communicate how you secure a platform, all the way from the OS level and the hardware level up through the individual layers of the stack. What are the different parts of security? How do you reason about how secure your containerized environment is? Because if there's anything we hear, it's that this is such a new space, there's so much going on, and it upends so many patterns of how people run applications, that if we don't take the time to make sure people understand how things are changing, we actually increase the security risk. There are huge advantages to standardizing how you build and run applications, but the risk that comes with it is that if you don't understand that standardization, there are blind spots in your security reasoning.

We've also done a lot of work in the 3.5 release with other teams in the community on the Container Development Kit, which is a VM that contains OpenShift within it. We made it easier to use, and there's been some great work by folks on the OpenShift evangelist team and in the community to give power users a way to run OpenShift more effectively on their local clusters, or on their laptops. Next slide.

In OpenShift 3.4, we shipped a tech preview of Jenkins integrated with the OpenShift platform. The idea was that everyone uses Jenkins to some degree or another, and we wanted to make sure there was just enough power and ease of use that you could start consuming Jenkins to run deployment pipelines. Jenkins is fairly fundamental to how many organizations build continuous integration and continuous deployment today. We wanted to build that bridge from OpenShift as a platform for developers to easily consume Jenkins pipelines, which is the exciting new change in how Jenkins is moving from the older jobs-based approach to a more declarative "here are the steps and stages" model: the ability to reason about not just individual parts of your build chain, but how you might start with code and a build, move to a test environment, do performance tests and fan that out into a matrix, and only at the very end deploy to production.

In OpenShift 3.5, we've gone back, and we have the foundations there to do integrated Jenkins, with integrated security, to spin up a Jenkins machine for you dynamically. But what we really wanted to focus on was how you can use this to accomplish the end goals. So there's a huge number of new examples for how to do blue-green deployments, taking advantage of the flexibility of Kubernetes, tying that into the build pipelines in OpenShift as well as the ability to trigger deployments when new images become available, and then an actual set of concrete examples around how you can accomplish some of these really dynamic and complex deployment patterns: canary deployments, blue-green, different ways of doing blue-green because everybody seems to have their own variety, and how you can accomplish each of these within the framework that already exists in Jenkins.

We've also made a number of refinements to the user experience, and next slide please, to the user experience around builds, and we've ensured that builds and deployments properly surface the appropriate errors up to users as necessary.
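To give a flavor of what those pipelines look like, here's a minimal sketch of an OpenShift pipeline build configuration with an inline Jenkinsfile. The build and deployment names are illustrative, and the openshiftBuild/openshiftDeploy steps come from the OpenShift Pipeline plugin of that era:

```yaml
apiVersion: v1
kind: BuildConfig
metadata:
  name: sample-pipeline
spec:
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        node {
          stage('build') {
            // kick off an OpenShift build and stream its logs
            openshiftBuild(buildConfig: 'frontend', showBuildLogs: 'true')
          }
          stage('deploy to test') {
            // roll out the newly built image to the test environment
            openshiftDeploy(deploymentConfig: 'frontend-test')
          }
          stage('deploy to production') {
            // gate production behind a manual approval
            input 'Promote to production?'
            openshiftDeploy(deploymentConfig: 'frontend-prod')
          }
        }
```

OpenShift recognizes BuildConfigs with the JenkinsPipeline strategy and can provision a Jenkins instance on demand to execute them.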
The flip side of having really good web applications is that you want stateful apps. Aparna touched on a lot of the details of this. The StatefulSet is really the translation of this idea: you've got these scale-out web services that don't have any dependencies and don't need any state, and it's really easy to create ten of them, throw away two, bring back four more; you don't really care which is which. When you're talking about scaled-out databases and scaled-out applications, it becomes very important to be able to reason about each of those members. In the bottom right, I have a quick example, and I'm not going to dive too deeply into it, but the idea is that if you have a cluster of something, whether it's a scale-out database like Galera or Postgres, or something like ZooKeeper or etcd, you might have a very specific set of cluster members that you expect to exist. You don't actually care about where each one is running, but what you do care about is that you can uniquely identify each member of that set, and if one of them goes away, you can safely bring it back.

The safety is actually really critical, and this was a big feature in Kubernetes 1.5 that we're continuing to focus on: how you, as an administrator, reason about what's happening on the cluster. The idea of "I'm just going to take this machine away and bring a new machine back, and the cluster is going to create this new instance of running containers somewhere, and it's just going to magically pick up all the data I had before" involves a large amount of trust. You're trusting that OpenShift and Kubernetes are going to take that instance of the database, detach the storage correctly, move it to a different machine, and not run two of them at the same time. That's really critical, and it's very easy to get wrong: you're supposed to have exactly three of something, and suddenly there's four. You no longer have the guarantee you had before, that two members could vote and override the third; instead you have members fighting each other.

In Kubernetes, with StatefulSets, we focused on making sure that there's a predictable process whereby we create only three instances. We bind them to storage, because obviously if you have to use local disk, you can run into issues when a machine dies, and you want to avoid that. This ties into the persistent volume support that's always been in Kubernetes and allows you to detach volumes and reattach them. Then there's the worst case, and this is where the functionality really stops today and where we want to continue: when something does happen and the cluster gets partitioned, one of those machines is off on its own and you can't actually talk to it to figure out whether it shut down correctly, the dreaded network partition. What Kubernetes does today is fail safe. It stops. It doesn't try to do something special and say, "oh, no, no, I'm pretty sure this is gone, I'm going to give you another one." That's how you end up with four instances when there are only supposed to be three.
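As a rough sketch of what that looks like in practice (the names, image, and sizes here are all illustrative; the API was beta, under apps/v1beta1, in the Kubernetes 1.5 timeframe):

```yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: galera
spec:
  serviceName: galera        # headless service that gives members stable DNS names
  replicas: 3                # members get stable identities: galera-0, galera-1, galera-2
  template:
    metadata:
      labels:
        app: galera
    spec:
      containers:
      - name: galera
        image: mysql-galera:5.7   # illustrative image name
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:      # each member gets its own persistent volume claim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

Each member has a stable name and its own volume, which is what lets the platform bring a member back safely with the same identity and data rather than minting an anonymous replacement.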
In future releases, what we want to do is take concepts that have existed for a very long time in the sysadmin world: fencing, shoot-the-other-node-in-the-head, and storage lockout from products like sanlock and Pacemaker. All of these concepts have existed for a very long time to make HA easy, or even possible, on traditional Linux systems. We actually want to help integrate those with Kubernetes and the stateful application frameworks. What that allows us to say is: if you're not running on a cloud provider, and you don't have access to world-class storage volumes, but instead you're building it yourself, how do you have the confidence that you can do what the big cloud providers do? You can detach that disk and ensure the data isn't going to get corrupted because two pods are writing to it at the same time. We're going to work on ensuring that network fencing is something that can exist at the platform level. These are incremental steps we'll take over time, but I'm very excited about the possibilities we have to make high availability for applications something that's innate and fundamental, to the point where you can rely on it the same way across your entire application portfolio, even for classes of applications that don't run well on container platforms today, and to give people the kinds of incremental steps that are necessary to run real application workloads. Next slide, please.

In combination with that, the OpenShift UI has begun to add storage support, or sorry, StatefulSet support. We're focusing on how you visualize what's going on in the same way that you can look at replica sets, replication controllers, deployments, and deployment configs. The StatefulSet is going to be another important new member of that family. Obviously, if you're using a clustered database and you're attaching it to web applications, it's probably a pretty important thing to know the health of all three members, and we want to help you make correct decisions about what's going on in your StatefulSet. Next slide, please.

I mentioned build failures before. This is just a lot of iterative refinement. The OpenShift UI has existed since the very beginning, and it's focused on developer use cases. What we're really trying to do is expose all of the power of the platform without necessarily overwhelming developers. That can be a really tough balance. There are a lot of people who beat me up on a day-to-day basis because we either show too much (we tell you about pods) or we don't show enough (we don't show you the details of activeDeadlineSeconds on a pod). What we try to do with the Developer Console in OpenShift is find that balance: what are the things developers need to know to safely run their applications, make changes, and iterate? We expect, over time, to bring more user interfaces together with OpenShift to offer different views. If you're a citizen developer who's using a business process engine on top of OpenShift, we'd like it to be easier to fit those on top of the platform without necessarily having to integrate in, and to offer that difference of experience over the full lifecycle of software development, not just the particular kinds of web development we're talking about here. Next slide.

Some other big changes: OpenShift's integrated registry is always a point of discussion. If you have this containerized platform and you don't have any way to track the images that run on that platform, no way to easily bring new images onto that platform, and no way to combine the secrets that you might need to go get images from other spots, it's very difficult; you actually have to build all of that on top.
And so, starting from the very beginning in OpenShift, we said images are a fundamental part of the workflow for the cluster. We've continued to refine the model, but I would describe it this way: the goal of the integrated registry in OpenShift is to help bridge the gap between every other registry in the world and what's running on your production clusters. That doesn't necessarily mean OpenShift has to own that resource. It can still be downstream from an authoritative repo, and integrating with external registries is very important to us. But it also means that OpenShift can provide things that make your cluster safer and more reliable.

So in OpenShift 3.5, we actually added a new mode. Previously, when you set up OpenShift to point to a remote registry, you could talk to the integrated registry, which would go pull the image and proxy it through. What that meant was that the credentials controlling access to that remote registry never make it to a node; the registry was doing that proxying for you. In OpenShift 3.5, we added the ability to mirror that content. If you have a central authoritative registry and you're pushing images to it that need to be distributed out to 10 or 15 or 100 clusters, or even just one cluster that you want to keep separate from your authoritative store, OpenShift can act as a cache. You can instruct applications that want to pull that image to go to the local registry instead. The local registry, as before, talks to the remote system with the right authentication, and once you begin pulling that image, it creates a mirror. This leverages a capability that's been in the upstream Docker registry for a long time, but it does so using all of the security and all of the control of OpenShift, and it keeps that local content in the registry associated with the particular OpenShift installation. If that remote registry goes down, image pulls still work.

As we take further steps down this road, imagine federated clusters across the world. If the registry holding your images goes down, that could mean downtime for any application. What we really want to ensure is that you can have authoritative registries that are strongly and centrally controlled, and then use the OpenShift clusters as the way to distribute and manage those applications.
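As a rough sketch of what that looks like as configuration (the registry hostname and image name are assumptions; as I recall, it's the local reference policy on an image stream tag that turns on this pull-through mirroring behavior in 3.5):

```yaml
apiVersion: v1
kind: ImageStream
metadata:
  name: mysql
spec:
  tags:
  - name: "5.7"
    from:
      kind: DockerImage
      name: registry.example.com/ops/mysql:5.7   # remote authoritative registry (illustrative)
    importPolicy:
      scheduled: true        # periodically re-check the remote for updates
    referencePolicy:
      type: Local            # resolve pulls through the integrated registry, which mirrors the content
```

Pods referencing the image stream tag then pull from the integrated registry, so the remote credentials stay off the nodes and pulls keep working if the remote goes away.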
Next slide, please. This is the big slide. There are 10 things listed here, and 100 things behind them. We spent a lot of time working on improving the install experience for OpenShift. From the beginning, we had made a bet on Ansible. Even before Red Hat acquired Ansible, our operations teams had decided that they wanted to make a shift to a newer model of doing configuration management, and they took that as an opportunity to pull in best practices we've leveraged in our OpenShift Online hosting. As part of the 3.5 release, continuing work that was done in 3.4 and 3.3, we've continued to refine, automate, and streamline the playbooks, guarantee the idempotency of playbook updates, and try to ensure that the most common operational patterns are easy, reliable, and safe. Now, this doesn't mean that everybody in the world will want to use every detail of these playbooks, but our focus has been that a platform should be easy to operate.

When you do need to do complex things, like installing Nuage alongside an OpenShift cluster, setting up persistent storage like Ceph and Gluster, or making sure the firewall rules on all of those machines are consistent, we want to offer an out-of-the-box default way to install OpenShift that is a secure Kubernetes distribution from top to bottom, one that makes no compromises about dealing with the real-world problems people have when deploying on bare metal, when deploying on RHEL, when deploying to the cloud. We try to provide that overall framework even where there are problems the Ansible playbooks aren't going to solve, and we continue to add new capabilities, like rolling updates of certificates and making blue-green cluster deployments easier. Each of these things builds on the others, we're able to reuse more of the playbooks we've already put in play, and it makes the day-two and day-three operational experiences of OpenShift even more powerful. Next slide, please.

Aparna talked about network policy. OpenShift has had the multi-tenant SDN plugin from the beginning, through OpenShift SDN, and we've also had the more flat mode using OVS. Think of multi-tenant SDN as one extreme, where two projects can only talk to each other if an administrator allows it, and the open mode as the other extreme, with no isolation. The OpenShift engineers who worked on OpenShift SDN had actually been involved in the network policy design discussions, and the goal was to add that level of flexibility: instead of just having two options, one at either end of the security spectrum, you can give your users, or your administrators, a choice to more flexibly compose different applications in different namespaces. There's much more to be done here over time, but the goal is that for everything that comes out of the Kubernetes project in terms of policy, we work in the Kubernetes project to ensure those things are successful, stable, reliable, and can be made secure. That security part is something I'll touch on in just a minute. Next slide, please.
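As a sketch of the middle ground network policy gives you (the labels and port are illustrative; at the time the resource lived in the beta extensions API group), here's a policy that lets only frontend pods reach the database pods in a namespace:

```yaml
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend
  namespace: backend
spec:
  podSelector:
    matchLabels:
      app: db              # the pods this policy protects
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend    # only frontend pods may connect
    ports:
    - protocol: TCP
      port: 5432
```

Once a namespace is set to isolate by default, a policy like this selectively opens exactly the paths you want, which is precisely the flexibility that neither the fully open nor the fully multi-tenant plugin could express.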
Finally, I talked about local dev. We made a big switch from our previous Vagrant-based approach to using Minishift. Minishift is a downstream fork of Minikube that does pretty much everything Minikube does, with a few small variations, to start an OpenShift cluster in a local VM. We're doing this mostly to make it easier to get access to local development environments that can mimic production. The VM is an important part of that: not everybody runs on Linux, unfortunately, including the presentation team, and when they don't, they need an environment that can reproduce production. The goal is to give you a consistent application environment, and we think it's fundamental that you have a consistent development environment all the way from your laptop to the cloud. Skip ahead, next slide, please, one more.

There have been a bunch of SCL updates. This is just continued iteration on supported versions of community ecosystem images that we drive through Red Hat and through the open-source communities in CentOS and Fedora. Next slide, please.

I think I've spent most of my time talking about what has happened, but I did touch on some of the things we want to see. Our big themes coming up in 3.6 and in subsequent releases: first, service integration through the service catalog, which Aparna brought up. This is a critical effort. If you're running large clusters with many applications, you need some way to decouple developers from the dependencies they don't want to own, the dependencies they either shouldn't own or don't need to own. It's really important to us to have a mechanism whereby at one extreme you can say, "I own everything, I'm responsible for everything," and at the other extreme you can say, "I want to go to a catalog of services that my organization, my operators, my company, my division has made available to me, and I want to select from those." I want that to work as well as anything else in Kubernetes.

Security throughout the stack: security, again, is critical, and I'll cover that in a minute. Cluster health and reliability: everything about Kubernetes that makes it unique, I think, coming from the Google mindset from Borg, is the idea of feedback loops. Your applications are in an environment that changes. They're affected by external conditions, they change over time, they change during the day. The more feedback loops we have, the more ways we can take the current state and feed it back into how the platform runs, the less maintenance operators have to do. The basic ideas are already there: a replica set brings back a new copy of a pod if a node goes away, nodes get evacuated when they stop checking in, and pod health and pod readiness let the pods themselves report up whether they're successful. We want to take more of that through resource metrics over time. Finally, multi-cluster management is also very important to us. If you can skip ahead.

The service catalog will be delivered through the service catalog work that's happening in Kubernetes. If you have questions about this afterwards, the person you should speak to is in the back of the room; he's got a pink mohawk, he's really hard to miss. Paul's helped drive this in the Kubernetes community, and we're incredibly excited about what it means. You register things in a catalog, backed by brokers implementing the Open Service Broker API, which was donated by Cloud Foundry to an open foundation, and we're working with them to bring that to Kubernetes. Each of those services is provisioned on the back end, and that can be on-cluster or off-cluster: it can be something in the cloud, it could be an account in a service, it could be an account in a database. The goal is to make it easy for operators to offer anything as a service. If you think about the kinds of challenges you have in your organizations, you have processes for spinning up and provisioning machines; we think of the service catalog and service broker as a way of standardizing the processes of how you make things available to your end applications, how you separate control between the administrator and the end user. Next slide.

We've got a lot of new UI work coming in 3.6 as part of the service catalog. There are just a couple of small sneak peeks here. You can see this in OpenShift master: if you run "oc cluster up" on the latest version, you'll be able to see it. I'll leave that as an exercise to the reader. Next slide.

Security is really fundamental. There's been a lot of talk recently about improved ways of managing secrets on the platform. Among the things being discussed that will be done in Kubernetes and in OpenShift, there's a lot of desire to encrypt secrets at rest when they're stored in etcd, and we want to do more to subdivide who can access secrets.
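For context, today a secret is a namespaced object that a pod mounts as a volume or reads as environment variables. This minimal sketch (the names and the encoded value are illustrative) is the flow the improvements above build on:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials
type: Opaque
data:
  password: cGFzc3dvcmQ=   # base64-encoded, illustrative value
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: myapp:latest      # illustrative image
    volumeMounts:
    - name: creds
      mountPath: /etc/secrets   # the secret appears here as files
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: app-credentials
```

The work being discussed layers more control onto this flow: encrypting what lands in etcd, and distinguishing which consumers are allowed to use a given secret.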
So a secret can declare who it's used for: we're finding ways of ensuring that secrets can declare their intended consumers, and then having that be strongly authorized by the platform. A secret that you want to use to pull images for import is different from a secret you want a pod to be able to access, which is different from a secret that you want to expose to a router to serve as the certificate for your public website. We also want to enable external secret integration. This will happen over several releases; this is a journey, not a destination. What we want to do is make secret management as easy and as much a part of the platform as possible, because that ensures you can bring together and standardize how you deal with secrets as an organization. Even if you use external stores for secrets, we still want to make the flow from where the secrets are stored to where the secrets are used as comprehensive as possible. Next slide.

Secured by default: Aparna referenced RBAC. We're going to start turning this on, and we're going to encourage integrations and things in the community to actually use and run with security, because obviously, if you develop an application and give no thought to security, you end up with a problem later on. So out of the box in the Kubernetes community, we think it's really critical that people take the security of their containers into account from day one. Not everybody needs to run software downloaded off the internet as root. It's just an idea. Next slide.

There's a lot of work in cluster reliability, things that will come over the next couple of releases: surfacing up when nodes run out of resources and what resources pods are actually using, and ensuring that one application can't interfere with others; more active management of the containers on the cluster. If a set of applications is overloading one particular node, we'd like to feed that back in a loop, where you can take those applications and move them to other parts of the cluster if necessary. This is part of the long-term evolution of Kubernetes. We're getting closer and closer, and one of these days we'll get there. Next slide, please.

That didn't come out well. This was a nice animated diagram, but I think it was unfortunately eaten by Windows. Our goal, really, as we think about federation and where we want to go: it's not about one cluster. It's not about two clusters. It's not even about five clusters. At the end of the day, clusters represent a security isolation and failure boundary. We know that there are different ways people use clusters. Most of the large OpenShift customers we're aware of in production typically have two data centers. They don't have three, they don't have one; they have two. So what we need to do is build models that enable federation, in the Kubernetes sense, to work well: applications that are spread across multiple clusters. But we also need a good security model, because the idea of a federated control plane that controls two full clusters means you now have a new super-root that can control both of those clusters in fine detail. We'd actually like to invert that model. We'd like to say we don't actually have to trust federation; federation is just another client of the platform. But ultimately, when you start talking about more and more clusters, we believe there really does exist a need for a central pane of glass for end users who want to self-service on the platform.
If I want to see a list of projects, I don't want to see only the projects I have in one cluster. I don't want to go to US East 1 and US East 2 and US East 3 and US East 4 and APAC 1 and APAC 2; I don't want to have to go through a list of places to find where all my applications are. The promise, really, of a centralized application platform is a place where you can go to see where all of your applications live, where you can create security boundaries that span geographies, regions, and clusters, and where you can build applications that, when you want to deliver to a specific environment, let you do so and get that high availability and reliability. So over time, we'd like to introduce new higher-level concepts that take top-level users, projects, policy, and quota, push them down into clusters, and ensure you can build and sustain models where you have many different ways of delivering applications to many different clusters.

And with that, I have run out of time. As I said, I could only get to a few things. If you have other questions, please come find me; I'm happy to talk at length about any of this. And thank you for coming.