Now, here to tell us about the future of OpenShift, Red Hat's own Clayton Coleman. Hey everybody, we'll give him just a minute for people to filter in. Apparently, tonight's going to be a good night. I'm looking for that hello world. All right, so for those of you who don't know me, my name is Clayton Coleman, architect for OpenShift. I've worked on OpenShift for about four years now. I've worked on Kubernetes since it was started. And to me, it's been really important to position why OpenShift exists. Like, why does OpenShift exist? We have Kubernetes, we have Docker. We have all of these tools out there in the ecosystem. What did we create OpenShift for, and what's it trying to accomplish? I think of the left side of this chart as being about Kubernetes. And the right side of the chart is about everything else that isn't Kubernetes. And so OpenShift is about bringing these two together. It's about making infrastructure available for use. Chris talked about this this morning with containers and what advantages containers give to us. Portability is really important. The ability to run something on your laptop and run it in production was the original promise of Docker. And I think one of the things that we've done with Kubernetes and OpenShift is really try to deliver on that. So oc cluster up, for anybody who's familiar with it, is a command you can run on your local laptop that starts up a container running a containerized OpenShift. You can test out your applications, and then when you need to scale, you're just running the same application everywhere. That's really important. But on the application side, it's not just about running applications. It's about how those applications come to be. And it's not just about the what, it's the process that gets you to that point. So it's packaging and deployment. It's a config-driven approach so that you can define your applications. And there's lots of ways to define config, right? Everybody has their own approach.
And instead of trying to pick a particular solution, one of the things we focused on is trying to be as consumable as possible, because OpenShift sees use all the way from the simplest possible 12-factor app, where there's literally one input, your source code, and everything works from there, all the way to the far end, which is you use it to build very, very complex applications with many moving parts. And any solution that has to span that is pretty difficult to represent with one single tool. And we understand that, and we try to keep that in mind every time we design, which is: everything that the really simple app uses, we want to be able to move all the way to the right end of the spectrum. So OpenShift exists as a separate thing from Kubernetes partially because, when Kubernetes was started, we made a very conscious decision that we wanted Kubernetes to be an effective core for running applications. And we wanted to deliver it as fast as possible. This is one of the key goals of the Kubernetes open source project from the very beginning, which was actually put something real in people's hands. And so we looked at what Kubernetes would be as an application runtime. We said, well, there's a ton of things that don't fit in that scope. It's how administrators deal with a platform. It's how you keep this running. It's how you install it. It's how you deal with all the existing systems that you have in your enterprise already: authentication, policy management, LDAP. These are all things that are required to be able to run not just a platform for applications, but a multi-tenant platform for applications. And so we worked with the Kubernetes team. We kind of split the responsibility. So the OpenShift engineers work on both Kubernetes and OpenShift. And OpenShift is really everything else that's necessary. It's the asterisk on the end of application platform.
It's the stuff that actually makes it possible to run applications in a shared environment. And so the focus from the beginning is we want Kubernetes to be usable in a multi-tenant fashion. We want to optimize for team software development. So we talked about the 12-factor app: all I do is push code, and the application just ticks over. And that is one end of the spectrum. And for a lot of people, that's maybe all they need. But on the other end of the spectrum is I want to deploy an existing application, and Chris alluded to this earlier, I want to deploy an existing application onto my platform. And I don't want to have to rewrite it in order to do that. So how do we build enough flexibility into a process that can go from one end to the other? So on the very far end in OpenShift, we have the really simple build pipeline concept. And I'll talk about some of the things we've added. But we want to be able to fit into flows that already exist for end users. So if you've already got a continuous deployment pipeline, if you already have a build system, if you already have an extremely complicated process-driven lifecycle, at any of those points, we want to have tools that are valuable to use and consume and fit within the needs of that organization. And build is a great example of this. A lot of people run external build systems that are working perfectly fine for them. They don't need to change them. We want to adapt and be able to run the applications that come out the other end of that pipeline without necessarily forcing people to change everything. We wanted to be able to create and deploy applications in seconds. The idea of this is: the first thing that you see when you run Docker is you do docker run and you give it an image, and something starts in seconds. We want to be able to do the same thing on top of Kubernetes. And since we started Kubernetes, there's been a lot of development in the ecosystem.
There's a lot of new ways to build and run applications. Everybody's got their own package manager today. And our focus has always been on: we just want to make it easy for someone to see a list of things that they can consume, that someone's made available for them, that actually just work. And so in OpenShift, we have the concept of templates. Over time, you're likely to see other things in the ecosystem brought in. But the core idea is I push a button and I see value. And I see that right away. And it builds on top of the things in Kubernetes. But there's aspects of it that need to be taken into account, like security and how this fits into the rest of the application. Does it need to be exposed to the outside world or not? We want to be able to run all the stateful parts of the application. That's part of what I'm going to talk about today. And we want to do this easily. Ease of use is really important. I'm sure there's some people here who've used OpenShift who would say it's very complex. There's a lot of moving pieces. And there's this balance, because the real world is about big, complex applications at all ends of the spectrum. Some of these are really easy to use. Some of these take entire teams weeks to get set up to do a single deployment. And instead of trying to make everything be the Fisher-Price equivalent of application deployment, what we want to do is meet people where they are and find the specific points of ease of use on the spectrum. So I can push a button and see an application deployed in seconds. All the way to: I have an existing setup process in my organization that can define and manage all this, I'm using Ansible or I'm using Chef or Puppet. At the end of the day, we have these tools so that you can still get the ability to deploy something easily even at that other end.
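To make the template idea concrete, here's a minimal sketch of what an OpenShift template can look like; the application name, image, and parameter are hypothetical, not from the talk:

```yaml
apiVersion: v1
kind: Template
metadata:
  name: hello-app                 # hypothetical template name
parameters:
- name: APP_IMAGE                 # a user-settable knob
  description: Container image to deploy
  value: registry.example.com/hello:latest
objects:                          # the objects the template stamps out
- apiVersion: v1
  kind: Service
  metadata:
    name: hello
  spec:
    selector:
      app: hello
    ports:
    - port: 8080
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: hello
  spec:
    replicas: 1
    selector:
      app: hello
    template:
      metadata:
        labels:
          app: hello
      spec:
        containers:
        - name: hello
          image: ${APP_IMAGE}     # substituted when the template is processed
```

Instantiating it is the push-button part: `oc process -f hello-template.yaml | oc create -f -`, or a single click from the console's catalog.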
And so our goals with OpenShift are to keep driving in the community, make it the best place to run applications, provide well-integrated tools, and ensure that applications remain portable to any environment. And "any environment" means Kubernetes, but it may also mean, if you choose not to use Kubernetes or you want to take your applications outside, we're not going to try and lock everything into Kubernetes. We think Kubernetes has value, and we want to make it easy to build images and run images on a containerized platform. But we always understand there will always be edge cases in all of this, and we try not to be too opinionated about the environment that you run in. We think that Kubernetes is a great place to run applications, and we think OpenShift is a better, more secure place to run applications today. But the end goal is that the asterisk for environment is something that's containerized that runs images and does it well, so that you don't have to think about it. And then finally, and this is a new one, and again Matt alluded to this early on: we have more and more infrastructure, and more and more clouds. We have machines just sitting under our desks that are, you know, 10 times more powerful than your computer from 20 years ago. All of that capacity is just going to waste. And some of it's not going to waste if it's powered down, but the idea is being able to take advantage of this to make things work at scale, to be able to test big, complex applications easily, which was very difficult before. The more you can buy into this model, the more you're able to take advantage of it. If you've got a cluster and it's got a thousand nodes on it, maybe I can run a thousand nodes in my application and see what happens. Maybe I don't need that today, but down the road, I might. So this is just a standard architectural diagram of OpenShift. I don't even think it's really that important for what we're going to talk about, because what we're talking about is OpenShift 3.3.
I talked a little bit about the principles of where we're going. The highlights were, on the infrastructure level, when you talk about the system level and down: running applications, containers, and the runtime environment. The next level up is making sure that the system is easy to administer, is secure, and has enough policy points of control for administrators to be able to integrate with a wide range of processes. And then there's developer ease of use on top of that platform. So I'm going to talk a little bit about most of these. And again, this is OpenShift 3.3, which was released in September. OpenShift 3.4 is coming out very soon now. OpenShift 3.4 has an even longer list of things. OpenShift 3.5 will come after that. And this is really driven by the Docker, Kubernetes, and OpenShift communities. There's a lot of exciting development happening. It seems like every three or four months you just get a whole new bucket full of really useful features that solve real problems. And to me, that's the most exciting thing about working in these ecosystems: there's an incredible amount of people working together. Diane alluded to all of the folks working together to build containers and the containerized ecosystem. There's just a ton of new stuff; every day we find a new way to bring value to applications and developing applications. So the first, and I think most significant from a developer perspective: we looked around and we said Jenkins is the de facto developer continuous integration tool. Not everybody uses it, right? There's lots of alternatives. It's a rich ecosystem. But if you polled a random set of people, maybe I'll just ask in this room: how many people here use a CI/CD system as part of their pipelines? Okay. How many people use Jenkins? And so we said, we don't think that Jenkins is the be-all end-all.
But Jenkins has some of the same value that some of the other tools in the ecosystem do, which is it's used in just enough places, and there's just enough people working on it, that there's a strong community that has a solution for almost everything. There's a ton of great documentation out there, and Jenkins can be a beast sometimes. But at the end of the day, we understand that for most people Jenkins provides a really exciting tool. Jenkins is a pretty old project, but as part of some of the work that's coming in Jenkins, the Jenkins community has really embraced the idea of pipelines: building concepts in Jenkins that try to fit the idea of taking software in a process from development through testing through rollout to staging to production. And again, this is a point on the spectrum. Not everybody runs like that. But we felt that there was a really great opportunity to integrate just enough of Jenkins into OpenShift to make it easy to stand up and deploy a Jenkins cluster and to be able to use it on top of this infrastructure. So as part of OpenShift 3.3, in tech preview, when you go into OpenShift and you launch a build, one of the options for a build is that you can say, well, this build uses a Jenkinsfile, and this is a concept that's been added to Jenkins. A Jenkinsfile is really a lot like a Dockerfile. It describes a process. A Dockerfile describes a process for creating an image. A Jenkinsfile describes a development pipeline. There's a lot of work going on there; this is still very early in the Jenkins evolution, of course. But that pipeline is really the idea of: I want to say, here's the steps I go through, and I want the infrastructure to make it happen. And with the integration with OpenShift, the Jenkins that runs on top of the cluster integrates the Kubernetes plug-in from Jenkins. It's integrated from an authorization perspective in OpenShift 3.4.
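As a rough sketch of what such a Jenkinsfile can look like (the stage names and shell commands here are illustrative, and the exact pipeline syntax has evolved since this talk):

```groovy
// Declarative pipeline sketch; 'myapp' and the commands are hypothetical.
pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        // e.g. kick off an OpenShift source-to-image build and wait for it
        sh 'oc start-build myapp --follow'
      }
    }
    stage('Test') {
      steps {
        sh './run-integration-tests.sh'
      }
    }
    stage('Promote') {
      steps {
        // tag the tested image so the production deployment picks it up
        sh 'oc tag myapp:latest myapp:prod'
      }
    }
  }
}
```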
So as a user on OpenShift, if I need to get access to a Jenkins instance, I can press a button. Jenkins comes up. I can integrate it into a build pipeline. That Jenkins instance can use the platform securely, under the same policies and controls that I normally have in that project to build and run apps. And we've integrated this into the user interface. We haven't tried to replace the Jenkins user interface, although I'm sure there's some people out there who would be really excited about that. We've tried to integrate it in a way that brings to a developer who's thinking about how I go from beginning to end a snapshot view of the most important process. So the UI is showing a set of builds. I can easily take an OpenShift application and switch it over to Jenkins just by putting a Jenkinsfile in the source code repository. And then I can define an arbitrarily complex pipeline if I wanted. Each of the stages in that pipeline is actually using the OpenShift environment. So it's starting containers. It's starting images. This might be building images. It might be building war files. It might be running a completely custom process. It might be running Ansible to publish to a production environment. And so we see there's a lot of flexibility in this space. And one of the things that we'll be looking at over the next few releases is how to make this easier to consume in a way that doesn't necessarily lock you into any particular platform. You could take this pipeline and run it on another Kubernetes system. You could potentially move it off Jenkins. And we think of this as a really powerful way to, instead of us reinventing a workflow engine, find workflow engines that people are already using, integrate those communities, support them and make them secure and easy to use, and then make the entire platform really represent bringing together the most successful communities out there.
There's been a ton of other new user experience changes in OpenShift 3.3. The OpenShift web console is a developer and application owner focused web console, which is a little bit different from some of the other consoles that exist for Kubernetes, I would say. The Kubernetes Dashboard in 1.4 is really great, but it's still mostly focused on the operational experience of running Kubernetes objects. In the OpenShift console, we've tried to focus on the flow that someone takes going from an idea to production. It's a pretty familiar concept. And the console is intended to make it easy for you to start ideas and also to maintain ideas. So in the OpenShift console, where before I showed build pipelines, you can get an at-a-glance view of what your team or your extended team is doing. But it could also just be you iterating, and you just want to see what's going on as you run it to production. And depending on whether your production environment is your dev environment, it may actually be that this console is just showing you your personal dev tools. But if you're the kind of team that splits responsibilities, or has combined responsibilities across both dev and ops, the OpenShift console is really focused on giving you just enough information to help you decide what's going on in production. So there's integration of metrics with the user interface, and a ton of new things. I talked about push-button applications. We've really tried to flesh out the experience of: I can define a fairly complex application and offer it to my developers, those developers in a single click can provision and deploy it, and we're extending those capabilities. Also taking images from the Docker Hub and running those, making it easier to bring the things that you're already using together in one spot. I talked a little bit in the beginning about stateful applications. And so you can read this title, Pet Sets for Stateful Services. The Pet Set was a concept in Kubernetes that we shipped as Tech Preview in 3.3.
It's now been renamed, so I would call this Stateful Sets for Pet Services. But the idea here is that in the real world, there's two types of applications. There's the ones that don't care about their state, and you can replicate an infinite number of them. And there's the ones that do, where all the money is. And the ones where all the money is: that might be your database, that might be your session store, that could be a message queue. And most of these things are pretty finicky; you get it wrong and everything goes dark. And so one of the goals is, as you move the application ecosystem onto flexible infrastructure, you need concepts that make running services with state easier. You can use the existing concepts in Kubernetes 1.0 and 1.1 and 1.2 to accomplish this. But it was obvious that we wanted to build a concept that fit the model of: there might be one of me, and there never should be more than one of me, and I need access to a database, and I have a network identity. I'll talk a lot more about this. I have a talk on Wednesday at KubeCon, Stateful Sets, and I will go into much more detail there. But at the basic level, we want to make it easy to run databases, message queues, caches, high availability applications, and leader-elected services on Kubernetes, and to do it naturally. And there is no magic here. I want to make that really clear. We all pretend that every solution that comes out is this magic solution that's just going to, oh my gosh, I'm running an HA stateful database in production and I've got three levels of failover, and if anything happens on my infrastructure, it just automatically works. That's not reality. The reality is there's still a lot of work that operations teams need to do around these kinds of concepts.
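As a sketch of what this looks like in practice (shown with the current StatefulSet API; in the Kubernetes 1.3 era this object was the alpha PetSet, and the image name here is hypothetical):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db                 # headless service giving each pod a stable DNS identity
  replicas: 3                     # pods are created as db-0, db-1, db-2, in order
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: registry.example.com/db:latest
        volumeMounts:
        - name: data
          mountPath: /var/lib/db
  volumeClaimTemplates:           # each replica gets its own persistent volume
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```

The stable per-replica name plus the per-replica volume is exactly the "there might be one of me, and I have a network identity" model described here.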
But we want to make sure that it's possible for an administrator to be able to see what's going on and let the infrastructure do what it does best, which is monitoring health, bringing components in, and connecting stateful applications to attached storage or to security policies. And on top of that, allow a very powerful and flexible means for operational teams to manage large clusters. So if you have 10,000 databases that you want to run for developers, most of those databases are probably going to be in a class of dev policy. They're not very important. They don't need to be tuned to a high level. But operationally, you want to make sure that they all keep ticking over. So there's the large-scale, broad swath of simple stateful services. But then there's also, and I wouldn't recommend doing this, but if you want to run SAP HANA or something like that on top of OpenShift, we're trying to build the primitives that make it easier and possible to run those safely, and then help people move up those levels. And it'll take time. So this is the first step, in OpenShift 3.3 and in Kubernetes 1.3, and there's a lot of exciting work coming in the future. This gets a little bit more into the guts. I apologize for digging too far down in the weeds, but an important part of Kubernetes is making it easy to run applications portably. And so if you're somebody who's working with Docker images today, you might be using those Docker images in several different environments, or you might be testing those locally. And a really important part of Kubernetes that we don't talk about a lot is making it easy to take applications and run them in multiple spots, by putting in just enough glue that an application running in development works the same way as an application running in production.
And so a couple of the concepts that came into Kubernetes in 1.3 and are part of OpenShift 3.3: config maps. This is really just the idea that, just like secrets, you can store configuration and manage it as part of the platform, or define it as part of your application, which gives anybody who comes and looks at that config a better idea of where that config actually comes from. You don't have to use this. It's a tool, a building-block pattern that can be composed into more complex forms. That's tied in a little bit with what we call init containers. The init container is the idea that, and this is kind of the big difference between the Kubernetes mindset and where we all started with containers, which is: when we all started with containers, we were like, great, I'm going to run my container in production. But with Kubernetes, one of the really important concepts was the pod, which is: it might just be one container, but sometimes you need two containers together. You've got a container that's your web server and a container that's your Node.js application. Or you've got a cache service co-located with all of your web services. And so this idea of putting containers together is something that's at the bedrock of Kubernetes. It means that you can ship parts of an application independently. So you can ship the web serving differently from the log processing. And init containers are really a way of adding a third point of variation there, which is: before all those containers start up, you may want to do something. You might want to download the dependencies for your application on demand from an object store. Or you might want to check to see whether your database service is actually running before you start your legacy enterprise Java application, because it doesn't do so well when the database isn't running when it starts up. You might want to generate a config file.
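A sketch of both ideas together in one pod (shown with the later `initContainers` spec field; in the Kubernetes 1.3 era init containers were expressed through a beta annotation, and the names here are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  initContainers:
  - name: wait-for-db             # must finish before the app containers start
    image: busybox
    command: ['sh', '-c', 'until nc -z db 5432; do sleep 2; done']
  containers:
  - name: app
    image: registry.example.com/legacy-app:latest
    volumeMounts:
    - name: config
      mountPath: /etc/app         # the app reads ordinary config files here
  volumes:
  - name: config
    configMap:
      name: app-config            # a ConfigMap managed by the platform
```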
And so init containers are really just an expression of: we want to make it easier for system administrators and application authors to pull together the pieces of real applications. Chris talked about object-oriented programming. Well, it's not just about services and microservices. It's about different teams working together. So an operations team that's building monitoring tools, or building log processing tools from the data team, that sit alongside the web application team, and trying to bring all those together in a way that can split responsibilities down the middle, is really important to ensuring that you can run all these different classes of applications on OpenShift and Kubernetes. And then finally, the downward API is just another way of saying: I've got all these containers that are running legacy applications that expect to find certain files in certain locations. For Apache, it might be the Apache configuration file. It might be a database configuration file. The downward API is really the mechanism whereby you can take some of the dynamic nature of where you're running stuff and put it into a container that doesn't have to care. So this is the idea of separating out the application from knowing about its environment. It looks at the file system, it sees some files. It works the same in a development environment, if you have those files in place, as well as in a production environment. And it keeps the application somewhat decoupled from its environment. Again, it doesn't stop you from doing more complex solutions using tools like ZooKeeper or etcd to share config across a large fleet of applications. But it does offer another point in the spectrum of flexibility for running applications. A big feature that came in OpenShift 3.3 was A/B routing. So there's lots of different ways that people do rollouts. The simplest one is you say, I'm going to shut everything down, do an upgrade, and bring it back up.
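A minimal downward API sketch: the pod's own metadata is projected into files the application reads like any other config (names hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
  - name: web
    image: registry.example.com/web:latest
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo     # the container just sees files; no API awareness needed
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: name                # file containing the pod's name
        fieldRef:
          fieldPath: metadata.name
      - path: labels              # file containing the pod's labels
        fieldRef:
          fieldPath: metadata.labels
```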
That's something that's been in OpenShift since the beginning. There's a slightly more sophisticated version, which is I want to do a rolling update. Rolling updates work for a large percentage of software, but there's always some wrinkles in that process. You don't really get a chance to test the new stuff in an actual production environment. And that's actually really important. As much as we try to make development and production the same, they're not. You can guarantee you're probably not running Oracle on your laptop. And so those sorts of setups lead into some more sophisticated stories. So blue-green services, or blue-green deployments, is something that, again, is possible in Kubernetes. It's gotten better docs and better examples, better setup, over the last couple of releases to make it easier. In OpenShift 3.3, we've added another concept, which is the idea of A/B routing. So you can use A/B routing for a lot of that: two backend services, one of which might be the official one that's supposed to be running in production, and I'm going to move a percentage of traffic to another service, like a test service. Or it could be something like I'm testing a new version of my software with end users before I actually deploy it. And so A/B routing is just another tool in the toolbox that real operations teams have, to deal with real problems around big, complex applications. And this reuses, again, all of the Kubernetes concepts, and hopefully over time this kind of stuff will make it into Kubernetes. We've been working a lot with the Kubernetes community to take these features and make sure they're consumable. But for the foreseeable future, our goal is to make it as easy as possible to split traffic, to deal with real applications in place, and to work them through the system. So there's been some work as well on the image registry, and I'm starting to run a little bit out of time. We've made a bunch of integration points available in builds. I alluded to that before.
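The A/B split described here can be sketched as an OpenShift route with a weighted alternate backend (the service names and weights are illustrative):

```yaml
apiVersion: v1
kind: Route
metadata:
  name: myapp
spec:
  host: myapp.example.com
  to:
    kind: Service
    name: myapp-stable            # the version officially in production
    weight: 90                    # ~90% of traffic
  alternateBackends:
  - kind: Service
    name: myapp-test              # the candidate version
    weight: 10                    # ~10% of traffic shifted over for testing
```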
We have Jenkins, but we also recognize that people bring builds and tools from outside the ecosystem, and we want to make them easy to use with the platform. So with OpenShift 3.3, we've really focused on a lot of little improvements that make OpenShift a good target when external build systems are targeting it. You build your images and push them to the platform, or you build your war files and push them to the platform, you push that code, and you tell OpenShift to accomplish the goals you have. So, a ton of exciting stuff in OpenShift 3.3. There's a huge number of exciting things coming in Kubernetes 1.4 and 1.5, and OpenShift 3.4 and 3.5. Our goals are to make Kubernetes more extensible and multi-tenant aware, taking all the things you can get with OpenShift today and making them available to an even broader community. Making Kubernetes extensible is a really big goal for us, because when Kubernetes is extensible, anything that somebody in the community can build and use, we'd like to be able to take that and offer it as part of OpenShift and make it available in a supported way to customers. Continuing to evolve the idea of how you deploy on the platform, Jenkins Pipelines becoming even more integrated in terms of security and reliability and control, and then just a general focus on ensuring that you can run OpenShift from the simplest possible deployment, easily, to the very largest scales, and to do that with a complex set of administrative and networking integrations as well as, you know, meeting individual deployments where they are. And then in the future, I could probably spend three days talking about all this stuff. So as a sneak peek for this, you can come and find me, and there's a number of developers and engineers from the OpenShift team here. We'll be happy to talk everyone's ear off about all the great stuff that's coming in the future.
And then please come to the Stateful Sets talk on Wednesday if you're interested in running productionized enterprise applications on OpenShift and Kubernetes. With that, I'd like to invite Mike Barrett and Joe Fernandez to come up and have some Q&A with Clayton. So if there's questions, raise your hands and we'll give you the microphone, or you can come stand up. Hi. I've been following OpenShift since almost the beginning, and I saw that with the switch from V2 to V3 you started to build this container D. Is that correct? Yeah, can you make a parallel between container D and what is going on with Kubernetes? Are you referring to containerd from Docker? Or are you talking about GearD? So GearD was in a lot of respects an evolutionary step, working with Docker and the container ecosystem to get a feel for what we wanted to do on the host. Most of the ideas from GearD actually ended up being part of Kubernetes. So we took a lot of the lessons learned from the node-level stuff, worked with a lot of the teams from Google, and so GearD eventually was replaced by what we call the node agent, or the kubelet, in Kubernetes. And I think at this point most of the ideas that were generated through that process are part of Kubernetes now.
I did forget to put that on there. So the question was TCP routing. In OpenShift 3.0 we launched with the router, which is the idea of HTTP or HTTPS load balancing, and it also supported what's called SNI, server name indication, which just means if you have an SSL connection you can load balance it. But an additional part of that is there were some lower-level primitives in Kubernetes starting in 1.1 and 1.2. What we've tried to do is: there's kind of the easy stuff, which is the web load balancing, and then there's all the insane software out there that makes very specific assumptions about how it can be load balanced. And so in Kubernetes and OpenShift there was the node ports concept. In OpenShift 3.3 we enabled the cloud load balancer, which basically means you can use the same Kubernetes service load balancer in a cloud environment and say, I have a service that needs to be load balanced that's TCP only; you can enable that in cloud environments. And then, I guess it was 3.3, in 3.3 we added the initial work for: if you're on bare metal and you want to load balance a service, you can set up your networking so that you can do transparent, highly available, TCP-based load balancing. We also have integration with F5. The F5 integration only does HTTP and HTTPS today, but the goal would be at some point to turn that into TCP load balancing as well. When Clayton refers to the router: we ship with a default HAProxy router, but the router is one of those pluggable components, because we know, working with a number of enterprise customers, that not everybody is going to make the same choices that we made. So F5 is one of our plugins, and now the work we're doing with Ingress upstream is making that even more pluggable. And that goes for our onboard networking too: we ship with an SDN network based on Open vSwitch, but we work with all the SDN partners to make that pluggable. Clayton talked about our build system, our CI stuff. You can do builds on OpenShift, you can do CI/CD on OpenShift, but
we also make sure that we work with what you have. If you don't want to do builds at all, or you want a hybrid scenario where maybe your builds are producing application binaries and our build service is doing just the containerization, that's what we're trying to enable, to make sure you can fit it into your environment. So in 3.3 there are three or four different ways that you can do TCP load balancing. They're not done through routes, though, because of the slightly different needs of TCP load balancing, and some of the documentation is still in the process of being improved for that.

Is that coming through routes or something?

It's tough, because we actually want to do that through Ingress in Kubernetes. So I think, as a timing thing, it's possible today, and you can use all the same Kubernetes primitives that you want, and we've also added support that makes it work on bare metal if you're willing to set up the network configuration for it. And then we'll see a convergence of that, probably in another few releases.

Any update on the policies and network overlay technologies you guys built in with Kubernetes?
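As a rough sketch of the NodePort and cloud load balancer options described above, a plain Kubernetes Service can expose a TCP-only workload without going through routes. The names, labels, and port below are illustrative, not from the talk:

```yaml
# Hypothetical example: TCP load balancing for a non-HTTP service
# using core Kubernetes Service types rather than OpenShift routes.
apiVersion: v1
kind: Service
metadata:
  name: my-tcp-service        # illustrative name
spec:
  # "NodePort" exposes the service on a port of every node;
  # "LoadBalancer" additionally asks the cloud provider to
  # provision a TCP load balancer in front of those node ports.
  type: LoadBalancer
  selector:
    app: my-tcp-app           # illustrative label
  ports:
  - protocol: TCP
    port: 5432                # e.g. a database-style TCP port
    targetPort: 5432
```

Because this is plain TCP, there is no Host header or SNI for an HTTP router to inspect, which is why these options live at the Service level rather than in routes.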
Sure. So in 3.3 we added egress policy. Egress policy is something that had been discussed in Kubernetes, but we actually have a large number of customers who need this requirement now. So with the out-of-the-box SDN, egress policy will let you define firewalls per tenant, basically per project, and that will block outgoing traffic from those containers. You can set up, you know, a whitelisting/blacklisting setup, so you can do whatever you want for that. The goal in 3.4, no, sorry, 3.5, is to enable ingress policy. This is the work that's been done in Kubernetes around the network policy objects. The goal for that for OpenShift SDN would be 3.5, and they've already started working on the pieces of that. And then there's a story coming, kind of over the next couple of releases, to make it easier to use vendor SDN solutions that may already support network policy. So there's no one answer. It should be possible in 3.3 to define exit traffic, and then for ingress traffic you could probably script it on your own, and we're going to get better at that.

I think egress policy is also a good example of how you take a cloud-native platform and make it work in a traditional enterprise environment, right? You spin up all these services on Kubernetes and containers, and they need to talk to existing services in your data center, but you have all these IT rules about who can talk to a particular service and what IP address or IP range they can come from, and on Kubernetes the IP address is unknown, right? So this whole notion of taking services and making sure that they talk from fixed IP addresses or a fixed IP range, so that when they try to connect to your Oracle database or whatever it's actually allowed, that's a lot of what Red Hat works on. These are problems that Google maybe wouldn't encounter in their data centers, but we encounter them every day in the data centers of the enterprise customers we deal with, whether it's on the commercial
side or on the public sector side. It's kind of part of bringing Kubernetes to the enterprise, if you will, that Red Hat specializes in.

Yeah, so I noticed, having followed OpenShift from the beginning, there have been OpenShift-specific features, like secrets and security contexts, that ended up getting migrated into upstream Kubernetes, and I was wondering if you could talk a little bit about what the process is on the Red Hat side for targeting what might be a good fit for Kubernetes, and then how you work with the Kubernetes community to figure that out.

So I think this is the million-dollar question, and it's part of the longer-term evolution of Kubernetes, which is: how do we define the part of Kubernetes that everybody has the same, and how do people bring their own special sauce and collaborate to find solutions that benefit multiple people? In the early days it was easy. The core Kubernetes 1.0 mission was to run containerized applications at scale, and we accomplished that. The core OpenShift 3.0 goal at the same time was to add a policy and multi-tenant layer so we could run multi-tenant containers. And then we also knew that we had a whole bunch of requirements, like, I want to be able to roll out updates to applications, a crazy concept, I know, like, I have to update my applications. That was something we added as part of OpenShift in 1.0, because there was no time to do it in kube, there was no bandwidth. And so now, as we've kind of worked through this process: if it's a feature that'll benefit everybody in the entire world, we try to put it into core Kubernetes. We think multi-tenancy is a good example of that. Kubernetes has to be multi-tenant; if the core isn't multi-tenant aware, then we really end up having security problems. And that's partially why OpenShift is different, is a different set of source code, is built, you know, downstream from Kubernetes: because the only way to apply all of that security policy is to rebuild kube to
build in the pieces of kube. And so our goal is to actually move all of that into Kubernetes over, you know, several releases. Egress policy is actually a good example of experimentation. One of the things we're good at at Red Hat is supporting customers: if we put something out there, we invest in ensuring customers have a clean migration path over multiple years. So anything that you run on OpenShift today is going to keep working, and we'll give everyone a migration path so that you do have a zero-downtime migration as these features make it into kube. It's kind of insulating people from that cost.

And on some of the other features in the middle, it's actually a really tough choice. A good example is the service catalog work. This is something where a bunch of companies, Google and IBM and Fujitsu and, I believe, some Pivotal folks as well, a large number of companies, are coming together to say Kubernetes needs a really great service catalog concept that allows you to pick services, so you don't have to go stand up everything yourself but can consume something an IT organization has provided for you. That's something where, from the very beginning, we believed the best place to do that is in the Kubernetes community, and it may not actually be part of Kubernetes core; it'll be a key extension that we expect most people to run with. That was kind of an in-the-middle choice.

And this is part of the dynamic between Mike and me on the product management side and Clayton and his colleagues on the engineering side, which is, you know, we're kind of competing between trying to get some stuff to market quickly and respond to customer demand, while always trying to push as much as we can as far upstream as we can, so that it can be maintained by a broader community. So we did come out early in OpenShift with concepts like deployment configurations, like, you know, authorization, that didn't actually yet exist in Kubernetes 1.0, 1.1, 1.2, but then we worked to
bring those upstream. And in bringing that upstream, we're now committed to making sure that our initial implementations are sort of subsumed or integrated with the upstream, and again, our commitment is to support this over multiple years, in multiple instances.

I think there's a little tension to this, which is... sorry, Dan. All right, these guys are going to be around for the rest of the day and for lunch, but I do want to get to the next speaker, so thank you very much, and hit them up at lunchtime.