pretty much the easiest job in the world, which is to entertain a bunch of people who have fresh beers and food in front of them. So this should go very, very smoothly. So CoreOS was acquired by Red Hat about three and a half months ago, and what I wanted to do was just walk through some of the things that we're working on. It's pretty easy for me to talk through some of this and give you a couple of live demos, because a lot of it is things that were inside of the Tectonic product that we are going to be bringing to OpenShift over time. So really this is not a lot of brand new announcements, but just familiarizing folks inside of the OpenShift community with some of the things that we had been doing inside of CoreOS and inside of Tectonic. The first thing, if you're not familiar with it, is the Tectonic console, which is an administrative console on top of Kubernetes. One of the things that we spent a lot of time doing at CoreOS was rethinking the way that enterprise software is delivered, and ensuring that when people get enterprise software it has a lot of the capabilities of a cloud service. Now, when we think about a cloud service, there are essentially two pieces. There's the hosting, which is a very traditional business where you stick a server in a rack, give it an IP, and sell it to somebody. And then there's what we eventually termed automated operations, which is the idea that it's not just the server and the IP but also services on top: databases, load balancers, etc. And those services are unique because the operations are automated, the upgrades are automated, the monitoring is automated, and so there's a lot that you get out of that by default. So we wanted to make sure that when we delivered software to people, and that started with the operating system and eventually with Kubernetes, you could also automate those operations, because as a software company we're not also going to sell you a server. 
So where automated operations ended, and where they will begin again inside of OpenShift, is this one-click update inside of Tectonic, where, and it gets a little recursive here, we're actually hosting all the components of Kubernetes on top of Kubernetes, and don't worry, we do it in a way that's safe. This cluster is my personal cluster and it's been up for probably eight or nine months, and it always surprises me, because every time I log in I see all the tweaks and features that I saw mock-ups of months before, live on my cluster, and I never did anything; the software is just constantly updating. So you'll notice inside this system that all the components of Kubernetes, like the scheduler, are in here, and they're running as pods, which has a bunch of cloud-like properties. One is that I'm able to come in and actually edit the pod and upgrade the scheduler over time, and that's how we power these automated operations. So you can upgrade from Kubernetes 1.5, or Tectonic 1.5, to 1.6 to 1.7 and on through the patch releases, all with a single click, and you actually get live telemetry back on how those upgrades are going. And you can do everything that you do normally, like drill down into the individual pods, see how much memory and CPU they're using, and get monitoring and metrics data back. These are the sorts of things that we'll start to pour into OpenShift; this automated operations work was part of the announcement during the acquisition. So that's some color around what we mean by automated operations. The other thing, the namesake of the company, was CoreOS, an operating system which we eventually renamed to Container Linux, with some success of that rename. It's always challenging to rename a product. But the automation and the automated operations don't just go down to the Kubernetes layer. They go all the way down to the foundation of the actual operating system. And so this is a brief demo. If you keep looking down here, it's looping. 
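As a concrete sketch of what "the scheduler running as a pod" means: in a self-hosted cluster like Tectonic, the scheduler is just another Deployment in `kube-system`, so an update can be applied by bumping the image tag and letting Kubernetes roll it. This is an illustrative manifest, not the exact one Tectonic shipped; the image, version, and flags are placeholders.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  replicas: 2                       # run two copies for availability during a rollout
  selector:
    matchLabels:
      k8s-app: kube-scheduler
  template:
    metadata:
      labels:
        k8s-app: kube-scheduler
    spec:
      containers:
      - name: kube-scheduler
        # Bumping this tag is how an automated update rolls the scheduler forward
        image: k8s.gcr.io/hyperkube:v1.9.6
        command: ["/hyperkube", "scheduler", "--leader-elect=true"]
```

Because it is an ordinary Deployment, all the usual tooling (rolling updates, `kubectl rollout status`, per-pod metrics) applies to the control plane itself.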
But what we had done inside of the operating system is that Kubernetes is actually in control of the exact version of software that's running on each node. And that status and that information gets pushed back up to the Kubernetes control plane. Reboots are coordinated across the cluster in the case of security updates. And you end up with a system where, when we release a version of Tectonic, you get not just Kubernetes at a set version, you get the operating system at a set version and Docker at a set version. And this entire stack of software is controlled together. And it's all controlled through the Kubernetes API, so you can control, monitor, and view what's actually happening in real time using kubectl. So those are two big things that we plan to bring to OpenShift. The other thing is that we've open sourced a few of the kind of secret-sauce pieces of Tectonic, and they're now available on the OpenShift GitHub, around monitoring. We ended up building what we call the Prometheus Operator, and then a bunch of technology around monitoring inside of Tectonic, so that you get immediate insight not just across the application but, as you saw in the previous demo, into how the Kubernetes control plane is running. So you can dig in and debug issues over time, whether they're host-level issues, pod-level issues, or issues in individual components like services of the Kubernetes control plane. All right. So that's kind of our preview of a few things that we've started to do that are OpenShift specific. And then the other thing is we announced today a thing that we call the Operator Framework. And I'm going to run through and give a quick overview of what that looks like and what we're trying to do here. This is actually my keynote from later today, so you're my practice audience. You didn't know you were in a beta, but welcome. There's some joke in here about being acquired by Red Hat. I already blew that one. 
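The "status pushed back up to the control plane" is visible on the Node object itself: the kubelet reports the OS, kernel, and container runtime versions under `status.nodeInfo`, so the whole stack is inspectable with kubectl. The field names below are standard Kubernetes; the version strings are illustrative placeholders.

```yaml
# Excerpt of `kubectl get node <name> -o yaml` (status section only);
# version strings are examples, not a specific Tectonic release
status:
  nodeInfo:
    osImage: Container Linux by CoreOS 1688.5.3 (Rhyolite)
    kernelVersion: 4.14.32-coreos
    containerRuntimeVersion: docker://17.12.1
    kubeletVersion: v1.9.6
```

An update operator can watch these fields across all nodes, compare them to the desired release, and coordinate reboots one node at a time.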
So operators we introduced two years ago. We introduced an operator for a database, etcd, and an operator for a monitoring system, Prometheus. And the idea of operators is that they're these Kube-native applications that run in pods and are managed via Kube APIs. By "run in pods," I mean you deploy the operator on your cluster and it's just a normal Kubernetes deployment. And "managed with Kubernetes APIs" means that you deploy a resource that's a brand new type of Kubernetes resource. It's not a Deployment. It's not a Pod. It's not a StatefulSet. It's an EtcdCluster, in the case of etcd. And the act of deploying the operator causes this new API to appear on your cluster. Very magical. By analogy, what we're trying to do with operators is something that's impossible to do on the public cloud, which is: I have my application, whatever it is. It might be some cool open source project like Cassandra, or it might be something like an SAP integration that's specific to my organization. And I want to make that available on the public cloud so people can deploy copies of that application. You can't do that on the public cloud. Amazon or Azure or whoever, they're not going to let you just introduce a new service that you can use the Amazon or Azure command line tools to work with. But you can use Kubernetes to make that service available. And by making it available on Kubernetes, it runs across all the clouds. And you can use this one API to deploy not just compute, network, and storage with containers, but some higher-level service as well: your application. Now, we have some feedback that this has worked really well. Ticketmaster was a CoreOS customer. And they use the Prometheus Operator for monitoring. And they just let teams deploy monitoring services for the applications on top of the cluster. 
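The "brand new type of resource" looks like any other Kubernetes manifest. This sketch follows the EtcdCluster resource documented by the etcd operator; the cluster name and version here are illustrative.

```yaml
apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster              # custom type registered when the etcd operator is deployed
metadata:
  name: example-etcd-cluster
spec:
  size: 3                      # the operator keeps three etcd members running
  version: "3.2.13"            # the operator upgrades members to this etcd version
```

Applying this with `kubectl apply -f` is the whole user experience: the operator notices the new object and creates, scales, and upgrades the actual etcd pods to match it.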
And today, they're now up to a couple hundred instances where teams are self-managing their monitoring infrastructure, because it's just a manifest that says: I want to deploy a Prometheus cluster, I want it to be available at this host name, and it needs to monitor these things. And so by lowering the barrier to managing a piece of software, you get more consumption of it, which is exactly how the clouds grew so quickly. And we're hoping that by taking that success of the cloud and bringing it to Kubernetes, we can grow the overall base of Kubernetes software. So our goals here are to bring more operators into the ecosystem and get them in use by more people. The Operator Framework is a toolkit where we're making it easier for people to build these Kube-native apps, like we've done with etcd and like we've done with Prometheus, and make them manageable across lots of different Kubernetes clusters, of course including OpenShift. You can check it out at github.com/operator-framework. And it has two components. It has an SDK, which is a bunch of tools for doing the hard parts of building one of these operators: tracking related Kube resources, test scaffolding, vendoring of the correct libraries. And it looks like this. One of the Google engineers jokingly called a similar project he was working on "Kube on Rails": you create a new operator using the SDK command line tool, describe it, and then scaffolding gets created for you. Phil Wittrock has been working on that similar project, and we're looking to bring them together in a SIG inside of Kubernetes, which is up for proposal. The other piece is Operator Lifecycle Management. So you have these operators, but it's a little cumbersome: you have this YAML file, you've got to deploy it, and then how do you upgrade it, and what version got deployed? There are just a bunch of questions. 
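That self-service "manifest that says I want a Prometheus cluster" looks roughly like this. The field names come from the Prometheus Operator's Prometheus custom resource; the team name, labels, and URL are placeholders, not Ticketmaster's actual configuration.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: team-a
  namespace: team-a
spec:
  replicas: 2
  # "available at this host name"
  externalUrl: https://prometheus.team-a.example.com
  # "it needs to monitor these things": pick up ServiceMonitors with this label
  serviceMonitorSelector:
    matchLabels:
      team: a
```

Each team owns a manifest like this in its own namespace, which is why hundreds of instances can be self-managed without a central monitoring team in the loop.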
And so what we're trying to do with Operator Lifecycle Management is maintain a catalog, so you can go in and say, these are the versions that are available to me; make operators available to specific namespaces, so that a cluster admin has control over what people are deploying as their monitoring tool or their database; track those instances across namespaces, so that people like the folks at Ticketmaster are able to figure out how many instances exist; and then, of course, apply updates in case there's some problem in the piece of software, like the monitoring stack has a security issue. So it looks like this: we have these manifests, we put them in a catalog, and then you're able to deploy them across namespaces. And OLM, Operator Lifecycle Management, is really solving this question of, well, how do I deliver my app onto the Kubernetes hybrid cloud? And you can do this with things built with the Operator SDK, but you can also do this with Helm charts or the Kubernetes built-in types; there are docs on the repo if you're interested. So, quick recap: it's open source, it's up here, star the repo, because that's how open source software wins, lots of GitHub stars. And the next steps here: we want to make more operators more easily and bring more users to those. And the why is that we want to make Kubernetes the dominant API for cloud native applications moving forward. We believe at Red Hat, and I believe as somebody who's been in this ecosystem for the last five years, that this is our opportunity to make a compute, network, and storage infrastructure that can run anywhere, from somebody's laptop to somebody's data center to somebody's public cloud. If you want to find any of us who've been working on this, these are the faces; Kelly is right there. In particular, I don't know where Rob and Jimmy are; I think they're on a plane somewhere, lost in Amsterdam. And that's all I got. Thank you very much for your attention.