All right everybody, hopefully you've heard this term "operator" before; we're gonna dig in a little bit. You might have heard me speak about this, I've done this at a number of different KubeCons and elsewhere, hopefully maybe at some of your customer sites as well. And I want to start by talking about the goal of why these operators exist. I came from CoreOS, and we introduced this concept in 2016 or so, and it's all about having a SaaS-like experience on your own infrastructure. You heard some of the folks at Broadcom and others talking about how individual teams are experts in their own domain and they want to remain that way; they don't need to be these really broad experts, don't need to know Terraform if that's not part of their job. Think about the SaaS experience that you get when somebody wants to pull in a database from a cloud service and then use it. You're not an expert in MongoDB. You're not an expert in Postgres. But you know how to connect to it. You know how your application works with it. That's the goal that we're aiming for with this operator concept. And this is across all different types of applications: databases, queues, storage, other DevOps tasks, AI/ML, all this kind of stuff. All of it works great with an operator on OpenShift. So just to level set, what is this thing? What is an operator? Well, it's taking the knowledge of an application from the experts, whoever built this thing in either an open-source community or a commercial entity, and packaging that into a piece of software that then drives a Kubernetes cluster. At the end of the day you're going to have some Deployments and StatefulSets and maybe some Secrets and maybe a ConfigMap, all the things that would drive an application. But as you get to a reasonably complex application, it's not just going to be five of these things.
It might be 900 objects if you're running a really highly scaled, multi-tier application. And you can bake all that expertise into an operator so that you're not having to wrestle with all of those objects. So we built something called the Operator Framework to help you do this. This is a number of different subtools inside the framework itself, the main one being the Operator SDK, which helps you build these operators: how do you take the knowledge that's in your head about your application and put it down into a piece of software? And then once you have these built, the Operator Lifecycle Manager helps you actually run them on a cluster. If you think about some of these really large OpenShift clusters that we've heard about today, it's not just going to be one or two of them; you're going to have maybe nine clusters at 30 to 40 nodes each. You're probably not going to be running just one operator; there are going to be a few of them. And so you need to handle the permissions: how do I wire up the lifecycle of these? How do I upgrade them? All of that is what the Lifecycle Manager does. And then hopefully you've heard of OperatorHub.io at this point. This is a community listing of operators for the entire Kubernetes community to discover and play around with. You also get some of that content inside your OpenShift 4 cluster as well. So the SDK, what does it actually look like under the hood? There are actually three different flavors of the SDK. The first one is our Helm SDK, and this is great for folks that have invested in Helm charts already, so you've got teams that understand that whole flow. What you can do is bake that into an operator so that you have an immutable artifact. An operator is really just some code inside of a container, like anything else in Kubernetes.
And you can now stash that away and hand it off to any QA teams or other teams that might want to consume your application and run it in their environment, their CI system, whatever it is. The nice thing is this is all driven by Kubernetes extensions, so you're just talking to a Kubernetes API to deploy your WordPress site or anything else, your scale-out application. Now, if you have an investment in something like Ansible, you can also make an Ansible-based operator. This takes your existing playbooks and other Ansible modules and, once again, wraps them in the Operator SDK so it's reacting to cluster events, and you can deploy your applications that way. So think about all the automation that you get out of Ansible, even if you need to, say, poke or prod an external hardware load balancer: you can bring that into the Kubernetes world and reuse all that existing knowledge you have, but tie it into the Kubernetes event stream. And then lastly, we have our Go SDK. This is based on the same tools upstream Kubernetes uses to build Kubernetes itself. Kubernetes, and especially OpenShift, is just a series of these control loops, these operators that are running. And so the Go SDK is kind of the cream of the crop if you want to build something really smart. We're going to bring up some of our partners here in a few minutes, and they are primarily using this SDK because they've got a ton of power and a ton of complexity in their operators. Actually, who here has deployed Helm? As you have more users, you start to template more and more things about your chart, and I bet every once in a while you end up with a huge pile of variables in there and kind of wish you had started with a complete programming language.
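To make that control-loop idea concrete, here's a minimal sketch of the observe-compare-act pattern that operators are built around. This is not the real Operator SDK API; the types and the reconcile function here are invented for illustration, and a real Go operator would use client-go or controller-runtime to watch and mutate actual cluster state.

```go
package main

import "fmt"

// DesiredState and ObservedState are hypothetical stand-ins for the
// spec and status of a custom resource.
type DesiredState struct{ Replicas int }
type ObservedState struct{ Replicas int }

// reconcile compares desired vs. observed state and returns the actions
// a real operator would perform against the Kubernetes API. The control
// loop runs this every time a watched object changes.
func reconcile(desired DesiredState, observed ObservedState) []string {
	var actions []string
	switch {
	case observed.Replicas < desired.Replicas:
		actions = append(actions, fmt.Sprintf("create %d pod(s)", desired.Replicas-observed.Replicas))
	case observed.Replicas > desired.Replicas:
		actions = append(actions, fmt.Sprintf("delete %d pod(s)", observed.Replicas-desired.Replicas))
	}
	return actions
}

func main() {
	// The cluster is short two pods, so the loop asks for two creates.
	fmt.Println(reconcile(DesiredState{Replicas: 3}, ObservedState{Replicas: 1}))
}
```

The "smarts" of a Phase 5 operator all live inside that reconcile step: instead of just counting replicas, it might inspect metrics, reconfigure the application, or trigger a failover.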
The nice thing is, with this SDK you can move between these and get the best of both worlds. You're also going to hear us talk about this operator capability model. Once we have these SDKs, what are you actually producing at the end of the day? You want to have a high-quality operator; you want that SaaS-like experience running on your own infrastructure. And so the way we think about this is trying to move operators as far to the right-hand side of this as possible, to these Phase 5, autopilot operators. Just think about your ops hero: a person on your operations team who is always on it, reacts in milliseconds, knows every single combination of every config flag, every best practice about a database or something like that. This is what that autopilot level is describing, and we want all of the operators in the world to get to this level. But that doesn't always happen at first, and so these different capability levels in between communicate that out to the community. Down here, you can see the different types of SDKs. Some of the Helm operators are just interacting with Kubernetes objects, so things like log processing or anomaly detection aren't something you can do there that you can do with some of the other frameworks. And so that's where you see the spectrum on the end of it. A bunch of these operators have already been produced, which is great. We have a certification program for them. You can find them in your OpenShift 4 cluster today if you've got one up. And they really run the gamut across all these different categories: security tools, monitoring tools, storage, databases, security scanners, all kinds of stuff. And remember, these folks have built their expertise into these, so you don't need to be an expert in any of these technologies to use them really successfully at scale. And we're really excited about this.
And so if you are working on an operator and you want to get it certified, please let us know. There's a whole thriving community of operators out in the upstream as well, for folks working on different pieces of technology. A little bit more about some of the SDKs and where they are today. Our Helm SDK is underway getting support for Helm 3, which was just released last week; we're really excited about that. Our Ansible operator SDK is going to get its 1.0 version out hopefully this quarter. They've been doing a lot of hard work on that, using the new UBI base image if you're familiar with that. And then the Golang library is always picking up new versions of Kubernetes, so we're looking at Kubernetes 1.14 and changing some of the way the modules work, as the Go community has changed that up as well. A whole bunch of other really great stuff is going on there, further tying all of those together. So I mentioned OperatorHub and that you can get these inside your OpenShift cluster. This is what that looks like today. You've got the community listing, OperatorHub.io, which you can find inside your OpenShift cluster, as well as Red Hat products, things like our Kafka product, AMQ Streams, and a number of other things, as well as certified operators from a bunch of the partners that are here upstairs doing a bunch of really great work. One cool new feature that I want to call out: if you are packaging an operator, we have this new bundle editor. The CSV, the ClusterServiceVersion, is the metadata file that describes an operator; it's got its version, the description, some of the permissions that it needs, how it gets upgraded, et cetera. And you can actually build these live on OperatorHub.io now, which is really nice. There's a really great form, which you can see here, that helps you fill out that whole thing and get it submitted so it shows up for the community. So if you haven't checked that out, please go do that.
And now I wanted to dive into a few different things under the hood that I think are interesting, whether you're getting your first dose of this and you're in an OpenShift 3 world, or you've interacted with some of this already; these are some of the ways it's getting better in successive OpenShift versions. So in OpenShift 4.1, we did static dependency resolution. A really key part of operators and the Lifecycle Manager is that you can have operators that depend on each other. I bet everybody here has an application that depends on a logging stack. So if you've got your EFK stack or whatever it is, you could defer that entire thing to an operator already deployed on your cluster so that your team doesn't have that operational burden. You just say, hey, I need a stack in prod and in dev and staging, et cetera, and you go do that. In 4.1, this is static, but in 4.2, it's now automated. Through the Lifecycle Manager, based on the Kubernetes CRDs that you've said you depend on, which would be like that logging stack, it'll go find an operator that works for you and install that. It's the same if you're looking for a database or a queue or a web server or a caching layer, any of those types of things; you can do this automatically across your cluster, which is really great. Now, you can do this for all the certified and Red Hat products and all the stuff I've been talking about, but even more powerful is doing it for your internal applications. So picture that you've got a database administration team that runs a tier of databases for folks. You can start discovering those and using a very specific flavor of your company's implementation via an operator.
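At a high level, that automated resolution is a small dependency walk over a catalog: each operator declares the CRDs it provides and the CRDs it requires, and the resolver pulls in providers for anything missing. The sketch below is not OLM's actual algorithm or data model, just an illustration of the idea with invented names.

```go
package main

import "fmt"

// Operator is a hypothetical, simplified catalog entry: the CRDs it
// provides to the cluster and the CRDs it requires from other operators.
type Operator struct {
	Provides []string
	Requires []string
}

// resolve returns an install order for the named operator, visiting
// providers of each required CRD first, the way OLM's automated
// dependency resolution works in spirit.
func resolve(name string, catalog map[string]Operator) []string {
	installed := []string{}
	seen := map[string]bool{}
	var visit func(n string)
	visit = func(n string) {
		if seen[n] {
			return
		}
		seen[n] = true
		for _, crd := range catalog[n].Requires {
			for providerName, p := range catalog {
				for _, prov := range p.Provides {
					if prov == crd {
						visit(providerName) // install the provider first
					}
				}
			}
		}
		installed = append(installed, n)
	}
	visit(name)
	return installed
}

func main() {
	// Hypothetical catalog: my-app depends on a logging stack's CRD.
	catalog := map[string]Operator{
		"my-app":  {Requires: []string{"elasticsearches.logging.example.com"}},
		"logging": {Provides: []string{"elasticsearches.logging.example.com"}},
	}
	fmt.Println(resolve("my-app", catalog)) // logging lands before my-app
}
```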
Or if you've got a standard rate-limiting tier that you use, you want to produce an operator for that and share it amongst 10 or 20 different teams: it's a really powerful way for one small team to have a big impact across your organization. So if that sounds interesting to you, and it sounds like the Omnitrax folks are going down that path already, which was really cool to see, take a look at some of this technology. Also really important: operators are typically interacting with cluster-wide resources, or at least the CRDs are registered cluster-wide. And so it's important to have a little bit of indirection, because you don't want all of your users installing CRDs. What you do is use the Lifecycle Manager to do that, and it's going to safely install them as well as wire up a bunch of permissions. So: I want to install this operator in a specific namespace; go generate a service account, attach that to a dynamic role that has only the minimum set of required security permissions, and make that all work for me. What you can do now in OpenShift 4.2 is choose a kind of custom run level that these can reach up to. So if you want to lock down your environment a little bit more for a specific environment or a class of users or specific teams, divisions, whatever, you can now do that in OpenShift via OLM. One that I'm super excited about, that's going to give all of our partners and teams that are building operators a more seamless experience, is being able to auto-install one-click CRs. A CR is just an instance of a custom resource. And so if you have something like a metering stack or a chargeback stack or maybe even a storage stack, and you're only going to run one of them inside of OpenShift, now you'll just click a single button and get that instantiated, versus something like a database operator where you might run 30 of them.
You actually need each team to make their own objects: their prod database, their staging database, in their namespaces, from their quota. So this is just going to be really awesome for products that are more like cluster add-ons. Think security scanners: a one-click install from any of the partners that work on OpenShift is going to be a really nice way to get started with those, as well as a bunch of OpenShift products themselves. So if you've checked out our new service mesh offering via operator, or the serverless and Knative technologies, logging, container storage, et cetera, they all operate in this cluster-singleton way, so you get a much smoother installation path. Also something that's coming in 2020 is simplifying the object model for operators. We're going to introduce a new single Operator object. Right now this is split between a few different versions of objects, for reasons that we're not going to get into. What this is going to give you is a really easy local dev and registration process if you are developing an operator internally, making it very easy to load into your cluster. What that looks like is you would use one of these SDKs on the CLI to build a type of operator, push it to a container registry with all of its assets just like you would any other container image, and then you can pull and start that on a cluster. Today we've got some tools for doing validation of this process, but we want to make it really, really easy for engineering teams inside your organizations to build these operators and empower themselves to be a little bit more automated in how they're doing deploys, GitOps workflows, and that type of thing. Something else that's really cool, especially for any of our partners in the room: in OpenShift you can extend our console to integrate your product into the experience.
You can also do this for internal applications that you have. For example, with our OpenShift Dedicated product that we talked about earlier, we swap out some logos and put some banners up and all that; it's all driven by an operator, and you can do this inside your organization as well. If you've got a branded infrastructure team and you want to put their logos up there, broadcast maintenance messages, and that type of thing, or point to locations for getting CLI downloads that are inside your firewall, for example, or that you know you've built and vetted, you can do all of this via the operator, which is really powerful. And then our third-party partners can also register extensions. Here you can see an example of a Couchbase operator that has said, hey, I have a dashboard that is useful to users of this cluster because they're going to be using Couchbase. You can start registering those inside this menu, and we're going to be introducing more and more of these over time. Also really important: in the deployment of these operators, when you're actually making your instances of your databases and queues and things, having a UI to do this is really helpful. Not everybody lives in oc or kubectl, and not everybody has hooked this up to a Jenkins pipeline for deploying these things. And so we are actually going to auto-build some UIs based on the OpenAPI schema in your CRDs. If you weren't aware, an OpenAPI schema, I believe, is going to be required in Kubernetes 1.16 for new CRDs going forward, and so we can use that rich data to say: this is optional, this is required, this is an integer, this has a very specific set of allowed inputs, and then we can build Kubernetes-specific widgets for that.
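The core of that auto-generated UI is a mapping from schema properties to form widgets. Here's an illustrative sketch; the `Property` type and the `kubernetes.io/secret` format hint are invented for the example and are not the console's actual data model.

```go
package main

import "fmt"

// Property is a tiny slice of an OpenAPI v3 schema property, just
// enough to pick a form widget. Field names are illustrative.
type Property struct {
	Type   string
	Format string   // hypothetical hint, e.g. "kubernetes.io/secret"
	Enum   []string // allowed values, if constrained
}

// widgetFor maps a schema property to the kind of console widget the
// talk describes: dropdowns for enums and secrets, toggles for booleans.
func widgetFor(p Property) string {
	switch {
	case p.Format == "kubernetes.io/secret":
		return "secret-dropdown" // pick from secrets on the cluster
	case len(p.Enum) > 0:
		return "dropdown"
	case p.Type == "boolean":
		return "toggle"
	case p.Type == "integer":
		return "number-input"
	default:
		return "text-input"
	}
}

func main() {
	fmt.Println(widgetFor(Property{Type: "boolean"}))
	fmt.Println(widgetFor(Property{Type: "string", Format: "kubernetes.io/secret"}))
}
```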
So if you are making a database and you want to take some cluster quota and put some limits and requests on it, we actually have a special little widget that helps you do that and can do some smart validation, because it understands Kubernetes. Same thing if you have an operator that requires a secret, either passed into it or generated for you: it knows that, hey, this is a secret path, I can open up a dropdown and you can pick from a secret. It's one way to keep these things easier and easier to use when you have engineers who are on a whole spectrum of Kubernetes experience; some folks are really deep into it and some people aren't. This is what that looks like over on the right-hand side here: you can see a dropdown that says, hey, pick the secret that you want, listing a bunch of secrets on the cluster, and you've got these Boolean toggles and things like that. You can also contribute these yourself; our console is open-sourced just like the rest of OpenShift, and we'd love to work with you on those if you have them. All right, the last thing is super exciting. Who here uses the Open Service Broker, or at least has heard of this project and the binding capabilities that it has? All right, a few people. So we're bringing that to the operator world. What this looks like right now is a separate operator that you can find on OperatorHub, and what it's going to do is look across your cluster and try to fulfill these binding requests, which are effectively label queries on each side: a front end needs something from a back end, you match those up via label query, and then it gets those secrets registered with those applications; this operator makes that happen. We'd like to see this built into the Lifecycle Manager over time.
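At its core, the matching that binding operator does is just label-selector evaluation on each side of the request. Here's a minimal sketch of equality-based selector matching; the labels and the front-end/back-end scenario are hypothetical.

```go
package main

import "fmt"

// matchesSelector reports whether a set of labels satisfies a simple
// equality-based label selector, the building block of the binding
// requests described above. (Real Kubernetes selectors also support
// set-based expressions like In and NotIn.)
func matchesSelector(labels, selector map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	// Hypothetical example: a front end asking for any service
	// labeled as a backend tier.
	backendLabels := map[string]string{"app": "postgres", "tier": "backend"}
	frontendNeeds := map[string]string{"tier": "backend"}
	fmt.Println(matchesSelector(backendLabels, frontendNeeds))
}
```

Once two sides match, the binding operator's remaining job is plumbing: copying the connection secrets into the consuming application's namespace.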
It's kind of in an experimentation phase right now, and I know this is really important for our ecosystem of partners as well, where you end up using a lot of these operators together. Like I said, you might have a database and a logging stack and then the front end of your application, and you want to pass secrets amongst all of those applications. Also new in 4.2 is a new topology view in our developer-focused UI, and these bindings and relationships are represented in that as well. So if you haven't checked that out, it's pretty cool: if you dynamically update these, it'll show them being wired together. Once again, if you're a command-line person, totally fine, but if you like to use the UI to see a representation of these applications, it's a little bit more powerful there as well. And you can check this out on OperatorHub; it's a one-click install inside an OpenShift cluster today. I think that is all I had on this side of it. So we're gonna have a panel here.