Thank you, Diane. Hi, everybody. So hopefully you aren't too sick of the term operator, because we're going to talk about it a little more. I want to talk about the state of the tools we have to build operators, the state of the ecosystem, and where we go from here. First, a little background on operators in case you've heard the term but don't know where it came from. It basically maps to the adoption phases of apps in Kubernetes, much like we've heard from the folks on stage today. We went from stateless applications that just scale out, like nginx pods and caching and some of the stuff you heard about earlier. Very easy to replicate. But pretty quickly you want to get into stateful workloads, and we've got things in Kubernetes like the Container Storage Interface and StatefulSets to help you do that. You can get a very simple Postgres deployment going where you're just sharing storage that follows the pod around the cluster. But very quickly we've moved on to complete distributed systems, and you don't have all the primitives in Kubernetes that you need for things like data rebalancing. There's no Kube object for data rebalancing. There's no Kube object for backups or seamless upgrades. Running these complex distributed systems needs one level higher than the Kubernetes API, and if you haven't guessed what provides this, it's an operator. What is an operator at its core? It's taking expertise from the folks that know how to run and scale that software, what it takes to install it, fail it over, scale it up, scale it down, whatever you need to do, and baking that operational expertise into a piece of software, such that you can combine that software with some desired configuration and out pops a bunch of Kubernetes objects. So you're not interacting with all those low-level objects directly.
If you've ever run a complex distributed system on Kube, you've probably got something like 35 different YAML objects for all kinds of stuff, and it gets kind of crazy. With an operator, all you need to do is interact with one top-level config. The exciting thing is that you're building on Kubernetes primitives. Since all this stuff ends up as Deployments and StatefulSets and ConfigMaps and Secrets, you're not reinventing how any of that works. We've got thousands of engineers working upstream on Kubernetes making secret handling really great, making service discovery really great. You don't need to reinvent that. Instead, think of all of that as your toolkit you can use to construct your application. It doesn't have to be a traditional three-tier web app; it can be any architecture under the sun. And because you're using Kubernetes under the hood, these are now truly hybrid apps. They can run on any conformant Kubernetes cluster, and you get a uniform deploy and debug experience using kubectl and any tooling you've already built around the Kubernetes API. This is really great when you have engineers moving between teams, or, as we heard about earlier, when everyone is doing 90% of the same thing in their Jenkins pipeline or whatever tooling they built to deploy applications. You can share a bunch of that knowledge using an operator as well. Do we want that? Does that sound great? Is that the nirvana we're after? All right, then let's talk about how we get there. It starts with the Operator Framework, a collection of open source tools we introduced about a year ago. It has two main use cases: folks in the community, and builders that are distributing software. It allows you to very easily create an operator around your application.
And that can be a database you sell as a commercial vendor, an open source ecosystem like the TensorFlow community, or an internal application within your bank, your insurance company, your e-commerce shop, whatever you do. At some point you need to actually deploy that on a cluster, and that's done by end users, whether those are SREs within your organization or customers of yours. They want to keep this stuff up to date in the days of all these security vulnerabilities, of which we just had a new one last week. This stuff needs to be up to date; you need a stream of updates coming down. We think a stream of updates to the operator, which then operates the software, makes a ton of sense. So if you haven't looked at the SDK, the Lifecycle Manager, and Operator Metering, those are the three main components of the framework. They roughly map to building new operators, running those operators on a set of clusters, and then collecting metrics and other data at scale once you're running, say, 1,000 databases with an operator. These are all housed in a vendor-neutral GitHub org, operator-framework, along with a few other projects. One thing that's really important to understand when we talk about operators is the maturity model. These things can be really complex, and every application is a little different, but we want to be able to map them so we can talk about them in relation to each other. At the bottom here you'll see the flavors of the SDK, which we're going to talk about, and roughly how each technology maps to these maturity levels. It isn't a perfect diagram; you can land in between a bunch of these phases.
And you don't have to use any of our SDKs, either. But it roughly goes from table stakes, install and upgrade, which everybody here is already doing with software today, all the way to the right-hand side: picture the smartest cloud service you've ever seen, something that's horizontally and vertically auto-scaling, auto-tuning based on the number of requests or the type of workload it's processing, auto-upgrading, auto-backup, auto-failover. Picture that cloud service; that's what the last category is all about. What these SDKs and the community we're building around them are designed to do is get everybody to phase five of this maturity model. We think that's the key to a truly hybrid cloud and the experience you actually want: that cloud-like experience, but on-prem, on any hardware you want, in your private data centers, behind your locked-down environment, on Amazon, whatever it is, a truly cloud-like experience powered by Kubernetes. So how do we get there? The first step is taking one of our SDKs off the shelf, or you can build an operator yourself. These three flavors share some things in common, and the testing framework is the first. This is really important, because if you picture a database operator, say a Postgres operator, there's going to be a desired state loop in there that's checking all the Postgres instances you have out there: whether they need to do leader election, whether they need to fail over, whether they need to be backed up, whatever it is. You want that to work. You don't want to be in production with your databases getting deleted by your operator while your backups aren't running.
It's really critical that this is well-tested, and our SDKs help you do that no matter what technology you're using. Then we have something we call the scorecard, which is a kind of black-box test for an operator: can I instantiate this on a cluster when I give it a custom resource? Does it actually do something? There's a proxy that instruments the Kubernetes API calls it's making, so we can verify it's doing the right thing. With all of this you can rest assured that you have a very high quality operator, because you're trusting your stateful workloads and your complex apps to this thing. And then we have OperatorHub, which we're going to talk about, that houses all of this. I want to walk through the SDKs, beyond the high level you got in previous presentations, and dig a little into the technical details so you know what you're getting into if you want to go home and build one of these. The first and easiest way to get started is with our Helm SDK. I call this the quote-unquote "no code" option; it's different from Kelsey Hightower's no-code thing, if you're familiar with that. Pretty funny. We've written all the tooling for you to take an existing Helm chart, build it into an operator, and run that on a cluster. It constantly looks for changes to your desired configuration, which is essentially your Helm values.yaml, and applies those out by rerunning the templating. And this is all it takes: you can just say, I want to take the stable Tomcat chart and build it into an operator. The nice thing about what's happening under the hood is that it's a container build.
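To make that concrete: the Helm SDK's generated project wires a custom resource kind to the chart through a small watches file. This is a minimal sketch; the `apache.org` group and the chart path are illustrative, not taken from a real project:

```yaml
# watches.yaml: tells the Helm operator which custom resource kind
# maps to which chart. Group, version, kind, and path are illustrative.
- group: apache.org
  version: v1alpha1
  kind: Tomcat
  chart: helm-charts/tomcat
```

Whenever a Tomcat object changes, the operator re-renders the chart with the object's spec as the values and applies the result.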
We like containers because they're immutable artifacts: we can version them, and we know we're going to get the same deployment when we run it on this cluster or that cluster. You get the same thing with an operator. The Helm chart is now built into a container and versioned, so when you run it on any number of clusters, in a QA process, or kicked off from a Jenkins pipeline, you know you're always going to get the same output given the same input. And because, as I said, it's Kube-native tooling, if you run kubectl get tomcats or oc get tomcats, you see here we've got two of them running, totally native to the experience you're used to. All these things get written to the Kubernetes audit log, you can put RBAC around them; it's all just a really great, Kube-native experience. What you're operating against is that Tomcat object you see, which is a custom resource, an instance of a custom resource definition. The exciting thing is that that is your API surface now. You can change your operator as much as you want, implementing new features, adding new things, but this object is what your end users are going to be coding against. We also have an Ansible SDK that gets you much the same experience, but if you have an investment in Ansible Playbooks, or maybe you come from more of an ops background than a traditional software development background, this is a really great way to get started. The input, instead of a chart, is an Ansible Playbook, and you map those playbooks to certain events that happen on the cluster: I changed this value, so run this playbook. You get the exact same experience on the other end, same object schema, same experience in oc. It doesn't matter which technology you use to build these. And the last flavor of our SDK is our Go SDK.
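That top-level Tomcat object is just a custom resource whose spec mirrors the chart's values.yaml. A hypothetical instance might look like this; the field names are made up for illustration:

```yaml
# Illustrative custom resource instance. The spec fields would mirror
# whatever the underlying chart exposes in its values.yaml.
apiVersion: apache.org/v1alpha1
kind: Tomcat
metadata:
  name: example-tomcat
spec:
  replicaCount: 2
```

This one small object is the entire interface your users see, no matter how many Deployments, Services, and Secrets the operator stamps out behind it.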
And this is the cream of the crop, the most powerful SDK we have. Under the hood it uses the same tooling Kubernetes developers use upstream, so you can write extremely powerful operators. This is what a lot of the stateful workloads use, the Mongos and the Redises, Couchbase, Crunchy Data, all these folks, because they need a lot of control over what's happening in the operator. The code chunk I have here is a really simple stubbed-out desired state loop. This is what your operator is constantly running. So when I talk about bringing in the operational expertise you have about your piece of software, this is what you're writing. Here we're just saying: if we don't have any Tomcats, this is an initial deployment, so here's the code to construct one. Create these StatefulSets, create this ConfigMap, generate this TLS cert, map it into these pods, et cetera. You stub all that out, and then you've got this constant loop running, checking all the parameters of your Tomcats. And if your operator knows how to do a new thing in the future, it can upgrade them to it; that's a really key part. We wrote all the logic to talk to the Kubernetes API and all of that, so you can just focus on writing this desired state loop. So we're really, really excited about how operators have taken off across the industry. I mean, you've heard it a bunch here. It was a big thing at Red Hat Summit last week. It's kind of just taken off. So we've got a bunch of great partners writing operators, and they've listed them on OperatorHub.
And the idea here is to let your end-user software engineers, the folks you're supporting, or you if you're on one of these teams, get workloads running really quickly in a production-ready environment without having to be an expert. If you need to run a Spark cluster, you of course need to know how to use Spark, but you don't need to be an expert in how this component does service discovery when it comes up, how it talks to this thing, how load balancing works. You just need that high-level understanding, and that applies to every workload you run, from messaging queues to stateful workloads to machine learning and AI workloads, all that type of stuff. I want to call out one example, which is kind of cool: Amazon has a service operator, listed on OperatorHub with all the other operators we have. What it does is translate Amazon objects, like an S3 bucket or an RDS database, into Kubernetes CRDs. So once again, you're using Kube-native tooling; you can put RBAC around who can create and modify them, but behind the scenes it's actually talking to Amazon, creating all those objects for you. I think that's where the operator concept is going: talking to other remote resources, connecting with internal workflows your organization might have, much like we heard from SIX earlier, where a new project request form goes and actually populates something on the cluster and reports back. I think we'll see a lot more of that type of thing once we get a little more advanced. So, I told you we have to iterate through that desired state loop, and it's interesting to see how that informs your knowledge of what your application actually does.
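Creating a bucket then becomes a matter of applying a custom resource rather than calling the AWS API directly. This is only a rough sketch; the actual group, version, and schema of the AWS service operator's CRDs may differ:

```yaml
# Illustrative only; real AWS service operator CRD fields may differ.
apiVersion: operator.aws/v1alpha1
kind: S3Bucket
metadata:
  name: example-bucket
```

The nice part is that the same RBAC, audit logging, and GitOps workflows you use for in-cluster resources now cover cloud resources too.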
We're going to hear from our panelists in a second, hopefully about some things they've learned about their own applications just by building an operator, because you really have to shift your mindset into how this works. You're using the Kubernetes cluster as your source of truth. You don't want to require outside input, because then you won't get the same result if you hand this operator to somebody else. And ultimately, at the end of the day, you're just going to make a bunch of Kube objects. It's always worth keeping in mind as you build these operators that what we're going for is what you've seen a lot of folks on stage talking about: self-service for engineers. Whether it's a UI like you see here or a command line, you're interacting, in this case, with a MongoDB replica set. This gets you an HA, production-ready form of MongoDB. You can keep it in a Git repo, for example, and do GitOps. But it's self-service, so you can tune it if you've got settings you want to move from staging to production. You don't have to get your admins involved or deal with your central IT team, which is a really, really powerful concept when you want 40 teams sharing a cluster. It's the only way you can scale to that. And I mentioned GitOps, and you've heard it today; the GitOps story is really cool when this works. Think about a pull request you've seen recently for a bunch of Kube objects. I mentioned that sometimes you have a complex app made up of 35 Kubernetes objects. You're reviewing a change that touches maybe 20 of those things, say a new secret that you need to wire through everything. Now you're reviewing those 35 things, and you're maybe not an expert in all of them, because it's a front end and a back end, for example.
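That whole HA MongoDB deployment boils down to one small object you can keep in Git. The shape below is a sketch; the exact fields depend on which MongoDB operator and version you use:

```yaml
# Sketch of a replica-set custom resource; field names are illustrative.
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set
spec:
  type: ReplicaSet
  members: 3
  version: "4.0.6"
```

Promoting a tuning change from staging to production is then just a diff on this object, not on dozens of underlying resources.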
So you're kind of like, yeah, that PR seems good, but I don't really know how it actually bubbles through everything. Wouldn't you rather just look at two very high-level YAML objects and say, oh yeah, I can see we're scaling up the MongoDB and we aren't touching the front end; or, we're tweaking both of these configurations to get a better security policy, whatever it is. That's the power of this: as you onboard more and more teams onto a cluster, if you want to introspect what's going on, you need a higher-level view. You might have 1,500 pods, as we saw, or clusters with thousands and thousands of pods on them; it's very hard to see what's going on. Then, as an admin, you have full insight into the operators that are running, and we talked about that stream of updates. You need to know exactly what you're subscribed to; maybe in production you don't subscribe to a stream of updates from Couchbase, for example, but in staging or dev or everyone's individual environment you do. That's really powerful, but you also need to see what version everything is at and what channels you're on in different namespaces. You can see that inside the OpenShift console, and you can also interact with these via the command line. This is the Operator Lifecycle Manager at play behind the scenes. It's all built on CRDs as well, so you can interact with it any way you want and, most importantly, lock down the RBAC around it. So I want to encourage you to try this out. We've got a getting started guide that ties together the entire Operator Framework, and there are also getting started guides for each flavor of SDK we saw. If you've got Helm charts, if you want to try the Ansible SDK, or if you want to give the full Go SDK a whirl, you can find all that at the top link.
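That stream of updates is expressed to the Lifecycle Manager as a Subscription object: you pick a channel from a catalog, and OLM keeps the operator current on that channel. A sketch, with the catalog and package names as assumptions:

```yaml
# Illustrative OLM Subscription; package and catalog names are assumptions.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: couchbase
  namespace: staging
spec:
  channel: stable              # update channel; dev/preview in other environments
  name: couchbase-enterprise   # package name in the catalog
  source: operatorhubio-catalog
  sourceNamespace: olm
```

Because the Subscription is itself a CRD, the per-environment policy (auto-update in staging, pinned in production) is just more declarative config you can review and lock down.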
We also have an operator special interest group that meets as part of OpenShift Commons. It's a group of folks like yourselves solving problems together, showing off things they've made, and figuring out the best practices for doing X or Y. Some of the things we're working on in this group: if you're familiar with the Open Service Broker and its binding concept, what does bringing that into the operator ecosystem look like? How do operators work together? If you've got a cluster monitoring operator and a database operator, can you auto-orchestrate monitoring of that database? Those are the kinds of things we're talking about in this SIG, so we'd love to have you bring your use cases as well. And lastly, if you just want to consume some operators or see which ones are out there, operatorhub.io is constantly updated; I think we've got 30 or 40 operators on there right now. You can also find these inside an OpenShift 4 cluster, as you saw this morning. It's a really exciting community we're building there. And if you have an operator you want to list, please let us know; there's a pull request process on GitHub for that as well. So with that, we're going to start our panel, so if our panelists could start coming up. I'll be around afterwards, and we'd love to take your questions about building operators a little later on.