Hi. Welcome. Thank you very much for coming. As Diane said, I'm Sebastian. I work at Red Hat on operators, and I lead the teams that are doing all of that. So it's not just me. It's mainly a lot of engineers working with passion on this project. I'm going to go into detail today about operators, what an operator is, and what kind of tools we built at Red Hat to try to help the community and help ISVs to build these operators, and also to distribute them and operate them in production. That is the whole goal of this endeavor. So containers brought us a lot of simplicity, complications as well, of course, but simplicity. Before, as a developer, when I wanted to start developing my software, I couldn't just start, I had to do other things first. I would have to know how to configure, let's say, if it was a web app, my Postgres database or my MySQL database locally. That means installing it and configuring it. The operating system's packages help, but you still had to go through config files until it finally ran in the configuration that you wanted. If you wanted to use something like Redis, you would have to do the same thing. What containers brought us is the ability, at least for developers, to do something very simple: `docker pull postgres` or `redis`, run it, and I can start developing. I don't have to do anything else. That's pretty tremendous. It shaves off hours of time, and it also allows a whole category of people who couldn't do this at all before to start working on their code and focusing on their own problems. Now, the problem with Docker containers and all of these things is that the promises don't translate completely into the production world. That's why we have things like Kubernetes. Being able to run something locally doesn't mean that you can run it on a cluster with fault tolerance and failover, and you also need things like managing upgrades. Kubernetes helps with these things as well.
It comes with a tremendous amount of generic features to help you, for example, build an application and put storage behind it, and if your application dies, the storage will follow it. It gives you high-level primitives like secrets and all kinds of other APIs that you can use to build and deploy your applications. But today, when you use Kubernetes, it's still a lot of manual steps. An application is not just a single package in a container that you run somewhere. It usually involves a lot of domain-specific knowledge that lives in the heads of domain experts, of operations people. This is true internally at companies for their own software, but also for big open-source projects and big commercial projects. All of these projects require a lot of knowledge to operate: just handling upgrades from one version to another, or scaling such an application up and down based on concrete and important metrics. What operators bring is a pattern for all of that. In 2016, CoreOS came out with this pattern that we could build on top of Kubernetes: leverage all these generic APIs and build a system that does all of this automatically. You transform all your domain-specific knowledge into code, and you leverage and extend the Kubernetes APIs to run that code and to deploy it on top of not just one Kubernetes cluster, but multiple clusters across multiple cloud providers and also locally. What that gives you is something akin to a cloud service. Today, you can go to Amazon and ask RDS for a Postgres database, and you won't worry about a lot of things. You'll worry about others. The same thing we want to bring to open-source and commercial software. We want people to be able to provide that same mechanism and feeling of being able to deploy instances of completely production-ready services one by one, one after the other. I'll show later an example of how you can do it with a Prometheus or an etcd operator.
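The "extend the Kubernetes APIs" part is done with a CustomResourceDefinition, which teaches the API server a new object type that the operator then watches. Here's a minimal sketch; the group, kind, and names are illustrative, not from any real operator:

```yaml
# Registers a new API type, PostgresCluster, with the Kubernetes API server.
# apiextensions.k8s.io/v1beta1 was the CRD API version current at the time.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # Must be <plural>.<group>
  name: postgresclusters.db.example.com
spec:
  group: db.example.com
  version: v1alpha1
  scope: Namespaced
  names:
    kind: PostgresCluster
    plural: postgresclusters
    singular: postgrescluster
```

Once this exists, users can create `PostgresCluster` objects like any built-in resource, and the operator reacts to them.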
This pattern lends itself very well to complex and stateful applications, but it's not only for that. You can also deploy web applications and other stateless apps with it. It lends itself particularly well to more complex distributed applications like Kafka or Cassandra. But also, as we'll see, Postgres, for example, is not that easy, even if it's not distributed per se. There are a lot of operators being built across the industry. We currently have them referenced in a GitHub repository called Awesome Operators, and I'm going to talk about various ways we're going to bring them forward from there. As you can see, this is a variety of software. Red Hat has contributed its own, like etcd and Prometheus through the acquisition of CoreOS, but also things like Kafka. And then we also have external ISVs and open-source projects building their own, like for Redis, or like Crunchy is doing for Postgres. This allows you to use the same open-source software that you know in a very simple way, without having to know the complexities of operating it. So what's the Operator Framework? Well, building operators all sounds nice and easy. Here's a new promised land, here's something new for you to put all your code into, and it'll magically do what you want. Well, that's not how it really works in practice. In practice, you have to know a lot about Kubernetes. You have to know a lot of details. You have to run that code, deploy that code, and do a lot of things that can be done in a generic way, in a way where you write it once and it's reused by multiple people. The Operator SDK is one of those projects that we've done.
The Operator SDK allows you to create new scaffolding for your code, generate that new code, update that code, and also make sure that you don't have to know all the intricacies of the Kubernetes API when you're starting, so that you can focus on your domain-specific knowledge. If I'm somebody who knows how to automate Postgres, I'm going to be able to focus on that and not on a lot of the extra complexity that Kubernetes brings with it. I'm also going to need to package and test that. Testing an operator in a repeatable way is core to how it works, because if you can test something repeatedly, then you can automate it as well. These are all things that come together, and the Operator SDK provides libraries to test, to do things like metrics, and to expose your internal domain-specific knowledge in terms of a Kubernetes API. That's what it does. Once you have an operator, let's say, for example, a Memcached operator that, when you send it a new custom resource, will generate a new Memcached service for you, once you have that, and many more operators, you're going to want to distribute them and manage them. In order to distribute and manage them, we created something called the Marketplace and the Operator Lifecycle Manager. The Operator Lifecycle Manager is kind of an operator for operators. You're able to take operators, deploy them, update them automatically, subscribe to new versions of them if you want that to happen automatically, and always be assured that you have a certain set of operators that you decided on for your cluster. The other thing it does, and I'll show it in a demo shortly, is create a nice UI to manage all of that. Then we also have Metering. Metering is a component of the framework that allows you to extract a lot of data out of your operators and know how they're running. How are they doing? How many tables are there in that Postgres database?
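For that Memcached example, the custom resource a user sends is a small YAML object. The shape below follows the Operator SDK's getting-started sample; the group `cache.example.com` and the `size` field come from that sample and are illustrative:

```yaml
# A Memcached custom resource: the user declares intent,
# the operator creates and maintains the matching Deployment.
apiVersion: cache.example.com/v1alpha1
kind: Memcached
metadata:
  name: example-memcached
spec:
  # Desired number of memcached pods; the operator reconciles toward this.
  size: 3
```

Applying this object is all the user does; everything else (pods, service, upgrades) is the operator's encoded domain knowledge at work.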
How much does this instance cost in terms of AWS costs? All of these are things that can be extracted and then reported on with Metering, so that teams can do things like showback and chargeback. There will be a completely in-depth session about Metering later in the conference. Without further ado, I'm going to demo a bit of these things, because I think that this is one of the more interesting ways of understanding how this all works. So what I have here is a cluster that is installed. It's OpenShift, it's running the latest version, and on there I have the Operator Lifecycle Manager and the Operator Marketplace installed. What the Marketplace does, and this is a very early preview of it, we're going to release a version very soon, is allow anyone to push operators to Quay.io, and eventually other registries, and make those operators appear in any cluster that has the Marketplace installed. So this allows ISVs to have one common place for building and exposing operators. Red Hat will be helping and certifying these operators, making sure that they work in a fully automatic way and according to a certain set of standards. And our tools, like the framework, will help validate those operators automatically and continuously over time, so that somebody who uses something from this Marketplace can have the guarantee that it's continuously updated. You can also say at every step, yes, I want to check on every upgrade, of course, but by default you can have it automatically upgraded, just like your phone does for apps, and you can trust that they're going to automatically handle things like failures. Once you have something in the Marketplace, the administrator of a cluster can decide who is allowed to install these applications and ask the operator to deploy new instances of them.
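The "subscribe and upgrade automatically, or approve each step" choice is expressed in OLM as a Subscription object. A sketch, where the channel and catalog names are illustrative placeholders:

```yaml
# Subscribes the cluster to an operator package from a catalog.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd
  namespace: operators
spec:
  name: etcd                     # package name in the catalog
  channel: alpha                 # update channel published by the operator author
  source: community-operators    # CatalogSource to pull from (illustrative)
  sourceNamespace: olm
  installPlanApproval: Automatic # or Manual, to approve every upgrade yourself
```

With `Automatic`, OLM keeps the operator current as new versions land in the channel, which is the phone-app-style upgrade experience described above.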
This is all done through what we call the Operator Lifecycle Manager. Once I have the Operator Lifecycle Manager installed, I can subscribe to operators. In this case, I've already subscribed to the etcd operator, for example, and I can see that my operator is installed. My operator provides me with a bunch of different operations that I can do, like, for example, creating a new etcd cluster. Before we do that, let's have a look. Do we already have an etcd cluster running? I can see here, yes, I have one. It's called Example. It has three pods. That's quite nice. Let's do a new one ourselves. In this case, you can see this is a Kubernetes API. Now we're going into a bit more detail, but basically every single Kubernetes API looks like this. I'm pretty sure most of you know. This is an extension to that API specifically to spin up new, fully managed etcd clusters. We're not going to call this one Example, since we already have one, so we're going to call it KubeCon. A size of three is the default, so three nodes. I'm going to create it, and you can see that I'm taken to this new instance page where I can see that my KubeCon cluster is being created. Then I can go to the instances. I can go here. I can see that right now there's only one member yet, but three pods. I can change the size. We're not going to do this right now. We'll do it soon. Still spinning up. We can see that it now has three pods. And you can also see every single Kubernetes resource that has been created by the operator for this specific instance, like, for example, these Kubernetes services. Because that's also the goal: if you want to be able to install a resource easily, you should be able to uninstall it easily. So we have this etcd cluster running on Kubernetes. Now we can, for example, increase the size because it's time to scale up. We update it. In this case, OLM is doing the work for us.
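The KubeCon cluster created in the demo corresponds to a custom resource roughly like this. The API group and `size` field are the etcd operator's; the exact version string here is illustrative:

```yaml
# An EtcdCluster custom resource: the etcd operator watches these
# and creates/maintains the member pods and services.
apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: kubecon
spec:
  size: 3            # three etcd members, the default mentioned in the demo
  version: "3.2.13"  # illustrative etcd version
```

Editing `spec.size` later is exactly the scale-up step shown next: the operator notices the difference between desired and actual members and converges the cluster.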
Since size is a very generic keyword, we abstract that keyword and create a UI on top of it. This is not possible for every single operator out there; that's why you can always go down into the YAML and create it there. But with time, we'll have more and more of these generic patterns abstracted, and that will help do things in a very easy manner. So you can see that the size is now at five pods. You can check that the operator has done its job. What happens when I increase the size here is that the cluster itself doesn't do anything. The operator sees a change: it sees that the cluster is supposed to have five members but currently has three. Then it spins up new instances. And if I scale back down to three, it'll scale them back down. So that's one example of how multiple etcd clusters can be managed inside a single Kubernetes cluster in an easy manner. Another nice example is Prometheus. Prometheus works the same way as etcd. I'm not going to run through all of it. But Prometheus has a couple of interesting things, and the reason I'm calling it out is to touch on them real quickly. So Prometheus, as you see, has a lot of different APIs. One of these APIs is Prometheus itself. If I want to deploy a new Prometheus instance, I can do it here and configure it. A couple of other interesting things: we've taken advantage of the Kubernetes API and provided a couple of objects that one can deploy with kubectl or oc, like, for example, Prometheus rules or service monitors. Service monitors are particularly interesting. They are objects that allow you to direct the Prometheus operator to start watching other services. So, for example, if I am deploying etcd, I can create a service monitor for Prometheus to watch etcd's metrics. I can also go much further. I can use the SDK, which I mentioned in the beginning, and this is something we are actually doing. There are pull requests for that.
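A ServiceMonitor that points the Prometheus operator at an etcd service's metrics might look like the sketch below. The kind and field layout are the Prometheus operator's API; the label and port names are illustrative and would have to match how the etcd service is actually labeled:

```yaml
# Tells the Prometheus operator to scrape services matching the selector.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: etcd-metrics
spec:
  selector:
    matchLabels:
      app: etcd        # illustrative label on the etcd client service
  endpoints:
    - port: metrics    # named service port exposing etcd's metrics endpoint
      interval: 30s    # scrape every 30 seconds
```

This is the composability point: one operator's custom resource (the ServiceMonitor) wires another operator's workload (etcd) into monitoring, all through the same Kubernetes API.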
I can use the SDK to automatically create service monitors. And then I can use OLM's dependency system to say this new etcd cluster has an optional dependency on Prometheus, so that once you deploy it, everything gets deployed automatically, including the Prometheus. That is what we want to be able to do here: bring not only individual operators, but composability between all these operators, and bring one single API, the Kubernetes API, to manage all these instances and deploy them. So that's the bulk of the demo. I'm going to have other sessions where I'll go way more into detail, but that is what we're trying to build for the future of application management on top of OpenShift. One thing that was probably mentioned this morning that's also interesting: all of OpenShift itself is built with operators. Not all of them use the SDK. Some do, some don't, just like every other operator out there. You don't have to use the SDK to use OLM. Everything is composable, but not tightly coupled. And that's important to understand, because it means that OpenShift is operators from the ground up, from the lowest base all the way to the applications running on top, which is quite an achievement from a lot of people working very hard, for sure. All right, so let's go back to the slides. So where do we get started? How do we use this? How do we get involved with the community? Well, we're pretty open. It's quite easy. All these projects are, of course, open source. You can use them all. There's a GitHub organization called Operator Framework. I'll share a link to the slides, and you can go to the Operator Framework organization and see all the projects that I have demoed. Every single project has its own Getting Started guide, but there's also a global Getting Started guide that joins all of this together.
So if you want to see everything working together, that's a great place to go, and it's quite easy to follow. There's an OpenShift SIG where we meet every month, and people come and demo their operators, talk about their operators, and talk about the challenges they have with these operators. It's been quite fun to interact with everyone on that monthly call. And then we have a Slack channel on Kubernetes operators that's quite active. This is for everyone. This is to talk about operators in general, not just the SDK. By all means, please drop by. There's always somebody to answer questions there, and I've found it quite fun too. And then, lastly, our mailing list, the Operator Framework mailing list. If you have any problems that are outside of the GitHub repositories, we can talk there. That's the main thing. One more shout-out: there are a lot of sessions about operators during KubeCon. There's one on Tuesday with Diane and me, which is very similar to this one, but you should still come; I'll squeeze in a few new things. Then there's a deep dive. In that one, I'm actually going to be building an operator, so I'll use the time to build. I think it'll be a Memcached operator, maybe something else, but a very simple operator. We'll do it from scratch. It'll be a more technical session in which we'll use Go to build an operator. And if I have time, I'll also show how you can do an operator with Ansible. Then on Thursday, first, there's a keynote by Rob in the morning about operators, so that's always fun as well. Then there's a talk by Chance and Rob about the Metering framework, and I encourage you to go there, because that is a vast subject that I only said a few words about, and it's fascinating. The last thing is a workshop. It's sold out, unfortunately, but you can always knock at the door. We'll see. That's it. Thank you very much.
I appreciate being here and being able to talk. Thanks.