So, do you have kids? I do. And one of the beautiful things that happens to me when I travel is that I know I can rely on somebody taking care of them. If something happens to them, they will be taken care of. If they fall, they will be taken to the hospital if that needs to happen, and if they need to be fed, somebody will give them food. So I'm pretty sure that having somebody looking after something that is your responsibility, without you having to be there, is really cool. My name is Jorge Morales, and I will be presenting to you today about automating stateful applications with Kubernetes operators. Graham Dumpleton is with me, but he did a talk yesterday and he's really exhausted, so I will be talking today. I come from Spain, I'm mostly a Java developer, and I work at Red Hat as an OpenShift developer advocate. Graham is Australian, but since he won't be talking, I just wanted to introduce him. Scaling stateless applications on Kubernetes is really easy. You just run a single command, like this: kubectl scale, the name of your deployment, the number of replicas that you want, and the platform will take care of it. So how does this happen? From the command line you request a desired state to be materialized on the cluster. Then there is a set of controllers on the Kubernetes platform monitoring the desired state that you asked for, which in this case is three replicas. A controller will look at the actual state of the cluster, how many replicas are actually running, in this case one, and it will say, hey, we are not matching what this guy wanted, so let's make that happen. The controller will scale your application to three replicas. This is really straightforward. But what about applications that are more complex? What about applications that store data?
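The control loop described here can be sketched in a few lines of Go. This is a toy model of the pattern, not real controller code; the type and function names are invented for illustration:

```go
package main

import "fmt"

// DeploymentState holds the desired and actual replica counts;
// the controller's job is to make them converge (toy model).
type DeploymentState struct {
	Desired int
	Actual  int
}

// reconcile is one pass of the control loop: observe the actual state,
// compare it with the desired state, and act to close the gap.
func reconcile(s *DeploymentState) string {
	switch {
	case s.Actual < s.Desired:
		s.Actual++
		return fmt.Sprintf("scaled up to %d replicas", s.Actual)
	case s.Actual > s.Desired:
		s.Actual--
		return fmt.Sprintf("scaled down to %d replicas", s.Actual)
	default:
		return "in sync"
	}
}

func main() {
	// The "kubectl scale" example: one replica running, three requested.
	s := &DeploymentState{Desired: 3, Actual: 1}
	for s.Actual != s.Desired {
		fmt.Println(reconcile(s))
	}
	fmt.Println(reconcile(s))
}
```

A real controller runs this loop continuously against the API server; the point is only that the logic is "observe, compare, act", repeated until the states match.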
Things like databases, software that has a specific role depending on which instance of your application you're running. This can be more challenging. But running a database, to be honest, is also really easy: just kubectl run with the image for your database, and you have your database up and running. But hey, running this software over time is much more complicated. Why? Because this type of software, stateful software, has specific state, has specific requirements. Maybe it needs to be resized in a specific way. When you have a database, resizing it is not the same depending on which type of database it is, so every database might need to take specific actions. Also, upgrading the software might be difficult. Why? Because maybe the schema of your database has changed, so going from one version of your database to another might require specific steps to happen on the data that is stored. Reconfiguring your database, making backups and restoring them, those are actions that need to happen over time. Why? Because you want to make sure that your application is healthy, that you are able to provide guarantees to your users that everything will be taken care of. Healing a database, for example, is another complex thing that can happen. Sometimes when one of the instances of a database goes down, you may need to bring up a new instance, but when you do that, you may need to rebalance the information that is stored in the database. That is a complex task. Every application that runs on Kubernetes is installed once, but over time you need to configure it, manage it, and upgrade it. These are tasks that will happen on a regular basis as the software runs. Also, patching applications is critical to security, and when you run software in production, in an enterprise-grade production, security is critical for your business.
Anything that is not automated is slowing you down. Every time that a human intervention needs to happen, the delivery process of your software is going to be delayed. If I need a manual approval to roll out my next release to production, I am reliant on whoever needs to manually approve my software, and that adds a delay for this software to be rolled out. So automation is something that we mostly always want. So what if Kubernetes knew how to do some of these things? What if Kubernetes had all this knowledge on how to manage all this software? Just think about this guy, Grant Doe. He's fictional, so don't look for the profile. He's been working at a database company for over 19 years. Can you imagine the amount of knowledge that he's gathered on how to run this database in that time? And more importantly, how many of these guys does every company have? If you look at big companies, there may be a few of them. I used to work in a really big company, and whenever we were working on a project, we needed to wait on this guy to be able to help us do the work related to the relational database that was important to us. So we were having to wait on his availability to be able to progress our software delivery. If you look at smaller companies, this guy may not even exist. Why? Because this type of profile, with this expertise, is really expensive and sometimes really scarce, so it's difficult to find. So what if we could take all the knowledge that this guy has about how to run this database and create a software version of his knowledge, one that knows how to run this database over time, how to configure it, how to upgrade it, how to do everything that is required? If we could do that, we could leverage his expertise everywhere.
So it doesn't really matter where your company will be running, whether it's cloud or on premises. As long as you are running a Kubernetes-based deployment, you will be able to leverage that expertise, provided in a box, as a product. So now your company, no matter how big or small it is, can have a software version of this expertise running in its deployment, meaning that you will be able to have production-grade databases running without needing to have a production-grade expert on that technology. This is what operators are. Operators are automated software managers for Kubernetes applications. They manage the install and the lifecycle of your Kubernetes applications. Okay, so what is the recipe? So far we have seen what Kubernetes operators are. How do we create these operators? Kubernetes provides ways to extend the platform, so you don't need to modify the Kubernetes code base or fork it in order to provide your custom behavior. There is a special mechanism in Kubernetes, controllers and custom resource definitions, that allows you to extend the platform, to provide your own behavior, to provide a way for you to define how your software will be described so the platform can manage it. So the first thing that you need to add to your Kubernetes platform is a controller. This is basically what the operator is: a controller running as a container image in the platform that will monitor the cluster for specific instances of your application. Then you define, with a CRD, a custom resource definition, what your application will look like. This example is a made-up example of a production-ready database. So in this case I define: hey, my software will be a production-ready database and I want it to have these characteristics. Every time a user wants to create an instance of my application, they will need to provide this configuration.
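A CRD for this made-up production-ready database could look something like the following sketch. The group, kind, and names are invented for the example; the `apiextensions.k8s.io/v1beta1` API shown here is the one current at the time of the talk:

```yaml
# Hypothetical CRD: teaches the cluster about a new resource type.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: productionreadydatabases.example.com
spec:
  group: example.com          # invented API group for the example
  version: v1alpha1
  scope: Namespaced
  names:
    kind: ProductionReadyDatabase
    plural: productionreadydatabases
    singular: productionreadydatabase
    shortNames:
      - prdb
```

Once this CRD is applied, the API server accepts `ProductionReadyDatabase` objects just like built-in resources, and the operator's controller watches for them.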
Different values for the configuration, but at the end of the day, this is how I define my software. Once the end user creates an instance of this specific custom resource, what they do is write a YAML or JSON definition and deploy that into the cluster. They say, hey, this is my specific instance of my production-ready database. They put it into the cluster, and then the controller, the operator, will look at what you wanted to deploy and will create all the required resources in the cluster for that to happen, whether that is specific StatefulSets or ReplicaSets, ConfigMaps, Secrets, all of that. The operator will be watching the definition of your instance and will be reconciling. So if you make a change to that definition, the operator will still be monitoring it. Like I explained at the beginning with the scale example, if I say, hey, my production-ready database, instead of being version 2.0.3, is now going to be version 2.0.4, when I deploy that YAML into the cluster, the operator will say, hey, this guy wants an upgraded version of his database, let's go and do it. Whatever the process is for upgrading the database, it is hard-coded, it is encoded into the operator. That is the operational logic that the operator provides for you, and you don't need to be an expert on how to roll out a new version of this database. You don't need to know anything. The platform will take care of going from one version to another. Whatever steps are required for that to happen, they will be taken care of for you. So now you are running databases, you are running stateful software, on a Kubernetes platform with much more confidence, knowing that even though you are not an expert on that software, you can run it safely in a production-ready way.
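The instance the end user deploys is just another YAML document; bumping one field is all it takes to ask the operator to run its upgrade logic. As before, the group and field names here are invented for illustration:

```yaml
# Hypothetical instance of the custom resource.
apiVersion: example.com/v1alpha1
kind: ProductionReadyDatabase
metadata:
  name: my-db
spec:
  version: "2.0.4"        # was "2.0.3"; the operator notices the change
                          # and runs whatever upgrade steps are encoded in it
  replicas: 3
  backupSchedule: "0 * * * *"
```

Applying this with `kubectl apply` is the entire user-facing upgrade procedure; the reconciliation loop does the rest.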
So there is a set of things provided by Red Hat and CoreOS, a company we acquired some time ago, that helps with creating operators, and I'm going to lightly describe some of them. The operator framework is an umbrella for a set of projects that help with the creation and the management of operators. There is a GitHub organization where you can find all of these projects. One of the things that you might be wondering is: hey, this is cool, the operators. I deploy an operator on the platform and it will take care of my software, my applications, my databases, my stateful applications. But what is taking care of my operators? At the end of the day, the operator is also an application. So we can apply the same pattern, where we have specific software taking care of the lifecycle of our operators. When there is a new version of our operator, it will roll it out. This is called the Operator Lifecycle Manager, and it's a special kind of operator that manages operators. With this, one of the things I can say is: hey, watch for these operators on whatever the source for these operators is, and if there is a new release of them, just roll it out. I can subscribe to specific channels, distribution channels for operators, and when a new operator version is found, it will roll it out. With this subscription, I can decide whether I want it to be manual, so I will need to go and say, hey, upgrade my operator now, because I know I can do it with confidence. Some people don't trust the automation that much. Or I can make it automated. Like you do on your phones. If you look at your phone, you usually have a marketplace where you go and say, hey, this is the software that I want installed on my phone.
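With the Operator Lifecycle Manager, subscribing to an operator's distribution channel is itself expressed as a resource, roughly like this (the operator name, catalog source, and channel here are illustrative):

```yaml
# OLM Subscription: "watch this channel and roll out new versions."
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
  namespace: operators
spec:
  name: my-operator               # package name in the catalog
  channel: stable                 # distribution channel to follow
  source: operatorhubio-catalog   # catalog source to pull from
  sourceNamespace: olm
  installPlanApproval: Automatic  # or Manual, if you want to approve upgrades
```

The `installPlanApproval` field is exactly the manual-versus-automatic choice described above: `Manual` means a human confirms each upgrade, `Automatic` means OLM rolls it out as soon as a new version appears in the channel.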
I can mark applications to automatically upgrade, and the phone will always be up to date without me needing to do anything, or I might need to go into the marketplace of my phone and say, hey, upgrade this application. You go there, you check the list of updates that you have, and you decide when you do the updates. This is called over-the-air updates. That means that you can decide, or the platform will connect to all these operator sources and say, hey, these are all the operators that are available, and it will be able to upgrade them without your intervention. For this to happen, of course, there needs to be a marketplace. Like on your phone, as I said, you go to your marketplace and you see a list of the software that is available to install. In the same way, on a Kubernetes platform or an OpenShift platform, you need to have a marketplace. So this is another project in this operator framework umbrella organization that helps create marketplaces on Kubernetes platforms. This is one of the versions of it, the one on OpenShift. And of course, to be able to have operators, one of the things that needs to happen is that the ISVs, the software creators, start packaging their software as operators. Without that, you can have a marketplace, you can have a framework, you can have everything, but it will be empty. So one thing that is starting to happen is that we are seeing a lot of ISVs, software vendors, producing their software as operators. Why? Because this is a way for them to provide their software to anybody, with the confidence that they will be able to run it, have it upgraded, and run it in production with confidence. Some people previously would have decided not to go for a specific technology based on the expertise that they have in-house. If I don't have, for example, expertise on Redis in my company, I might not allow my developers to install Redis. Why?
Because then, once Redis is in production, hey, guess what? It's difficult to manage. And I don't know it. And I might have a lot of pain. I might have to involve all my SREs to be on call, on duty, overnight, for a long time, to be able to upgrade from one version of Redis to the next one. With an operator, ISVs guarantee that the knowledge they have as the producers of the software is put into the software. So you know that they are giving you all the knowledge that you need to run it in production. All these ISVs are providing their software through OperatorHub, which Red Hat started in collaboration with Amazon, Google, and Microsoft. This is where we are curating operators to be available for global use. You might find operators everywhere on GitHub, but who knows whether those operators work or not. So on OperatorHub, what we do is go through a thorough process of QAing all these operators, to guarantee that they work as they should and that they comply with specific standards of quality. This OperatorHub, in the case of OpenShift, is one of the sources for the marketplace. That means that when you go to the marketplace that you saw before, it will connect to OperatorHub and make all of these operators ready for you to use on the platform. Another sub-project of this operator framework that is really interesting is Operator Metering. What is this for? I need to get information on the usage of my software. Not really me as a developer, but companies, enterprises, will most likely need to know how much of each and every software that they allow their developers to use each and every team is consuming. Just think about proprietary software that has licenses. If anybody can just go to the marketplace and download Redis and use it for free, you don't know how much to charge.
There are companies that charge each and every team for the amount of resources they use, and this is another example of a resource they will be using. So they need to be able to get information on how much Redis each and every team is using, so that eventually they are able to charge for it. This Operator Metering framework produces reports that tell you about the usage of each and every operator on the platform, and each and every instance of the operators being used on the platform. Whether they then want to charge you, or just give you some information on what you are using, is up to them, but this provides the capability to get that information. Okay, so far we have seen what operators are. How do we build our own operator? In the operator framework umbrella there is also an operator SDK. The SDK makes it easy for you to create operators. How? By providing you capabilities to initially scaffold and create some base code for your operator and bootstrap your new operator projects. This SDK also provides extension points that cover the most common operator use cases, like backup, restore, upgrade, install, all these patterns that can be encoded into an operator. And then it also provides high-level API abstractions for your operational logic. Just think about what an operator usually requires: to interact with the Kubernetes API in a specific way, to monitor Kubernetes resources when they are deployed or upgraded so it can act, to do what is called a reconciliation loop. For all these patterns that are difficult to code in any language, we provide high-level abstraction APIs, so it's easier to use the Kubernetes API. You no longer need to do all of these tasks one by one. You no longer need to know the Kubernetes API in depth, because these high-level APIs provide an easy way for you to interact with the Kubernetes API.
All of this is part of what is provided in the SDK, and it is based on the expertise that Red Hat and CoreOS have been gathering over the last couple of years, since CoreOS started working on operators and created the operator pattern. Okay, so let's look at a step-by-step example of how to create one of these operators. What are the steps that are required? We are going to look at the latest release of the SDK, which is 0.5.0. The SDK keeps evolving as new practices and new patterns are learned over time, but this release is really interesting. The first thing that you will need to do is bootstrap your project. You create a new operator project using the SDK command line, the CLI. So you say: operator-sdk new production-ready-db-operator, which is your project, and then you decide when you create your operator whether it will be cluster-scoped or namespace-scoped. The difference is whether you want to make your software available for everybody that is in your cluster, or maybe only for specific people in your cluster. Sometimes you need to make features available for everybody; sometimes you want to make a feature available for a specific set of people. Then you define your new custom resource APIs. Let's say that you want to create a production-ready database. The first thing you say is: hey, this is going to be my API. You define the type of resource that you are going to be working with, and the version of that resource that is going to be managed. Then, once you have defined the API, you go into the code and you define the spec and the status. The spec is the specification of your resource, and the status is the information the cluster will give you about the resource once it's deployed. You define all the properties that your resource will need to have. For example, for a database that might be the version, the number of replicas, or any specific behavior.
When we look at the example, we'll see a more detailed spec. And the status is the information that the cluster will be giving you. Once you deploy an instance of this resource into the cluster, the cluster will be going through this reconciliation loop: it will be monitoring how your application is running, and it will put information into your resource. So whenever you query for it, like, hey, give me information about my production-ready database, the instance that I'm running, you get some information on what the status is, whether it's okay, the number of current replicas that are running, all of that. Once you have defined what your custom resource will look like, what you need to do is generate all the code for it. With operator-sdk generate k8s, the SDK will generate all the boilerplate code for your operator based on the definition of the CRD that you have made. Then what you need to do is create a specific controller that will be monitoring, watching, and doing the reconciliation for your CRD. This is where all the logic will reside; this is where you need to put all the care into the logic. This means that this controller will be monitoring your production-ready database, or maybe also some other things, like the configuration associated with your production-ready database. And when there is a change, it will take the specific actions related to that change. Just think about it: if I change the number of replicas from two to three, the logic that says, hey, there are three instances required but only two instances on the cluster, create a new one, all of that logic will be encoded here. Once you have done all the logic, what you need to do is make it available to the cluster. How do you do that?
You first deploy the definition of your custom resource, which says: hey, I will be managing production-ready databases, and they will have this spec and this status and all these configuration fields. Then you package all the logic, the controllers and the API, in a container image. You deploy that container image into the cluster, and then you apply the required resources and rules for it to be used by the cluster. Those are usually the roles, secrets, and service accounts that will be required for the operator to run, and then, of course, the operator definition itself, which says: hey, my operator is encoded in this container image, stored in Quay.io or Docker Hub or any other container registry, go pull it and start using it. Of course, this can also be managed by the Operator Lifecycle Manager, which will auto-discover it from, for example, OperatorHub. And then, once you have your operator available in your cluster, the only thing that you need to do is create instances of your software. You have the operator available, so you can now allow any of your developers, or the users of your platform, to create instances of the specific software that you are managing. Okay, so far so good, I guess. Let's get a taste of it. Let's see how all this works a little bit. Let me check the time, yeah. Let's see an example of how this is done through the CLI, there is a little animated gif showing it to you, and then I will do similar things through the UI, which is much easier to grasp. So here is a definition, an etcd cluster definition, for example, where I say: hey, I want to have three instances of my application, and I want it to be version 3.3.9. In this example, I already had a cluster with two instances of the application, on version 3.3.8.
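Packaging the controller itself is ordinary Kubernetes deployment work; a trimmed-down sketch might look like this, with all names and the image reference invented for the example (real operators also need Role/RoleBinding objects, omitted here for brevity):

```yaml
# Service account the operator runs as.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: production-ready-db-operator
---
# The operator is just a Deployment running the controller image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: production-ready-db-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      name: production-ready-db-operator
  template:
    metadata:
      labels:
        name: production-ready-db-operator
    spec:
      serviceAccountName: production-ready-db-operator
      containers:
        - name: operator
          # Hypothetical image location in a registry such as Quay.io:
          image: quay.io/example/production-ready-db-operator:v0.1.0
```

Once this Deployment is running and the CRD is registered, the controller starts watching for instances of the custom resource.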
So when I deploy this resource, what needs to happen is a scale of my application from two to three, and it will also change the version from 3.3.8 to 3.3.9. The controller goes through an observe, analyze, act cycle. That is the control loop. It is observing the current state of the cluster: as I said, there are two instances of my application, on version 3.3.8. It will analyze the difference: hey, this guy wanted version 3.3.9, and this guy wanted three instances instead of two. Then, in the act step, this is the logic that will happen inside your controller: it will create one more instance. And because it's a database, and this is encoded into the logic of the operator, it will do a backup of the information that is in the cluster, and then it will upgrade it to 3.3.9. Why? Because if there is an error in the upgrade process, the operator will be able to restore from the backup. All that information on how to go from one release to another can be encoded into the operator, so you no longer need to do all these kinds of things that you usually do as a regular user that doesn't have a huge expertise in this software. So the first thing that happens is that you install the operator through a YAML or JSON definition. This is Kubernetes: everything goes through JSON and YAML, which is really a pain in the ass. Then it deploys an instance of etcd. Now it's putting some information into etcd. Now we delete one of the instances of the cluster, and it's showing how the operator will just bring up a replacement instance for the cluster. Okay, so this was a simple example, but there are more complex examples. You can encode into an operator, into a custom resource definition, any type of application, and that can be your own application.
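The etcd definition from the demo looks roughly like this. This follows the shape used by the CoreOS etcd operator at the time, though the exact apiVersion and fields may vary by release:

```yaml
apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: example-etcd-cluster
spec:
  size: 3          # desired number of etcd members (was 2 in the demo)
  version: "3.3.9" # desired etcd version; bumping 3.3.8 -> 3.3.9
                   # triggers the operator's managed upgrade, backup included
```

The two-line diff between "what I had" and "what I want" is the whole user-facing interface; the backup-then-upgrade choreography lives inside the operator.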
Just think about me producing an application for a trading platform that, given GDPR in Europe, has some data affinity restrictions. Whenever I deploy my software, because I have bases in Germany, maybe the UK, Spain, France, I want it to be defined in this way: maybe a replication factor of two, back up every hour. But then, on geography, I can say: hey, restrict my data to Europe, preferably to Germany, based on GDPR. These kinds of definitions of your application are specific to you. When you are creating an application, you are the one that knows how the application should be defined and how it will be operated. Then you put all the logic in this controller, so this operator is where it really makes sense to put all the expertise. And this is what ISVs, software vendors, are doing in order to provide their software for massive adoption. Okay, so now I'm going to show you how this works on an OpenShift platform. It's really nice, you know. This is an OpenShift 4 cluster, which is not GA yet; it's going to be released in the next couple of months. So I log into the cluster. I'm going to log in as an administrator. For those of you that are familiar with OpenShift, you will probably notice at first glance that the UI has changed dramatically. This is based on the merge with CoreOS. This is how your UI will look now. We go to the catalog, and here we can see whether we have operators installed. In this cluster, in this namespace, I don't have any operator installed. And here is your marketplace. It's connecting to OperatorHub and showing you all the operators that are available there and that can be installed, whether they are community-based or production-grade, provided by Red Hat or provided by different vendors. You just select an operator and it gives you a little description of what the operator is.
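The trading-platform idea could be captured in an entirely hypothetical custom resource like this; every name and field here is invented, purely to show how domain-specific the schema can be:

```yaml
# Hypothetical CR for the GDPR-constrained trading application.
apiVersion: trading.example.com/v1alpha1
kind: TradingApp
metadata:
  name: eu-trading
spec:
  replicationFactor: 2
  backupSchedule: "0 * * * *"   # back up every hour
  geography:
    restrict: europe            # GDPR: data must stay in Europe
    preferred: germany          # prefer German regions when scheduling
```

The controller you write is what gives fields like `geography.restrict` meaning: it translates them into node affinity, storage placement, or whatever your platform needs.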
You just install it. In this example, I'm going to install it just for me, in this specific namespace. And I'm going to say: hey, there is only one channel to choose, which is the final one, and then the approval strategy, automatic or manual. I'm going to say: hey, update the operator every time there is a new version. So I just subscribe to this operator channel, and what is happening right now is that the operator is being installed on my cluster. We can go to installed operators, and we should see in a moment, hopefully, how the operators are installed. Yeah, here, now. So I go to the operators, this is where I should see it; let's create a different namespace, because I was playing with this one yesterday. Oh, here it is. Good, I was scared. So we got the operator. As you see, the status will be changing over time, so it gives you some information about what is going on behind the scenes with the Operator Lifecycle Manager. Right now it's connected to OperatorHub, it's checking that whatever this operator requires to run in the cluster is available; there were some requirements, and they were met. And now: install succeeded. That means that I now have my operator available in the cluster. I can go into the operator, and now you can see all of the things this operator can manage. This was AMQ Streams, which is Kafka running on Kubernetes. With it you can create a Kafka cluster, a Kafka Connect cluster, or a Kafka Connect S2I image, which gives you the ability to build in the cluster, and also Kafka topics and users. So let's create a Kafka cluster. We go here, create new, and you'll see, I don't know if you can see it well, this is the definition of the Kafka cluster in YAML representation. This is not really nice. Working with YAML and JSON in Kubernetes is, as I said, really challenging.
So eventually, in the future, this will be transformed into a nice graphical UI form, where you will have drop-downs and that kind of thing for all the values. But in the meantime, this is how you define the cluster, all the properties that you can define. And this is the case for each and every application that is available in the hub; each and every application will have different configurations, so you will need to look into the specific documentation for each application. But basically, this is what you can define when you create a Kafka cluster. So let's create it. We create the cluster, and now my cluster will be installing. If I look at the pods in this namespace, I can see that it is now creating the cluster. It was defined with three replicas, so now it's creating the ZooKeeper. Initially, Kafka requires ZooKeeper for cluster coordination. So it's installing; as you can see, the readiness is changing. There is one container ready already, the second one is ready, the third one is ready. Once ZooKeeper is fully installed, it installs the Kafka cluster itself. So there will also be three Kafka instances; right now it is creating them one by one. The operator is taking care of installing Kafka for you. You don't need to know how to install Kafka, that there are three instances, that you require ZooKeeper, that you need to put certificates in place. All of that is done for you. So right now, with this configuration, there are certificates to connect to the Kafka cluster; those are already available. There are users to connect to the Kafka cluster; those are there, and you can create more users by creating Kafka user resources. The operator will take care of each and every thing specific to your application. Once we have deployed this Kafka cluster and everything is ready... okay, it seems to be ready now. We can see, if we go to the pods.
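For reference, a minimal AMQ Streams / Strimzi Kafka definition is along these lines; the exact apiVersion and available fields depend on the release you install, so treat this as a sketch rather than a copy-paste template:

```yaml
apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3          # three Kafka brokers, as in the demo
    storage:
      type: ephemeral    # demo-friendly; persistent storage in production
  zookeeper:
    replicas: 3          # the ZooKeeper ensemble Kafka needs for coordination
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}    # enables managing topics as custom resources
    userOperator: {}     # enables managing users as custom resources
```

Everything the talk walks through, ZooKeeper first, then brokers, then certificates and users, is derived by the operator from this one document.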
Well, there is a huge number of pods here, but not only the pods; if we look at the CRDs, we should see all the CRDs that this Kafka operator created. We can look at the cluster; this is the cluster that we created. And if we go into installed operators, all instances, the Kafka one, we can see the list of resources that were created. So with the cluster, when I installed it, as I said, there was ZooKeeper installed, there were the Kafka instances themselves, but there is also some more stuff. There are certificates for the cluster communication, for the internal communication between members of the cluster. There are definitions of the certificates that you will be using from an external client whenever you want to connect to the cluster. All of this is managed by the operator. There is a huge number of things provided by the operator that otherwise would be really difficult for you to know how to deal with, how to manage. Now we can go to the operator again and say: hey, create a topic. It's as easy as creating the topic resource. I'm going to use the Kafka cluster that we created. Let's call this topic voxxed-asia. And I'm going to create this topic. What is happening right now is that this is not deploying any new container image, this is not deploying anything on the cluster. This is interacting with Kafka, given the knowledge the operator has, and it's telling Kafka: hey, create a topic inside Kafka so it can be used externally. And that's it, the topic was created. To verify that, because I don't have a Kafka client installed locally, I'm going to just connect to one of the Kafka pods and open a shell into it. And then there is the kafka-topics command with the list option. I think it will tell me that I need a ZooKeeper address, so I'm going to use localhost; ZooKeeper runs on this port, so with this I should be able to, let's see the command.
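The topic resource the demo creates is along these lines; again the exact apiVersion depends on the AMQ Streams / Strimzi release, and the topic and cluster names are the ones from the demo:

```yaml
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaTopic
metadata:
  name: voxxed-asia
  labels:
    strimzi.io/cluster: my-cluster  # ties the topic to the Kafka cluster CR
spec:
  partitions: 3
  replicas: 3
```

Applying this does not start any pods: the topic operator sees the resource and calls into Kafka itself to create the topic, which is exactly the "not deploying anything" behavior shown in the demo.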
That's my definition. So with this command I'm connecting to the Kafka cluster and listing the topics, and the topic is there. We have seen that it's easy to deploy, manage, and work with stateful software on a cluster. From a user's point of view, this is much easier than dealing with all the JSON and YAML yourself. And from an operational point of view, this is easier for any company to adopt, because they don't need in-house expertise; they can just rely on the expertise that you, as the vendor or author of the software, provide for them to use.

Okay, so wrapping up, some helpful resources for you to look into. There is the Operator Framework and the SDK on GitHub, under the operator-framework organization. There is OperatorHub, which I described before, where you can find all these operators, at operatorhub.io. And there are blogs and resources that talk about operators; of course I'm going to advertise the ones coming from us, on the OpenShift blog under the Operator Framework category. I'm going to tweet the link to the slides after the talk, so if you watch the conference hashtag, you'll find it there. And one last thing: we have an interactive learning portal — not only for operators, but it has a section for operators. If you go to learn.openshift.com/operatorframework, there is a huge amount of information on how to run operators, how to create operators, and how to use operators on Kubernetes or OpenShift. And with this, thank you very much. I don't know if we have time for — yeah, we have some time for questions. So, up to you if you want to — yeah.

[Audience question about OpenShift version requirements.] No, no, you can run operators with previous releases of OpenShift. The only thing is that OpenShift 4 provides all of this marketplace already integrated in the release, but with 3.11 you can already use operators. Operators, at the end of the day, don't really require anything from OpenShift.
The only thing that OpenShift provides is the integration with the marketplace and the UI, so you can interact with them in an easy way. Any release based on Kubernetes 1.11 — or 1.10, whichever supports CRDs, which is the only thing operators require — can use operators. For the Operator Framework you don't require OpenShift; you can deploy it on any Kubernetes platform, of course. And that was the topic of my talk — I used OpenShift because I work on OpenShift and it was easier for me to show it on this platform, but you can use it on any Kubernetes platform from 1.11. You just define your CRD, deploy your operator, and it works. Discovering operators on a plain Kubernetes platform is a little bit more difficult; you might need to apply the JSONs and the YAMLs yourself.

But guys, let me give you a mic, because I cannot hear you. [The question, partly inaudible, was whether there is downtime when upgrading an operator, and whether the usage metrics collected about operators stay inside a disconnected enterprise environment.] So, the first question was whether there is downtime whenever you upgrade your operator. No — although that is specific to each and every operator and how it is written. There shouldn't be, barring a failure, because at the end of the day what the operator does is monitor the cluster for your software. That means that even if I undeploy the operator itself — not the CRD, just the operator — your software, your database, will still be running, because the only thing the operator does is manage your application; once the application is deployed, it's there. So I could even remove the operator.
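The claim that CRD support is the only hard requirement can be made concrete. This is a minimal sketch of a CustomResourceDefinition — the group and kind names are illustrative, and on the Kubernetes versions mentioned in the talk (1.10/1.11) the API group was still `apiextensions.k8s.io/v1beta1` with a slightly different shape:

```yaml
# Minimal CRD sketch: once a plain Kubernetes cluster accepts this,
# an operator can watch and reconcile the custom resources it defines.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: kafkas.example.com           # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: kafkas
    singular: kafka
    kind: Kafka
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
```

Deploying the operator itself is then just an ordinary Deployment whose pod watches resources of this kind — which is why removing the operator, as described above, leaves the managed application running.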
I can upgrade to any release of the operator and the application will still be there. Then there is the specific upgrade of your instance — say I have Redis 2.3 and I want to upgrade it to 2.4, for example. That depends on Redis: if Redis supports it, and the logic to do a hot upgrade of the instance with zero downtime is encoded in the operator, then fine. That is specific to each and every application, and you will need to read the documentation about what the changes are for each one. That's why you will usually have automated upgrades for the operator itself, but for the instances of your application you might not want automated upgrades applied, and the operator will not apply them. So the operator itself you can safely upgrade — like on your phone, say "upgrade my operator to whatever release" — because that has no effect on your application. But upgrading your application from version 2.3 to version 2.4 means you go into the definition of your database and say: hey, upgrade it to 2.4, now that I have an operator that knows how to manage 2.4 releases. That is a manual step that you need to take, based on you knowing that you can upgrade with zero downtime, or applying the upgrade in a maintenance window where you know you can do it.

Then, for the second question, about the metering of operators: all the information that is gathered is for you. The Operator Metering framework gathers information about how the instances of the applications are used, and those reports are created for you in the platform. If you are using OpenShift, for example, you get a nice report where you can ask, for these operators, how much was used, and it will give you all the information based on the role-based access control of the platform: these teams, these users, used this much of this and that.
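The manual instance upgrade described above boils down to editing one field in the custom resource and letting the operator reconcile. A hedged sketch — the field path is operator-specific, shown here for the hypothetical Kafka CR used earlier:

```yaml
# The user changes only the desired version in the custom resource;
# the operator notices the new desired state and performs the
# application-specific upgrade steps (rolling restart, schema
# migration, etc.) encoded by the software's authors.
spec:
  kafka:
    version: 2.4.0   # was 2.3.0 -- the operator reconciles the rest
```

In practice this is a deliberate command such as `kubectl edit kafka my-cluster` or a `kubectl patch`, run during a maintenance window if the operator cannot guarantee a zero-downtime rolling upgrade.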
If you are on a plain Kubernetes platform, the information given to you will be similar, but at the end of the day the information is local to your cluster. The Operator Framework will, however, require external connectivity to be able to upgrade the operator itself. When you have auto-updates enabled for an operator — like on your phone — and there is no connectivity to the outside, there will be no way to fetch updates. So if you want that in a disconnected environment, you will have to have something like a proxy for the operators to get updates through — like you do as a Java developer with Maven, where you have access to an Artifactory instance acting as a proxy; you would need to provide a similar piece.

[Another question, about software with cloud-hosted analytics.] So the question is: if you are running software that has analytics capabilities embedded in it, and those analytics might run in the cloud, how do the analytics connect to your cluster? Take Redis as an example: suppose the Redis software gathers analytics and, based on them, reconfigures my instances or improves their performance. If that analytics system has to live outside the cluster and there is no connectivity, then I will not get analytics; if I require connectivity to that external system, I might need to allow it. There is a real example of this already: MongoDB. MongoDB interacts with Ops Manager, which is the licensing server for MongoDB, so whenever you deploy MongoDB, it needs to connect to Ops Manager to ask: hey, is this license valid, can you deploy MongoDB? In that case you need to provide a proxy to reach Ops Manager, or you will not be able to run that software internally. Okay, any more questions? In any case, we will be around, because I think it's about time.
We will be around if you have more questions just reach out to us. Thank you very much.