Hi, good morning, everybody. So as you said, I'm Jessica Forester. I work on the OpenShift team, been with OpenShift for a while now. And as part of OpenShift 4, we've been looking at what it means to have operators. So really quickly, when I'm talking about operators, what am I actually talking about? Really, all this is is a way to package together, then deploy and manage your application on Kubernetes. Whether your application is something as complicated as the Kube API server itself, or your own application that you want to give to other people to deploy on Kubernetes and make it easier for them to maintain over time, operators are just a Kube-native way to do that. And so specifically, when I say Kubernetes application, I'm talking about something that, as I said, deploys on Kubernetes, but also something that is managed through the Kubernetes API, through kubectl, through oc if you're using OpenShift. And so the configuration contract for that application becomes an API and is more maintainable moving forward. The OpenShift platform, particularly in 4.0, is almost entirely Kubernetes applications as I defined them before. From almost the very bottom, the Kubernetes and OpenShift API servers, to the controller managers, the scheduler, the SDN, the registry, the web console, Prometheus, the list goes on. And as you mentioned in the intro, over 40 of them, I think we may be up to 50 of them at this point. The list keeps growing, because as we break the platform apart and make it self-managing, each of these applications needs its own operator. And so, with that many applications, we wrote a lot of operators. I don't want to get too far into what we actually did there, because some of my colleagues are talking about this in greater detail later. So if you're interested in how we're operating the platform, go listen to them talk about this later.
So when you write this many operators, you learn a few things where you realize, okay, maybe we shouldn't have done these things. Some of them, you know, we continue to work through. We keep finding new ones every day. So we may not have all the answers yet, but we have at least learned that if you're getting started writing operators for your application, there may be things that you should avoid. So I'm going to walk through a few of these today. And some of them may not be very technical; some of these are about software engineering, right? So as an application author, your operator is probably not my operator, right? It's important for you to remember that you know how to operate your software. So when you're looking to get started, there are a lot of examples out there already, we've written over 40 of them, but ours may not be right for you. And, you know, if you look at OpenShift, even within OpenShift, they're not the same. If you look at the Kubernetes API server, it has unique problems, right? It's what's controlling the platform, and if it rolls over too quickly, guess what, everything that was watching the API server suddenly has lost its connection. So it has unique things in its operator that manage how it rolls itself out and responds to change. Our stateful applications like Prometheus and etcd have their own unique challenges, right? They have to worry about how they deal with disaster recovery. And then something as simple as our web console doesn't require a lot. It just has to be able to roll itself out, using existing Kubernetes concepts to do that, so it doesn't require much over time. Its operator can be very simple and have very little configuration. So I told you don't start with someone else's. So how do you get started? By not overthinking it. Don't think about it so much that you don't know where to start, right? And we saw this over and over on the teams, where they're like, but I have to do this.
I have to do this, and I have to do this. My operator has to do all these things. This is what an administrator has to do to run my software. And what was happening was people were just struggling to get started. Just start, just start writing it. Start small, right? MVP. We write software all the time. What do we do? We build MVPs. Our operator should have an MVP. Install your application on Kubernetes, step one, right? Come up with what your manifests should be to run your app on Kubernetes. Put that into code. Have that deploy out. Step two, figure out, how am I going to roll out a configuration change of my application? Not even an update. Just the administrator wants to change something, tweak something about my app. What do I need to do to listen to change? Macha, if you were here before, talked about controllers. Operators are controllers, so there are patterns there, and I'm not going to repeat everything that Macha said, so if you weren't in here, please go listen to his talk. And it's the same rules, right? You should be watching for changes coming in and then addressing that change. And then, you know, once you get to that point, you can handle config change, you can handle upgrades. Then start thinking about what the day-2 maintenance of my application is, right? Backups, pruning, rotation. It's your application, you know what the rules are, what does it need to do? Try to operationalize what a human would normally have done and put it into code, make it easier for them. So the next problem we ran into was too much configuration. We're trying to make it easier, right? Easier to run applications, that's the operator pattern. So you need to actually hide some of the complexity of Kubernetes from the user, make it easier to run. And if you know Kubernetes, the simplest example that I could think of is the pod template. And last night, I was like, I wonder how many fields we're up to on the Kubernetes pod template, and I counted them, okay?
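[Editor's note: the MVP steps described above, install first, then converge on config changes, can be sketched as a level-based reconcile function. This is not OpenShift's actual code; it is a minimal illustration with hypothetical types and plain Go structs standing in for a real Kubernetes client such as controller-runtime.]

```go
package main

import "fmt"

// ObservedState stands in for what a real operator would read back from the
// Kubernetes API (hypothetical type; a real operator would use client-go).
type ObservedState struct {
	Deployed bool
	Image    string
	Replicas int
}

// DesiredState stands in for the operator's CRD spec.
type DesiredState struct {
	Image    string
	Replicas int
}

// Reconcile compares desired against observed state and returns the actions
// the operator would take. It is level-based: it looks only at current state,
// not at whichever event triggered it, so the same logic covers step one
// (install if missing) and step two (converge on config changes).
func Reconcile(desired DesiredState, observed ObservedState) []string {
	if !observed.Deployed {
		return []string{"create deployment"}
	}
	var actions []string
	if observed.Image != desired.Image {
		actions = append(actions, "update image to "+desired.Image)
	}
	if observed.Replicas != desired.Replicas {
		actions = append(actions, fmt.Sprintf("scale to %d replicas", desired.Replicas))
	}
	return actions
}

func main() {
	// First pass: nothing deployed yet, so the only action is install.
	fmt.Println(Reconcile(DesiredState{Image: "app:v1", Replicas: 2}, ObservedState{}))
	// Later pass: the administrator tweaked the spec, so converge on it.
	fmt.Println(Reconcile(
		DesiredState{Image: "app:v2", Replicas: 3},
		ObservedState{Deployed: true, Image: "app:v1", Replicas: 2},
	))
}
```

Day-2 tasks like backups and pruning then become further branches of the same function, rather than a separate human runbook.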
30 top-level fields on the Kubernetes pod template. If you dig down into containers, each container is specified with another 20 fields, some of which have subfields of their own. So when you're talking about what you need to actually let someone configure for your application: you know how to run it on Kubernetes. They don't need to know how to run it on Kubernetes. And that's what the pod template is, it's how to run it on Kubernetes. So really what you need to care about is what is unique, what is special about your application, and then bring that up into an API that you can define with a CRD. So the next one I wanna talk about is something we've hit with some of our applications as we're migrating over from our historical cluster topology to the new one, where out of the box we only really have two kinds of nodes. We only have master nodes and worker nodes. And so we've had to think about, okay, what makes something special enough that it needs to be on a dedicated machine, versus it could run anywhere. And we ran into teams making assumptions that kinds of nodes existed that didn't, or kinds of storage existed that didn't. So if you wanna make it really easy for your application to be adopted on Kubernetes out of the box on the platform, you should try to avoid making assumptions about what's there, if you can. And when you can, rely on the cluster's defaults. And I'm gonna say specifically which kinds of defaults I'm talking about: things that the cluster admin is really in control of, things that can change over time. First example, I wanna talk about node types. So I mentioned this: out of the box with OpenShift 4, we're only giving you two node types. The master nodes, where the platform needs to run, and the worker nodes, where we're expecting applications to run. But we're expecting a cluster administrator is going to come in and change all of that. They're gonna want to steer their workloads to specific places.
They may have certain applications that they know need special machines. And maybe your application is one of those. Storage, right? Storage is gonna be different from environment to environment. And in 4, we're actually doing work to try and make it easier to set up the right default storage out of the box. So if you're running on Amazon, we're setting up EBS volume provisioning out of the box. But maybe you require specific things. So if you do need something special, let the cluster do the hard part. As an operator author, you should be saying what the restrictions are. So if that's resource constraints, if you need GPUs, huge pages, any of the other device plugins that Kubernetes now has, if you need special storage access modes, say ReadWriteMany volumes, specify that on the Kube resources that you create through your operator. And then if those things are not available when the operator goes to roll out the application and Kubernetes tries to respond to that, you'll be able to roll that information up to the user to say, hey, I can't fulfill this request because the administrator hasn't set up this thing for you yet. And then for the cluster administrator, you should allow them to configure certain things about your application through the operator. As I mentioned, steering workloads. A good example of this for us in OpenShift, on the platform, would be our logging stack. So Elasticsearch: it tends to be very large, it takes a lot of memory, it usually requires special machines. Many of our customers like to put it onto its own special dedicated set of machines. But out of the box, we still wanted it to be able to install easily. It's not really something we would put on a master machine; it's not core platform. So we wanted to make sure that it would install out of the box no matter what, but it needs to be able to be configured to go to infrastructure machines. Similar with Prometheus.
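[Editor's note: the pattern just described, where the operator hard-codes the app's real requirements, like ReadWriteMany storage or a GPU, while leaving placement and storage class for the administrator to steer, could be sketched like this. The types are hypothetical minimal stand-ins for the real ones in k8s.io/api/core/v1.]

```go
package main

import "fmt"

// PVC and PodSpec are hypothetical minimal stand-ins for the corresponding
// Kubernetes types; a real operator would build v1.PersistentVolumeClaim
// and v1.PodSpec instead.
type PVC struct {
	Name             string
	AccessModes      []string
	StorageClassName string // "" means "use the cluster's default class"
}

type PodSpec struct {
	NodeSelector map[string]string
	GPULimit     int
}

// AppPlacement is the part of the CRD the administrator may tune:
// which nodes to steer the workload to and which storage class to use.
// Everything else stays with the operator.
type AppPlacement struct {
	NodeSelector map[string]string
	StorageClass string
}

// BuildResources declares the application's hard requirements and layers the
// admin's placement choices on top. If nothing is set, cluster defaults
// apply, so the app still installs out of the box; if a requirement cannot
// be satisfied, Kubernetes surfaces that, and the operator can roll the
// message up to the user.
func BuildResources(p AppPlacement) (PVC, PodSpec) {
	pvc := PVC{
		Name:             "app-data",
		AccessModes:      []string{"ReadWriteMany"}, // hard requirement
		StorageClassName: p.StorageClass,            // admin override, else default
	}
	pod := PodSpec{
		NodeSelector: p.NodeSelector, // e.g. steer onto infra machines
		GPULimit:     1,              // hard requirement
	}
	return pvc, pod
}

func main() {
	// Out of the box: no placement config, rely entirely on cluster defaults.
	pvc, _ := BuildResources(AppPlacement{})
	fmt.Println(pvc.AccessModes[0], pvc.StorageClassName == "")

	// Later: the admin steers the workload onto dedicated infra nodes
	// with a faster storage class (label key and class name are examples).
	_, pod := BuildResources(AppPlacement{
		NodeSelector: map[string]string{"node-role.kubernetes.io/infra": ""},
		StorageClass: "fast-ssd",
	})
	fmt.Println(len(pod.NodeSelector))
}
```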
Prometheus, we want to make sure that it can go to other machines that are dedicated just for that. Storage classes. So just because out of the box there might be a really easy provisioning solution and it'll work, you know, it'll be fine, maybe it doesn't work under the load you need. Maybe you need higher performance. Let them just change the storage class. The cluster admin can define all the storage classes they're providing on their cluster. So when someone's deploying the application, just let them pick a different one. They don't need to know all the information about it, just the storage class. So the last one that I want to talk about is a little bit more of a complex topic that we ran into in OpenShift, because we're doing a lot of coordination between a lot of different operators. In our case, we actually have top-level config that many operators are working together reading from and then operating on the platform. So as part of this, we have cases, you know, where we want the admins to define config, but then we also need the machine to observe the platform and provide additional information into the underlying operators to then control the controller manager, the API servers, and other pieces. And what you don't want to have happen is your admin and your machine trying to fight over the same parts of the configuration. So instead, what we decided to do was to break this down, and we said, anywhere we have this problem, distinctly separate the parts of the configuration: this is clearly for the administrator, and this is clearly for the computer, so that there's no question who wrote what. We always know where that source of truth is coming from. So I'm going to end there. I wanted to open it up for questions. I tried to keep this as high level as possible because the controller talk was right before mine. But if you have questions about what we've done, why we did things the way we did, I'm happy to answer those.
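[Editor's note: the admin/machine split described in the talk maps naturally onto the spec/status convention Kubernetes APIs already use, admin intent in one place, machine-observed results in another. A hypothetical sketch, with invented type and field names:]

```go
package main

import "fmt"

// NetworkConfig is a hypothetical top-level config object. Spec is written
// only by the administrator; Status is written only by operators observing
// the cluster. Neither side touches the other's fields, so there is never a
// fight over who owns the source of truth.
type NetworkConfig struct {
	Spec   NetworkSpec   // admin intent
	Status NetworkStatus // machine-observed facts
}

type NetworkSpec struct {
	ClusterCIDR string // what the admin asked for
}

type NetworkStatus struct {
	ClusterCIDR string // what the operator actually rolled out
}

// ObserveAndSync is the machine side: it reads Spec, acts on the cluster,
// and records the outcome in Status, without ever modifying Spec. Other
// operators consume Status rather than re-interpreting admin intent.
func ObserveAndSync(cfg NetworkConfig) NetworkConfig {
	cfg.Status.ClusterCIDR = cfg.Spec.ClusterCIDR // roll out, then report back
	return cfg
}

func main() {
	cfg := NetworkConfig{Spec: NetworkSpec{ClusterCIDR: "10.128.0.0/14"}}
	cfg = ObserveAndSync(cfg)
	// Spec untouched, Status filled in by the machine.
	fmt.Println(cfg.Spec.ClusterCIDR == cfg.Status.ClusterCIDR)
}
```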
So we were already deploying all these applications on Kubernetes. So what changed? There were a few changes as to how we were getting configuration in at the deployment level. So what our Kubernetes objects look like might have changed, but when it got down to the level of the binaries, I don't think much has changed there.

Both, yeah. So we ourselves as Red Hat are writing our own operators to operate the platform. We are also working with other people to write operators for their applications, to then be easy to run on the platform. And in some cases, these might just be a singleton that's for the whole cluster. So you install something that adds something to the cluster. Some of these kinds of things you can think of like Federation, if you're familiar with Kubernetes: it's not core platform, something that can be added on. But then additionally, there are things that we actually have today, like Prometheus, for end users. So we don't actually give you access into the cluster-level Prometheus as an end user, only as a cluster administrator, right? But you as a user might want to do more with Prometheus yourself. So the same Prometheus operator can be used by anyone for their own, and also runs the core cluster Prometheus. And then, yes, at the end, if you, just as an application author, just for yourself, want to have an operator for your application, just to make it easier, you could do that too.

At least for the OpenShift stuff, we have the Prometheus stack installed by default, and it's been done in such a way that our core stuff is monitored by that. So all of the operators and the applications are reporting up metrics and a set of alerts that then come out through the cluster monitoring stack. It already has some PB level alerting in the existing stack, but we keep adding. Well, thank you everyone.