Welcome to yet another OpenShift Commons briefing, and today we've got Rob Szumski and some other guests. And we're going to talk about what's going on in the Operator Framework world. That's OLM and the SDK and lots of other good bits. So Rob, I'm going to let you introduce yourself and talk for as long as you need, and then we'll have live Q&A and a conversation afterwards. So take it away, Rob. All right, sounds good. Hey everybody, I'm Rob Szumski. I'm a PM for OpenShift, and I've been looking after the Operator Framework since it was announced as part of CoreOS several years ago now. So I'm here to give you guys an update on some exciting news, what's going on technology-wise, feature-wise, and when you can expect certain things. So we'll jump right on into it. The first big news is I think we've got enough votes, and I don't think it's official yet, but the Operator Framework is going to be joining the CNCF as an incubating project. This includes both the SDK and the Lifecycle Manager. So we're going to break down what each of those are and why they're important, but we're super excited; this kind of builds on all the momentum and open-source work that we've been doing. Being part of the CNCF will hopefully be a big win for the project: get more users, more contributors, and solidify this as the framework for managing operators across the Kube ecosystem. So we're super excited about that. So thanks to everybody that either commented or voted and was involved in this process. It took a little bit longer than we thought, but that's always the case with these things. So yeah, we're excited about that, and that's the first big news. Let's jump in really quick, but I always want to start with what an operator is, just in case you're watching this stream and you're not as familiar with this. At its core, an operator is a piece of technology that sits in the middle between users and a cluster, and it runs software.
So in this example here you can see the operator in the middle, and what the operator is doing is it deeply understands knowledge on how to run a type of application. So like a MySQL database or a distributed system, a Kafka cluster, whatever it is. Picture all the operational knowledge needed to install that, to upgrade it, to scale it, to monitor it, to back it up, whatever it needs to do for that specific application. You build that into a piece of software that then outputs Kubernetes on the other side of it. So making all of the objects that need to happen, wiring them all up, generating secrets, storing those secrets in Kubernetes, managing the RBAC, knowing how to upgrade certain parts of the components in the correct order, using StatefulSets when they need to be storing stateful data, using Deployments when they're stateless, et cetera, et cetera. This is what the operator does. And so the Operator Framework is here to manage all of that: to help you build this piece of software, but then help you manage it and upgrade it over time, which is a really important part of it. And what we want to get to is a cloud-like experience. So picture getting your favorite database or machine learning service from a cloud provider, but we want that to work against Kubernetes anywhere that it's running. So the operator is the key there. It's providing that cloud-like experience, but then interfacing with Kube, so you get this hybrid cloud experience at the same time. So that's what we're going after. So let's dig in a little bit further. We see operators kind of taking off across every different kind of vertical you can have in software, because you want that SaaS experience, and all of these categories that you see on the screen here are just getting more complicated as software gets more complicated.
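To make that "users on one side, Kubernetes objects on the other" picture concrete, here's a sketch of the kind of custom resource a user might hand to a hypothetical MySQL operator. The API group, kind, and every field here are made up for illustration; each real operator defines its own schema:

```yaml
# Hypothetical custom resource; the operator watching it would create the
# StatefulSets, Services, Secrets, and RBAC on the user's behalf.
apiVersion: example.com/v1alpha1
kind: MySQLCluster
metadata:
  name: orders-db
spec:
  version: "8.0"
  replicas: 3              # operator knows the safe way to scale these
  storageSize: 20Gi        # operator wires this to PVCs under the hood
  backup:
    schedule: "0 2 * * *"  # operational knowledge embedded in software
```

The user only declares intent; all the ordering, secret generation, and upgrade logic the talk describes lives inside the operator.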
You know, before, you might have just run a single MySQL or a single Postgres in a VM against a local disk, and you backed it up every once in a while, maybe to a tape library. And that was it. Now we've got NoSQL databases. We've got all these big data stacks where you're scaling out different parts of the application related to load and how much you're storing. We've got monitoring solutions that need to react as containers are coming and going in milliseconds, so the throughput there is really high, and how dynamic the system is is also very high. And so you need different ways of running that type of software to keep up with Kubernetes and this modern software. Messaging services are super popular in financial services and all these different systems that need to talk to each other, and so those become critical. When you scale those, can you monitor them? Do you know what's going on? Are you setting them up in the most secure way? All that really matters. And then lastly, I think storage is a really interesting one, because if you think about it, we don't want to be tied to a single local disk anymore, but in our Kube clusters, we actually have tons of local disks. If you have 100 nodes, you've got 100 or more disks. So what some of these storage providers do is slice up those disks into network-addressable storage that the Kube cluster understands through its PV and PVC mechanisms. And so you kind of run storage on the cluster, for the cluster, which is really exciting. And that's a whole distributed system too: a Ceph cluster, and a bunch of other technologies are used there as well. So that's where operators play. I'm not going to throw up the logos of any of these, but every major vendor, I think, in all of these categories has an operator at this point. So take a look at our place for discovering those: OperatorHub.io. This is a listing of a bunch of open source and community operators.
Red Hat also certifies operators, and so you can find those inside of your OpenShift cluster as well, alongside all those community offerings and Red Hat products. So breaking down what the Operator Framework is, it's really these three main pieces. The Operator SDK is for building operators. This is a new style of building software: interacting with Kubernetes APIs and how to best do that, as well as hooking a distributed system up to operational knowledge embedded in code. And so what the SDK allows you to do is start with a bunch of scaffolding for code, where you just have to bring the knowledge of your application. If you're a Postgres admin, you kind of know how to administer Postgres. So we're going to help you express that in either Go code, Ansible code, or in a Helm chart. And so that's how we help you build an operator. We'll talk about some of the new features coming in these SDKs, but they're pretty robust right now, and we've got a ton of folks that are building really interesting things with them. Next is the Lifecycle Manager. This is the thing that helps you, once you've built this operator and it's packaged up, get it out to all your customers. How do a hundred different people go install this on their Kube clusters? How do they manage it? How do they make sure that all the correct permissions and security are set up correctly? The Lifecycle Manager helps you do this. And then I already mentioned OperatorHub.io for discovering operators that have been published by different authors out there on the Internet. So I talked about the SDK and how we have these three different flavors of it. There's Helm, Ansible, and Go. There are also other open source frameworks for creating operators out of Java and some other things like that. So if you're interested in your favorite language, there's probably something out there for you, and somebody has already started doing that. People have written these in all kinds of things.
So a different part of the SDK than just dealing with writing code is all of the other important stuff that comes along with it. So how do I package this up so that I can go run it through a CI pipeline? The testing is really key here, because if you think about it, this operator is going to be managing really important and really complex distributed systems, and that's why you're expressing it in code, but you want to validate that that code is correct. And so we have some tools that are built into the SDK, and the operators are instrumented, to help you do things like: when I kill an important part of this, does the operator resurrect it? Or if I go mess with some important configuration variables that I need to ensure remain in place, does the operator go replace those back to what they should be? When I send a bunch more requests into my distributed system, does the operator scale up the correct component accordingly? Those are all things that we want to express in testing pipelines, so that people can really test the code that they're writing, and then your customers can all just depend on the fact that, yes, you are orchestrating MySQL and Kafka correctly. So you can check this out there on GitHub under the operator-framework org, and we've got, like I said, these three flavors. You see these arrows that go from left to right? That is talking about what we call the maturity model. And the maturity model is just how smart this operator is, basically. And it depends on each application kind of where you need to fall on this, but installing and upgrading is mentioned on all of them, and that's really the bare minimum. I need to be able to install whatever this application is and then update all the components of whatever the application is installing. So for a traditional scale-out database, this might be the MySQL and Postgres processes themselves, plus any authentication and rate-limiting proxies you might have in there.
If you have read replicas, updating and orchestrating how those are all connected together all kind of falls into that. And so all of our SDKs help you with that part of your application lifecycle. Now you see the day-two operations term mentioned for Ansible and Go. That refers to all of the other dynamic reconfiguration that you would do for any sort of application. So in the database example we just talked about, what if you started with a single node, and you installed and you could upgrade that, and that's awesome, but then your app actually gets going and real traffic has started. And you need to go out and, you know, add a read replica, or I actually didn't need this to be HA, but now I want it to be HA. All those are day-two operations, where you can say, hey, operator, just start backing up this thing; hey, scale it out automatically; and have the operator do it. That's all day two. And then you start getting into smarter stuff. So metrics and alerting, orchestrating monitoring pipelines, and doing automatic tuning of the workload based on how it's running are all really important. And so that's where you need higher-level primitives in the SDK and the language that you're using to express all that logic. And so that's where some of these different types of code bases can work for you or against you, depending on the type of application that you're running. Jumping over to the Lifecycle Manager. So once you've done all that, you've built your operator and you've written all your code, you would think, oh, I just need to run it, right? You just go put it on the cluster. Well, there's actually a lot of different needs here related to running and installing these operators, because you've got different personas that are involved. So I've got the three kind of main ones here at the top. I've got an operator developer, who might be building an operator, either internal to your organization, or is tweaking something about an operator.
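The install-versus-day-two split can be pictured as two states of the same custom resource. The field names here are invented for illustration; the point is that the user edits the declared state and the operator carries out the reconfiguration:

```yaml
# Day one: install as a single node, no backups.
spec:
  replicas: 1
---
# Day two: edit the same custom resource, and the operator reacts by
# adding read replicas, making it HA, and starting backups, all without
# the user hand-orchestrating any of the underlying Kube objects.
spec:
  replicas: 3
  readReplicas: 2
  backup:
    enabled: true
```

The Helm-based flavor mostly covers the first state; the Ansible and Go flavors give you the primitives to handle the ongoing transitions.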
And they have very specific needs, where they, you know, want to test against a live cluster. And so you've got some things that they can do there, especially if you want to register it with a catalog. You've got a cluster admin, who is looking after the whole cluster themselves. They're probably not experts in all of the applications that are running on the cluster, but they do want to ensure that if there are certain CVEs in some of the software that's running on the cluster, that that stuff stays up to date, and, you know, they're managing dependencies between some of the teams, maybe at a very high level. So you need to update all these operators and get a good sense of how healthy they are, what's going into those updates, you know, if there is any security content in there, et cetera, et cetera. So that's kind of the purview of the cluster admin. And then the cluster user really just wants to run databases and messaging queues and storage services or whatever it is. So they just need a mechanism for discovering which operators are installed on the cluster and then interacting with them. You know, hey, give me a MySQL database in this namespace and call it test, and then I want a different one with a different configuration in prod, and give me some graphs for, you know, all the monitoring that the operator sets up. You know, that's the purview of the actual user there. So when you dig into it, at the very bottom here, you see there are bubbles for dependency resolution, CRD lifecycle, and collision detection. This is where you start to, you know, need a system for installing all these operators, because what they're doing is they can depend on each other. So you might have a serverless operator that's, you know, running Knative, which is a serverless framework for, you know, writing functions and having an event-driven application. But that actually needs a Kafka queue for it to consume its messages. And so it might depend on a Kafka operator.
And remember, you as a cluster user don't want to be an expert in either one of those things, because you just, you know, want to write your event-driven application. So you install these two operators that actually depend on each other, and the cluster admin can help manage some of those dependencies for you. And OpenShift has this dependency resolution built in, based on the Lifecycle Manager. And then what these use at the core is the CRD, which is a custom resource definition. And that is the ability to plug into the whole Kubernetes model for doing RBAC and extensions to the core. And those need to be lifecycled. So they have versions, just like a Deployment, a StatefulSet, and a Pod have an API group, version, and kind. CRDs have the same thing. And so they need to be upgraded and managed as they might go from alpha to beta to stable. Or as new features in Kubernetes come in, like some of the new validation logic that was just shipped, where you can say, you know, this field is an integer, and I want you to enforce that, so you're upgrading into all of those new capabilities as well. And then lastly, you don't want two operators colliding over trying to manage the same type of resource. So if there were two Kafka operators out there, you don't want them both trying to operate on the same Kafka cluster. So there's some collision detection that's built into the Lifecycle Manager, so it protects the cluster from itself. It really kind of happens all under the hood without you really having to notice it. So that's a quick overview of kind of where these two pieces of software sit and why they're so important. And then let's go into some specifics about new stuff that's coming. So in OpenShift 4.5, we are first introducing a new bundle format for operators. And the bundle is really all the metadata that those two systems create in order to install and upgrade and manage operators.
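As a concrete picture of that CRD lifecycle, here's a trimmed sketch of a CustomResourceDefinition carrying two versions and one of those newer validation rules. The group and kind are hypothetical, and the schemas are abbreviated:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: kafkaclusters.example.com   # hypothetical group
spec:
  group: example.com
  names:
    kind: KafkaCluster
    plural: kafkaclusters
  scope: Namespaced
  versions:
    - name: v1alpha1        # the older version, still served for now
      served: true
      storage: false
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
    - name: v1beta1         # the version the operator has graduated to
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer   # "this field is an integer, enforce it"
```

Upgrading the operator means upgrading this object too, which is exactly the lifecycling OLM takes on.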
So it's really the operator author who is the person that's going to interact with this. And the exciting thing about this is it just breaks apart some of the dependencies we had on these manifests that you had to build, so that they're a little bit easier to run. And what's really exciting is they're actually going to be packaged just as containers, just like anything else on the platform. And so when you build an operator release, you'll have all the Kube objects that represent all the RBAC that this thing requires, and here are the service accounts I want you to create, and here's whether I can be installed cluster-wide or I only work in a namespace. All that metadata about how it works is bundled up into a container image, and then you can just mirror that into whatever container registry you want. So OpenShift will have some bigger bundles of these operators that represent our certified and community operators, but you can also just build your own and distribute them. So one of our partners might want to do that, for example. And so all you do is you pull this container, and you register it in the cluster and say, hey, this is a new operator that I want to use. It's represented by this container image. Make it available in my cluster, and it goes. You can see this new operator object on the right-hand side that makes that happen. We're also going to be using this new format on OperatorHub and kind of build it into all the tools. So the SDK builds these things, OLM knows how to run them, et cetera. So that's really exciting. It's kind of more under the hood, but it matters if you are building an operator. The next is new capabilities inside of our package manager tool, OPM, to start building customized versions of all these. So instead of just a one-off operator that ships version one and then version two, you, at a big bank or insurance company or something like that, might want to curate your own set of catalogs.
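For a feel of what "the metadata is just a container" means, here's a rough sketch of a bundle image build file. The label keys follow the operator-framework bundle format as documented; the package and channel names are examples:

```dockerfile
# Sketch of an operator bundle image: no runnable process, just the
# manifests and metadata served as image layers.
FROM scratch

LABEL operators.operatorframework.io.bundle.mediatype.v1=registry+v1
LABEL operators.operatorframework.io.bundle.manifests.v1=manifests/
LABEL operators.operatorframework.io.bundle.metadata.v1=metadata/
LABEL operators.operatorframework.io.bundle.package.v1=example-operator
LABEL operators.operatorframework.io.bundle.channels.v1=stable

# The Kube objects (CSV, CRDs, RBAC, service accounts) plus the metadata
# about install modes (cluster-wide vs. namespaced) go in as plain files.
COPY manifests /manifests/
COPY metadata /metadata/
```

Because it's just an image, it mirrors into any container registry with the same tooling as everything else on the platform.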
These are the operators that we have tested and we know are high quality and work well, so I want to allow folks to have access to these. We have a new tool called OPM that can help you build and push those to a central place. So you can say, instead of registering operators one by one, here is my entire private catalog of operators that I want to use in my cluster. This is really important for clusters that are in disconnected environments. Sometimes government agencies and financial services, things that are running like a stock exchange, aren't connected to the internet, and so you want to be able to use operators in that environment just like any other. So these catalogs can be built, you can mirror all of the containers that are required into that specialized environment, and then they can use operators all the same. So we're really excited about that tool. This is a bigger feature add for those specific types of customers. Then lastly, all these bundle changes are really all about introducing a new Operator API. So before, if you were in the ecosystem, if you used some of these before, under the hood we have a bunch of different manifest files that kind of describe how this thing works. So the cluster service version, the CSV, is typically the main piece of metadata that we have, and that would describe all of the CRDs that this operator uses under the hood and how to install them and the permissions and things like that. But then it was a little bit detached from the subscription information, which is: how do you upgrade this operator? Does it happen automatically or does it happen manually? And then some other things about how that actually gets installed on the cluster. So we're going to go unify that all into a new Operator concept. So this is going to be really exciting, because then you would just say oc or kubectl get operators, instead of having to go look at all these different objects and kind of piece it together yourself.
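A sketch of that curated-catalog flow, with hypothetical registry and image names throughout: build the catalog with opm, mirror it into the disconnected registry, then point the cluster at it with a CatalogSource object:

```yaml
# Build the curated index with opm (names are examples), e.g.:
#   opm index add \
#     --bundles registry.example.com/bundles/example-operator:v1.0.0 \
#     --tag registry.example.com/catalogs/internal-catalog:latest
# After mirroring that image into the restricted environment, register the
# whole catalog at once instead of one operator at a time:
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: internal-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: registry.example.com/catalogs/internal-catalog:latest  # private registry
  displayName: Internal Curated Operators
```

From there, OLM serves only the operators the bank or agency has vetted, which is the whole point of the curated catalog.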
It'd work just like when you say get pods. You're not going to go hunt around four different objects to try to figure out what's going on with your pod. So we're pretty excited about that. A lot of under-the-hood changes, but it's going to be a big UX win at the end. One of the last big things coming in OpenShift 4.5 is a bunch of scaffolding around admission webhooks, or webhooks in general. And this is key, because if you're familiar with the Kube extension mechanisms, you can actually plug into a bunch of the deep parts for when every single object is created in Kubernetes. You can have a yay or nay on whether that object gets created, and that's called an admission webhook. And these webhooks are really key to doing advanced functionality, because you might want to block every Postgres custom object (this custom object that an operator uses) that doesn't have two different required settings in there. You might want to say, hey, go reject that, and somebody needs to go re-submit that with valid settings. And this might not just be that both of those exist, but the values of them can be compared with custom logic, so that you know that it is actually truly valid. An example of this is JetStack's cert-manager, a really popular certificate manager for doing Let's Encrypt certificates for PKI. It validates some of the settings against other things that you pass in, so that it knows that you're going to get a valid cert out, and that your environment and the settings that you're passing in are actually able to create a certificate that it can issue. So that's really exciting. So what we do is we scaffold all these webhooks, and you have to imagine, you know, these have to have TLS, and they're secured, and they're sitting in the critical path of Kube operating, because for every object it's listening for, Kube is going to say, hey, go send this off to the webhook and give it a yay or nay. If the webhook is down, you're not making any objects, because it can't make a yay or nay decision.
And so what we do is we use the platform, OLM and OpenShift, to scaffold all that for you: we rotate your TLS certificates, we generate them in the first place, we set up all the routes and ingress for it to work, we run the pod that runs the webhook, and all that type of stuff. So it's really, really easy; all you do is write your logic for the webhook itself. Mutating webhooks work the same way. You're basically mutating things instead of blocking admission of them. So if you wanted to set a parameter on every single object that goes into the system, for like auditing or all kinds of things like that, you can do that. And you can also block mutation if somebody were to enter a bad state for that operator; just like the settings you wanted to block on create, you can also do that on edit. So a good example of this is KubeDB, an operator that helps you run DBs on Kube, and it actually prevents accidental deletions. So it can look at the cluster and figure out if it thinks that you didn't clean up. You know, you didn't delete all the objects, you just happened to delete this one critical one. Did you really mean to delete that? That's one of the behaviors it can power. So that's really exciting. And then the last part of this is going to be in later OpenShift releases, which is that the CRDs owned by an operator will be able to do conversion webhooks, which is kind of the last webhook type. When you upgrade a CRD from, like, an alpha to a beta or a beta to a stable API, you might need to mutate existing custom resources that are already in the cluster for them to, you know, meet that new standard. And so you'll be able to do those in the same manner in a successive OpenShift release. All right, so that's all in 4.5. Now I just want to touch a little bit more on the future and what's coming in the next year or so. And one of the things that we've been working on is an easier update graph.
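Setting aside all the TLS and serving plumbing that gets scaffolded for you, the decision logic inside a validating webhook is just a function over the incoming object. Here's a minimal Python sketch of the yay-or-nay part, with an invented resource and invented required fields:

```python
# Sketch of validating-admission logic. The required fields and the
# odd-replicas rule are hypothetical examples, not any real operator's rules.
REQUIRED_FIELDS = ("storageSize", "version")

def review(admission_request: dict) -> dict:
    """Take a simplified AdmissionReview request and return the response
    shape Kubernetes expects: allowed, plus an optional reason."""
    spec = admission_request.get("object", {}).get("spec", {})

    # Nay if required settings are missing: the user must re-submit.
    missing = [f for f in REQUIRED_FIELDS if f not in spec]
    if missing:
        return {"allowed": False,
                "status": {"message": f"missing required settings: {missing}"}}

    # Values can also be checked with custom logic, not just for presence,
    # e.g. an odd replica count so leader election can reach quorum.
    if spec.get("replicas", 1) % 2 == 0:
        return {"allowed": False,
                "status": {"message": "replicas must be odd"}}

    return {"allowed": True}
```

In the real mechanism this function sits behind an HTTPS endpoint that the API server calls for every matching object, which is why the platform-managed certificates and availability matter so much.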
So picture when you're updating from version 1 to 2 to 3 to 4: some of those versions might be able to be skipped, some of them might not, some of them might not work with each other, and some of them might have a bug later on where you want to revoke that update. So all the typical tools that you would use to run a SaaS or to manage packages on a system, we're bringing into OLM. And we have a lot of that today. But what we don't really have is just an easy way to say, hey, follow just semantic versioning and kind of make it work how it should. So you can see some examples here on the left. If you have auto-updates on, going from 1.1.1 to 1.1.2 to 1.1.3 would just kind of work the way you would think. So those would just make it easier for teams that are building operators to get things updated, and then for teams that are consuming them, for admins to just get the updates as they would expect. And if you did have automatic updates on, it works; if you want to have a manual approval, this thing would wait for you to say, yes, I want to update to that version. So you've got both flows that you can unlock. All this is without building an explicit graph, which is behavior that we had before, where you could say this version replaces that version, and then the next version out would replace that one, et cetera. So that's exciting. That's going to be coming soon. We're laying out the scaffolding for that today. On the SDK side, we've been integrating for several months now with an upstream project in Kubernetes called KubeBuilder. This is a very low-level framework for building Go-based operators. It's kind of what a lot of the tooling inside of Kubernetes itself uses to build operators, or their controllers, which manage the lifecycle of components. We're integrating with that. We don't want to have two competing projects for this. It just all makes sense to combine efforts.
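The SemVer-based update behavior described above can be sketched as a small resolution function: given what's installed and what a channel offers, pick the newest newer version, rather than walking an explicitly authored replaces-graph. This is only an illustration of the idea, not OLM's actual resolver:

```python
# Sketch of "just follow semantic versioning" update resolution.
def parse(version):
    """Turn '1.1.2' into a comparable tuple (1, 1, 2)."""
    return tuple(int(part) for part in version.split("."))

def next_update(installed, available):
    """Return the highest available version newer than `installed`,
    or None if the operator is already up to date. With auto-updates on,
    OLM would apply it; with manual approval it would wait for an admin."""
    newer = [v for v in available if parse(v) > parse(installed)]
    if not newer:
        return None
    return max(newer, key=parse)
```

Note how 1.1.2 gets skipped implicitly when 1.1.3 is present; with a hand-built replaces-graph the author would have had to spell out every one of those edges.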
What we've been doing is modifying the way that our code scaffolding works to look very similar to these KubeBuilder projects. We're going to add a little wrapper in there so that you can adopt KubeBuilder projects but then gain all the stuff that our SDK adds around them. If you think of KubeBuilder, it's literally some code and some scaffolding, but you don't get the functional testing and all the packaging, or a CLI for doing all the different workflows you need, a better user experience. That's what you'll get from using the SDK, even though the bits under the hood are some of that same KubeBuilder goodness. This also aligns us with the upstream group, and we can all work together towards a common goal. We're really excited about that. We're pretty close to this, and you'll see that coming for the first 1.0 of the Operator SDK. Next, I talked about testing a few times. We're moving to embrace another open-source project called kuttl, K-U-T-T-L, for our scorecard tests. Our scorecard is a way to do both validating and functional testing of an operator, and kuttl has a really powerful way of doing assertion-based testing. You install a version of the operator and you change something: say you start 3 replicas, and it'll go make sure that you have 3 pods running, or whatever needs to be running. Then if you change that to 4, did it actually change to 4, et cetera. You can test out your operator's behavior in a fairly light-touch way. We're going to integrate this into the SDK, and we hope this will make for a lot more mature operators. Folks can then even test these on their own, and if you do extensive validation of software before it gets into your environment, you can use these tests to validate that it works exactly correctly in your environment as well.
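That assertion style looks roughly like the following pair of kuttl-convention files, shown together here for brevity. The resource names and kinds are invented, and the operator producing the Deployment is hypothetical:

```yaml
# 00-install.yaml -- the test step: apply a custom resource asking for
# 3 replicas, and let the operator under test react to it.
apiVersion: example.com/v1alpha1
kind: ExampleApp
metadata:
  name: test-app
spec:
  replicas: 3
---
# 00-assert.yaml -- the assertion: kuttl waits until the live cluster
# state matches this partial object, i.e. the operator really did
# produce a Deployment with 3 ready replicas.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app
status:
  readyReplicas: 3
```

A follow-on step would bump replicas to 4 with a matching assert, which is exactly the "did it actually change to 4" check described above.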
This is important, as some operators may call out to hosted services. Like, a lot of monitoring tools will run an agent on your cluster but then go talk to one of their SaaS services, and if you're in a disconnected environment that might not work, so some of these testing tools can help you tease some of that stuff out. And we're working with that team as well. So just to sum it up, this is all the new stuff that we were talking about. The CSV-less bundles: this is that new bundle format we were talking about. The SemVer-based upgrade logic: that's not having to build this extensive graph. Being able to bundle functional tests with your operator. Building catalogs with Kubernetes tooling so that you can register a set of operators onto a cluster at the same time. There's more integrated packaging, so direct ties between the SDK and OLM, especially around that new bundle format, and then bringing KubeBuilder-style operators into the framework. And then on the cluster admin side: some more advanced dependency resolution and being able to disable that, the OPM tool for offline mirroring of all the containers needed for an operator, the new Operator API, webhooks, and then version pinning, which is the ability to choose a more fine-grained version of an operator; instead of pulling the latest one off of a channel, you can choose. So that's all coming, and we're pretty excited about that. It kind of meets everybody's needs, from cluster admins to operator developers to operator users.
So that's basically all of the goodness we have coming in roughly the next year-ish, so you'll have to pay attention to the GitHub and mailing lists and all that and stay up to date, and we can always do another briefing, of course. And we'd love to have your interaction in those communities. So if you've got new features, or you want to see us go in a different direction for something, or adopt a use case that you think maybe is not unique to you and that we should address for the entire community, we would love to have you on our different community calls, mailing lists, GitHub. Please interact with us. And so that's all I have. I think we're going to take some questions and just have some open discussion about the framework in general. Yeah, well, thank you, Rob. That was a pretty good tour de force, and there was a lot packed into that little half an hour. So one thing that would be really great: there are a couple of other folks on the call here. I know Austin's here; I'm just going to put you on view. The announcement should come shortly around the Operator Framework and incubation status for the CNCF. The vote went through the day before yesterday, I think. We finally got enough votes to move it over the line, and then a whole bunch more came in, so that's great. Huge shout-out to the TOC for making this happen and working with us; jumping through all the hoops was actually worth the effort, and we're really pleased to be there. And Austin is one of the organizers for the SDK working group, the sub-working group of the Operator Framework. So just say hi, Austin, and if someone wanted to get involved right now, where would you send people?
Yeah, the best place, the central hub for where to get in contact with us, is the community repo in the Operator Framework GitHub, and that will point you to the other places. But just to give you a preview of what you might see: you'll see three working group meetings, the Operator Framework meeting, which is the third Tuesday of the month; the Operator SDK community meeting, which is monthly on the first Wednesday; and the Operator Framework OLM working group, which is every two weeks on Thursday. In addition to that, you can join the Operator Framework Google group or file issues on our GitHub, and I'm sure there'll be some more infrastructure and scaffolding created as we move over to the CNCF and all of the other SIGs and stuff like that. So watch for some transitioning news and information coming out in the upcoming days, and we'll probably have another chat about that sometime online as well. So there are a couple of questions, and I thought the first one was pretty good: all that goodness you talked about in 4.5, Rob, is it coming in 4.5, and if it is, is it tech preview? And I would add, just to clarify, those are things that are in OpenShift; are they also available for other Kubernetes? Sure, yeah, I'll start with that one. So OLM works against any Kube, and so a lot of the features that we talked about... the SDK runs on your laptop for the most part, and so anything cluster-related is going to be OLM, and you can use that against any Kube. So we do some testing on upstream, and of course it's built into OpenShift, and hopefully with the CNCF adoption we'll see that get built into other Kube distros as well. So you get all that kind of no matter where you're running, depending on which version of OLM you have. So in OpenShift 4.5, the new Operator API is not going to be the default API yet, because it's going to be read-only. You can optionally enable a feature flag to use that new API, but it will only be reading. If you want to mutate
those objects you will use the old interacting with a CSV directly or a subscription or install plan and then if you over successive releases will have the two way binding on those objects and then eventually deprecate the old objects but that will happen over a long period this is really a preview for folks to check out that new API we'd love feedback, we'd love any of that that you have so that you can treat it more as a tech preview not a GA yet I think maybe because we've had so many conversations in different working groups about disconnected stuff Walid's got a question about not really a question but if you could elaborate a little bit more on disconnected restricted environment enhancements and how people can overcome any possible constraints on how operators are built in the first place assuming internet access and he's also not quite sure what is that CSV Sure, so the CSV is a cluster service version which is just the metadata around an operator and so it's, you know, the operator is a container so where do I go get that container how does it need to run, does it need a special service account when I do need a service account what RBAC does that need to have and so the CSV holds all that information just about how to run and manage this thing, so that's what that is now for disconnected and restricted environments what you need to do is for every mention of an operator's container or containers that it's going to spawn so we call those the operands so if you have a MySQL operator then you would have operand pods of MySQL running on your cluster so you need to basically tell that operator hey, instead of going to get those from Quay or from Docker Hub or from this online registry, I actually need to go get them from my private disconnected registry over here that's available in my restricted environment and so that's what all the features are about is scaffolding the code of the operator such that swapping out those images, you know, that's going to be the 
same digest and so you're getting the same container but it needs to come from a different place so that's where a lot of the scaffolding for disconnected is happening is moving those container images around inside of the CSV there's a special metadata field for related images and this is all the operand images that this operator is going to stamp out we need to know what those are because you can embed them if you're familiar with Helm charts you'll see that they're basically built as strings and sometimes you'll take in a tag and so if we don't know the exact explicit image we can't mirror them, we can't iterate every single tag that exists on this repo and sometimes there's not even a way to find that out so there's a bunch of scaffolding to help you do that and then that OPM tool will help you so if you say I want to mirror these five operators that actually might translate to 40 or 50 different container images that actually need to make it into your environment and so that tool helps you do that hopefully well, he was pointing out that only a few of the operators support disconnected according to some article he's read here in the chat yeah and so that that represents that those folks need to embrace that little bit of indirection versus like explicitly calling out a specific operator images they need to support folks mirroring it and then referencing that other image sometimes when you have hard coded references you know your hands are kind of tied there that's what that article is most likely about is a listing of which ones are hard coded and which ones aren't I'm wondering if that's something and this is just me thinking out loud so apologies if that's some piece of metadata that we could put on operatorhub.io to show whether something supported disconnected or not exactly that might be yeah so that's something that we're looking to do for both operator hub and inside of OpenShift itself is when you have that related images section filled out we can 
reasonably assume that this operator has been tested in a disconnected environment and so hopefully over time as more operators are embracing this is that we can then you know maybe put that in our certification pipeline is actually checking you know that this does work in a disconnected environment that kind of thing and Ryan was noting also that in a related update from Minikube which all of us know and loved and wish there was a mini-shift that OLM is available as a plug-in now to Minikube that's awesome that's pretty cool yeah and so yeah having OLM accessible to you as a developer as an operator developer is interesting because that's how you can start testing all those upgrade paths yourself so and you know test that somebody can go from a single node of your application to a multi node version of it to maybe test out backup and restores and that they can you know restore to a disaster recovery environment all that stuff can be orchestrated locally as well so that's important yeah so I'm not going to ask you the question I think that everybody wants to know is really is when is 4.5 actually coming out the door and there's like yeah we all get shot for that saying any dates and time but it should be all of what we're talking about here should be available in the future and because it's all in timing and then in 4.6 which is OpenShift 4.6 what are the the key things that we can look to get in 4.6 of those the next next round of things yeah the main things for that are going to be getting that operator API into a reading and write mode and so though we'll kind of have both of those living side by side in 4.6 and then some more extensive dependency management and so this is something that we've been exploring as operators are starting to get more intertwined as people are building products that you know you have your application and you need a database or you need you know like a Kafka for that event stream like we are talking about but you might want to depend 
not just on the latest version of a Kafka operator but maybe a very specific version or a range of versions or it's an optional dependency not a required dependency so bringing all of that type of fidelity to the dependency management is something that we're going to be exploring in 4.6 and beyond you know it's not an overnight thing but you know getting new features in 4.6 and some in 4.7 and some in 4.8 etc so and we have a number of SDKs already are there any we've talked about Python SDKs are there any other SDKs in the works at the moment is there anything that from Austin or others that we're looking at building out because we always talk about Helm and Ansible and Go and now the Kube Builder stuff and that why do we maybe how do we get more languages and more approaches into this mix or are there plans for that? Austin do you want to take this one? So the Helm and the Ansible operators are not they're a language of themselves but they are actually under the hood they are Go operators as well and so you can kind of think of them as more like a Shim operator that allows operators in those respective languages and if you wanted something in Python or any other language it's going to require the creation of controller runtime in that link so we're not there yet it's something that we're still considering and it gets brought up every now and then but you're not actively working on it usually gets brought up by me but that's usually something that comes in from the Python community that's what I was going to say Python is probably one of the ones and if you think about it developers that are building against a stack so if you run if you're a big investment bank or something and you write Python you probably want to build an operator in Python makes sense you know so that's where folks are kind of demanding that wide swath of stuff and there's like a Java framework that's outside of our SDK that people use to build operators in Java and that kind of thing that 
looks like Joe was waving his hand something to add to that and say that if anyone is interested in helping contribute to develop some of those underlying libraries that are so critical to having making it easy to develop the right primitives for operators we're definitely interested in collaborating so if you're out there and you've got a Python operator and you've written a lot of those primitives definitely hit us up and let us know I think probably one of the big hurdles is just having the expertise in those languages and knowing how to basically duplicate the equivalent of a scaffolding tool that can lay down files for a project in your language and then also have the underlying libraries that make it really easy to basically give developers a reconcile method to implement so I think the kind of the first thing that we're looking for is those kind of two building blocks that's what's holding us back really at this point so yeah if anyone's interested in helping us do that by all means let us know I think that part of what my hope is with the the CNCF incubation project that will get more visibility and get more resources and experts to help us build out some of the other pieces that will be helpful to continue to grow the adoption of operators and others. You mentioned Rob also the certified program for operators and I wonder if you could talk a little bit about how people get involved in that or bring their wares to the operator certification that Red Hat has. 
Sure, yes. Those are the two main flavors of operators: we have the community listing on OperatorHub.io, and then we have vendors that are building commercially supported operators, or building operators around their commercially supported software, and they want a way to get the stamp of approval from Red Hat that this software works well and integrates well with OpenShift. That's our certification program. Basically, you can take your same code base and we run it through some testing, and we want to make sure we can jointly support customers, which is a big part of this program. So it's not just technical; we have agreements in place for how we can both support customer tickets and escalations and bugs and things like that in a reasonable time frame. That's all wrapped up into that program. We have a web page, I don't have it off the top of my head, but if you search for "Red Hat Operator Certification" you'll find it, and that's where you can go to get in touch with some of our engineers. We can start looking at your operator, make sure that it works well, get it through our testing pipeline, and do some of the other business side of things to get that certification to happen. It's a well-oiled machine at this point, and across all the categories we talked about we've got storage vendors, we've got all the major databases, we've got a bunch of machine learning stuff and some monitoring services and all kinds of things, so we'd love to have your product listed there as well.
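To make the earlier disconnected-environment discussion a bit more concrete: the related-images metadata Rob described lives in the ClusterServiceVersion under `spec.relatedImages`. The operator name, registry, and digest below are made-up placeholders, but the shape is roughly:

```yaml
# Fragment of a ClusterServiceVersion (CSV); names and digest are illustrative.
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-mysql-operator.v1.0.0
spec:
  # Every operand image this operator can stamp out, pinned by digest so a
  # mirroring tool knows exactly what to copy into a disconnected registry.
  relatedImages:
  - name: mysql
    image: registry.example.com/library/mysql@sha256:0000000000000000000000000000000000000000000000000000000000000000
```

Because each entry is an explicit digest rather than a string built from a tag, tooling can mirror the exact images into a restricted environment and rewrite the references to the private registry.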
From the community side of things, if you go to operatorhub.io, there's a whole set of documentation to step you through, if you have an operator, how to get it in there. And I think the interesting thing about the hubs is that they're really based on a catalog, so you could resurface and rebuild your own operator hub using the same underpinnings, and those are out there in open-source land as well. So for those of you who are standing up your own operator hubs, there is an easy way to do that too, which is interesting to see. I think there are just under 140 community operators on operatorhub.io at the moment, and it's a pretty interesting group of folks who have come in, not quite randomly: a lot of them are done in conjunction, where they do a certified operator and they also put one in the community as well. It's a pretty easy onboarding process both ways, so we hope that if you are building an operator, you will come and also offer something up on operatorhub.io and make it available there.

Let's see: "What is the advice for enterprise workloads, should we stick to 4.3?" I don't know, that sounds like it's beyond the scope of an operator conversation; that's a big conversation.

We can touch on it really quickly if you want. Just understand that in OpenShift we have a new model where we're doing these over-the-air upgrades with OpenShift 4, and that's to keep you up to date with Kubernetes as it's changing, as it's being updated with new features, bug fixes, security enhancements, et cetera. So we encourage folks to upgrade often, as often as we release; getting from 4.3 to 4.4 to 4.5 is really important for staying on top of the security of your cluster, and remember, the version of the operating system is all being managed with that version as well. The guarantee in all of our API versioning is that we will not break your workloads, and that includes over the upgrade: you should not experience downtime for your applications, and if you do, that's a bug and we want to fix it. We have these tools in place to do these over-the-air upgrades so that when you're managing a fleet of clusters, it's not any harder to manage 100 clusters than it is one, so hopefully you don't have too many excuses not to go from version to version to version. You can also skip versions if you want and migrate your applications as well, but for all of the workloads we have, we're guaranteeing not to break compatibility with the new API version, so the group, version, and kind.

Looks like Walid wants to be unmuted. Yes, I've done that, so Walid, speak up, but you may have to unmute yourself.

Hi. I am asking this question because, for example, when you talk about vendor operators like Portworx, or NSX from VMware, they are certified on a certain release, and most of them certify on 4.3, and some Red Hat consultants give us the advice to stay on the stable 4.3, the latest certified GA, and not move to 4.4. Actually, that's what I'm trying to do today: I'm trying to downgrade to 4.3. But I'm wondering if this is good, trusted advice, or whether, as you say, we should move forward, but then I would be missing out on the quality assurance that the vendor testing does. I know, for example, VMware hasn't tested 4.4 with their NSX operator. So what do you think, Rob?
So, I mean, always, if Portworx or VMware NSX or whatever is a hard dependency for you, of course listen to the guidance of where that's supported. But I know we're constantly talking to both of those vendors about getting them testing the latest versions as soon as possible and certifying those on their side as fast as possible, because we want everybody to move forward together, so that we stay up to date with all the work going on in Kube, the feature set going on in OpenShift, and then the features inside of their product as well, and keep all of that moving in quick succession. Sometimes this is a new model for some of those vendors, so we're working with them, and I would encourage you to continue working with them as well and try to get them into this more cloud-like experience of consuming software, versus a set-it-and-forget-it kind of thing.

Just to point out: the more you ask for it, the more they'll listen. Yeah, and I also think everybody recognizes the complexity of the ecosystem we're in at the moment: the interdependencies of Kubernetes, what operators bring to the table, OpenShift release cycles. This is a very complex world, and we're all in conversations constantly with the partners and with the upstream folks. I'm not going to keep going back to it, but that's part of the wonderful thing about being included in the CNCF incubation process: hopefully we'll get more visibility into what the other pieces and parts of the cloud-native ecosystem look like, and can have more conversations with those other folks, like with Kubebuilder and, was it kuttl? There are a lot of these conversations that have already been going on with Helm 3 and lots of other folks in the background, but that's just one of the complexities of such a big ecosystem: managing all those relationships. The more we can do it out in the open, transparently, with lots of sunlight and lots of eyeballs on it, the easier it'll be. And, Walid, Portworx and the other folks you mentioned are all in this community with us together, and we're all learning together, so it's about getting alignment on release cycles and feature sets; the more we talk about it, the better off we all are. So yeah, touch base with your operator vendor partners; I know the ones you mentioned are definitely in conversations with us quite a bit, so hopefully we can get there.

I'm wondering if there are any more questions. Let's see, there are a couple more chat things coming in. Dan is asking: is there a roadmap for the Red Hat operators that will be released to OperatorHub, i.e., will there be Advanced Cluster Management? Well, that was yesterday's topic, but we'll let Rob answer that one.

I don't know if there's a specific roadmap; it's kind of up to each team. I know that the ACM, Advanced Cluster Management, team does have an open-source version, as is classic with Red Hat. I don't know if they package that as an operator; since you're asking the question, I'm going to go ahead and assume no. So I can't comment on what that team is going to do there, but we want that team to have ownership over the packaging and management of that operator, just like we want for any of the other open-source projects. I can't comment on their roadmap, but I hope they list it; I would love to see that.

Yeah, that would be great, and I know we had an ACM talk yesterday and we'll have a few more talks, so you'll have opportunities to nudge them in that direction as well. Stay tuned, and let us know: are there other operators that people on this call are missing or looking for, besides ACM? I think ACM is top of mind for everybody. Yes, there is one: someone's asking about the ACM recording from yesterday, which has not been uploaded to YouTube; if you look on the Twitch stream, the raw one is available on Twitch. Unfortunately, for the ACM one yesterday, the demo went south, the demo gods were not with them, so we will be redoing that, and there's another one next week. Then, since the most important one to them is the Open Data Hub one: is there an operator for Open Data Hub yet? I thought maybe there was; I don't think so yet. I don't know that we've done anything Commons-wise with Open Data Hub; I looked earlier and I couldn't find anything. Diane, did you do one earlier? We've done lots of briefings. I don't think there's an operator for Open Data Hub; Open Data Hub is a thing, and there was a great demo of it. Actually, this is a good segue: we're having an AMA in a couple of weeks, on a Monday if you look at the calendar, with the Open Data Hub team at noon Eastern, 9 a.m. Pacific, and I'll look up the calendar in a second, but you can ask them that. Open Data Hub per se is not something that would be a single operator, in my thinking of the way we look at OperatorHub; Open Data Hub has lots of components to it, and it's more of a reference architecture than a single operator. But that's where we're at with that.

So maybe, Rob, if you could take a moment and, on your screen share, go to the GitHub repo, so that we end with where to go for everything, and people see, as the final thing, where they can log an issue and find more information. And there we go. So this is the GitHub repo; the GitHub repo and the Google Groups are really it, and if you go to the community page under here, I think it should be slash community, that's really where you can find out how to get hold of us and participate in this. As I said at the very beginning, this too will change: there will probably be a little cleanup and migration. We've done a lot of work getting ready for the CNCF donation and being incubated by the CNCF; no drastic changes, but you should see some other pieces and parts of this pop up in the coming weeks or so.

What I wanted to do was just ask: Rob, do you have any final words you want to add here? I'll just say that we're really excited about the CNCF news, and we're excited to build an even stronger community than we already have through that organization. We'd love to hear from you about how you're building operators; we want certified operators and community operators being listed; we just want to make this as big and as powerful as possible and meet everybody's needs. So please get involved, find us at conferences, whatever it is; we'd love to hear from you.

Yeah, that's the other thing: we will be around, especially for the CNCF KubeCon that's coming up shortly, in a few weeks, so look for us there. We should have a bit of a splash there with the announcement, and a lot of us will be online in the chats for the many operator-related conversations and talks, so you can definitely find us there, and we're looking for you and your participation in this community. So thanks again, everybody, for joining us today; we'll be there for you, and hopefully you'll join us in this adventure. Thank you, everyone.