Hi, I'm Karina Angel, and I'm here with Andre Tost from IBM. I'm a product manager on the OpenShift team, and Andre is here as CTO of IBM Cloud Paks. I'm really excited. We're here to talk about creating custom operator catalogs for OpenShift. We'll talk briefly about why you'd even want to create a custom operator catalog, then walk through the steps to create one, and then do a deeper dive into a specific example: the custom operator catalog IBM has built for its Cloud Paks.

So why would you want to create a custom operator catalog? By now, most people recognize the benefits of operators in OpenShift, and in Kubernetes in general. And now they're also recognizing the value of creating custom operators and putting those operators into catalogs built for their needs. Some of the use cases we're seeing with customers right now: first, customers such as banks and other large enterprises are creating catalogs tailored to their internal requirements and their internal teams. We're also seeing commercial and government organizations recognize the value of custom operator catalogs as shared repositories for different verticals, such as government, energy, and manufacturing. And then we have partners building solutions on top of OpenShift, and to create more value for their customers, they're building catalogs for those solutions. Andre, can you talk more about that?

Yeah, sure. Thanks, Karina. We probably fall into all three of these categories to some degree. With Cloud Paks, we have a set of capabilities that we want to expose as independently manageable and deployable pieces. And we work with plenty of customers who run their clusters in a disconnected mode, so they can't pull anything directly from the internet.
So we needed a way to package our capabilities in an installable fashion that we can bring to those clusters. Also, since there are tons of operators already out there, we wanted to carve out our own space for ours, and an installable, separate catalog helps us do exactly that. Later in this presentation I'll show you what that looks like in a demo. Back to you, Karina.

Thanks, Andre. Let's walk through the steps to create a custom operator catalog. By now, you've probably been creating your own operators. To show you the workflow, and this is simplified: you start with your custom operators, then you create a bundle, then a Dockerfile, and you work all the way through until you finally attach your catalog to OpenShift. Don't worry, I'm going to walk through each of these steps.

First, let's create a bundle image. If the commands don't look the same as what you're used to, that's because the Operator SDK is constantly evolving: there's a new operator-registry, and the SDK has added all kinds of enhancements. So if something doesn't look familiar, bear with me. Make sure you have the Operator SDK installed on your system, along with everything you usually need to build your operators. Next, we generate a Dockerfile for that bundle and build it into a bundle image. Then we push the bundle image into a registry. For this example, I created a repository on quay.io. You don't have to use quay.io; you can use Docker Hub or your favorite registry. We like quay.io; we use it internally for many things. Pushing the bundle image into a registry is what lets the index pick it up later. So now, let's talk about creating your index image.
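As a rough sketch of those bundle steps on the command line (the project, image names, and quay.io organization are placeholders, and the exact targets depend on your Operator SDK version):

```shell
# In the operator project: generate the bundle manifests, metadata,
# and a bundle.Dockerfile (Operator SDK scaffolding provides this target).
make bundle IMG=quay.io/example-org/my-operator:v0.1.0

# Build a bundle image from the generated bundle.Dockerfile.
docker build -f bundle.Dockerfile -t quay.io/example-org/my-operator-bundle:v0.1.0 .

# Push it so an index image can later reference it.
docker push quay.io/example-org/my-operator-bundle:v0.1.0
```

These commands assume the Operator SDK and a container tool are installed and that you're logged in to the registry, so treat them as a sketch rather than a copy-paste recipe.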
You have a couple of options. You can use opm, which is great if you have one or a few operators, or you can use IIB, the index image builder, which is what we use internally at Red Hat. IIB lets you build out as many test and production indices as you need, so it feeds right into your CI/CD pipelines. It also serializes your index operations, so you're not creating race conditions; you want those operations done one at a time, and IIB takes care of that for you. So for anything that needs to scale, if you want your catalog to scale and your pipelines to stay simple, we recommend the index image builder. That said, opm is still there to create those images for you as well.

Then you attach your index image to OpenShift. There are several ways to do this; one of the most popular is to configure a catalog source and apply it directly to your OpenShift cluster. And now, Andre, can you show us that deeper dive into IBM Cloud Paks, so we can see what it looks like before you have the catalog and once you do?

OK, yes. Let me share my screen here. I'm going to show what the final result looks like once the catalog is built. What we do internally, and I'll come back to this after the demo, is that we have multiple separate teams building their operators and their bundles, and then a centralized process where we bring those bundles together, build the index, build the catalog source, and publish from there. So there's a process behind it; I have a picture that shows it. Once the catalog is out there, let me show you what that looks like. This is a regular, freshly deployed OpenShift cluster.
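For the opm route Karina mentions, building and pushing an index for a single bundle might look like this (image names are hypothetical, and opm comes from the operator-registry project):

```shell
# Build an index (catalog) image that includes the bundle.
opm index add \
  --bundles quay.io/example-org/my-operator-bundle:v0.1.0 \
  --tag quay.io/example-org/my-catalog-index:v1.0.0 \
  --container-tool docker

# Push the index image so a cluster can pull it.
docker push quay.io/example-org/my-catalog-index:v1.0.0
```

Subsequent releases are added to the same index by passing the existing index via `--from-index`, which is how the catalog accumulates versions over time.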
And you can see right away, one thing I sometimes point out that speaks for having a custom operator catalog: there are 402 items in OperatorHub here. That's 400 tiles, which makes things hard to find, even with the filters. Like I said earlier, we'd like our own collection where we can show our operators and our tiles. The way that works is just like Karina showed, and I'm going to cheat here: we actually have two catalog sources, so let me paste them in. This first one is a set of common services, common capabilities that we share across many of our offerings, so we group them into their own catalog. You can see there's a poll interval, so every 45 minutes it goes and refreshes the catalog. And that's really all I need to do: I click Create, it creates the catalog source, and I'm done. Let me go back and create the second one as well, just copy and paste it in here. There it is; it's basically the same. We're pulling this one, by the way, from Docker Hub; we have our own organization there, and that's where our operator catalog lives. I click Create, and that's pretty much it.

If we go back to the overview tab, we can see that it's now pulling the images and doing its thing. The easiest way to see whether it actually worked is to go back to OperatorHub, and wouldn't you know, the first category out here is the first catalog I created. You can see these operators are for auditing, a cert manager, some support for Helm, those kinds of things. Let me refresh; by now the second catalog should be there, and there it is, with 19 operators in it right now. Again, this makes it possible for us to group things in a nice way, showing only what we want to show. I don't want to go into all the details, so I'll just show one example. We have something called the Cloud Pak for Integration, and we have an operator for it.
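A catalog source like the ones Andre pastes in is a small CatalogSource resource; here's a sketch with hypothetical names, including the 45-minute registry poll he points out:

```shell
# Register an index image as a CatalogSource in the cluster
# (requires oc and sufficient rights; names and image are hypothetical).
oc apply -f - <<'EOF'
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: example-catalog
  namespace: openshift-marketplace
spec:
  displayName: Example Operator Catalog
  sourceType: grpc
  image: docker.io/example-org/example-catalog-index:latest
  updateStrategy:
    registryPoll:
      interval: 45m   # re-check the registry for a new index every 45 minutes
EOF
```

Once applied, the catalog's operators show up as their own section in OperatorHub, exactly as in the demo.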
We can now install that operator. We put it into a particular namespace, click Install, and it sits there and does its thing. What makes this interesting is that our operators all have plenty of dependencies on other operators that get pulled in as well. And you can see one of the advantages here: I'm not doing anything right now, yet it's pulling in all those dependent operators and making sure they're installed too. So there's a big benefit to the way you can define these dependencies in the CSV and have them represented in the catalog accordingly. One interesting piece is that alongside these dependent operators we also have the common ones, from the catalog I showed first, and they go into their own namespace. So this automatic dependency resolution and deployment works across namespaces; it doesn't all have to live in one namespace. If we waited, a whole bunch more would pop up here; it takes a while for the dependencies to get resolved and the operators to be pulled down and installed. This all happens automatically in the background.

Let me go back to the earlier ones in this namespace here, just to show an example where I can now say: here's one of those operators I want to create an instance for. This is a capability we have called the Platform Navigator. I go in, adjust some settings, hit Create, and that actually deploys the workload. That takes a while, because it needs to deploy a whole bunch of other workloads that are dependencies, since of course we reflect the dependency chain we have in the operators in the workloads as well. So instead, let me show you another cluster I stood up where I have all of this in place.
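The dependency pull-in Andre demonstrates is driven by dependency declarations shipped with the operator; in the bundle format this is a `dependencies.yaml` file in the bundle's metadata (required APIs listed in the CSV work similarly). A hypothetical sketch:

```shell
# Declare operator-to-operator dependencies in the bundle metadata;
# OLM resolves and installs these when the operator itself is installed.
# Package and API names below are made up for illustration.
mkdir -p bundle/metadata
cat > bundle/metadata/dependencies.yaml <<'EOF'
dependencies:
  # Require another operator package within a version range.
  - type: olm.package
    value:
      packageName: example-common-services
      version: ">=1.2.0 <2.0.0"
  # Require whichever operator provides this API (group/version/kind).
  - type: olm.gvk
    value:
      group: cert-manager.io
      kind: Certificate
      version: v1
EOF
```

OLM picks the providers out of the available catalogs at install time, which is what makes the "I'm not doing anything and it's installing five operators" effect possible.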
And so you see I have that same Platform Navigator operator here. In this case, I've already deployed an instance of it, it shows that it's all ready to go, and I can even see the resources it manages and so forth. One thing we did is also put a console link in here. This is something we've struggled with a lot in the past: say you have an automated deployment of something out of some catalog, but then how do you know it's done? How do you know where to go, for example to some UI that lets you work with it? In this case, I just need to go here, and here's our Cloud Pak, and I can do my thing with it.

That's all I wanted to show in the demo. Like I said, this all comes together in a central process. Oh, let me show you one more thing before I forget. We looked earlier at how I can simply create the catalog sources. We also have a Helm chart in the developer catalog; let me click on Helm. That's not in here, that's interesting... oh, there it is. We have a Helm chart that does the equivalent of creating the two catalog sources I showed earlier. So that's even more fully automated, and then with these operators we're good to go. Let me stop sharing and go back to the slides.

There are a couple of things we wanted to talk about on top of what we just showed. This is the flow; we've been doing this for a while. We're an active member of the Operator Framework and SDK communities, because new things keep emerging, and that exchange with the people writing the code has been very helpful for seeing what's coming and how we could utilize it. I'm not going to go through the details of this slide.
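The console link Andre shows can be created with OpenShift's ConsoleLink custom resource; the name, URL, and menu section here are hypothetical:

```shell
# Add an entry to the web console's application menu that points
# at the deployed capability's UI (requires oc against a cluster).
oc apply -f - <<'EOF'
apiVersion: console.openshift.io/v1
kind: ConsoleLink
metadata:
  name: example-platform-navigator
spec:
  text: Example Platform Navigator
  href: https://navigator.apps.example.com
  location: ApplicationMenu    # other options: HelpMenu, UserMenu, NamespaceDashboard
  applicationMenu:
    section: Example Cloud Pak
EOF
```

An operator can create this resource itself once its workload is up, which answers the "how do you know where to go when it's done" problem from the talk.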
I just wanted to bring it up to show that in any kind of scalable environment you need a CI/CD process for this. If you go back to what Karina showed earlier: you build bundles, you build the index, and so forth. Obviously all of that needs to be automated, because you want to do it on a continuous basis. So we have a whole elaborate process developed for this, and this diagram tries to represent it. Like I said, it's a distributed organization: individual teams, or developers if you will, write their operators and are responsible for them, they build the bundles, and they ship them to us. We have a central process where we package it all together, then upload the result, in our case to Docker Hub, and that leads to the catalog source we saw earlier. So one recommendation I would always make: treat the building of the catalog as a first-class citizen in your development process. That's really what it comes down to. You want a defined CI/CD process, with all the automation you need, that your teams plug into.

Another point I wanted to make is about versions. That is something we struggle with; it requires a great deal of discipline. We're building commercial offerings, but I would assume that even if you're building in-house applications, for example, and operators for them, you still want a strategy for how you version things and how you handle updates.
This slide is just an extract from an internal rule book, if you will, that we've defined and shipped to all the development teams, basically saying: here's how we do versioning, here's how we do upgrades. Obviously the first step is the initial install, like what I just showed, but then the question is: what happens when a new version comes along, and how do we reflect that in the respective OLM data? So again, the recommendation is to have a versioning strategy. You want to define how you version not just your operators but also your operands, as we call them, the target workloads. In our case, we tie them fairly closely together so that we can use the versioning mechanisms in OLM to effectively version our software.

Then here's the last thing, something we built. What actually triggered this whole conversation was the arrival of OpenShift 4.6, which is an EUS release, so it's going to stick around much longer, including with our customers, and we want to reflect what that means for us. Maybe this is special to a commercial offering like ours, but we need to have, on the one hand, a long-supported channel and branch, and at the same time the ability to ship additional new versions that may have a shorter support window. In that respect, we're almost mirroring what OpenShift does: 4.6 sticks around for something like 18 months, so we need versions of our software supported for that long, while still being able to ship new versions on top.
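The long-support-plus-fast-moving setup Andre describes maps onto OLM update channels: a bundle declares which channels it belongs to in its annotations, and the CSV's `replaces` and `olm.skipRange` define the upgrade graph within a channel. A hypothetical sketch of the relevant pieces:

```shell
# A bundle that ships into both a long-lived EUS-style channel and a
# faster-moving default channel (all names are made up for illustration).
mkdir -p bundle/metadata
cat > bundle/metadata/annotations.yaml <<'EOF'
annotations:
  operators.operatorframework.io.bundle.package.v1: example-operator
  operators.operatorframework.io.bundle.channels.v1: v1.0-eus,v1
  operators.operatorframework.io.bundle.channel.default.v1: v1
EOF

# Excerpt of the matching CSV: the version, the direct upgrade edge,
# and a skipRange so older patch releases can jump straight to this one.
cat > csv-excerpt.yaml <<'EOF'
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v1.0.3
  annotations:
    olm.skipRange: ">=1.0.0 <1.0.3"
spec:
  version: 1.0.3
  replaces: example-operator.v1.0.2
EOF
```

Keeping the slow channel on patch-only updates while the default channel takes new minor versions is one way to mirror an EUS-style support window in OLM.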
So there again, in our experience, once we got past the initial question of how to build a catalog source and put it in the cluster, which is not that hard, most of our time now goes into topics like these: how do we automate this, how do we version things, how do we make sure we can support it, again coming back to being a commercial offering. And I would assume that most of you out there, as you build your operators, will hit very similar problems. So with that, that was all we had to show. Karina, do you want to wrap it up?

Thank you, Andre. It's been really great working with partners like IBM, and Andre in particular and all his teams. As he mentioned, they're involved in the upstream communities, and they've been adding a lot of features to the OpenShift Container Platform, like a lot of our other partners and customers. So please join us at operatorframework.io, where you'll find all kinds of information, as well as the index image builder; if you want to collaborate on that, it's at github.com/release-engineering/iib. So that's great: start creating your custom catalogs, and we'd love to see them. Thank you.