First contributor. I'm also one of the steering committee members for the Operator Framework project as a whole, and I'm a maintainer on Operator SDK specifically. And then, do you want to go ahead and introduce yourself?

Yeah, hi. I'm Varsha. I've been contributing to Operator SDK and OLM for the past few years. So yeah, that's about me. I work for Red Hat.

Okay, so, show of hands: who here has used one of these things? Have you ever written an operator? Have you ever used any piece of the Operator Framework, the SDK, OPM, OLM? Okay. I'm always surprised that people actually use our stuff.

So for those of you who maybe aren't familiar (I did see one or two people who didn't raise their hands): what is an operator? An operator is a design pattern for deploying software on top of Kubernetes. It's a way to structure a program to run on top of Kubernetes so that, rather than statically deploying your application, you sort of teach Kubernetes how to fish. You're probably familiar with the behavior of some things that are very close to operators if you've ever used Kubernetes, like a pod controller. If you've ever kubectl-created a pod, the pod controller is the process that sees the line you wrote into etcd when you executed that command and actually goes and makes it happen: it makes a Docker container exist somewhere. An operator, then, is just reproducing that design pattern with your own thing instead of a core Kubernetes resource type. So you're going to create an operator for your thing.
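That desired-versus-actual loop can be sketched in a few lines of Go. This is only a toy illustration of the pattern, not the real controller-runtime API: `DesiredState`, `ActualState`, and `Reconcile` are invented names, and a real controller watches the API server and acts on the cluster rather than returning a string.

```go
package main

import "fmt"

// DesiredState is what the user declared (think: a spec written to etcd);
// ActualState is what currently exists on the cluster. Illustrative only.
type DesiredState struct{ Replicas int }
type ActualState struct{ Replicas int }

// Reconcile is the heart of the pattern: compare desired with actual
// and decide what action would converge them.
func Reconcile(desired DesiredState, actual ActualState) string {
	switch {
	case actual.Replicas < desired.Replicas:
		return fmt.Sprintf("create %d replica(s)", desired.Replicas-actual.Replicas)
	case actual.Replicas > desired.Replicas:
		return fmt.Sprintf("delete %d replica(s)", actual.Replicas-desired.Replicas)
	default:
		return "nothing to do"
	}
}

func main() {
	fmt.Println(Reconcile(DesiredState{Replicas: 3}, ActualState{Replicas: 1}))
}
```

The point is the shape: the controller never records a one-off "what to do"; it repeatedly compares the declared state against reality and closes the gap.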
You're going to write a CRD that represents its state, and you're going to make a controller that reconciles that state with the actual state on the cluster. And Operator Framework is a collection of tools for making this easier. We have a variety of tools for scaffolding, deploying, managing, upgrading, distributing, and testing these applications, and all kinds of other stuff. Here's a neat graphic with some of those things: Operator SDK, like I said, is a tool for scaffolding them; OPM is a tool for registering them so they can be distributed; and then OLM is like a package manager.

So we'd like to start with sort of an announcement. We've had this cooking in the works for a couple of months now, but Java Operator SDK is officially joining the project as our third federated sub-project. For those of you not familiar with Java Operator SDK, they're a team that's been working on a plugin for Operator SDK to generate Java operators, as well as the underlying framework those operators use. So it's a new architecture: just like we previously had Go, Ansible, and Helm operators, now you can write them in Java. And they also provide the equivalent of controller-runtime, the framework that allows you to interact with the Kube API from within the operator itself.

All right, and this is just what I described: they write the framework, they write the plugin. We're hoping this is going to create closer collaboration between them and the Operator SDK team in particular, and we're also hoping to garner more contribution from the community, because we know a lot of application developers are maybe not that familiar with Go if they haven't worked with Kubernetes before, and Java is a very large, very popular enterprise language. So we're hoping this all will drive some more contribution to the project, or at least some more use.

So, going down the list of things that we support: build.
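The "CRD that represents state" recipe above usually boils down to a spec (what the user wants) and a status (what the controller has observed). Here is a stripped-down sketch of the Go types behind a hypothetical `Memcached` resource; a real operator-sdk project would embed the apimachinery metadata types and generate the CRD schema from these structs, so treat every name here as illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// MemcachedSpec is the user's declared intent for this hypothetical resource.
type MemcachedSpec struct {
	Size int `json:"size"` // desired number of instances
}

// MemcachedStatus is what the controller last observed on the cluster.
type MemcachedStatus struct {
	ReadyReplicas int `json:"readyReplicas"`
}

// Memcached ties spec and status together; a real custom resource would
// also carry apiVersion, kind, and metadata via apimachinery types.
type Memcached struct {
	Spec   MemcachedSpec   `json:"spec"`
	Status MemcachedStatus `json:"status"`
}

// ToJSON renders the resource roughly as it would appear on the wire.
func ToJSON(m Memcached) string {
	out, _ := json.Marshal(m)
	return string(out)
}

func main() {
	fmt.Println(ToJSON(Memcached{Spec: MemcachedSpec{Size: 3}}))
}
```

The controller's whole job is then to drive `Status` toward `Spec`.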
So, what's been going on in Operator SDK? We've been reworking the internal architecture of Operator SDK and Kubebuilder, which we're sort of built on top of. Rather than importing Kubebuilder at the software-library level, we're going to try to import it at the binary level, to make things work more like Git plugins do, if you're familiar with those. The eventual goal of this is to make third-party plugins for Operator SDK possible. So rather than having Java Operator SDK compiled into Operator SDK itself, some third-party author would be able to write a plugin, like a Java library or whatever, and you'd be able to go to their website and download it without having to bug us. We're currently in the process of implementing that in Kubebuilder, and redoing things on the Operator SDK end to use it. We're hoping to dog-food this approach by turning our own internal plugins, the ones we use today for Go, Helm, Ansible, and now Java, into plugins of this kind, so that theoretically anybody's third-party plugin would be just as much of a first-class citizen as the core architectures we support. And hopefully this will again drive more contribution, because people aren't always familiar with the languages we do support, and they'd be able to roll their own if they want. We're actually already aware of at least a couple of projects, one for Python and one, I think, for TypeScript, that might be interested in joining this once it's possible. So that'll be cool.

Here are some quick links if you're interested in any of the things I just talked about. You can download the slides and follow these links to look at some of the stuff we've been working on. And with that, I'm going to pass it off to Varsha to talk about OLM.

Now that we've heard about how to build operators, let's look into how
to manage them. So, Operator Framework has come up with a solution called Operator Lifecycle Manager. What does OLM do? OLM helps us install, uninstall, and manage the entire lifecycle of an operator on a cluster.

So what's new in OLM? We are moving to a new set of APIs and introducing a new version known as OLM v1. But the question is: why are we doing this, when OLM has been around in the community for a very long time? The reason is that the existing APIs OLM developed were not well isolated and each fulfilled a lot of purposes, which is not something we wanted. Because of this, anyone new to OLM who started onboarding or using the product found it all the more difficult, due to the whole entangled set of complexities that are currently in there. Also, OLM was designed a long time ago, at a time when CRDs were still in beta, so a lot of our design decisions are no longer relevant to current scenarios. That's why it made sense for us to develop a brand-new architecture based on the learnings we had from OLM v0, and to introduce a new set of APIs.

Now, what are those new APIs, and what can you expect from them? OLM v1 will consist of a focused set of scoped components; to quickly go over them, we have four. One is the Operator API, which is brought to us through operator-controller. This is going to be the user-facing API, where the user can declare that they want to install package X at version x.y.z from channel A, and the Operator API will do it for them. Underneath the Operator API we'll have rukpak. The rukpak component can be called a custom installer: its job is to deploy manifests. We'll talk a little more about rukpak in the coming slides. And then we have this interesting project known as catalogd.
It's the package server: in very simple terms, I would call it a library that contains the bundles and contents of multiple operators. The fourth one is a very interesting component known as deppy. It resolves dependencies. The operator world is not always easy: you have a set of complex dependencies, in the sense that operator A may depend on operator B and operator C, and those may in turn have a lot of other dependencies of their own. So deppy is a SAT solver that is going to help us with those.

Now, to quickly go over rukpak. Rukpak, as I said before, can be considered a custom installer. In simple terms, it's a custom controller, a reconciler. What it does is reach out to the data store that contains all the operator contents, which are known as bundles in the OLM world, and pick them up. It has this component called a provisioner, which is nothing but a reconciler, and it's going to install them on the cluster. So we have three different APIs defined here. One is the Bundle CRD: it defines the manifests and operator contents that need to be present on the cluster. The second is the BundleDeployment CRD: the BundleDeployment in turn points to the Bundle that needs to be installed at that particular instant in time. And the third one I talked about is the provisioners. Provisioners have this interesting property: right now in rukpak we have a Helm provisioner and a core plain-bundle provisioner, and these provisioners can be swapped. You could build your own reconciler that knows your own bundle format and can fetch content from the data source and install it on the cluster.
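The "bring your own provisioner" idea just described is essentially an interface: a class name plus an install hook. Here is a minimal Go sketch of that contract; the type names, the `example.io/plain` class string, and the string-slice "manifests" are all invented for illustration and are not rukpak's actual Go API.

```go
package main

import "fmt"

// Bundle stands in for rukpak's Bundle: a named blob of operator manifests.
type Bundle struct {
	Name      string
	Manifests []string
}

// Provisioner is the swappable piece: anything that knows how to unpack
// a bundle format and apply it to a cluster can implement it.
type Provisioner interface {
	ProvisionerClassName() string
	Install(b Bundle) error
}

// PlainProvisioner mimics a "plain manifests" provisioner: it would apply
// each manifest as-is. Here it only records what it would have applied.
type PlainProvisioner struct{ Applied []string }

func (p *PlainProvisioner) ProvisionerClassName() string { return "example.io/plain" }
func (p *PlainProvisioner) Install(b Bundle) error {
	p.Applied = append(p.Applied, b.Manifests...)
	return nil
}

// installWith dispatches a bundle to whichever provisioner matches its
// class name — the "plug your own provisioner into rukpak" idea.
func installWith(provs map[string]Provisioner, class string, b Bundle) error {
	p, ok := provs[class]
	if !ok {
		return fmt.Errorf("no provisioner registered for class %q", class)
	}
	return p.Install(b)
}

func main() {
	plain := &PlainProvisioner{}
	provs := map[string]Provisioner{plain.ProvisionerClassName(): plain}
	_ = installWith(provs, "example.io/plain",
		Bundle{Name: "my-bundle", Manifests: []string{"crd.yaml", "deployment.yaml"}})
	fmt.Println(plain.Applied)
}
```

Swapping in a custom bundle format then means registering one more entry in the map, with no change to the dispatch logic.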
So this brings us to a replaceable-component kind of idea, where you could bring your own provisioner, plug it into rukpak, and use it with OLM. This is a very quick example of how the CRs would look. On the left side we have the BundleDeployment, and the provisioner class name here is core.rukpak.io/plain, which is an existing provisioner already built into the repository; as I said, this could be changed. It refers to a Bundle, "my-bundle", and the Bundle in turn contains the manifests defined on the right side.

The next component is catalogd. Just to mention, it's at a very early stage of development, so we are really looking for input on this side. Catalogd is basically a repository, or in simple terms a library, that contains the operator content: it defines the packages and bundles that operators require. The current design includes an aggregated API server and persistent storage, which is etcd, to serve the FBC (file-based catalog) content. We are working toward making these cluster-scoped, but we are open to suggestions on how you want to go ahead with it. It will expose two kinds of APIs: the package-level API, which exposes package-level data, and the bundle-level contents.

The next component is deppy. Deppy, as I said, is a SAT solver. Operators are not easy. What it does is take in constraints, along with the bundles available to the cluster; it resolves the dependencies and tells us whether an operator is installable given its dependencies A, B, and C. It also tells us whether the operator is installable without breaking any other existing operator on the cluster. So, a little bit more about how deppy would work.
For example, as a user I would say that I want to install package A from channel alpha, at version x.y.z. What deppy does is convert my requirements into variables, which consist of the required packages to be installed. Now, things are not easy: my operator X may depend on operator Y and operator Z, and the contents of Y and Z again need to be available in the data source, which is the catalog source here. Deppy checks whether that content is available in the data source, pulls in the dependencies, recursively pulls in the other dependencies the operator may depend on, and then applies a set of constraints. Constraints could be something like: out of the available possible solutions, I want one with GVK uniqueness and package uniqueness, in the sense that I don't want two bundles serving the same GVK, and I don't want two bundles that belong to the same package. Given all these constraints, deppy solves and gives us a solution set. The solution set tells us whether a particular package is installable, what its requirements are, and whether it would break the other contents on the cluster. Once we have the solution set, rukpak will in turn go ahead and install the contents on the cluster.

The fourth part is operator-controller, which is the user-facing API. It's as simple as creating a CR; in the CR you define the package name, the version, and probably the channel. This is still in progress. When the Operator CR is created, the reconciler for operator-controller kicks in: it resolves the dependencies using deppy, gets the bundle contents from catalogd, and installs the contents on the cluster using rukpak. That's how the whole system works together. So yeah, this is the architecture we had in mind, but we are definitely open to suggestions.
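What deppy does, minus the actual SAT solving, can be illustrated with a toy resolver: recursively pull in dependencies from a catalog, fail if something is missing, and keep at most one entry per package as a crude stand-in for the package-uniqueness constraint. A real resolver also weighs versions, channels, and GVK uniqueness; everything here is simplified and the names are invented.

```go
package main

import "fmt"

// Catalog maps a package name to the packages it depends on.
// A real catalog entry carries versions, channels, and bundle contents.
type Catalog map[string][]string

// Resolve recursively collects pkg and all of its transitive dependencies
// into solution, returning an error if anything is missing from the catalog.
// The "already selected" check keeps one entry per package (uniqueness).
func Resolve(cat Catalog, pkg string, solution map[string]bool) error {
	if solution[pkg] {
		return nil // already in the solution set
	}
	deps, ok := cat[pkg]
	if !ok {
		return fmt.Errorf("package %q not found in catalog", pkg)
	}
	solution[pkg] = true
	for _, d := range deps {
		if err := Resolve(cat, d, solution); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	cat := Catalog{
		"a": {"b", "c"}, // a depends on b and c
		"b": {"c"},      // b also depends on c
		"c": {},
	}
	solution := map[string]bool{}
	if err := Resolve(cat, "a", solution); err != nil {
		fmt.Println("not installable:", err)
		return
	}
	fmt.Println("install set size:", len(solution))
}
```

In the real system this solution set, not the user's single request, is what gets handed to rukpak for installation.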
We do have milestone one and milestone two out, which are very basic demonstrations of what we are looking forward to and what we envision OLM to be. We definitely welcome you all to try out the demo, to try out the things we have developed, and to give us input on whether you like it, whether you don't like it, or whether it's something that breaks your existing scenarios. We have GitHub repositories for each of these components, and we are pretty active on olm-dev, where our community meetings happen. So please do participate. And the last slide is for providing feedback on this talk.

Any questions? We'll run to you with a mic so you can ask.

Thanks for the overview. If there's somebody out there who's equally comfortable with Java and Go, or who is trying to decide, are there any highlights or features of the Java side that might be interesting or different? Or any other context or comparison you could share?

Unfortunately no, because I haven't worked on the Java stuff; I'm not aware of any specific features. From what I've heard, their framework is pretty much just a straight re-implementation of controller-runtime, so theoretically everything that's possible with that should be possible in Java. Whatever you want to do.

Unit tests for what? Unit tests for the operator itself? Again, I'm not entirely clear what you mean. Do you mean, would you be able to write unit tests for the operator itself in Java? Okay. Yes, you'd be able to write unit tests in Java if you wanted to, and you could use whatever Java frameworks you like; I know there are a bunch out there. I mean, if you're talking about unit tests for the operator itself...
Yes, you can write unit tests in pretty much any language, as long as we're talking about unit tests for the operator itself, as opposed to tests against the deployed operator.

There are unit tests available in Java right now, but I don't know if that answers your question. In terms of e2e tests, the Java project is still working to port the whole e2e testing framework from Go to Java, so that's still in progress. But basic unit tests are available in the repository. Anybody else?

Hi, do you have any suggestions on how to end-to-end test an operator? Like, deploy it from a pipeline to a kind cluster or something like that, and then run some tests?

So, we have a tool called scorecard that already exists and allows you to run integration tests against a deployed operator. It comes baked in... well, not baked in, you have to bake it yourself, but it comes pre-written with a bunch of tests that are basic, reflexive exercises of the various API endpoints of the deployed operator. And then it has functionality for writing custom tests of your own, if you want to test the actual functionality that's unique to your operator. You can generate that using the scorecard commands in Operator SDK. Basically what happens is: you push your operator to the cluster, then you run the scorecard command; it generates the scorecard image, pushes the scorecard image to the cluster, and executes it against the controller pods.

Thanks. Yeah, sure. Anyone else?

With the current OLM, it's been challenging to use GitOps to specify that we want to install a specific version of an operator, and then use GitOps to say I want to upgrade from this version to the next. Is the new schema going to help with this?
So the new schema OLM is going to introduce is in the form of FBC, file-based catalogs, which is JSON content. I think one of the difficulties you must be facing is in defining the version through the SQLite-based packages; that's something we are working to change by moving the entire format to FBC. But you mentioned doing upgrades through GitOps, and what we suggest is letting OLM's rukpak itself do the upgrade for you. It will manage the upgrade cycle, and in turn it facilitates safer installs. In the sense that if you want to upgrade from, say, v1.x to v2.x, and you are not sure whether 2.x is going to be successful, OLM is going to provide a safer install in terms of rolling back in case it's not. So we do suggest doing upgrades through OLM itself, through the built-in reconciler we have in rukpak. Thank you.

Second question: does the new API also replace the CSV, the ClusterServiceVersion?

This entire architecture will essentially remove the CSV. We are working toward a CSV-less OLM, where you may not have a CSV, just the manifests of the operator you want to install. That's the idea of the design right now, but again, it's still a PoC. Thank you.

Hey, so just coming back to scorecard: personally, when I saw the docs I shied away from it a bit. I was wondering if you could talk more about the benefits over just e2e tests, and what I can get out of it.

Well, I know it comes pre-baked with a bunch of relatively simple but flexible tests that are just going to make sure all the endpoints on your controller are working the way they're supposed to. I'm not actually particularly familiar with the e2e framework; I've never personally written tests with it, so I can't really compare one to the other. But I have written custom scorecard tests, and, I don't know, it was fine.
It was very portable, because it's baked into an image and you can just push the image anywhere. So wherever you're pulling your operator image from, you can have the scorecard image live there too, and it's accessible on whatever cluster you're running stuff on.

What, for example, were you testing with these scorecard tests?

I mean, like I said, the baked-in tests test reflexively, just making sure all your endpoints are operating properly, that when you submit requests they're validated, and stuff like that: stuff that's not particularly unique to the function of your specific operator. And then you write custom tests on top of those that test that when you create, say, a Foobar object, the actual stuff that's supposed to exist in the back end comes into existence the way it's supposed to, to exercise your reconciliation loops.

So does it do that by, like, creating a custom instance of your resource, or...?

Yeah, I mean, you're the one writing the test, so it's going to do that however you want it to, but yeah, probably by actually manipulating the state of the cluster.

Cool, thanks. Anybody else? Okay, well, I think that's it for today then. Thank you all for coming. If you've got further questions or you just want to chat, feel free to come up and say hello.