Hello. It's about time, so we'll go ahead and get started. I'd like to thank you all for coming to the maintainer track session for Operator Framework. We're going to talk about what we've been working on, new stuff coming down the pipeline, and new people that have joined the project.

So who are we, and why should you listen to us? My name is Jonathan Berkhahn. I'm an open-source contributor at IBM. I'm currently a member of the steering committee for Operator Framework, which is a CNCF incubating project, and I'm also a maintainer of Operator SDK, one of its sub-projects. I've worked in the open-source cloud platform space for a while: Kubernetes, and before that, Cloud Foundry. We also have Attila here from Red Hat. He'll introduce himself a little later, when he talks about his work on Java Operator SDK.

For those of you who may be new: what is Operator Framework? Operator Framework is an open-source toolkit to manage Kubernetes applications, called operators, in an effective, automated, and scalable way, implemented by extending the Kubernetes API itself. The things we make that you may have heard of include, as of this year, Java Operator SDK, which has just joined the project and which we're really excited about. And our older tools that you've probably used or heard of are Operator SDK, Operator Lifecycle Manager (OLM), and the Operator Package Manager (OPM).

And if you're even newer than that and don't know what an operator is, I'm going to try to give you the two-minute summary, and hopefully that will be enough to help you understand the rest of the talk. Operators are a design pattern for creating software that runs on top of Kubernetes.
The intent is that rather than statically deploying your software in a pod and hoping Kubernetes will recreate the pod if it blows up — Kubernetes won't have the application-specific knowledge your app might need to actually be handled correctly — you encapsulate that knowledge in a controller. We call it an operator because the function it replaces is usually the domain-specific knowledge that would otherwise live in a human operator. The simplest possible example, if you're at all familiar with Kubernetes: you've probably done something like kubectl create pod before. The way that actually happens is that behind the scenes there's a process called the pod controller, which sees when you create that pod and then goes and makes it happen — actually makes a container come into existence on a machine somewhere in the cluster. Operators just replicate that control flow for your own application: you have your thing, you write a controller such that you can say kubectl create your-thing, and a your-thing controller goes and makes it happen, whatever that means in your application-specific case.

Okay, so let's dive into what we've been cooking up the past couple of months. The big thing we'd like to announce is v1 — not exactly of OLM, because OLM as it exists today is kind of going away. We're doing a major refactor of a bunch of our internal APIs, which we're collectively calling v1. In terms of making operators themselves using Operator SDK, that's not really going to change; hopefully we can swap everything out underneath without anyone noticing. So nothing should change from an operator developer's perspective, but major things are changing on the back end, and that's what I'd like to discuss today. OLM and its resources, like catalog sources and all that, are going away.
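To make the control loop just described concrete, here is a minimal, self-contained sketch of the pattern in Java. The class and method names (`PodController`, `reconcile`, `statusOf`) are purely illustrative, not a real Kubernetes or Operator Framework API: the controller compares the declared (desired) state against what actually exists and acts only to converge the two.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the controller pattern: watch declared state,
// compare it with actual state, and act to converge them.
class PodController {
    // Stand-in for actual cluster state: pods that "exist", by name.
    private final Map<String, String> runningPods = new HashMap<>();

    // Called whenever a desired pod object is created or changed.
    public void reconcile(String podName, String desiredImage) {
        String actual = runningPods.get(podName);
        if (actual == null) {
            // Declared but not running: create it.
            runningPods.put(podName, desiredImage);
        } else if (!actual.equals(desiredImage)) {
            // Drifted from the declared spec: update it.
            runningPods.put(podName, desiredImage);
        }
        // Otherwise: already converged, nothing to do.
    }

    public String statusOf(String podName) {
        return runningPods.get(podName);
    }
}
```

The essential point is that `reconcile` is idempotent: it can be called any number of times, for any reason, and always drives actual state toward desired state.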
The intended use case for operator developers is that you're using all this by writing bundles, and those stay the same. So you shouldn't have to worry about catalog sources and all that unless you were, for some reason, using them manually — in which case I think you might need more help than I can give. Moving forward, though, the intent is that the new resources are more human-friendly, so if you do need to use them manually, they should be a bit easier to understand.

The two main components we're going to start with are Catalogd and the operator controller. These implement two new types that we're introducing. First, Catalog is the replacement for CatalogSource. You'll be able to add a catalog for, say, OperatorHub, or perhaps your own private on-prem catalog, and Catalogd will look at those and download their contents, much the same way OLM does today when you point it at OperatorHub. Keeping with the package-manager language we're trying to replicate: if you're familiar with a Linux package manager, this is the equivalent of a YUM repo. You can add a repo, see what's available on it, browse things, and choose to install them. The operator controller, then, is the equivalent of yum install. Operator is a type that represents an operator itself and has various knobs for doing version management.

The intent is that these will be the primary interaction points for cluster admins. If you have a cluster running in production and you want to install the actual finished product, these are the APIs you'll be interacting with. And these things will still support bundles, which I'll go over in a moment.
So if you want to keep interacting with them the way you already do with OLM, that workflow should be exactly the same. Where it's gotten a little more granular is that we have a new set of APIs explicitly intended for development or more direct interaction. Bundles — we're promoting those to an actual type, BundleDeployment, which encapsulates a bit more than bundles do today. Like I said, we're going to continue to support our own native bundle format, but we're also trying to massage things so that you can support different bundle back ends, primarily Helm charts. If you want to write your operator, publish it as a Helm chart, and use that within our ecosystem, that hopefully will be doable. And then we have this thing called RukPak, which is the machinery back end — the stuff that actually moves things around and makes them happen on the cluster: it downloads things, unpacks bundles and images, and gets everything where it needs to be. Everything I've just said is pretty well defined at this point — we're fairly sure that's what it's going to look like — except for RukPak, because we're still exploring alternate avenues of maybe using some third-party packages instead of our own homegrown thing. That part is still brand new.

Hopefully, if everything goes according to plan, v1 should be a lot more user-friendly in a couple of ways. One, OLM's API was solidified before everything about the way CRDs work was finalized. Now that that's all set in stone, our API is going to be a lot more Kube-like: more declarative and more aligned with the mainline Kubernetes API philosophy. All of this should be more GitOps-friendly, easier to automate, and easier for humans to use directly. And hopefully everything should be a lot less explody.
If you've ever had to go in and debug when you're trying to install a bundle on OLM and something breaks, it can be very difficult to back out and perform brain surgery; hopefully this should alleviate a lot of that. So we're really excited about it. It should be coming out, at least in a trial version, relatively soon — look forward to that. And with that, I'm going to pass this off to Attila, who's going to be talking about Java Operator SDK, a new sub-project.

So welcome, everybody. I hope you can hear me. I'm Attila Mészáros, a maintainer of Java Operator SDK. This year Java Operator SDK joined Operator Framework, so on this occasion I'll give you a brief overview of the project. What is Java Operator SDK? You probably already guessed this slide from the name: it's a feature-complete, production-ready framework that makes it very easy to write operators and controllers for Kubernetes in Java. It has a core framework and some additional components. Naturally, our users are Java developers who implement their services in Java and don't want to introduce an additional language or framework — mainly Go — into a project. They just want to write operators in Java.

So what are the components of the SDK? As I mentioned, it contains the core framework, which is built upon the Fabric8 Kubernetes client. That's a Java client that already puts a lot on the table: it supports generating Java classes from custom resource definitions, and also the other way around, and it has all the facilities the Go Kubernetes client has — informers, leader election, and basically everything that's in Go. We explicitly support integration testing in the form of a JUnit extension, which also helps us a lot in testing the framework itself. And there is support for the major Java frameworks, especially Quarkus, but also Spring Boot.
There is also a plug-in for Operator SDK to scaffold projects, and a separate framework for implementing conversion hooks and dynamic admission controllers in Java for Kubernetes. Those are the major components; there are a few more, but these are the most significant ones.

A little bit more about the Quarkus extension. Quarkus is a major cloud-native Java framework, and we support a Java Operator SDK extension for it. Again, it builds on top of the core framework but brings a lot to the table: it supports Helm, OLM, and plain Kubernetes resource generation; you can compile your project to native binaries; and it provides all the goodies from Quarkus, like its nice configuration approach. Quarkus does build-time optimization, so things that would normally happen at runtime in Java can be done at build time. It's a great place to start, and probably much more efficient than the core alone, at least in some regards.

A little bit more about the core framework. I don't have time, and don't need, to go into the details of what's included in it. It's basically very similar to controller-runtime and the other frameworks in Go, just done in a Java way. There are obviously differences between the languages, but otherwise the concepts are almost the same. It does, however, tend to be a little more batteries-included. When we started the project, we were actually managing multiple external resources, not just resources in Kubernetes, so in that regard there are already components to manage external resources — polling and the like for non-Kubernetes resources. That's probably on top of what controller-runtime does in its core at the moment. It's also a little higher-level in some features, which I'll talk about a bit later.
But to show you some code: to implement your reconciliation logic, you just have to implement a Reconciler interface in Java. In its one method, reconcile, you receive the sample custom resource — the primary resource — plus some contextual data and additional functionality. You implement your own logic, very similarly to how you would in Go, and at the end you might want to patch the status of the custom resource. The basic API is as simple as that. From that point you can write a few lines of code, run the controller, and that's basically it.

So what do I mean when I say it's a little higher-level in some regards? Just to highlight one example: there are finalizers in Kubernetes. As you may know, you use finalizers when your resources cannot be cleaned up generically by the Kubernetes garbage collector — basically, when you cannot put owner references on your resources — and you therefore have to clean up the resources you created explicitly. For that, you also have to add finalizers to your custom resource, which make sure that even if the controller is not running, the resource is not deleted until the controller explicitly says it may be deleted. Semantically, this just means you have to do some cleanup. In Java Operator SDK, all you have to do is implement the Cleaner interface, which brings a cleanup method, and implement your cleanup logic there. Everything else — adding the finalizer, removing the finalizer, maybe postponing the removal of the finalizer — is handled for you directly in the core of the framework. In this regard it's a little different from controller-runtime: these problems are abstracted away and handled for you in the background.
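The shape of the two interfaces just described can be sketched as follows. This is a simplified, self-contained stand-in that mirrors the idea of Java Operator SDK's Reconciler and Cleaner, not the library's actual signatures; `Sample`, `Context`, and `UpdateControl` here are minimal placeholders.

```java
// Minimal stand-ins for the framework types (illustrative only).
class Context { /* caches, event sources, etc. would live here */ }

enum UpdateControl { NO_UPDATE, PATCH_STATUS }

// The primary (custom) resource, reduced to a spec and a status field.
class Sample { String spec; String status; }

// The single-method reconciliation contract described above.
interface Reconciler<T> {
    UpdateControl reconcile(T resource, Context context);
}

// Implemented only when explicit cleanup is needed; the framework
// handles adding and removing the finalizer around this call.
interface Cleaner<T> {
    void cleanup(T resource, Context context);
}

class SampleReconciler implements Reconciler<Sample>, Cleaner<Sample> {
    @Override
    public UpdateControl reconcile(Sample resource, Context context) {
        // Application-specific logic; here we just mirror spec into status.
        resource.status = "Ready: " + resource.spec;
        return UpdateControl.PATCH_STATUS;
    }

    @Override
    public void cleanup(Sample resource, Context context) {
        // Release whatever owner references / GC could not clean up.
        resource.status = "CleanedUp";
    }
}
```

Note that the reconciler never touches finalizers itself: implementing the cleanup contract is the whole signal the framework needs.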
I believe in Go you have to use some libraries directly in your reconciliation and handle this yourself. Beyond that, we also provide some even higher abstractions and components that I'd like to talk about a little. When you want to create or manage resources in your reconciler, the process is very similar in almost all cases. For example, if you want to manage a config map, the flow always goes like this: you check whether the config map is in the cache. If it's not in the cache, you create the resource. If it is in the cache, you check whether the config map matches the desired state you want to achieve. If it matches, then ideally you don't do any updates or any explicit calls to the Kubernetes API at all; but if it differs from the desired state, you update the resource. We can generalize this workflow for the Kubernetes resources you manage, but also for non-Kubernetes ones. And as you may notice, the only input to this workflow is the desired state.

So we provide an abstraction called dependent resources. It abstracts away this problem: based only on a desired state, it reconciles these resources for you. And not just that — it does much more, covering all kinds of use cases: whether or not to use server-side apply, cases where you want a dynamic number of resources, or external resources for which you want to store some explicit state. It makes sure all of that is implemented for you in a correct way. In practice, this looks like a class ConfigMapDependentResource that just extends CRUDKubernetesDependentResource — CRUD meaning it creates, updates, and deletes the resource as necessary — where you just provide the desired state. Here a config map is nicely built with the same name and namespace as your custom resource, usually populated with some data from the spec.
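The check-cache / create / compare / update flow described above can be written out as a short sketch. The names here are illustrative (this is not the JOSDK dependent-resource API), and resource state is reduced to a string for clarity; the `apiCalls` counter exists only to show that matching state results in no write at all.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of desired-state reconciliation for one resource kind.
class DependentResourceSketch {
    // Stands in for the informer cache / current server state.
    final Map<String, String> cache = new HashMap<>();
    int apiCalls = 0; // counts writes, to show no-op updates are skipped

    void reconcile(String name, String desired) {
        String actual = cache.get(name);
        if (actual == null) {
            apiCalls++;                // not in cache: create
            cache.put(name, desired);
        } else if (!actual.equals(desired)) {
            apiCalls++;                // drifted from desired state: update
            cache.put(name, desired);
        }
        // else: matches desired state, so no API call at all
    }
}
```

Because the only input is the desired state, this whole loop can be generated for you, which is exactly what the dependent-resource abstraction does.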
And that is basically enough to have the whole reconciliation implemented for you in the background. From this point you can just use it in the reconciler: as you may notice here, you can annotate the reconciler with @Dependent to say, I have this config map dependent resource, I want this reconciler to manage it, and I want this desired state on the server. Before your reconcile method is called, the framework out of the box makes sure these resources are present on the server — in the Kubernetes API. But if you have more specific flows where you want to reuse them, you can also explicitly call the reconcile API of that dependent resource, and it makes sure the resource is reconciled for you. So this ensures there is a config map with the desired state on the server, with basically just a few lines of code, and implemented correctly.

But usually you don't manage just one resource in an individual reconciler — usually we manage multiple resources, so there are multiple dependents, multiple annotations. And sometimes you have flows where you don't want a given resource to exist in all cases. You might not want to create an ingress depending on the custom resource spec — for example, a feature flag controlling whether to create an ingress or not. For that, these dependent resources can take all kinds of inputs, like reconcile preconditions and postconditions, to manage that flow. There are a few such constructs, and we call the whole thing a workflow — it's what executes this reconciliation. The reconcile precondition tells whether the resource needs to be there or not. But there is also a special construct called depends-on. Kubernetes doesn't really care about the ordering of resources, but in some cases it matters — especially when you create, for example, a deployment.
And you want to make an API call on the service you just deployed, after it's up and running — that's an ordering. Or if you manage external resources: say you create an S3 bucket and then an RDS database that backs up to that S3 bucket. It's a precondition that the S3 bucket exists before you start to deploy the database. So there are natural use cases where you need some ordering, and this depends-on — you might recognize it from other tools — says that this resource needs to be reconciled after another resource.

With these constructs, which we call workflows — our users ask for state machines, but we call them workflows — you can very nicely define a setup where dependent resources each reconcile an individual resource, and the workflow makes sure that reconciliation across multiple resources is done optimally. That means concurrency is baked in: everything that can be reconciled concurrently is reconciled concurrently. And it's async: if you're waiting for a database or a deployment to start and become ready, it won't block the thread; it just exits and reconciles again when some change happens in the background and triggers the next reconciliation. There is much more to it that I won't go into in detail, but this might remind you of something: it was actually motivated by the classic infrastructure-as-code tools like Terraform, CloudFormation, or Pulumi, which you might know — basically adapted to the Kubernetes landscape. We're doing the same kind of resource management, providing components you can use to describe these use cases inside an operator, and it makes sure it's done optimally.
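The depends-on idea sketched above amounts to walking a dependency graph: a resource is reconciled only after everything it depends on. Here is a toy, self-contained version (resource names like `s3Bucket` and `rdsDatabase` mirror the example in the text; this is a hypothetical sketch of the concept, not JOSDK's actual workflow engine, which also adds concurrency and async waiting).

```java
import java.util.*;

// Toy workflow: each resource declares what it depends on, and
// reconcileOrder() walks the graph so dependencies come first.
class Workflow {
    private final Map<String, List<String>> dependsOn = new LinkedHashMap<>();

    Workflow add(String resource, String... deps) {
        dependsOn.put(resource, Arrays.asList(deps));
        return this;
    }

    // The order in which resources would get reconciled.
    List<String> reconcileOrder() {
        List<String> order = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        for (String r : dependsOn.keySet()) visit(r, visited, order);
        return order;
    }

    private void visit(String r, Set<String> visited, List<String> order) {
        if (!visited.add(r)) return;        // already handled
        for (String dep : dependsOn.getOrDefault(r, List.of())) {
            visit(dep, visited, order);     // reconcile dependencies first
        }
        order.add(r);
    }
}
```

In the real framework, independent branches of this graph would be reconciled concurrently, and a not-yet-ready dependency suspends the workflow rather than blocking a thread.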
It makes sure the informers, the caches, and everything related to them are handled for you in the background. We started to work on this about two years ago, and there are very nice examples of it in production now; the result is a very easy, optimal, and basically correct way to implement this kind of reconciliation. Of course it's not for every use case, but it is for the typical case where you are managing these kinds of resources in your controller for Kubernetes. So this is a more abstract, higher-level layer where Java Operator SDK tries to be a little innovative and help users and developers with this experience. That's all I wanted to cover for this introduction. If you have any questions for us, feel free to ask — come up to the microphone, or raise your hand and I'll come around so you can speak into the microphone and it will be on the recording.

Hello, great talk. First of all, I have a question about the change to v1 for OLM. I just want to make sure I understand it correctly: bundles can now be backed by Helm charts. So for instance, if we do an operator release, and currently that release involves building a bundle and pushing it to a bundle registry, plus an official Helm chart release — could we just do the Helm chart release and have the bundle be backed by that? How would that look?

Yeah, the intent is exactly that use case, because there are lots of people who want exactly what you want. So yes, hopefully — that is one of the specific target cases we're shooting for.

Any other questions? Anybody at all? If not, that's pretty much it. If you'd like to come up and discuss more about Java Operator SDK, we'll be around. Thank you all for coming. Thank you.