Alrighty, welcome to our Wednesday briefing, which is always operator-centric. I'm Diane Mueller, the director of community development here at Red Hat. This is another OpenShift Commons briefing, and today we have Rashmi Gottipati and Varsha Prasad and a few other members of the Operator SDK group, and we're going to run through an update on the Operator SDK. If you're following along in the series, on September 1st we'll have a briefing specifically on the Java Operator SDK — so this will be the overview one, and then we'll let a few other folks come in on September 1st and cover the Java Operator SDK. So, without any further ado, Rashmi and Varsha, please take it away. We'll leave time at the end for live Q&A and conversation with some of the members of the team. I'm looking forward to this update, so thanks, everyone, for volunteering to do this.

Thanks, Diane, for introducing us. Hi, everyone. I'm Rashmi Gottipati, senior software engineer on Operator SDK. I'm here with Joe Lanford, Varsha Prasad, Mark O'Brien, and Fabian. I'll be giving a quick overview of Operator SDK and Kubebuilder plugins with a focus on phase 2 plugins, and then Varsha will take over to talk about the Helm/Go hybrid operators. Varsha, would you like to quickly introduce yourself?

Hello, everyone. I'm Varsha, a software engineer on the Operator SDK team. I'll be talking about the hybrid Helm plugin and also about the migrate command, which we have to move from the package manifest format to the bundle format.

Awesome. Before I delve into plugins, I'd like to give a quick background on Operator SDK and Kubebuilder and share some common goals of both projects. Both Operator SDK and Kubebuilder aim to simplify building a Kubernetes operator, and they allow you to quickly create and manage an operator project. Both projects extensively use the upstream controller-runtime and controller-tools libraries to scaffold Go source files and package structures.
In general, they're similar in the sense that they have the same basic layout and share project structure and scaffolding for Go-based operators. The Operator SDK moves some of its features and functionality upstream into Kubebuilder wherever it's necessary and appropriate, and SDK uses Kubebuilder under the hood, so the SDK CLI tool works with a project that's been created by the Kubebuilder CLI. The phase 2 plugins effort that I'll be talking about today will first land upstream in Kubebuilder, and then Operator SDK will be able to use it. Operator SDK also offers additional features on top of the basic project scaffolding that Kubebuilder provides. For example, `operator-sdk init` by default generates a project integrated with Operator Lifecycle Manager, which is an installation and runtime management system for operators, and with OperatorHub for publishing operators, and it comes with a scorecard tool for ensuring operator best practices and developing cluster tests. The SDK also supports operator types other than Go, such as Ansible and Helm. Another important goal of both projects is an extensible CLI and scaffolding. So what does it mean to have an extensible CLI and scaffolding? It means that Kubebuilder needs to be extensible enough to be imported and used as a library in other projects — for example, being able to provide customizations so that Operator SDK and other projects can use the Kubebuilder workflow with non-Go operators such as Ansible or Helm. In order for a project to have customized content, we need extensions that can modify Kubebuilder's base scaffolding before any of the files are written to disk. And the thing that makes these extensions possible is plugins. So let's look at what plugins are. Operator SDK and Kubebuilder both provide scaffolding options via plugins.
Plugins are responsible for configuring the execution of the subcommands: `init`, which is used for project initialization; `create api`, for scaffolding Kubernetes API definitions; and `create webhook`, for scaffolding Kubernetes webhooks. When any of these subcommands are called, plugins are responsible for implementing the code that should be run. Now let's look at extending the Kubebuilder and Operator SDK CLIs through the various phases of plugin implementation. These are the different phases of achieving an extensible CLI and scaffolding. Phase 1 enabled Kubebuilder to be more extensible and paved the way for it to be a library that can be imported by other operator projects. It also defined new plugin interfaces to drive the subcommand functionality and provided a new CLI, which has been instrumental in laying a foundation for integrating external plugins with the Kubebuilder CLI in their own projects. So if phase 1 achieved extensibility of Kubebuilder, why did we need phase 1.5? Well, there were several cases of plugins that wanted to keep most of the Go functionality and add certain features on top of it, both inside and outside the Kubebuilder repository. That's the main reason phase 1.5 was implemented: to chain plugins, meaning more than one plugin can be executed by the Kubebuilder CLI. It also introduced the concept of a plugin bundle for improved user experience. Phase 1.5 was crucial because it provided the ability to chain the internal plugins that currently exist in SDK and Kubebuilder, which is very beneficial for third-party developers using Kubebuilder as a library. One of the challenges in phase 1.5 was running post-scaffold logic after all the plugins have scaffolded their part, so to tackle this, a new plugin API had to be designed for chaining these internal plugins. Both plugin phases 1 and 1.5 have been successfully implemented and achieved the desired goals.
However, we wanted to extend this further by supporting external plugins in both SDK and Kubebuilder, and that's where phase 2 comes into the picture. It's an ongoing effort right now, and it poses several challenging goals. It's meant to tackle discovery of external plugin executables, and these executables are not compiled into the Kubebuilder or Operator SDK CLI — which means that when phase 2 is implemented, the source code of a plugin is no longer required to live inside the Kubebuilder or SDK repository. Phase 2 handles chaining and running these external plugins once they're discovered, and once phase 2 is completed, Kubebuilder library consumers will be able to support discovery and chaining of external plugins just by importing Kubebuilder. Now let's look at why phase 2 is important and some of its use cases. The first thing that comes to mind is out-of-tree: since these plugins are external and not in-tree in Operator SDK or Kubebuilder, there's a lot of flexibility, because the source code of a plugin no longer has to live inside either SDK or Kubebuilder. That, in fact, eliminates the need to keep all the plugins on the same versions of Go dependencies. Another use case is being able to support language scaffolds like Java or Python or Rust and so on, and actually being able to write the plugin in that language. Plugin authors won't have to know Go once phase 2 is implemented, so there's a lot of potential to significantly grow the number of languages available to Operator SDK and Kubebuilder. Writing a plugin API that allows plugins to be written in a variety of languages, and supporting other language scaffolds over stdin and stdout, makes the API more language-agnostic, and it also allows for an independent release cadence. For example, the latest plugin currently in active development is the Java SDK plugin. Once phase 2 lands, the Java SDK plugin can jump on the bandwagon as well.
Currently it has to be compiled into the SDK binary during releases, but with phase 2 it can have its own release cycle. Another important consequence is that since the external plugins are out-of-tree, they have to be discovered at runtime. That provides easy extensibility: being able to extend Kubebuilder or Operator SDK without having to rebuild or re-release either of them is a huge plus. So to quickly summarize, the goal of phase 2 plugins is to discover and run external plugin executables. For discovery of plugin executables, we considered quite a few approaches: a group/version/kind scheme similar to how Kustomize does it, user-specified file paths, and naming plugin executables with a prefix. After considering the pros and cons of each, we are defaulting to a name-and-version scheme, which is pretty similar to how Kustomize handles plugin discovery. Since most plugins have a group, name, and version, every external plugin gets its own directory, constructed from the plugin name and version, for the executable to be placed in, and Kubebuilder searches for the executable using the name and version of the plugin. The other goal is for Kubebuilder to be able to display plugin-specific information when the help flag is passed. Let's look at some examples of how these external plugins can be specified in the CLI. On the left are some examples of CLI requests. The first example is the `init` command with two external plugins. In this case, Kubebuilder scaffolds files using the external plugins as defined in their implementations of the `init` functionality, respecting the plugin chaining order — `myexternalplugin` first and then `myexternalplugin/v2` second. The second example is the `create api` subcommand, and it specifies the plugins `go/v3` and `myexternalplugin/v2`.
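As a rough sketch of what that name/version discovery layout could look like on disk (the plugin name, search root, and environment variable here are hypothetical placeholders; the final implementation decides the exact paths):

```
$EXTERNAL_PLUGINS_PATH/
└── myexternalplugin/
    ├── v1/
    │   └── myexternalplugin    # executable discovered for myexternalplugin (v1)
    └── v2/
        └── myexternalplugin    # executable discovered for myexternalplugin/v2
```

Kubebuilder would resolve `myexternalplugin/v2` by looking for an executable named after the plugin inside the matching `<name>/<version>` directory, much like Kustomize's exec-plugin discovery.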
Kubebuilder will scaffold files using the `go/v3` plugin, which is the base plugin, and pass those files to `myexternalplugin/v2` as defined in its implementation of the `create api` method — again respecting the plugin chaining order. The third example passes the `--help` flag, so Kubebuilder should be able to display help information for a plugin that is not shipped with the CLI binary. On the right is an example PROJECT file. In this PROJECT file, there are two fields that store plugin-specific information. One is the `layout` field: the order of plugins can be specified in `layout` as well as on the CLI, and that ordering is used for plugin resolution on projects that are already initialized. The other is the `plugins` field, which is used to store configuration information for any plugin, and it's optional. In order to achieve the goals we talked about, there has to be communication and data passing between Kubebuilder and the external plugins. The internal plugins don't need any inter-process communication because they're part of the same process, so direct calls are made to the respective functions based on the subcommand in the CLI request. But since phase 2 is tackling out-of-tree, external plugins, there's a need for IPC between Kubebuilder and the external plugin, as they are two separate processes. So what plugin system can we use? We evaluated a few plugin libraries: one is Go's built-in `plugin` package, and the other is HashiCorp's go-plugin. Based on that evaluation, the built-in `plugin` package is only suitable for internal plugins, as it doesn't offer any cross-language support — and since external plugins can be implemented in any language, that basically makes it a non-starter. HashiCorp's go-plugin system is more suitable than the built-in package, but it's geared toward long-running plugins and handling quite a lot of deserialization.
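To make the PROJECT file discussion concrete, here's a minimal sketch of what such a file could look like with both fields populated. The `layout` and `plugins` fields are the ones described above; the plugin keys, domain, and config values are made-up examples:

```yaml
domain: example.com
projectName: my-operator
version: "3"
# Plugin resolution order for an already-initialized project:
layout:
- go.kubebuilder.io/v3
- myexternalplugin/v2
# Optional per-plugin configuration:
plugins:
  myexternalplugin/v2:
    someConfigKey: someValue
```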
So for now, we are going with our own plugin system that passes JSON blobs from Kubebuilder to the external plugin over stdin and stdout. One of the nice things about using a subprocess is that we also get a built-in channel for errors — stderr — so all the warnings and errors can be communicated over stderr in a structured format. So what exactly do we need to pass between Kubebuilder and the external plugins? Message passing occurs through a request/response mechanism. A PluginRequest contains the information that Kubebuilder wants to send to the external plugin through stdin, and a PluginResponse contains the information that Kubebuilder receives back from the external plugin. Let's see how this works with an example. Say a user runs an `init` command in the Kubebuilder CLI, passing in some flags and providing two external plugin names and versions. What happens next? Kubebuilder discovers those plugin executables in their respective name/version directories. Once they're discovered, Kubebuilder sends the PluginRequest as JSON over stdin to each of these external plugins. The external plugin constructs a PluginResponse containing the updated file contents that should be scaffolded by Kubebuilder, and writes the PluginResponse back to stdout as JSON as well. If any plugin fails in the chain, the plugin chain should be halted, because one plugin may depend on another, and the plugin that failed writes all the warnings and errors back to Kubebuilder in a structured format. But what happens to all the files that were scaffolded already? Ideally, all operations happen on a virtual file system, not the physical file system.
Kubebuilder would write the files it received in the responses to disk only after it receives a PluginResponse JSON from every plugin in the chain. So if any plugin fails, all the files present in the scaffolding context are simply discarded, to prevent a half-committed state. This is the current design for our external plugin API using stdin and stdout. The actual implementation is in progress and will be demoed in another session when it's ready. For now, I can show a prototype outside of Kubebuilder: a Go program that calls a Python plugin, sets up stdin and stdout, and exchanges the PluginRequest and PluginResponse. Basically, think of the Go program as Kubebuilder, and then there's the external Python plugin. So let me quickly share that. Can you see my screen? Yes, we can see. Can you make the font just a tiny bit bigger? There you go. Is that better? Yes. Okay, awesome. So I wrote a Go program that handles setting up stdin and stdout and calls a Python program — I'm just going to run that here. This program runs a plugin shell script, which is just a wrapper script that calls the actual Python plugin program, test.py, with the subcommand `init` and the flags `--domain example.com`. These are the raw flags that will be passed to the plugin: whatever raw flags come in as part of the CLI request are passed directly to every single plugin. And this is the response that comes back, so think of it as the PluginResponse. There's also a concept of a universe, which contains all the files that are scaffolded. Initially, the universe starts out empty for the first plugin. Once the first plugin receives the PluginRequest, it processes the command that was received — `init` in this case — and performs some basic scaffolding.
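For reference, what goes over the wire in a demo like this would look roughly like the following (field names follow the v1alpha schema the talk describes, but treat the exact shapes as illustrative; the `//` comments are annotations, not part of the JSON):

```
// PluginRequest sent by the Kubebuilder-like driver over stdin:
{
  "apiVersion": "v1alpha1",
  "command": "init",
  "args": ["--domain", "example.com"],
  "universe": {}
}

// PluginResponse the plugin writes back over stdout, with the
// universe now holding the files it scaffolded:
{
  "apiVersion": "v1alpha1",
  "command": "init",
  "universe": {
    "LICENSE": "Copyright ...",
    "main.py": "# scaffolded entry point\n"
  },
  "error": false
}
```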
Since this is a prototype, it's just scaffolding a couple of files — a LICENSE and a main.py. Both of those should be here: this file has been scaffolded, and also the license header. Yeah, this was just an example to show how stdin and stdout work between a Go program and a Python plugin. So let me quickly share the spec for phase 2 plugins by showing the PluginRequest and PluginResponse. The PluginRequest contains all the information that Kubebuilder receives from the CLI and sends on to the external plugins. It contains the command that the plugin is supposed to execute — `init`, `create api`, or `create webhook`. The `apiVersion` field defines the version of the PluginRequest schema being sent. This is important because there could be different versions of PluginRequest and PluginResponse, and for them to stay compatible we need to include the version in both the request and the response; initially this will be marked v1alpha. The `args` field holds all the plugin-specific arguments, passed directly to the plugin, and the `universe`, as I mentioned, represents the modified file contents that get updated over a series of plugin runs across the chain. The PluginResponse contains the command that the plugin ran; a `help` field, for any plugin-specific help text the plugin wants to return when Kubebuilder sends the `--help` argument — that's when the plugin attaches help; the `apiVersion`, to keep a compatible schema version; the `universe`, representing the updated file contents; and error information to indicate any errors and specific error messages. So that's the basic prototype I have so far — let me switch back to my presentation. So what's the future vision for Operator SDK and Kubebuilder plugins? We've discussed phase 2 plugins so far. When the phase 2 implementation lands in Kubebuilder, it opens up a plethora of opportunities.
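To make the spec concrete, here's a minimal, self-contained sketch of an external plugin's side of this protocol in Go. The type and field names mirror the v1alpha schema just described, but this is an illustration under those assumptions, not the final API; only `init` is implemented, like the prototype:

```go
package main

import (
	"encoding/json"
	"io"
	"os"
)

// PluginRequest mirrors the request schema described in the talk:
// the subcommand, the raw CLI args, and the current universe of files.
type PluginRequest struct {
	APIVersion string            `json:"apiVersion"`
	Command    string            `json:"command"`
	Args       []string          `json:"args"`
	Universe   map[string]string `json:"universe"`
}

// PluginResponse is what the plugin writes back over stdout.
type PluginResponse struct {
	APIVersion string            `json:"apiVersion"`
	Command    string            `json:"command"`
	Universe   map[string]string `json:"universe"`
	Error      bool              `json:"error"`
	ErrorMsgs  []string          `json:"errorMsgs,omitempty"`
}

// handle runs the subcommand against the incoming universe and returns
// the updated universe; here only a toy "init" is supported.
func handle(req PluginRequest) PluginResponse {
	resp := PluginResponse{APIVersion: req.APIVersion, Command: req.Command, Universe: req.Universe}
	if resp.Universe == nil {
		resp.Universe = map[string]string{}
	}
	switch req.Command {
	case "init":
		// Scaffold a couple of files, like the demo prototype does.
		resp.Universe["LICENSE"] = "Copyright ..."
		resp.Universe["main.py"] = "# scaffolded entry point\n"
	default:
		resp.Error = true
		resp.ErrorMsgs = append(resp.ErrorMsgs, "unsupported subcommand: "+req.Command)
	}
	return resp
}

// run decodes one request from stdin and encodes one response to stdout.
func run(in io.Reader, out io.Writer) error {
	var req PluginRequest
	if err := json.NewDecoder(in).Decode(&req); err != nil {
		return err
	}
	return json.NewEncoder(out).Encode(handle(req))
}

func main() {
	// Kubebuilder (or the demo's Go driver) pipes the PluginRequest in
	// over stdin and reads the PluginResponse from stdout.
	if err := run(os.Stdin, os.Stdout); err != nil {
		os.Exit(1)
	}
}
```

Because the protocol is just JSON over pipes, the same program could be written in Python, Java, or any other language — which is exactly the language-agnosticism phase 2 is after.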
Currently there's a limitation in how scaffolding works — it's sort of a one-scaffolding-fits-all situation. With phase 2, we can potentially find novel ways of supporting customized scaffolding, so that users get a more tailored project scaffold that fits their needs and purposes better. Another important thing on the roadmap is a library-go plugin for the OpenShift platform. Some of our internal operator teams have requirements to build their operators using library-go rather than controller-runtime, so a library-go plugin that inits the operator project, creates APIs and webhooks, and helps integrate with OLM will eventually enable OLM to manage some of those operators on clusters. Several common operator patterns have emerged from real use cases — for example cluster add-ons, control planes, drivers, Kubernetes automation, and so on. We're looking to have custom scaffolds for those operator types and patterns, so that operator authors can easily develop operators in these patterns while staying focused on their use case and following best practices, all powered by this plugin system. There are a lot of other exciting things planned beyond the phase 2 effort. To name a few: the Java Quarkus operator plugin — our team lead Jesus and Christopher from the middleware team will be doing a dedicated session next week to go through more about Java operators, so that's something to look out for if you're interested. There's the hybrid Helm + Go operator, which aims to make the hybrid Helm/Go SDK the official Helm operator; this will help our partners using Helm operators easily raise their operator capability level and overall improve the maturity of the operator ecosystem as a whole. And another initiative is config-gen: currently this is provided as an alpha option in the Kubebuilder CLI, and the idea of this option is to simplify the config scaffold.
Yeah, so there are a few initiatives for the future, and if you're interested in contributing to any of these, you're most welcome to. But before we look at how to contribute, I'd like to ask Joe if he would like to talk about the coming changes.

Sure. So this is not so much of a fun update, but there are a lot of removals coming in Kubernetes 1.22, which will be part of the OpenShift 4.9 release. If you're not aware already, the `v1beta1` CustomResourceDefinition API and the `v1beta1` ValidatingWebhookConfiguration and MutatingWebhookConfiguration APIs are being removed. There is a `v1` version of all of those. The upshot is that any operator that delivers CRDs or webhook configurations using those removed APIs will no longer work or be deployable starting with OpenShift 4.9. Red Hat has started an initiative to spread the word that operator developers need to look at upgrading all of the resources used by their operator, to make sure they're still available in the Kubernetes 1.22 API server — and ideally have those updated operators available in an earlier version of Kubernetes, so that users can upgrade to them first and then have a smooth upgrade experience from their current version of Kubernetes or OpenShift to 1.22 or 4.9. So take a look at that link — there's more than just CustomResourceDefinitions and mutating and validating webhooks being removed in 1.22 — and make sure that as you're upgrading your operators, you bear those removals in mind and update the resources your operator uses to things that will still be available in 1.22. I think that's all I have on that. Thanks, Rashmi. If there are any questions, ask them in the chat or ping us in our Operator SDK channels or on the mailing list.

Awesome. Thank you, Joe. Yeah, so as we mentioned, contributions are very much appreciated.
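As a concrete illustration of the migration Joe describes, a CRD manifest needs to move from the removed `apiextensions.k8s.io/v1beta1` API to `v1` — and in `v1` a structural schema is required and lives under each entry in `spec.versions`. The group and kind below are placeholders:

```yaml
# Before (removed in Kubernetes 1.22):
#   apiVersion: apiextensions.k8s.io/v1beta1
# After:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: memcacheds.cache.example.com   # placeholder group/kind
spec:
  group: cache.example.com
  names:
    kind: Memcached
    listKind: MemcachedList
    plural: memcacheds
    singular: memcached
  scope: Namespaced
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:                  # required per-version in v1
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
```

The webhook configuration kinds need the analogous `admissionregistration.k8s.io/v1beta1` to `v1` move.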
So if you're interested in contributing, here are some links: please feel free to try out the project, file any feature requests or bugs you'd like to see fixed, and submit PRs to collaborate with us. There are also community meetings: we have the Operator SDK working group meeting and the issue triage and backlog grooming meeting. The links can be found in the operator-framework community repository, so feel free to check that out as well. So, yeah, that's it for the overview. Any questions, comments, thoughts?

I think you did a pretty good overview and introduction as well. It's been a while since we've had an Operator SDK update on OpenShift Commons, so I'm really grateful to you all for taking the time today to walk through what's going on. I think the last time we did this, Kubebuilder was just getting integrated into the Operator SDK, so it's been quite some time. If people haven't been paying attention, there's a lot going on in the operator world, and we would love to have you get more involved in the project — so please do reach out and attend some of these Operator SDK community meetings. There are a number of people from the team on the call right now; if anyone has anything else they'd like to add that we missed, please just speak up. Otherwise, I think Varsha still has some slides to go over, so we'll move on to Varsha's side of the fence here. So thanks for that bit — take it away.

So let me share my screen. I hope the presentation is visible. Okay, so I'm Varsha, and I'll be talking about the hybrid Helm plugin and also about the Operator SDK migrate command, which we have to go from the package manifest format to the bundle format. Let me first start with the hybrid Helm plugin, which is a fairly simple project that helps us augment the capabilities of a normal Helm operator.
Before getting into what the hybrid Helm plugin is and what it does, let's have a quick overview of what a Helm operator is. This will give us a brief idea of why we actually ventured into the hybrid Helm world. A Helm operator, as we all know, uses Helm, which is a package manager for Kubernetes resources, and it uses a specialized packaging format known as a Helm chart. Now, we all know what an operator does: it watches a custom resource, which is an instance of a user application, and it makes sure the current state of the custom resource matches the desired state specified by the user. A Helm operator is no different, except that you can think of the custom resource as corresponding to the Helm chart: the Helm chart takes care of the individual resources inside the cluster, and the operator takes care of the Helm chart itself. Helm operators are perfect for stateless applications, and hence they were liked by many users, because it was very easy to automate existing Helm charts — you don't have to write a single line of code, and there's no need to understand the Helm commands involved or even install interfaces like Tiller to interact with Helm. So things become very easy, and out of the Helm operator the user gets a stable, repeatable, auditable application. By this I mean a user can deploy the same application across as many nodes or clusters as they want, and the operator takes care of everything else. So that's the Helm operator, and it does work well — but it has a few shortcomings. On the right side, I have the operator capability model, which many of you will have seen pretty often. Here we can see that Ansible and Go operators can already reach level five, while Helm is still stuck at level two.
The most important reason for this is that users were not exposed to the Helm reconciler logic. That's why users were not able to add capabilities the way they can with Ansible and Go: they can't do a backup-and-restore process, or set up metrics, or modify the logs to get more alerts or deeper insight, and so on. So if I'm a Helm operator user, I would literally have to throw away my existing code if I want to add additional capabilities, and start afresh with either Go or Ansible depending on my use case. This is where hybrid operators come in, because a hybrid operator is an amalgamation of the Helm and Go APIs together. Let me quickly go through the project structure of the hybrid Helm project, so that when the demo comes we have a better understanding of what this project brings to the table. Like any other project, we'll use an init command, and with the `operator-sdk init` command we'll have a bunch of files scaffolded. The difference here is that the scaffolded files are compatible with both the Helm chart — the Helm API — and the Go API. Each of these APIs has its own reconciler. You can think of this as the Helm reconciler working in the foreground with a Helm chart doing its own thing, and a Go API to augment it: we can use the Go API to collect metrics, to do a backup-and-restore process, and many other things. We all know the Go reconciler is completely customizable — we can use controller-runtime, write custom logic for the Go reconciler, and it's very easy to do. But in the case of the Helm reconciler, currently it's not so transparent to the user. This is where we provide the hybrid library helpers, because with them, users are able to interact with the Helm reconciler.
In simple terms, they can add something like pre- and post-install hooks, which can interrupt the release process of the chart and make changes to the Kubernetes objects. This is the timeline we had for the hybrid Helm operator project. Phase one of the project involved creating and providing a library that has all the helper logic for interacting with the Helm reconciler and introducing the aspects of controller-runtime, similar to how we would with Go. The second part of phase one was to scaffold the code required for the operator to work with both the Helm and Go APIs, which I'll show in the demo. The second phase of the project, which we're working on right now, is to migrate the Helm operator code out of the Operator SDK repository, where it currently lives, and move it to a separate repository known as helm-operator-plugins. In the long term, the idea is that when I scaffold a hybrid operator and don't make any customizations to the reconciler, it should work just like a normal Helm operator does today. So in the longer term there won't be any real demarcation between a hybrid Helm operator and the Helm operator — both of them will become essentially the same. The third and most interesting aspect is to have a tutorial that showcases the interplay and usefulness of having the Go and Helm APIs together, bringing up the Helm operator and augmenting the capabilities of the current Helm reconciler. The outcome of all this is to release the hybrid Helm operator as a phase 1 plugin. As Rashmi explained, we have phase 1 and phase 2 plugins, and "phase 1 plugin" here means that, similar to how we scaffold a normal Go v2 or v3 project or even an Ansible project, we'll be able to do so with the hybrid Helm plugin: we can use the operator-sdk command, and the hybrid Helm plugin will be integrated with it.
And definitely we welcome contributions to develop the hybrid Helm operator further. So I have a quick demo on the functioning — the scaffolding — of the hybrid Helm plugin. I hope my terminal is visible. Here I have an empty repository, and I'll be scaffolding a hybrid Helm project. Right now the plugin is not integrated with Operator SDK, so I've made a few modifications to Operator SDK and built it locally. In the init command, we provide `hybrid.helm.sdk` as the plugin name — this is the plugin name used to scaffold the hybrid operator project. It also accepts the project version, which defaults to 3, and we use the `--repo` flag to give the path of the repository. Now let me go through the project structure here. As we can see, we have a few placeholder directories: the api and controllers directories, which we'll use for the Go APIs, and a helm-charts directory, which will hold the Helm chart when we create a Helm API. In addition, we have a Dockerfile, used to build a Docker image of the operator that we can then deploy, and a Makefile, which helps us with the project — creating bundles, building and testing the operator locally, installing the CRDs, and so on. Our main.go is interesting — I'll come to it a bit later, after both APIs are scaffolded. Then we have the PROJECT file, which contains the metadata for the project, like the layout, the plugins that are present, and the project name. And then we have watches.yaml, which is used by the Helm API because it contains the path to the location of the Helm chart. Now let me go back and start creating a Helm API. Here I'm going to provide the plugin flag as `helm.sdk.operatorframework.io`, which is the generic Helm plugin we use.
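The demo's CLI flow can be sketched roughly as follows — the plugin keys, group/kind names, and repo path are assumptions reconstructed from the talk, so check `operator-sdk init --help` for the keys your build actually registers:

```shell
mkdir memcached-operator && cd memcached-operator

# Initialize a hybrid Helm project (Go + Helm scaffolding together)
operator-sdk init \
  --plugins hybrid.helm.sdk.operatorframework.io \
  --project-version 3 \
  --repo github.com/example/memcached-operator

# Create a Helm API backed by a chart under helm-charts/
operator-sdk create api \
  --plugins helm.sdk.operatorframework.io/v1 \
  --group cache --version v1alpha1 --kind Memcached

# Create a Go API in the same project, in the same group
operator-sdk create api \
  --plugins go.kubebuilder.io/v3 \
  --group cache --version v1alpha1 --kind MemcachedBackup \
  --resource --controller
```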
Now, if we look, we do have the Helm chart present inside the helm-charts directory. This is the default chart that gets scaffolded with Operator SDK, but we can definitely use an existing chart we have locally, or a chart that's hosted remotely. In addition to this, what I'm going to do now is scaffold a Go API in the same project — and this is what I meant when I said the scaffolding is compatible with both Go and Helm together. For the Go API, I'm going to scaffold a similar resource belonging to the same group, and here I've provided the plugin version as `go/v3`. This should be done soon. Yeah. Now let's have a look at the scaffolding. We have the api folder filled with the scaffolding that defines the specific type we created, and we have the controllers folder where, just like in a normal Go project, we can define custom reconciler logic using controller-runtime. Now, the most important part here is the main.go file that's been scaffolded, which was not done before in the Helm operator. This has the reconciler setup for the Go API and also the reconciler setup for the Helm API. Here, since the Helm reconciler is exposed, users can use the library functionality — the helpers provided in the library — to actually make modifications and add additional capabilities to the reconciler. This is the library I was talking about earlier, and this is the basic difference between a normal Helm project and a hybrid Helm project. So in simple terms we could have, as I explained before, a Helm chart running and doing its work, plus a Go API implemented with custom reconciler logic that provides metric collection, or that we use for a backup-and-restore process.
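To give a feel for what "the Helm reconciler is exposed" means in main.go, here's a rough, untested sketch of building that reconciler and attaching a pre-hook through the hybrid library helpers. The package paths, option names, and hook signature are based on the helm-operator-plugins library as described around this talk — treat all of them as assumptions, and the chart path and GVK as placeholders:

```
package main

import (
	"github.com/go-logr/logr"
	"helm.sh/helm/v3/pkg/chart/loader"
	"helm.sh/helm/v3/pkg/chartutil"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	ctrl "sigs.k8s.io/controller-runtime"

	"github.com/operator-framework/helm-operator-plugins/pkg/hook"
	"github.com/operator-framework/helm-operator-plugins/pkg/reconciler"
)

func main() {
	mgr, _ := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})

	// Load the chart scaffolded under helm-charts/ (hypothetical path).
	chrt, _ := loader.Load("helm-charts/memcached")

	// Build the exposed Helm reconciler, augmented with a pre-hook that
	// runs before each release — the kind of customization a plain Helm
	// operator doesn't let you express.
	r, _ := reconciler.New(
		reconciler.WithChart(*chrt),
		reconciler.WithGroupVersionKind(schema.GroupVersionKind{
			Group: "cache.example.com", Version: "v1alpha1", Kind: "Memcached",
		}),
		reconciler.WithPreHook(hook.PreHookFunc(func(obj *unstructured.Unstructured, vals chartutil.Values, log logr.Logger) error {
			log.Info("about to reconcile Helm release", "name", obj.GetName())
			return nil // could mutate vals here, e.g. to inject defaults
		})),
	)
	_ = r.SetupWithManager(mgr)

	// The scaffolded Go API's reconciler is registered on the same
	// manager, so both reconcilers run side by side.
	_ = mgr.Start(ctrl.SetupSignalHandler())
}
```

The design point is that both reconcilers share one controller-runtime manager, so the Go controller can add metrics, backup/restore, or other logic alongside the chart-driven reconciliation.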
So now this project is similar to a normal Go project: we can implement our own custom logic, build an image of the operator, and then deploy it on a local cluster or in a production environment. So that was the current state of the hybrid Helm operator. The project is currently at a nascent stage, but it is available in the helm-operator-plugins repository. We encourage anyone who is interested to try it out and give us feedback on how we could improve it. Please feel free to file issues for any features which you think would be useful, or which you would like to see in the hybrid Helm repository. We always welcome contributions, so please do submit a PR directly to the helm-operator-plugins repository. We also have an enhancement proposal up which explains the hybrid Helm project and why we ventured into this. So that was the hybrid Helm plugin; I can move on to migrating package manifests to bundles. This is a different topic. To explain a bit more: the operator framework has moved from using the package manifest format to the bundle format, and the reason for this was mainly related to performance issues. But before going more into that, I would like to give a brief overview of operator lifecycle manager, because both formats revolve around what operator lifecycle manager does. OLM, as the name suggests, is what manages the lifecycle of an operator. And by the lifecycle of an operator, I mean everything from installation, to having the dependencies resolved, to having the operator upgraded to different versions. So how does OLM know how to handle or manage our operator? For this to happen, the operator authors have to package their operator manifests and submit them to OLM, so that OLM knows how to manage and make changes to the operator which is deployed in the cluster.
And to do this, we use two formats. One was known as the package manifest format, and the other is the bundle format, which is being used currently. The package manifest format is deprecated and will be removed in future releases. So this is the difference between the package manifest and the bundle format; I'll go into the bundle format a little deeper in the next slide. In the package manifest format, we can see that all the different YAML manifests, including the CSV and CRDs, which belong to a particular version are present inside a single version directory. So each directory represents a single version, and all the manifests related to that version are present in that directory. This was the format which was used earlier. Now we have moved on to the bundle format. The most important part of the bundle format is that it contains two directories. One is the manifests directory, which contains the cluster service version, the custom resource definitions, and all the other supported Kubernetes YAML manifests which are required by the operator. And it contains the metadata folder, which has annotations.yaml; this annotates the bundle image with data like the package name, which version of the SDK was used, the channel which needs to be subscribed to, and other details. And then we also have bundle.Dockerfile, which contains the instructions to build the bundle image. Now let's go into the pkgman-to-bundle command. The operator SDK pkgman-to-bundle command accepts a required argument, which is the package manifests folder. It accepts an output directory where the bundles are created, and it also accepts an image tag base, because it builds an image which can then be used with the operator SDK run bundle command to run the bundle. We also have a build command flag, which I'll explain shortly with the demo.
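The command described above can be sketched roughly as follows; the directory names and the quay.io image name are illustrative placeholders, not values from the talk:

```shell
# Convert a deprecated package manifests layout into one bundle per version.
#   packagemanifests/        required positional arg: the package manifests dir
#   --output-dir bundles     where the generated bundle directories land
#   --image-tag-base ...     base name for the bundle images; each image is
#                            tagged with the version of its source directory
operator-sdk pkgman-to-bundle packagemanifests \
  --output-dir bundles \
  --image-tag-base quay.io/example/memcached-operator-bundle
```

Each generated bundle then contains the manifests/ and metadata/ directories plus a bundle.Dockerfile, matching the layout described on the slide.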
So here I have a normal memcached sample project. Let me show you the structure of the package manifests which we have here. We have all the manifests, including the CRDs and CSVs, present inside a particular version directory. When we convert this to bundle format, we will get two bundles, each of which represents a particular version of the manifests. So let me go ahead and use the operator SDK pkgman-to-bundle command here. I'm providing the image tag base, the image name, and each image will be tagged with respect to the version of the directory which is provided. An additional and interesting fact here is that we also provide an option for the build command: the default build command is docker, but users can also provide a podman command to build their image. Oh, sorry, I would have to remove the existing bundles first. Now let's look at the bundles which were created. These are the two bundles created out of the package manifests, each containing the manifests folder, the metadata folder, and also the bundle.Dockerfile. We also have images built for these two, and these images can then be used with the operator SDK run bundle command to test your operator locally on an OLM-enabled cluster. So that was the operator SDK migrate command, and also a quick demo of the hybrid Helm plugin. Any questions related to either the hybrid Helm project or the migrate command? Not a question, but a comment on the Helm operator stuff: there is a team within Red Hat already working with the hybrid Helm operator libraries. The StackRox team is building their operator with those libraries. They're already contributing to it and using it. It's out there; it's ready to basically start being consumed. So I just want to reiterate what Varsha was saying earlier: we're really looking for contributors there, and we think it's at a point now where you can actually start using it in your products.
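The podman variant of the demo, plus the follow-up test step Varsha mentions, looks roughly like this; the image name and the exact --build-cmd string are assumptions, not taken from the recording:

```shell
# Same conversion, but build the bundle images with podman instead of docker.
# (The --build-cmd string below is an illustrative guess at the format.)
operator-sdk pkgman-to-bundle packagemanifests \
  --output-dir bundles \
  --image-tag-base quay.io/example/memcached-operator-bundle \
  --build-cmd "podman build -f bundle.Dockerfile . -t"

# Then exercise one of the resulting bundle images against a cluster that
# has OLM installed.
operator-sdk run bundle quay.io/example/memcached-operator-bundle:0.0.1
```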
It's alpha in the sense that it could still change, but from a stability and maturity standpoint, it's definitely usable. So we're definitely interested in getting folks out there using it and giving us some feedback on it. And what is the best way to get feedback back to the team? Is it through the community meetings, or the mailing list, or the channel in Slack? Users can file feature requests through GitHub. We definitely triage those in community meetings, but we are also always available in Slack, so you can ping the operator SDK team on Slack and we'll be ready anytime to answer questions or clear any doubts related to the current work. Great. Well, I'm not seeing any questions in the chat, and you have covered a lot of territory today, probably a lot more than I expected. So thank you very much for making the time to put this all together. We're going to post the slides, if you can share them with me after this, the slides and the video, up on the OpenShift.com blog. So take a look for that if you're looking for the slides; the resource links will be in that blog post as well. So thanks. And again, the next session we have on operators will be on September 1, and it will be covering the Java operator SDK, so I'm looking forward to that one as well. Keep an eye out for all of this. I'm sure if you're heading for KubeCon in the fall, there'll be some operator content there as well, and we'll keep you up to date, hopefully a little more frequently than we have been. So thanks, guys. Rashmi, any last words you want to add? I think we covered all of this. You've covered a lot. So thank you guys for the time, much appreciated. Joe, thanks for coming on board, and everybody, for all the work and effort that you've been putting into the operator SDK and the entire operator framework.
It shows in the maturity of the project and all of the adoption that's been happening. Thank you guys. Thanks a lot, Diane, for organizing this. All right, take care. Thanks so much, Diane. Thanks everybody.