Hello, my name is Camila Macedo, but that is not important at all. What matters is the content we have prepared to share with you. I will be kicking off this journey. However, if you want to know more about us, please see the reference links at the bottom; you can also check out our profiles on the KubeCon page. The following list of topics is just to let you know what this presentation is about. To begin with, we will do a short overview and introduction of Kubebuilder. Since the project is used to help you follow the Operator pattern, we can speak about good practices and common suggestions around it. After that, we will share the Kubebuilder updates. Then we will let you know about optional plugins and how they can help you accelerate the development of your solutions. We will also speak here about the Kubebuilder plugin ecosystem. Then we will tell you more about controller-gen and controller-tools. Lastly, we will share how you can interact with and be part of the community. Your help is more than welcome, and we are looking forward to seeing you around. We all hope that you find this information very helpful. Let's start, shall we? Probably many of you here are familiar with the project already. However, we would like to do a small, very fast overview just to ensure that we are all on the same page. You may also discover very cool things about the project itself. So, let's go. What is Kubebuilder? Kubebuilder is a command-line tool that aims to help create projects extending the Kubernetes APIs, commonly following the Operator pattern. Its design is plugin-based, which allows users to choose whether to opt in to specific features, helpers, and sometimes integrations. I will not do a spoiler here; it is better covered later in this talk. But for now, I'd just like to share that it is also possible to use Kubebuilder as a library and then take advantage of its features and implementations.
For example, imagine that you would like to create a tool to provide specific scaffolds, or to do customizations and integrations on top of what is done and provided by Kubebuilder itself. Moreover, we added here a piece of information from the Kubernetes documentation: you can use Kubebuilder to build a solution that does not follow the Operator pattern; however, Kubebuilder is designed to help you follow it. Kubernetes is very nice and very cool, but we need to make our solutions work on it. As you know, that is not an easy task, and there is definitely more than one way to skin a cat. One way is by extending the Kubernetes APIs following the same principles adopted by Kubernetes itself, and then taking advantage of its unique possibilities. But how does this idea mainly work? We have a controller responsible for managing some specific kind of resource, such as a Deployment. Then we declare a desired state, which means we define what we want. Pursuing the Deployment idea, we declare that we want a Deployment, its containers, which security context it should use, and so on. The Kubernetes API will then raise events, in this case to notify our desire to create the Deployment. In the controller, we will be watching all create events that happen for this specific kind. Then the controller will see: oops, I have a new event that matters to me. Let's start the reconciliation. Let's run all operations required to ensure that the desired state is the current state on the cluster. Okay, that's nice, but if this is new for you, you are probably asking now: what are the advantages of adopting this approach? By using operators, it is possible not only to provide all the expected resources, I mean to install and uninstall things on the cluster, but also to manage them dynamically, programmatically, and at execution time. Let's return to our Deployment example to help illustrate this idea. So we have the Deployment, and we have the pods defined by the Deployment.
Everything is running fine on the cluster. If we go there and delete the pods, what happens? Do you remember? In this case, the pods will be recreated without our intervention. That happens because the Deployment controller is watching the cluster and will see: oops, I need to reconcile to ensure the desired state again; I need to create the pod. In the same way, we can automate our solutions. So if someone, or another solution, or our solution itself creates, deletes, or updates resources on the cluster that matter to us, resources that we are watching, then we can go there: oops, we need to reconcile. We need to make sure that everything on the cluster is the way we want it to be. Now that we are all on the same page about what it does and its motivations, we can share that Kubebuilder is developed on top of controller-runtime and controller-tools. A project built with Kubebuilder results in a pod that runs the manager, which is responsible for managing the controllers. controller-runtime is the library that provides abstractions to help us manage these controllers and watch the resources, and controller-tools is what is used to generate code; you will see that better later in this talk. About best practices, people very commonly ask: where can I find the good practices, where can I find this information? So shall we try to do a small overview here? Let's start by speaking about the golden rules, let's call them that. In the introduction, we said that it is all about automation: automation of operations, with the purpose of building a solution that keeps looking to ensure that the current state is the desired state. Therefore, our solution must be idempotent. That means that for a given input, we always produce the same output. Also, when implementing a solution that adopts the Operator pattern, we extend and use the Kubernetes API.
Since we extend and use the Kubernetes API, it is wise to ensure that we understand its structure and adopt its API conventions. Put yourself in the position of a cluster admin, the person who will consume and use our solution. This person knows Kubernetes, so ensuring that our solution works in the same way will make their life much easier. It definitely provides a much better user experience. Let's think about it: if I am designing a solution to run on this environment, why not ask, is the problem that I need to solve already sorted out by Kubernetes? If so, how does Kubernetes do it? Then, why not do it the same way? For example, it is an excellent idea to track the status of our reconciliations, so that it is possible to troubleshoot, to know if something goes wrong, and, when possible, to know the reason. How does the Kubernetes API do it for resources of the kind Deployment? Does it not check the status conditions of a Deployment when something goes wrong? So let's not try to reinvent the wheel; we can do it the same way. On top of that, note that we can use Kubernetes libraries and implementations to achieve our goals and spend our effort on what we actually need to sort out. In this slide, as a tip, we are sharing the apimachinery project, which provides helpers to work with status conditions. If you look, for example, at the scaffolded files generated by the deploy-image plugin, which we will speak a little more about as part of our updates, you can find them in use. The next tip is to try to avoid a design where the same controller is responsible for reconciling and managing more than one primary kind. For example, suppose we have an operator project that manages our solution, and our solution has more than one component: a frontend and a backend. In this case, it might be better to have a custom resource definition for the frontend and another for the backend, with their respective controllers.
That is instead of having something like one kind, InstallApp, installing everything, and one controller doing everything. Such a design might hurt concepts such as single responsibility, and by hurting concepts like these, it might bring unexpected side effects, such as increasing the difficulty of growing, reusing, or maintaining our solution. Lastly, we are sharing here a link to the Operator SDK documentation where you can find very helpful information about this topic. So let's now share the updates. We will start with the new deploy-image plugin. What is the most common scenario when you want to develop a solution to work on Kubernetes? Would it not be to deploy an image? Therefore, the motivation of this plugin is to generate the whole code to deploy and manage a solution on the cluster, ideally following the good practices. In this way, this plugin can help you accelerate the development of your operators and also reduce the learning curve; it can be used as a good example source. As you can see on the screen, to generate the code we need to inform the image that should be deployed on the cluster. It will end up as a Deployment with a pod running a container using this image. So, by running both commands, one to create the project and another to scaffold the API and the controller to manage this image, we basically have a whole solution implemented with tests, raising events, and using status conditions. This solution can be used as is. I will now show the generated code so you have a better idea. So, here we have an empty directory. We will create the project. Now, let's look at the full scaffold. Here is the main; you can check that we have a manager to manage our controllers. You can also check a Makefile with targets to help you develop and build the solution, and the Dockerfile used to generate the container that runs our manager. And now, let's create our API.
Here, you can see the group that we are using, the version of our API, the kind, the image that I want to manage, and the plugin. Now, you can see that we have it here: the API implemented with the status conditions. Note that all the code is generated with comments to help you out and point you to further information. The spec. Now, let's check the controller. See that the reconciliation is implemented with status conditions, finalizers, and also comments. Now, the tests. Here are the tests implemented for this controller. And then, Tony will tell us about the Grafana plugin. Let's talk about the Grafana plugin. It is a handy tool designed to save users time by simplifying the process of building dashboards to visualize operator status. The plugin generates manifests based on default metrics, allowing users to easily copy and paste the content into the Grafana web UI for instant observation of the operator status. Moreover, it features a customizable config entry that enables users to visualize their own user-defined metrics in the Grafana panels. With our Grafana plugin, we can monitor the performance of the operators with ease. Here is a quick demo on the memcached operator. To trigger the plugin, we use the subcommand edit. This gives us the grafana subdirectory that contains the JSON files of the Grafana manifests. We can copy the content of one of them, for instance, this resources metrics JSON file. In the Grafana UI, we click the import button, paste the content, and select the data source. It gives us the CPU and memory usage. Similarly, we can do the same with another file, the runtime metrics JSON file. Again, let's import the content. Now, you can see there are rich panels that provide different perspectives on the reconciliation and the work queue status. For user-defined custom metrics, the plugin provides a config.yaml file that allows users to provide the name of each metric and its type. In this example, we have predefined two metrics.
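A custom-metrics configuration for the plugin might look roughly like the fragment below. This is only a sketch: the file location and field names (`customMetrics`, `metric`, `type`) are assumptions here and should be verified against the Kubebuilder Grafana plugin documentation for your version, and the metric names are hypothetical.

```yaml
# grafana/custom-metrics/config.yaml (layout and field names are assumptions;
# verify against the Kubebuilder Grafana plugin docs for your version)
customMetrics:
  - metric: memcached_operator_upgrade_duration_seconds # hypothetical histogram metric
    type: histogram
  - metric: memcached_operator_reconcile_total          # hypothetical counter metric
    type: counter
```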
One is a histogram, the other is a counter. Let's trigger the plugin again. This time, it loads the config.yaml and brings a new JSON file. Let's copy the content again. As you can see, this time the dashboard provides two panels that correspond to the separate metrics given by the user input. Now, let's move on to the next part, contributed by Rashmi. Hello, everyone. My name is Rashmi Gotipati, and I'm from the Operator SDK team at Red Hat. I'd like to introduce Phase 2 plugins in Kubebuilder and Operator SDK. As you've seen from the introduction and background so far, Kubebuilder simplifies building a Kubernetes operator and allows you to quickly create and manage an operator project. An important goal of Kubebuilder is to have an extensible CLI and scaffolding. So, what does it mean to have an extensible CLI and scaffolding? It means that Kubebuilder can be imported and used as a library in other projects: for example, being able to provide customizations such that other projects can use Kubebuilder's workflow with non-Golang operators, such as Ansible or Helm. In order for these projects to have customized content based on their needs, we need extensions that will modify Kubebuilder's base scaffolding before the scaffolded files are written out to disk. And what helps achieve these extensions are plugins. Let's see how Kubebuilder provides extensible CLI and scaffolding options through plugins. So, what is a plugin? A plugin is defined as a software component that adds a specific feature to an existing computer program, and that is exactly what plugins do in Kubebuilder. There are three main subcommands: init, create api, and create webhook. When any of these subcommands is called, its respective plugins are responsible for implementing the code that needs to be run. So, plugins basically configure the execution of these subcommands. Now, let's look at how Kubebuilder was extended through various phases of plugin implementation.
Phase 1 enabled Kubebuilder to be more extensible by defining new plugin interfaces and providing a new CLI, thereby adding the ability for Kubebuilder to be imported as a library in other projects. Phase 1 plugins are all Go-based, so they have to be compiled with the Kubebuilder binary and they cannot be external. Phase 1.5 introduced the ability to chain plugins, meaning more than one plugin can be executed along with the Kubebuilder CLI. However, Phase 1.5 plugins are still Go-based, internal plugins that need to be compiled in. Here's the thing: what's new in Kubebuilder is that we have landed Phase 2 plugins, which add support for external plugins. The Phase 2 plugin API is designed in such a way that it discovers external plugin executables that don't have to be compiled with the Kubebuilder binary. So, the source code of the plugins is not required to be inside the Kubebuilder repository. Kubebuilder library consumers can now have support for discovery and chaining of external plugins by just importing and using a sample external plugin, or by writing their own plugin in the language of their choice. Since Phase 2 plugins are out of tree, these plugins are external and not in the Kubebuilder tree. This eliminates the need to have all plugins on the same version of Go dependencies. Another major benefit is that the plugin API is designed to be language-agnostic: it provides support for other-language scaffolds using a standard-in and standard-out mechanism. So, plugin developers are no longer restricted to writing in Golang and are actually able to write a plugin in any language. And since the external plugins are out of tree, they are discovered at runtime, which makes Kubebuilder easily extensible without having to be rebuilt. Let's look at the workflow that occurs when a user runs an init command and provides two external plugins with their names and versions.
So, Kubebuilder discovers those plugin binaries in their respective name/version directories. Once they are discovered, Kubebuilder constructs and sends a PluginRequest JSON to both plugins over standard in. The PluginRequest consists of the command that was initiated, all of the raw flags, and an empty universe that contains the current virtual state of file contents that has not been written to disk yet. After receiving the PluginRequest, the external plugin constructs a PluginResponse that contains the modified universe, based on the new files that were scaffolded by the external plugin, and writes it back via standard out in JSON format. Kubebuilder then receives the PluginResponse JSON, decodes it, and writes all the files in the universe to the file system. I'd like to show a quick demo of how Phase 2 plugins work with Kubebuilder. For the demo, I'll be using a sample external plugin that I've implemented in Python, to showcase the ability of the Phase 2 API to support other-language scaffolds. As we have seen earlier, Kubebuilder uses a name and version scheme to discover external plugins. It is most natural for this use case, since plugins must have a group-like name and a version. So when any of the subcommands like init, create api, or create webhook is invoked, Kubebuilder first discovers the external plugins by traversing the plugin root directory to match the plugin name; one level down, it then matches the plugin version; and once it is at the third level, it tries to match the plugin executable. For the demo, I'm running on a Mac, so the default plugins path will be the user's home directory, then Library/Application Support, and under that we have kubebuilder/plugins. This is going to be the plugins root; on Linux, the default path for the plugins root would be a different one.
So if we look at the directory structure of the plugin: under plugins we have the myExternalPlugin directory, which has v1 as a subdirectory, and under v1 there is myExternalPlugin.py, which is the external Python plugin. So every plugin gets its own directory, constructed using the plugin name and plugin version, for the executable to be placed in, and Kubebuilder will search for the plugin binary with that name in the name/version directory of the plugin. So, in the main directory, let's create a new project. Now let's run the init command, specifying the name of the external plugin in the --plugins flag. What happens now is that Kubebuilder detects the OS and discovers myExternalPlugin in the path that we just talked about. It also validates the name/version directory, checks whether the plugin is an executable based on the permission bitmask, and then runs the plugin. This results in scaffolds like the PROJECT file, the license header, domain.py, and main.py. The file contents are provided by the external plugin in the universe object; it represents the modified file contents that the external plugin writes back to Kubebuilder, as we talked about in the previous slides. So communication between Kubebuilder and the plugin happens through PluginRequest and PluginResponse, by passing the updated universe back and forth. Now let's run the create api command, specifying the plugin name and passing in some args: group as cache, version as v1alpha1, and kind as Memcached. Kubebuilder finds the plugin and runs it, but this time passing the create api command in the PluginRequest. This results in scaffolds like gvk.py. If you look at gvk.py, it scaffolds the class Memcached, with group as cache, version as v1alpha1, and kind as Memcached. These are the values we specified for the flags when we ran the create api command, together with the init command that sets the name and namespace.
And now let's run the create webhook command, specifying the external plugin name, with group as cache, version as v1alpha1, and kind as Memcached. This results in the scaffold webhook.py. So these are the three subcommands, to initialize the project and to scaffold Kubernetes API definitions and webhooks, and this is how external plugins can be discovered and run with Kubebuilder. This concludes the talk and quick demo on one of the recent updates in Kubebuilder, which is Phase 2 plugins. Thank you for watching. Thank you, Rashmi, for explaining the plugins. Now let's look into controller-tools and controller-gen. controller-tools is a project that contains a set of Go libraries for building controllers. It provides us with a binary called controller-gen that helps in scaffolding out code for deep-copy generation, custom resource definitions, and RBAC manifests. You have all probably heard the very common terms markers or marker comments, which are present in an operator project. controller-gen performs the task of reading those markers and scaffolding output, whether YAML definitions or just Go code. controller-gen provides us with three different generators. It also has the option to specify output rules, where we can set the location of the scaffolded content. Let's walk through a sample here. We see that the busybox API spec is defined and contains a field named size. On top of this, we see the controller-gen marker comments that contain the tag +kubebuilder. These are validation markers, and here they mean that the minimum possible value should be 1 and the maximum should be 3. After running make manifests, we can see that the OpenAPI validation is added to the custom resource definition. The lesser-known use case of controller-tools, which we will briefly discuss here, is the ability to extend the library and create custom generators.
The major utilities that controller-tools provides us with are the ability to define markers in the format we want, parse them with any custom logic, and scaffold out the relevant bits of code. The first step is to declare the format of a marker and create a definition out of it. The commonly used helper for this is markers.MakeDefinition; the link to it is present in the slide. Markers can be defined to be on a type, on a package, or on a field, like the +kubebuilder:object marker, or say a validation or a webhook marker. The marker format does accept arguments, and these fields can be defined as a struct and passed on to the definition. Step two is the parsing of markers. This is done using the marker collection helpers, for which we have wrappers in controller-tools, and the reference again is present in the slide. Step three is to generate output. The output can be of any form, as mentioned earlier, either YAML or just Go code, and the location of the output can also be configured. Now that we have reached the end of the presentation, we would like to welcome you all to be a part of our community. We have a Slack channel named kubebuilder, and we also run bi-weekly community meetings. Please do join the Google group to receive invites and updates on the things happening in the community. We have the references here: the link to the Kubebuilder documentation, the quick start guide, the links to the different plugins which we discussed, and the link to using Kubebuilder as a library. We also have links to controller-runtime and a few API conventions followed by Kubernetes. Please do have a look at them, and feel free to create issues or PRs against these projects. Thank you so much.