Hello, KubeCon. My name is Pedro Guimarães and I am a field engineer at Canonical. I will show you the universal operators concept and how it can transform your application operations.

Let's start with what operators do. They are software that drives software. Operators can package, run, and maintain any app on Kubernetes. Kubernetes operators are becoming more and more popular these days, as they abstract complexity and allow you to deploy applications without requiring domain knowledge. Universal operators extrapolate the same concepts and apply them to any substrate: they will manage your apps on any cloud, on top of VMs, bare metal, or containers. Our goal is to build small, composable operators that do one thing and do it well. These are easier to contribute to, they are reusable, and we know that reusability leads to better software quality. For all these reasons, we have created the Python Operator Framework.

These operators are easy to integrate. What does this mean? We look at dependencies and relationships between applications and allow them to be declared. This lets operators that share the same relationship type be easily integrated, and lets us break complex applications down into more manageable components. Thanks to the integration concept, we can respond to complexity with composition. Take this example of a Kubeflow model: a complicated set of applications driven by individual operators can be related, creating a more complete stack and solving a complex problem.

Operators can be placed in a model. A model is a workspace that facilitates composition. It provides an organizational view of all the operators, an abstract view of the infrastructure, service isolation, and access control.

Universal operators help create different topologies to address different scenarios. Let's reuse the Kubeflow example here. Say you want to do inference at the edge: a small-scale, focused cluster is the ideal approach. Meanwhile, a desktop deployment can have more resources and focus on a Jupyter stack, for example. Finally, the full deployment is composed of the entire service stack deployed on several nodes in HA. And here is the representation of those three scenarios in practice. On the left, we have the edge scenario with just the pipelines and the data science containers related to TensorFlow, PyTorch, etc. In the middle, we have the desktop scenario, where we add Jupyter and authentication, like Dex. And on the right, we have the full scenario with everything Kubeflow deploys: Katib comes in, an extra metadata-related deployment, Ambassador, etc.

Curious about how all of this is possible? The Juju operator lifecycle manager allows operators to run on any substrate and drive both infrastructure-level and application-level components. It provides abstraction from the underlying cloud and manages lifecycle events for applications on top of the cloud. And Charms: Charms offer perfect reuse of operations code. They do so through packages that enable full lifecycle management out of the box. This includes the ability to readily manage workflows, operations, and actions. You already find this capability in the operator pattern; where we have gone a step further is enhancing and automating integrations and reactions to others' lifecycle changes across scenarios. This is made possible through the combination of Juju and Charms. What does a Charm look like? Let's explore the universal operator code.
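As a taste, here is a minimal sketch of a charm built on the Python Operator Framework; the class name, config option, and status message are illustrative, not the exact code shown in the talk:

```python
from ops.charm import CharmBase
from ops.main import main
from ops.model import ActiveStatus


class DemoCharm(CharmBase):
    """A hypothetical charm; only the config-changed event is wired up."""

    def __init__(self, *args):
        super().__init__(*args)
        # Register interest in configuration changes; the framework calls
        # our handler back whenever the user changes the app's config.
        self.framework.observe(self.on.config_changed, self._on_config_changed)

    def _on_config_changed(self, event):
        # Read an (illustrative) "log-level" option and report status.
        level = self.model.config.get("log-level", "info")
        self.unit.status = ActiveStatus(f"configured with log-level={level}")


if __name__ == "__main__":
    main(DemoCharm)
```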
The Python operator framework allows you to approach infrastructure and application management as code. It is an event-driven framework in which events are exchanged between your operators, the lifecycle manager, and other neighboring apps. Whenever an event happens, the corresponding piece of code is called. You start by inheriting from a base class, which contains the primitives to connect to the universal operator. In the init method, you register which events from the operator you want to listen to. For example, I have registered here that I want to listen to configuration changes. We do it by calling the observe method. Whenever an event comes in informing us that the user changed the configuration, the operator framework calls back my specific handler, on_config_changed.

Now let's zoom in on how integration works. The integration endpoints are declared in a special file called metadata.yaml. There, you define which interfaces you will have as endpoints. Your app may consume a certain endpoint, in which case it requires the endpoint, or provide it to other neighboring apps.

Now let's use the universal operator to solve a real-world problem: the data science problem. Data scientists should focus their time on writing machine learning algorithms, the little orange box in this picture. However, there is a lot of complexity in a data science stack that comes from infrastructure and data pipeline operations in production. That adds a lot of manual effort or extra code. Here's an example of a data science pipeline that predicts what will happen in the S&P 500 stock market index. The workflow consists of collecting and storing raw data using Elasticsearch, building and training AI models using Kubeflow, storing the results in Ceph, and distributing AI models with TensorFlow and Kafka.

Juju facilitates the lifecycle management of both infrastructure and applications. In this demo, we will start by deploying the operators on top of bare metal and then on top of containers. We will first deploy Kubernetes and Ceph using a single model. This means that Kubernetes and Ceph will be co-hosted on the same bare-metal nodes in a hyperconverged approach. Then we will deploy a second model for Elasticsearch and a third one for Kafka. The same Juju controller will be used to deploy Kubeflow on top of Kubernetes.

To kickstart the demo, I have a Juju controller already running. The very first step is to create the models. I have defined three bundles for the bare-metal parts: Kubernetes and Ceph, Elasticsearch, and Kafka. I will start by deploying Kubernetes and Ceph. First, I will leave a watch on the Kubernetes-and-Ceph model. There is no model yet, so let's create it with add-model. Now we have a model and we can actually deploy the bundle. My Juju client is pushing the bundle information to the controller: which charms I'm going to use, which machines I need, and the mapping of applications to machines. On the right side, you can see the Juju status coming up, showing how the model is getting created on the fly. Right now it's deploying the applications. Okay, the deployment is finished and we can see it is up and healthy. Let's copy the kubeconfig file. You can also see several running containers for the controller, so the cluster is fine. What we're going to do now is scale this cluster out by three extra nodes: I want to add three extra workers and three extra Ceph OSDs. I will have a second watch on Juju machines.
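Condensed, the CLI flow for this part of the demo, including the scale-out I'm about to run, looks roughly like this; the model name, bundle path, constraint tags, and machine numbers are illustrative:

```bash
# One terminal keeps a live view of the model while it deploys.
watch --color juju status --color

# Create the hyperconverged model and deploy the Kubernetes + Ceph bundle.
juju add-model k8s-and-ceph
juju deploy ./k8s-and-ceph-bundle.yaml

# Scale out: three new tagged bare-metal machines, then place the new
# kubernetes-worker and ceph-osd units onto them (machines 6, 7, and 8).
juju add-machine --constraints "tags=compute" -n 3
juju add-unit kubernetes-worker -n 3 --to 6,7,8
juju add-unit ceph-osd -n 3 --to 6,7,8
```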
I will now run the command to add three extra machines, and then I'm going to map the Kubernetes worker and Ceph OSD units onto those machines. So, add-machine: I'm going to set some constraints, in this case tags, and there we go. You can see the machines coming up; they won't have units so far. Now I can map extra units to those machines: three extra worker nodes, which I'm going to map to six, seven, and eight, the new machines. We can see kubernetes-worker getting into waiting status with three new units, and I'm going to also add three extra Ceph OSDs. So with three commands I added the machines and deployed two new types of workloads targeting those new bare-metal machines. And now we have a nine-node cluster for Kubernetes.

The next step is to deploy Kubeflow on top of this cluster. I will first switch from the Kubernetes-and-Ceph model to the model that we're going to create, called kubeflow. It doesn't exist yet; we're going to add it. I'm also going to leave a watch on kubectl listing namespaces, since there will be a namespace called kubeflow once we add the model in Juju. And now I'm going to deploy Kubeflow on top of Kubernetes. For that, I first need to add the Kubernetes cluster that I have right now as a cloud. So, add-k8s: that will essentially use the kubeconfig that I have on this machine, already configured to communicate with Kubernetes. Okay. Now add a model for Kubeflow on that cloud: juju add-model kubeflow. We can see the model exists and there is a namespace. Let's deploy the actual Kubeflow bundle now. It is starting to deploy the operators on Juju and, likewise, on Kubernetes. In the same way, let's create a model for Elasticsearch and one for Kafka, adding the models. We have on one side Kafka being deployed; on the other side, we have a watch on the Juju status.

The Kubeflow deployment is finished. For this demo, I've already set up a pipeline for the data processing, called financial data. The pipeline is divided into a pre-processing phase, which also pulls the data out of Elasticsearch; a training phase, which trains the model against the data and checks whether it passes a validation threshold; and finally the deploy phase, which pushes the trained model back to S3. Let's give it a run. Pipelines are deployed within containers; if we go to the CLI, we can see the containers coming up, and we can follow the logs of that container. Now that it's finished, we can see there is a precision of 0.77, and we can go back to the pipeline and see it's finished, with the model deployed to S3.

The next step is to serve it. We have our model stored on S3 and we will need to run it on TensorFlow. For that, we run this script that will deploy a new charm, tf-serving, on a container to make it available. Now we run the script, and it deploys the tf-serving charm. We can check our Juju status and see the charm coming up; we can also check on Kubeflow and there is a serving operator coming up. Now that the serving pod is connected to Kafka, we will send some input data and read its prediction. As you can see, this is the prediction the model has made for the future stock market closings.

Curious about universal operators? Learn more on juju.is and charmhub.io.