Hi, everyone. We are very excited to be here, for the third year in my case; I don't know about Tom and Vasek. Maybe we can start by introducing ourselves. Go ahead, Tom or Vasek first, and then we will start with the presentation. Don't be shy. Are you going to choose? Okay, let me start.

My name is Tom Coufal. I'm with Red Hat; I work in the Office of the CTO, in the Open Services group. What do we do? We experiment with things. We have the liberty to experiment with many different components and technologies and figure out what works best for future products. Part of our portfolio is investment in data science and AI technologies, and we help with some of the upstream projects and also transform those upstream projects into downstream products within Red Hat. Our focus is currently mainly on SRE practices and principles: how to operate AI workloads, how to operate data science, among other things. That's why we claim to have some expertise in this topic, and that's why I'm here for you today.

My name is Vasek Pavlin. I'm an architect of Open Data Hub, which is the platform that Francesco is going to use and which Tom is running and complaining about all the time. This is also my last week at Red Hat, so it's an honor to be able to present with Francesco and Tom here and support them in their effort to actually make AI cloud native on OpenShift.

Thanks, Vasek. We are honored to be with you as well. So let me start with the presentation. My name is Francesco, and I'm also part of the Open Services group, so I have worked with Tom and Vasek over these years. I also work on Project Thoth, which will be introduced in a moment. We won't spend too much time on the slides, so we can focus on the actual work.

Let me briefly introduce some of the technologies that we are going to use today. Open Data Hub will only be briefly introduced; if you have any question, even a deep one, Vasek is here, so please feel free to ask. The same goes for Operate First: we won't spend too much time on it, because there was a talk this morning by Marcel Hild, which I hope you attended, where he explained what Operate First is and how it works. If you have more questions, we are here, Tom in particular, so please ask. Then we will see Project Thoth and Project Meteor and all the tools that we are going to use in this brief workshop.

So, Open Data Hub. Open Data Hub is an open source project with a community behind it. It is basically an AI-as-a-service platform that runs on OpenShift. It's production ready, already used by hundreds of users, and it's a meta-operator, which means it is able to manage different components through operators. There are many tools that we are going to see today, and all of them are installed and managed by Open Data Hub. We will focus on some of the most popular AI/ML tools: you will see, for example, Elyra, JupyterHub, and other tools that can be used for deployments. The available tools cover the entire workflow, or life cycle, of machine learning: you can analyze data through JupyterHub, with all the dependencies, TensorFlow and all the different packages, available, and you can run experiments through Kubeflow, for example.
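As a rough illustration of the meta-operator idea: in the Open Data Hub releases current at the time of this workshop, you declared which components to install through a KfDef custom resource that the operator reconciles. The snippet below is a minimal sketch, not taken from the workshop materials; the component names and manifest paths are assumptions based on the odh-manifests layout.

```yaml
# Minimal KfDef sketch: the Open Data Hub operator watches this resource
# and installs each listed component from its kustomize manifests.
# Component names and paths here are illustrative assumptions.
apiVersion: kfdef.apps.kubeflow.org/v1
kind: KfDef
metadata:
  name: opendatahub
  namespace: opendatahub
spec:
  applications:
    - name: odh-common
      kustomizeConfig:
        repoRef:
          name: manifests
          path: odh-common
    - name: jupyterhub
      kustomizeConfig:
        repoRef:
          name: manifests
          path: jupyterhub/jupyterhub
  repos:
    - name: manifests
      uri: https://github.com/opendatahub-io/odh-manifests/tarball/master
```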
We can also run AI pipelines made of the specific steps we want, for example processing some data and then training the model. Once we are happy with our model, we can deploy it on the platform directly, because Open Data Hub runs on OpenShift, so everything is integrated and actually very easy to use. And once the model is deployed, and you will deploy it yourself, you will be able to see some metrics when we run the application that we build. That was very brief; for more questions, Vasek is here.

Regarding Operate First, likewise, I won't spend too much time. The idea behind it is that there is a lot of open source everywhere today, and one missing part of this open world has been operations. Operate First is a community that wants to open up the world of operations and include everyone from the community: to learn how to maintain applications, how to maintain a cluster, how to monitor the cluster and the applications, and what the best decisions or solutions are for specific tools. Everything is done openly, and there is a community there. As I said, if you attended the talk this morning, you know that you are welcome to join and learn all about operations, and the community is growing. So please, if you want, join us. Here are the links; this presentation will be shared later. Of course, we will use Operate First today: on Operate First we have Open Data Hub running, maintained by the Operate First team, and we will see one example that I will explain in a moment.

Project Thoth is another project in our group. Project Thoth wants to facilitate some of the day-to-day tasks of machine learning developers, and of data scientists in general. We want to provide a way to ease their daily job, focusing in particular on dependencies, which are a very important concern when you build an application. The first thing you do is select TensorFlow or PyTorch, and you want a way to select these dependencies in the best way, because depending on the hardware you are on, or the interpreter you are using, there may be specific dependency versions that can boost performance, for example. Or, if you are interested in security, you can receive a set of dependencies that keeps your application safe, maybe at the cost of some performance. Depending on what you want, the resolver provided by Thoth is able to give you this recommendation. So Thoth, overall, is a recommender system that gives you dependencies based on your needs: you don't just say "I want this dependency", but "I want this dependency to be the most secure" or "the most performant". There are additional layers on top of plain dependency resolution, and in this way we can provide recommendations to users.

The second important thing is the delivery of optimized images: there are pipelines that build container images you can run, for example, on OpenShift or in AI pipelines, depending on what you want to do. And the third is automation: we want to reduce to a minimum the time spent on the kinds of things that can be completely automated and run by bots, so you can focus specifically on your problem.
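To make the resolver idea concrete: Thoth recommendations are typically consumed through the thamos CLI. Here is a minimal sketch, assuming the recommendation type names as I remember them from the Thoth documentation; double-check the flags with `thamos advise --help`.

```shell
# Install the Thoth client.
pip install thamos

# Create a .thoth.yaml configuration for the project (interactive).
thamos config

# Ask the resolver for an advised, locked-down software stack.
# The recommendation type steers the optimization criteria,
# e.g. security versus performance.
thamos advise --recommendation-type security
```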
There are several integrations available, but the one we will focus on today is the one provided in JupyterLab; I will explain it in a moment. Here too, you have links if you are interested.

Then we have Project Meteor, another project that was started in the Open Services group. Project Meteor is basically a proof of concept on top of all these tools that have been created, integrating them with the daily tools used in Open Data Hub. We want to integrate all these pipelines, run them on Tekton, that is, on OpenShift Pipelines, and provide this added value to the work of the data scientists. In particular, you will see that the tool hides everything that is happening from the user: what they want is an environment to run their application. If I have a GitHub URL, for example, I can just enter it in this tool, and after some machinery and magic, you get two URLs. One points to a Jupyter Book, which is the documentation, for example, for your experiment, and we will see the experiment we created today. The other brings you into the JupyterHub environment, where we are going to do the actual work.

And with this, I think that's all. Yes, I took nine minutes, so that's enough. I think we can already move to the practical part; more fun, let's say. If you have any question, please don't hesitate to ask in the chat. I don't always see the chat, but Tom and Vasek do, so if you have any question, please let us know, or feel free to interrupt me.

So, what do we do to start? I created a spreadsheet that I will post in the chat; I don't remember where I put it, let me see. Please, if you want to try this workshop, or the tasks we are going to do today, remember that you can also continue in the coming days or whenever you want, because the workshop stays open and is always available on GitHub. You can run it on Operate First, which is open to everyone. What you need is basically only a GitHub account and a GitHub token, which you can generate easily; if you don't have one, I will show you in a moment how to do that. I will put this in the chat: if you don't have a token, follow these steps.

So how do we start this workshop? Let's say you want to start this tutorial by yourself and you find an interesting project, for example this application that we call the AI DevSecOps tutorial. The tutorial is focused on using most of the tools that are available on Open Data Hub, and the application we are going to build is just MNIST classification. It's a very simple model, but the point is to show how you can use all these tools, how easy it actually is to move across the different phases of a machine learning project, and how you can deploy the application very, very easily.

So we go to Meteor Shower. If you open the spreadsheet, you will see that there are two sheets. The first one focuses on the steps we are going to take; I will tell you which ones you can completely skip. I already marked some, and maybe there is more we can skip if we run out of time. The second sheet shows some of the links that are useful for today, and the first one is the Meteor Shower link. Once you are there, you land in this UI. As you can see, the UI is very simple.
You just take the URL of the project and enter it here. You have some options: you can choose a specific branch of the project, the expiration time, and the components that are going to be created. As I said, there are two URLs you get at the end. In this case, in order not to waste too much time, I already ran this pipeline for you, so you should see that there is a meteor available. Please let me know if it is not; otherwise you can find it another way, but you should be able to see it. If you open it, you see the final status once the pipeline has finished, and you have the two links available: one to open JupyterHub and one to open the website. By "website" we mean the Jupyter Book in this case.

If you open the book, these are all the steps that we are going to follow today, or at least try to follow. If you run out of time, you can always keep going by following the description here. Today I will explain most of these steps, and if you have any problem, remember that you can ask us now, or in the coming days, or whenever you want. You can use, for example, this issue that we created specifically for today: just mention the task you were working on, based on the spreadsheet, and describe your issue, and we will try to help you solve it. Hopefully the workshop will go smoothly and you will be able to finish all the steps, up to the deployment of the application.

So, if you are already here: we saw that you can access the tutorial, which explains the different steps, and there is one link that brings you to the initial environment you need. That is something you don't need to do in this case, because we are using Meteor, and with Meteor everything is already prepared for you. So we just go to this link, and we are redirected to the meteor that we want, in this case this one, with this code; you have the same link here if you don't find the reference. We can just select the container size, medium or large. Medium should be enough, or large; let's go for large. Now the JupyterHub spawner is pulling the image if it is not already available; once the image is ready, it is assigned to our username, and then we enter the JupyterHub environment.

If you want, we will try to follow the steps together. Go ahead if you are faster or if you just want to move on; we will try to follow each of the steps within this hour, but if you are faster and want to continue, please feel free to do that. I will follow the steps myself in the meantime. The environment is almost ready. Regarding the application, maybe we can go ahead a bit while we wait for it to be ready. Yes.

The first part is focused on the notebooks. As you see, here we are in the JupyterHub environment, and you should see a repo that was created with a timestamp in its name. This is to avoid losing anything that is already present in your environment, in the volume that is attached to your image, so you don't overwrite anything. I have two, because I already started earlier, but you should see only one. If you open it, you will see the structure of the project, the same one we were looking at on GitHub.
It was imported here automatically, and this is something you don't have to do yourself. Typically you would clone the repo, and you will see that this is the first step in the instructions, but you don't have to in this case, because we are already in the environment.

One interesting plugin that is available in JupyterLab, shipped here together with Elyra, is the jupyterlab-git extension. Whenever we change something, for example if I open one of these files and save it, you should see that it immediately identifies that there was a change in my project. If I'm happy with it, I can move it to the stage, as you would with Git, then write a description of the change, say "modified the config YAML", and commit. Once you commit, you can push and open a pull request on the repository. There is some information you need to enter for Git to work: you need to provide your name and the email attached to your GitHub account. Then you will be able to push. I won't add mine here right now, but once you add yours, you will also be able to push to the repo and eventually open a pull request from this environment.

Regarding the application itself: as I said, it is an MNIST classification application. If you go to notebooks and then to the TensorFlow MNIST classification folder, you can see that there are two notebooks. The first one is a very simple notebook that you can use to download the dataset; the dataset is available through TensorFlow, so you can run this notebook to collect the data.

Regarding the dependencies: as I said, this is a very important thing that you would typically handle when you start the project, and you also want to make sure that if I give this notebook to someone else, they will be able to run it without any problem. How do we do that? I mentioned that Project Thoth has several integrations, and one of them is called jupyterlab-requirements. It is available in all these environments by default. Whenever you run something like `horus check` (horus is the CLI provided by jupyterlab-requirements, and there are also magic commands integrated in the notebook), you should be able to see that this notebook was created with that extension. That means I can find the requirements, all the packages I used when I was creating this notebook, right here, along with the requirements lock: not just a dependency like TensorFlow, but all the dependencies and transitive dependencies that come with TensorFlow, stored in the notebook metadata. If you want to see them, you can go to Advanced Tools: you can see, for example, the resolution engine that was used to create these dependencies, the requirements that I used (as you see, there were TensorFlow and Boto3), and the requirements lock. With this, I can reinstall the same environment if I want, and there is a specific command for that. You can also see the dependencies that were used, TensorFlow, Boto3, and Matplotlib, directly in the notebook if you prefer. And if you want to install the same environment, you run `horus set-kernel`, and you can also choose the name of the kernel if you want, say devconf-us.
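For reference, a quick sketch of the horus commands mentioned here, as run from a terminal in the same environment. The notebook path and kernel name are just examples, and the exact subcommand set may differ between versions, so check `horus --help`.

```shell
# Verify that the notebook carries dependency metadata created by
# the jupyterlab-requirements extension.
horus check notebooks/tensorflow-mnist/download_dataset.ipynb

# Show the requirements and the requirements lock stored in the
# notebook metadata.
horus show notebooks/tensorflow-mnist/download_dataset.ipynb

# Recreate the exact same environment as a named Jupyter kernel.
horus set-kernel notebooks/tensorflow-mnist/download_dataset.ipynb \
    --kernel-name devconf-us
```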
This will start creating a new kernel that will be used to run this notebook. It will take some time, but don't worry; this is just to show that these tools were created to help you in the development of your project. Dependencies are a very important thing, and we want to make them very easy for users, so they don't need to focus on that. As I said, there are the magic commands, but you can also use this extension through the UI, via this button here, or run it from the JupyterLab terminal, where the command is available directly. If you want to do something specific with your notebook, you will see that all the commands are there; the ones we used, like set-kernel and show, are all available from the command line too. So we provide different possibilities depending on the preferences of the users.

While this finishes (it's a big software stack, so it will take some time), we can go ahead and look at the other steps. The second step: once you get the data, you typically have to process it, if it is not already provided in a nice state. In this case it is quite easy, because TensorFlow already provides a well-defined dataset that is ready to be used, so this part is focused on the training of the model. As you see here, we split the dataset, we create the convolutional neural network, and once we are ready, we run it and test the model that was created. Finally, the model is stored in a specific place, either locally or on the storage that is provided; you will find a minimal sketch of this training step a bit further below.

These two steps... as you see, it finished and already assigned the new kernel. This new kernel has now been created, containing all the dependencies for this step; you don't need anything else in your notebook. Now that it is ready, you can run the notebook, and you can do the same for the other notebook if you are interested.

Once the notebook is ready, or the two notebooks are ready, what you typically do is save everything. As I showed you a few minutes ago, it is very good practice, every time you modify something, or at the end of your day, to save everything and push it to your main source, in this case our GitHub repo. This way you allow others to contribute to your project: if you are working on a specific application, you typically have more people working on it, and they also want to contribute or help with a specific part. So it is good practice to push everything and save everything in your main source.

Another thing I mentioned is that good practices also help with the structure of the project. As you see, this specific project has a structure, and it is not a random one: it is a structure that was created in our team, based on the cookiecutter templates, if you are familiar with those, with some additions depending on the tools that are integrated; for example, there is the Thoth configuration file, and other configuration files specific to the tools we use in our projects. In general, it is a good practice to have this structure. Why is that? Because you typically have different personas working on the project.
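Before moving on, to make the training step described above concrete: here is a minimal sketch of what the two notebooks do, assuming the standard tf.keras MNIST flow and a locally saved model. The tutorial's actual notebooks may structure this differently.

```python
# Minimal sketch of the MNIST download and training steps
# (layer sizes and the output path are illustrative assumptions).
import tensorflow as tf

# Step 1: download the dataset, which TensorFlow provides directly.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Step 2: build a small convolutional neural network.
model = tf.keras.Sequential([
    tf.keras.layers.Reshape((28, 28, 1), input_shape=(28, 28)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train, evaluate, and store the model for the deployment step.
model.fit(x_train, y_train, epochs=2)
model.evaluate(x_test, y_test)
model.save("models/mnist-model")
```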
And typically, if you want to look for something, you don't want to have to think about where to find it. So it is very easy to find things immediately: if I want to have a look at the notebooks, I know they are all stored here, because there is a structure. In this case I'm a data scientist, for example; but if I am a DevOps or AI DevOps person and I want to deploy your application, I want to know, for example, where the manifests are, and that is also something you can find easily if there is a structure that is agreed, of course, among your team. It doesn't have to be the same everywhere; depending on your team, you can adapt it. But it is a good practice to have this, and we typically have it in our projects.

If you have been following the different steps, you have basically already entered Operate First, in a way. This is not something you saw explicitly, but you entered through the Meteor link, so we are already in the Operate First environment. Sorry. So we can say that step was done: we got the environment that we wanted, and we already entered our repo. We don't have to clone it in this case, so you can skip that; as I said, Meteor prepares the environment for you. We already saw the horus commands, and we already talked about pushing the changes when you are happy with them.

So let's say that I now have my different notebooks. What I typically want, once I'm happy with the different steps, is to be able to repeat them, for example if there are pipelines or workflows that need to be rerun. In the MLOps world, typically, there will be a pipeline for retraining the model and redeploying everything once it is recreated, and having pipelines is quite common. If you want to build these pipelines, the Elyra extension is available in these projects on Open Data Hub. Elyra is basically an extension for JupyterLab that has an easy UI for creating these pipelines. We are going to see in a moment how you can create such a pipeline, but we are not going to run it today: I am going to show you the pipeline editor and the different steps you have to take in order to run one, but they would take some time, and there is probably no opportunity today.

So typically, for the pipelines, you just open the editor, and you can add to the pipeline either notebooks or specific Python code if you want, or also R; R is also available through the JupyterLab extension. Depending on how you prepare or create your different steps, maybe the processing step is done in a Python script, but the training is done in a notebook; this is quite flexible. And in our case it is quite straightforward: you can import the steps directly in this editor, and this will be, for example, my initial pipeline that I want to rerun after some time. This is the typical flow I want: the first step downloads all my data and stores it in a specific place, and then I train the model using this data and store the model, of course, in another place that I can reuse later. There are one or two important things that you need to prepare in order to run this pipeline. First of all, you have to bear in mind that each of these steps has some specific properties that you need to set.
Behind the scenes, hidden from users, each notebook will run in a specific container, so you need to select a specific image to run it in. Typically, this image is something that can be created through automated pipelines: as I mentioned at the beginning, all these container images can be created automatically, and I will show you in a moment how we usually do that in our team. We don't have to do it today, because we already prepared all the images just in case; and since we are not running the pipeline today, I am just going to show you how this can be done and why it is important.

So, as I said, we are here, and you can select an image. There are predefined default images available through Elyra, but you can also add your own. To do that, press Ctrl+Shift+C to open the command palette; you have several commands here, and you can, for example, manage your runtime images. These are the images you saw a few moments ago, available here. If you want a new one, you give it a name, for example devconf-us-download, and you select the image. Where do these images come from? In this case, you can find them by going to your Jupyter Book: if you go to "Create AI pipeline", you will see all the steps I am following, so you can repeat everything by following along, or, if you have any problem, of course let us know. You can see the image there: that is the image name we would introduce here, plus other options if you are familiar with them; otherwise just keep the defaults. The new image will then be available here, and of course in a moment it will also show up here, so we will be able to select it for each step.

Besides the image, what else can you select? There are also the resources that you want. If you want to run one specific step, not everything, just that step, with a GPU or more CPU, you can select that. For training, for example, it is very common to use a GPU if you have big models. So you can allocate resources per step: maybe in the first step you don't need a GPU, but the second one requires it, so you can tune it depending on your requirements. And there are other things, for example environment variables, if you have specific ones to provide. You do the same, of course, for the second step. It is highlighting an error here because it is not finding an image, but you can see that this is solved, for example.

As I said, these are the images required for each step, but then how do you actually run this pipeline? What you usually do is go back to Ctrl+Shift+C, where you can manage the Kubeflow Pipelines runtimes. There are no runtimes created at the moment, and there is no default, because it is something you need to create; you also need to choose the engine that is going to run these pipelines. We use Kubeflow Pipelines in our team, and behind it you can select whether you want Tekton or Argo, two other open source tools that can run the pipelines; we usually use Tekton. We are not going to fill this in, because we are not going to run the pipelines, but this is all information that you can find, and it is provided if you want to run them: you just set a name and the different details related to the inputs.
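As a side note, the same runtime configuration can also be created from the command line instead of the Elyra UI. A rough sketch, assuming the option names from the Elyra documentation of that period; all endpoint and credential values are placeholders, so verify with `elyra-metadata install runtimes --help`.

```shell
# Create a Kubeflow Pipelines runtime configuration for Elyra.
# All values below are placeholders, not the workshop's real endpoints.
elyra-metadata install runtimes \
    --schema_name=kfp \
    --display_name="Operate First Kubeflow" \
    --api_endpoint=https://kubeflow.example.com/pipeline \
    --engine=Tekton \
    --cos_endpoint=https://s3.example.com \
    --cos_username=minio \
    --cos_password=minio123 \
    --cos_bucket=elyra-pipelines
```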
But as I said, we are not going to run it, because there is no time. In general, this is just to show the flow: you take your initial project, you know how to enter the environment, you know that you can create your own notebooks and save them whenever you want to your GitHub project. Once you are happy, you want to experiment with these pipelines, so you create this AI pipeline, and as I showed, it is very easy to do. What is important is that you have the images.

How do we create these images? In our team we have CI/CD in place; it is actually another application, called AICoE CI, which is available on most of our projects and can be installed as a GitHub app. The second GitHub app is the Kebechet app, which is one of the bots I mentioned at the beginning: we want to automate most things so that you don't have to focus on them. They should be available in each of these steps; or, well, we can go directly here, where they are running CI. There are two things you need. Once you install these two applications, which are available on the GitHub marketplace, you can add this configuration file, and in it you decide what images, what containers, you want to create. Typically you select the name, the base image that you want to use, and the type of build you want: whether you build from source or, if you are familiar with it, from a container file, where you specify the specific instructions that need to be followed when the container image is created. You can do this for several steps; for example, here we have several steps, and as you can see, there are the two steps that I mentioned before. A rough sketch of this configuration file appears at the end of this section. So these two images are already available, and once the AICoE pipeline creates these images, they automatically become available on the container registry, which is an open container registry, publicly available, and you can see these images there.

Why is this important? It is important in order to trace what you are using in your project. As you see, there are different versions, or tags, named for each of these images, so we can immediately identify where a problem is, which version was working, and which one we need to use depending on the project. This is, of course, a good practice taken from software engineering, but it is also very useful if you do data science, because you always want to be able to repeat these experiments and allow others to repeat them.

So that is it regarding the images; we are basically talking about these steps. Now let's say that you already ran the pipeline and created the model. This model, for example, if it is small enough, can also be stored in your GitHub repo, but typically it will be stored on some dedicated storage, because models are actually not so small in machine learning in general. So what do we do if we want to deploy a model? We basically have everything now: we ran the pipeline, we have the model, and I created an image for that model; you saw that before. Of course, this is typically the expertise of a DevOps person.
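Before going on to the deployment: for reference, the CI configuration file mentioned a moment ago lives in the repository root and looks roughly like the sketch below. This is an illustration from memory only; the key names and values are assumptions, so check the AICoE CI documentation for the exact schema.

```yaml
# Illustrative sketch of an AICoE CI build configuration
# (key names and registry values are assumptions, not the exact schema).
build:
  base-image: quay.io/thoth-station/s2i-thoth-ubi8-py38:latest
  build-strategy: Source          # build from source, or from a Containerfile
  registry: quay.io
  registry-org: example-org       # placeholder organization
  registry-project: elyra-aidevsecops-tutorial
```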
So the data scientists provide the model, and then the AI DevOps person needs to containerize it and create specific endpoints for it, for example. If you use Flask, you need to provide specific logic around the model. We typically provide two endpoints: one for the metrics and one for the predictions, for when we want to predict based on the model; but of course, depending on your project, you can have other types of solutions. A minimal sketch of such a Flask wrapper follows at the end of this section.

Regarding the deployment itself, this is actually not the only solution, because the deployment of a model can be done in several ways. We are working on adding other examples, not just the Flask application; there are other tools available, like Seldon, OpenVINO, KFServing. So there are different solutions, and the deployment can be done even more easily using such a tool, because Seldon, for example, just requires the model URL, so you don't have to do anything else: the model just needs to be provided in a certain way, and you don't even have to create the serving logic. As you see, there are different solutions, and all of them are already available, or are going to be available, in Open Data Hub, and you can use them.

Regarding the deployment of the model that we are going to do today: how do we typically deploy a model in our team? If you are testing the deployment, for example, we usually deploy through manifests. These manifests, as I said before, are available under the path we mentioned, so we know exactly where to find them. There are three specific objects that we use, which are OpenShift objects. In this case, we use a deployment config: the deployment config basically describes what type of deployment you want, including, for example, the specific image you want to use; there is information you can provide that is specific to the Kubernetes and OpenShift world. And once we have these manifests, one way, which is what we are going to do today, is to just apply them directly.

But there is also a way to automate this, so the user doesn't have to do it manually: a completely automated solution, which is the use of Argo CD. Argo CD is another tool that we use. I don't know if you are familiar with it; okay, the page is not available. Argo CD is basically integrated with the GitHub project, so you can use GitHub to deploy everything related to a specific application. In this way, you can track, declaratively, what is in the cluster, and if you modify anything in the GitHub project that maintains this application, you directly modify what is happening in the cluster. So we get more control, integrated with GitHub best practices. There are these two ways, but today we are going to use one of them: directly applying the manifests that have already been created and are available for you.

The only thing I ask you is to modify the three objects that we are going to use today. One is the deployment config: wherever you find the name of this specific object, just add your username. My username on GitHub is PacoSpace, for example; add yours in order to test this if you want. And please do it for all of them; the same thing needs to be done for the other two objects that we use, which are the route and the service.
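Here is the minimal Flask wrapper sketch promised above. It is a hand-written illustration, not the tutorial's exact code; the model path, input format, and port are assumptions.

```python
# Sketch of a Flask wrapper exposing /predict and /metrics
# (assumes a Keras model saved under models/ and JSON input
# carrying a 28x28 grayscale image; details are illustrative).
import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request
from prometheus_client import Counter, generate_latest

app = Flask(__name__)
model = tf.keras.models.load_model("models/mnist-model")
predictions_served = Counter("model_predictions",
                             "Number of predictions served")

@app.route("/predict", methods=["POST"])
def predict():
    # Expect {"image": [[...28 floats...] x 28]} in the request body.
    image = np.array(request.get_json()["image"], dtype="float32")
    probabilities = model.predict(image.reshape(1, 28, 28))
    predictions_served.inc()
    return jsonify({"prediction": int(np.argmax(probabilities))})

@app.route("/metrics")
def metrics():
    # Prometheus-format metrics, the second endpoint mentioned above.
    return generate_latest()

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```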
Once you have these three objects updated, we can basically deploy the application; let's see how to do that directly from the terminal. So, in the terminal: we are on Operate First, and the application will live inside the OpenShift cluster, so we want to connect to OpenShift. How do we do that? If you go to the links that I provided in the spreadsheet, you can see that there are other links: you can, for example, open the OpenShift console, and if you want to find your login command, you can go here and display your token. Once you get your token, you can copy all the commands and just run them here. I won't copy it, because I don't want to show my token, and I'm already connected anyway in this case, so I can already run this.

The project, the namespace, that we are going to use is called, I thought I copied it here, yes, devconf-demo. You can also access it from here if you want to see the namespace. Yes, this is the namespace that I want to switch to. How do we do this with OpenShift? You just go here and run `oc project` with the namespace name... That's weird. Then I defer to my Operate First expert, Tom. "I'm looking into it." Thank you. "Seems like you're not logged in." I thought I did it; wait a second, let me do it again. "Yeah, the project is still there and accessible, and the RBAC hasn't changed." Thank you, Tom. That's why you bring experts into this workshop.

So now we are back, and as you can see, we are already in the cluster. How do we apply those manifests? We have to use the correct path; this is the latest one, manifests. What do we do? We just run `oc apply` on each of them: the deployment config, the service, and the route. Now, if we go to the cluster and look at the pods, you will see that the application is scaling up. And this is basically how you deploy directly from the manifests: as you can see, once the container images are ready, it is quite straightforward and very easy.

While the application is being deployed, what we can do is have a look at testing it. Another important thing, of course, is having metrics for your application. If you want to see them: as I said, there are two endpoints that we created, and if you look at the route of this application, you should be able to see the endpoint called metrics, exposing the metrics of my Flask application. So these metrics are already available, and we are going to test the application now. How do we do that? We go to the notebooks: there is one notebook called "test deployed model", I think. This should work; if not, we will see, and I would need to recreate the environment. We don't have the environment for this specific notebook, so we need to recreate it; we call the kernel test-model.

While this is running, I don't know if you have any questions. Has anyone asked anything in the meantime? Still running? "Francesco, you're not sharing your screen anymore. Is that intentional?" Oh, yeah, I'm sorry, coming; I was actually checking whether there are any questions. Apparently we don't have questions at the moment, so hopefully everything is clear. Can you see my screen now? "Yes." Okay. Meanwhile, the command finished; it added everything that was required. Now, if I want to test, I need my cluster URL, which is available here. I should already be logged in; yes. My pods are there.
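To recap the terminal steps from this demo in one place, here is a sketch. The server URL and token are placeholders, the namespace is the one named in the session, and the manifest filenames are assumptions based on the repository layout.

```shell
# Log in with the token copied from the OpenShift console.
oc login --token=<your-token> --server=https://api.cluster.example.com:6443

# Switch to the workshop namespace.
oc project devconf-demo

# Apply the three manifests: deployment config, service, and route
# (filenames are assumptions; use the ones in the repo's manifests path).
oc apply -f manifests/deployment.yaml
oc apply -f manifests/service.yaml
oc apply -f manifests/route.yaml

# Watch the application scale up and find its route.
oc get pods
oc get route

# Hit the metrics endpoint of the Flask application.
curl "http://$(oc get route <route-name> -o jsonpath='{.spec.host}')/metrics"
```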
Yes, I can take the route. And now I want to test the application, so we use the correct URL of the application. The predict endpoint is correct; yes. Okay, this cell was not updated to show the output, but it is basically reaching out to the endpoint, and from here we should see some metrics. Yes: you see, I was hitting the endpoint with these API calls, and this provided me with the prediction from the model. As you can see, this is an MNIST classification model: my input is this image, and you see that the model correctly predicted that this is a zero. We can change this to see another, slightly more complex example; it's a seven, and as you can see, it is correctly identified.

And that's basically it, I think. We have five minutes left, so it was fast enough, I think, to finish all the steps. I know it was a bit fast, but the idea was to show all these concepts and all the tools that can be used. Hopefully you will get to finish each of these steps; otherwise, remember that you can always reach out to us. I showed you that you can open an issue if you have a problem with any of the tasks: just use that link and open an issue if you have any, and we can help you finish the workshop. And if you have any question in general about any of the tools that you saw today, or if you want to join us in the community and Operate First, just let us know. We will share the slides later, so you will have all the links. In general, I hope you had fun; for us, it's always a pleasure to be at DevConf. And let's see. Thank you very much.