My name is Miriam Fontanis. I am a product manager at Red Hat, in the Application Services BU, and today I'm here to talk about AI-powered applications with OpenShift. So what we'll discuss today is basically how you build an AI-powered application, and the answer is: with OpenShift. The unique differentiator that OpenShift has, where we are finding our sweet spot, is that you can manage the end-to-end life cycle of an AI-powered application. You don't have to divide people across different tools and different platforms; you can give everyone one comprehensive set of tools to connect all of your data and your business events, in real time or, if you're still doing it that way, in batch. You have tooling to train your model, to build applications in any language you choose, and to deploy all of that on a consistent and cohesive platform, which is OpenShift. Then you can start making your predictions and get feedback to compare how you are doing versus, you know, the traditional method. But most importantly, OpenShift already has a set of practices and tools that let you apply that same short feedback loop not only with the application developers, but also with the AI engineers, so everyone can interact happily on the same platform.

So first of all, what is an intelligent application? Well, it's an application where part of the code is written by a human and the other part is created by a model trained on data. Some of the most common use cases for intelligent applications are recommendation engines, virtual assistants, fraud detection, and anti-money-laundering: things that require you to make a decision, and that decision is made best by learning from the data you already have. And why is it so difficult to build this type of application? First of all, you have at least two personas involved that, in this case, are very different.
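To make that definition concrete, here is a minimal Python sketch of the split between the two parts. Everything in it is invented for illustration: the weights stand in for whatever a data scientist would actually train on historical transactions, and the surrounding rule-plus-threshold function is the part a developer writes by hand.

```python
import math

# The "learned" part: a stand-in for a trained model.
# In a real system these weights would come from training on
# historical transaction data; here they are invented for illustration.
MODEL_WEIGHTS = {"amount": 0.004, "foreign": 1.5, "night": 0.8}
MODEL_BIAS = -3.0

def fraud_score(tx: dict) -> float:
    """Logistic score in [0, 1] produced by the model part."""
    z = MODEL_BIAS + sum(MODEL_WEIGHTS[k] * tx[k] for k in MODEL_WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# The hand-written part: plain business logic around the model.
def should_block(tx: dict) -> bool:
    if tx["amount"] <= 0:          # ordinary validation, no model needed
        return True
    return fraud_score(tx) > 0.9   # threshold chosen by the business

print(should_block({"amount": 50, "foreign": 0, "night": 0}))    # -> False
print(should_block({"amount": 2000, "foreign": 1, "night": 1}))  # -> True
```

The point of the sketch is only the division of labor: the developer owns `should_block`, the data scientist owns whatever produces `fraud_score`, and each can change independently.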
You have a data scientist working on the model, who gathers and prepares the data and develops the model. They use tools that are completely different from the tools your regular developer uses, and both of them have to iterate independently on their side of an application that, in the end, is a whole. Maybe you even have three different profiles, with three different sets of tooling and three different processes, building this application: the data engineer, who goes through the process of acquiring the data, cleaning it, checking its lineage, and calibrating it; the data scientist, who owns the model; and the developer, who owns the app. Or you can have even more profiles. When interviewing people who do data science day in and day out, the most common phrase we heard was: "When I finish training the model, I just give it away and off it goes. I don't know how effective it is on real-life data or in production, because I don't know what they do with it."

So we have three different people, or at least two. The other thing is that each one has their own practices and tools: one does MLOps, another one does DevOps. They have to put their code in a source code repository, or their model versions in a model registry. They have to manage the configuration. They have to manage the data. But in the end, they are all applying the same engineering practices to do their jobs.

The application itself is also a bit more complicated than a regular application. With a regular application, you just have to manage the code and the configuration. With an intelligent application, you have the code of the model itself, you have the model that you have to version, you have the data, and you have the application configuration and the application code. So when something changes, or when something breaks, how do you know which of all these variables is the one that broke it? Was it a change in the model? Is it the data that is drifting? Is it a change in the code of the app?
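That "which variable broke it" question is partly answerable with automated checks. As one illustrative sketch (my own, not a specific OpenShift feature), a simple data-drift check can compare each feature's mean in a production window against the training data, measured in training standard deviations; the feature names and numbers below are made up.

```python
import statistics

def drift_report(train: dict, live: dict, threshold: float = 3.0) -> dict:
    """For each feature, flag drift when the live mean sits more than
    `threshold` training standard deviations away from the training mean."""
    report = {}
    for feature, train_values in train.items():
        mu = statistics.mean(train_values)
        sigma = statistics.stdev(train_values)
        live_mu = statistics.mean(live[feature])
        z = abs(live_mu - mu) / sigma if sigma else float("inf")
        report[feature] = z > threshold
    return report

# Invented sample windows: transaction amounts have shifted, ages have not.
train = {"amount": [10, 12, 11, 9, 10, 11], "age": [30, 40, 35, 45, 38, 42]}
live  = {"amount": [95, 102, 99, 101],      "age": [36, 41, 39, 37]}
print(drift_report(train, live))  # -> {'amount': True, 'age': False}
```

A check like this, run continuously against production traffic, is what lets you rule data drift in or out before you start bisecting the model and the app code.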
So it gets more multi-dimensional. Everybody is trying to use AI (well, that's not how this slide is supposed to look, but you get the idea), and the problem is that taking AI out of the lab and putting it into the real world is one of the main challenges. To prove it, we have a landscape full of products that you can use, and these landscapes are getting a little overwhelming. There are lots of videos of people being overwhelmed by them; you now need a landscape for the landscapes, because there are so many. So if you're going to try to do this yourself and set up your own platform, how do you go about it? Which one do you choose?

I like to think of OpenShift as the Mr. Potato Head of platforms, because you have multiple pieces that you can put in, and they all fit well together and work well together. Depending on which ones you use, you get a different Mr. Potato: you can have an intelligent Mr. Potato, or a Mrs. Potato. For intelligent applications, these are the pieces; this is our Mr. Potato. We have support for taking advantage of your GPUs and your public clouds, you have different tools for model serving, and you have tools to integrate your models, one for each of the different personas that we saw. But the most important part is that they all run on the same platform and they all apply the same practices.

So here is a short story of how you would do it. Let's say you have the data scientist, who is developing or training the model using Jupyter. You have the app developer, who is using whatever language they choose, let's say Java, and an IDE. They both commit their code into a Git repository. Once the code is there, you can use S2I (Source-to-Image) in OpenShift to take that code and build it. If it's a model, you put it inside a model registry; if it's the application code, maybe you generate an artifact and put it in an artifact repository.
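The "put it inside a model registry" step is essentially about storing the model artifact together with immutable version metadata, so a deployment can always say exactly which model it is running and which Git commit produced it. Real registries do far more than this; the toy in-memory sketch below, with every name and value invented here, only shows that core idea.

```python
import hashlib
from datetime import datetime, timezone

class ModelRegistry:
    """Toy in-memory registry: each push records the model bytes plus
    immutable metadata (version, content hash, timestamp, Git commit)."""

    def __init__(self):
        self._store = {}

    def push(self, name: str, blob: bytes, git_commit: str) -> str:
        version = f"v{len([k for k in self._store if k[0] == name]) + 1}"
        self._store[(name, version)] = {
            "sha256": hashlib.sha256(blob).hexdigest(),
            "git_commit": git_commit,
            "pushed_at": datetime.now(timezone.utc).isoformat(),
            "blob": blob,
        }
        return version

    def pull(self, name: str, version: str) -> dict:
        return self._store[(name, version)]

registry = ModelRegistry()
v = registry.push("fraud-model", b"serialized weights", git_commit="ab12cd3")
print(v)                                              # -> v1
print(registry.pull("fraud-model", v)["git_commit"])  # -> ab12cd3
```

Because each version carries the commit that produced it, the registry gives the model side the same traceability that the artifact repository gives the application side.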
You also have tools for streaming, like Kafka, or for change data capture, to bring the data in and start the inferencing against the model. And the user, well, their frontend is a web UI. If something goes wrong, we go back to the code; everything is regenerated, and everything is automated using the practices that you already have in place. Because most of these applications are not new applications: you have to fit them into what you already have, which usually is not the prettiest.

So what are some of the benefits of using a platform like OpenShift, the Mr. Potato of platforms? You can deliver code for intelligent applications in a reproducible and reliable way, and you apply the best software practices, in an opinionated way, to streamline your whole life cycle. And I think that's it. Thank you very much, and if you have any questions, I'm all ears.