Hi everyone, my name is Steven Huels and I'm responsible for product strategy for our AI cloud services at Red Hat. Thank you for joining me today to talk about how we're evolving with Kubernetes to embrace ModelOps.

I don't think it'll surprise anyone in the audience today if I were to tell you that data and AI workloads are among the fastest growing workloads on Kubernetes. Of course, you still see the traditional workloads we're all used to deploying: databases, logging, web servers. But increasingly, businesses are trusting their Kubernetes platforms to put these very critical workloads into production. And that includes everything in the lifecycle of building a model, from data acquisition and data cleansing to model development and model productization, as well as the lifecycle management of those models.

And that lifecycle isn't without its challenges. First and foremost, finding the right set of people to work on these is very challenging, right? Finding data scientists who are familiar with containerized model development and pipelines, and who can then hand off those models to application developers for integration into intelligent applications, can be a challenging workflow. And having IT administrators and Kubernetes administrators who are able to support those users, impart best practices, and give them a dynamic and flexible platform is a new paradigm that's still evolving.

In that model, data scientists are always looking for access to the latest and greatest tools, right? They want access to all that innovation that's happening in open source. So being able to provide an environment that gives your users access to those tools, while maintaining the integrity needed for auditability and trust in the models that go into production, is critical. And then the overall complexity of operationalizing AI models makes for a slow and siloed process.
So along the lifecycle of those models, once something's put into production, you need a platform that can rapidly react, whether it's to model drift, changes in the data, or changes in the nature of your user base. So when we talk about operationalizing AI/ML, it's not trivial. It's a DataOps set of tasks, a ModelOps and MLOps set of tasks, that span multiple users across your organization. If you're an IT administrator or Kubernetes administrator, you're providing the platform that's ultimately going to facilitate the set of transitions between these users. And how we put that together is really critical, because if users are constantly jumping between environments or doing manual handoffs, you're never going to get to the full set of automation it takes to really realize the business value from the analytics and models you put into production.

And this is where Kubernetes really provides a leg up over some of the traditional approaches that are out there. Right, with the Kubernetes platform, you get the agility to automate resource management and to respond to triggers and stimuli in the environment that necessitate models being rebuilt, or data being re-aggregated, and then redeployed. You also get the portability that's inherent in the platform. Models might need to be deployed within data centers, or they might target edge components. Building on top of your Kubernetes platform gives you flexibility in what your target platform can be while taking advantage of those centralized resources. So you get your users out from underneath their desktops and onto a shared platform where they have more scalability and flexibility in the environment they're using.

You also get flexibility in how you provision that AI/ML environment. Not all model training exercises are built the same. Some require more resources. Some require specialized hardware.
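As a concrete illustration of that kind of provisioning, here's a minimal sketch of how a training pod might request a specialized accelerator on Kubernetes, assuming the standard NVIDIA device plugin is installed on the cluster. The image name and memory limit are hypothetical; the `nvidia.com/gpu` key is the conventional extended-resource name. The manifest is written here as a Python dict purely for illustration:

```python
# Sketch of a Kubernetes pod spec for a training job that needs one GPU.
# The "nvidia.com/gpu" extended resource must appear under resource limits;
# the scheduler will then only place this pod on a node with a free GPU.
training_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "train-job"},
    "spec": {
        "restartPolicy": "Never",  # batch-style training run, not a service
        "containers": [
            {
                "name": "trainer",
                "image": "example.com/team/trainer:latest",  # hypothetical image
                "resources": {
                    "limits": {
                        "nvidia.com/gpu": 1,   # request one GPU accelerator
                        "memory": "16Gi",      # illustrative memory ceiling
                    },
                },
            }
        ],
    },
}

gpu_count = training_pod["spec"]["containers"][0]["resources"]["limits"]["nvidia.com/gpu"]
```

The same shape applies to other specialized hardware exposed through device plugins; only the extended-resource name changes.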
Kubernetes platforms have the ability to gain access to those specialized hardware accelerators or specialized storage devices, giving you the ability to train multiple types of models and conduct experiments at a more rapid pace. So you get to experiment and arrive at a better answer more quickly for what you ultimately put into production.

And then finally, you get the inherent scalability. Right, if you're working on large data sets rather than sample data, you're going to need more resources, more power, to run those experiments. As things are put into production, inference might change the requirements and demands on the system. Depending on the nature of your business, you might see peaks and valleys in the overall usage of those models, and you want the underlying system to automatically respond to those demands. Kubernetes really gives you that platform to build on top of. These are things that have always troubled the AI/ML world; right, these are the things Kubernetes was essentially built to handle at the end of the day.

And in the end, when we look at ModelOps pipelines, they look very similar to the DevOps pipelines we've come to know and love on top of our Kubernetes platforms. Just like there's application development, test, and deploy, we have the same thing in the model lifecycle. You're building your ML model and running multiple experiments. When you've found a model you think is the one you want to put into production, you validate it, and those that pass validation go into a model registry. Once those models have been published, triggers then manage the deployment of that model as a service for integration into the intelligent application, and they're ultimately deployed out to their final platform. Once they've been deployed, they can be monitored for drift and any other metrics that would ultimately require you to retrain and redeploy.
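The continuous loop just described (build, validate, register, deploy, monitor for drift, retrain) can be sketched in a few lines of Python. Everything here is hypothetical: the registry class, the metric names, and the thresholds are illustrative stand-ins, not a real ModelOps API.

```python
"""Minimal sketch of the ModelOps loop: validate -> register -> monitor."""
from dataclasses import dataclass, field


@dataclass
class ModelRegistry:
    """Stands in for a model registry: only validated models are published."""
    published: dict = field(default_factory=dict)

    def publish(self, name: str, version: int, metrics: dict) -> None:
        # Publication is the event that would trigger automated deployment.
        self.published[(name, version)] = metrics


def validate(metrics: dict, min_accuracy: float = 0.9) -> bool:
    # Gate: only models that pass validation reach the registry.
    return metrics.get("accuracy", 0.0) >= min_accuracy


def drift_detected(live_accuracy: float, baseline: float,
                   tolerance: float = 0.05) -> bool:
    # Simplistic drift check: live accuracy fell too far below the baseline.
    return baseline - live_accuracy > tolerance


registry = ModelRegistry()

# 1. Experiment phase produces candidate metrics (stubbed here).
candidate = {"accuracy": 0.93}

# 2. Validate, then publish; in a real pipeline this kicks off deployment.
if validate(candidate):
    registry.publish("fraud-detector", 1, candidate)

# 3. Monitoring: a drop in live accuracy signals drift and a retrain.
needs_retrain = drift_detected(live_accuracy=0.85, baseline=candidate["accuracy"])
```

In a real pipeline each stage would run as its own containerized step, with Kubernetes handling the triggers between them rather than a single script.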
And the secret in all of this, the real trick, is to have this done in a continuous pipeline. So again, once you've built these models, you're able to react and put them into production continuously. It's challenging enough to put a model into production once, but being able to do it continuously, with confidence and on demand, is a really critical advantage.

And you don't have to take my word for it. We've been doing this with a number of customers on top of Red Hat's OpenShift platform: in the healthcare world, where we're helping with diagnoses, and in the financial services world, where we're helping monitor and analyze large-scale data. There are a number of use cases across industries, and if you'd like to learn more, you can visit redhat.com and read more about these customers.

Thank you for listening. If you're not already using Kubernetes as a ModelOps platform, I'd encourage you to take a look at it. It's a great platform, and I think it'll meet a lot of the needs of your user base. Thank you for your time.