ML Ops. What is ML Ops? ML Ops is the practice of applying DevOps principles to the AI/ML model lifecycle and automating the deployment of AI/ML models into production environments to ensure prediction accuracy. With ML Ops, you can do this at scale, in an iterative way, anywhere, even at edge locations.

Now, consider a bank that is using an AI/ML model to protect its clients from fraud. Creating an AI/ML model to detect fraud is one challenge, but productizing that model creates a whole new set of challenges: you may know the fraud patterns of today, but you can't predict the fraud patterns six months or a year from now. ML Ops synergizes data, process, technology, and people to overcome the challenges of productizing the AI model.

The AI/ML workflow must be built to support the evolving work of data scientists and the models they create. Models degrade as prediction accuracy changes, so they need to be continuously monitored, retrained, and redeployed to maintain prediction accuracy in the production environment. This is how you productize an AI/ML model with ML Ops.

So how can Red Hat OpenShift and community projects like Open Data Hub help with ML Ops? How can they help you scale and deploy anywhere? Let's hear more from my friend Siamak.

Red Hat OpenShift is the industry-leading, Kubernetes-powered open hybrid cloud platform with integrated DevOps capabilities. It provides a compelling common platform where developers, data scientists, and machine learning engineers can, in a self-service fashion, operationalize the entire machine learning lifecycle: preparing data, developing models, training and evaluating them, and iterating until a model can be deployed to production to power intelligent applications.
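The monitor-retrain-redeploy loop described above can be sketched in a few lines of Python. This is a minimal illustration only, assuming a hypothetical accuracy threshold, a toy scikit-learn classifier standing in for a fraud model, and synthetic data simulating the shift in fraud patterns over time:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Hypothetical quality bar: retrain when accuracy drops below it.
ACCURACY_THRESHOLD = 0.90

def train(X, y):
    """Fit a fresh model (placeholder for the bank's fraud classifier)."""
    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(X, y)
    return model

def monitor_and_retrain(model, X_new, y_new):
    """Evaluate on recent labeled data; retrain if the model has degraded."""
    accuracy = accuracy_score(y_new, model.predict(X_new))
    if accuracy < ACCURACY_THRESHOLD:
        # In a full ML Ops pipeline, redeployment would follow retraining.
        model = train(X_new, y_new)
    return model

# Simulate: train on today's fraud patterns...
X_old, y_old = make_classification(n_samples=1000, random_state=1)
model = train(X_old, y_old)

# ...then monitor on later data whose distribution has shifted.
X_new, y_new = make_classification(n_samples=500, shift=2.0, random_state=2)
model = monitor_and_retrain(model, X_new, y_new)
```

In practice the monitoring, retraining, and redeployment steps would be automated by the platform rather than hand-rolled like this, but the control loop is the same.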
OpenShift Builds focuses on enabling developers and machine learning engineers to automatically create images for their ML models and store them in an image registry. Red Hat Quay provides that image registry: it stores the ML model images and distributes them across the organization. OpenShift Pipelines automates the process of preparing data, developing models, and evaluating them, repeating that iteration as many times as needed. OpenShift GitOps focuses on simplifying the deployment of these models across a variety of environments, multiple clusters, and multiple public cloud providers. OpenShift also learns from and integrates with Open Data Hub, a 100% open source machine learning platform for data scientists.

With these features, OpenShift supports data scientists, machine learning engineers, and developers in creating AI/ML models and operationalizing a delivery architecture for productizing them within the environments where applications consume these models, all the while creating visibility into the process across all these roles so they can collaborate throughout and understand the impact of the changes they're making. OpenShift provides a common platform for data scientists, machine learning engineers, developers, and IT operations to accelerate their AI/ML lifecycle across data centers, public clouds, or even at the edge.
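The prepare-train-evaluate-iterate cycle that OpenShift Pipelines automates can be illustrated with a small Python sketch. This is not the actual Pipelines (Tekton) API; the stage functions, the promotion threshold, and the hyperparameter sweep are all hypothetical placeholders for whatever a real pipeline would run:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Hypothetical quality gate for promoting a model out of the pipeline.
PROMOTION_THRESHOLD = 0.85

def prepare_data():
    """Data preparation stage: load and split a toy dataset."""
    X, y = make_classification(n_samples=1000, random_state=0)
    return train_test_split(X, y, test_size=0.2, random_state=0)

def train_model(X_train, y_train, C):
    """Training stage: scale features, then fit a classifier."""
    scaler = StandardScaler().fit(X_train)
    model = LogisticRegression(C=C).fit(scaler.transform(X_train), y_train)
    return scaler, model

def evaluate(scaler, model, X_test, y_test):
    """Evaluation stage: score the candidate on held-out data."""
    return accuracy_score(y_test, model.predict(scaler.transform(X_test)))

# Iterate, as an automated pipeline would, until a candidate clears the gate.
X_train, X_test, y_train, y_test = prepare_data()
for C in (0.01, 0.1, 1.0):
    scaler, model = train_model(X_train, y_train, C)
    score = evaluate(scaler, model, X_test, y_test)
    if score >= PROMOTION_THRESHOLD:
        break  # promote: build the image, push to the registry, deploy via GitOps
```

On the platform itself, each of these functions would correspond to a pipeline task, with the final promotion step handing off to the image build and GitOps deployment described above.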