Customers are exploring open-source solutions to tackle their AI challenges.

Hello, everyone. My name is Neeraj Kuppam. I'm part of the Red Hat consulting team, where I manage our AI services portfolio. I'm here with Eric.

Hi, I'm Eric Erlundsen. I'm a software engineer at Red Hat's AI Center of Excellence.

Today, we want to share with you one customer story where Eric and I helped solve some AI challenges. This particular customer had a small data science team that was facing multiple challenges. The first was siloed data sets strewn across multiple departments, which they were having difficulty connecting. The second was that business-critical projects were taking too long: the data science workflows could not scale, were not efficient, and could not serve the thousands of business users making requests to this small data science team.

They were looking to open-source technologies to solve these problems, because they felt an open-source solution would give them the most flexibility. They were also looking to build a machine-learning-as-a-service platform that would remove the small data science team as a bottleneck for their thousands of business users. In doing this, they discovered the Open Data Hub project, and they reached out to Red Hat to help them explore its capabilities.

In that process, we helped the customer understand the Open Data Hub architecture and the other open-source technologies that can be used with it. We helped them derive business value out of the previously siloed data sets, and then we helped them transform their data science notebooks into repeatable application workflows using OpenShift as the platform. Eric will take you through the technical details. Eric?

Thanks, Neeraj.
So yes, one of the main problems this customer's data science department had was that provisioning new resources to run their data science workloads could take days to weeks, and it involved a lot of IT support. We were able to deploy Open Data Hub on OpenShift to let their data scientists easily, in a self-service mode, spin up their data science resources without involving IT at all on a day-to-day basis.

Eric, we helped the customer with DevOps for machine learning. What does that mean?

Sure. DevOps for machine learning is applying the power of DevOps workflows, which are used for traditional software, to software that involves machine learning; in other words, intelligent applications. By deploying Open Data Hub on OpenShift, we allowed their data scientists and their cluster administrators to work together to produce repeatable software workflows that included all the power of machine learning and AI. Specifically, we deployed Apache Spark and JupyterHub for them, which allowed their data scientists to run Jupyter notebooks backed by Apache Spark clusters spun up on demand in OpenShift. They were able to run these notebooks both interactively and as repeatable scheduled jobs that could access the on-demand clusters as well as their federated data resources.

Eric, can you talk about the AI workshops we did for this customer?

Sure, Neeraj. We did several kinds of workshops on different topics: how to use the OpenShift platform itself, how to use DevOps workflow tooling to build repeatable machine-learning workflows on the platform, and how to use source-to-image (S2I) to turn data science notebooks into runnable images and deploy those images both as microservices and as runnable batch jobs.

Thank you, Eric. So viewers, if you have any questions about how Red Hat can help you with your AI challenges, please reach out to us.
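As a concrete illustration of the notebook-to-job transformation Eric describes, here is a minimal, hypothetical Python sketch (the function names, field names, and file format are illustrative assumptions, not the customer's actual code): the analysis logic from an exploratory notebook cell is moved into a plain function, and a small command-line entry point wraps it so the same code can run interactively in a notebook or as a scheduled batch job.

```python
# Hypothetical sketch, not the customer's actual workflow: refactoring
# exploratory notebook code into a parameterized, repeatable batch job.
import argparse
import json


def summarize(records):
    """Analysis step that originally lived in a notebook cell:
    count records per department across the (formerly siloed) data."""
    counts = {}
    for rec in records:
        dept = rec.get("department", "unknown")
        counts[dept] = counts.get(dept, 0) + 1
    return counts


def main(argv=None):
    """Batch-job entry point: read a JSON-lines file, print a JSON summary."""
    parser = argparse.ArgumentParser(description="notebook logic as a batch job")
    parser.add_argument("--input", required=True, help="path to a JSON-lines file")
    args = parser.parse_args(argv)
    with open(args.input) as f:
        records = [json.loads(line) for line in f if line.strip()]
    summary = summarize(records)
    print(json.dumps(summary, sort_keys=True))
    return summary
```

An image built from a script like this (for example, via OpenShift's source-to-image build process) can then be run on a schedule as a batch job, while the same `summarize` function remains importable from a Jupyter notebook for interactive work.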