and the cloud architecture summit rolls on. Welcome to the breakout session from Red Hat, and let me introduce Abhinav Joshi, senior manager, OpenShift product marketing at Red Hat. Abhinav, welcome to the show. Okay, thanks.

We're really glad to have Abhinav this morning. He focuses on workloads for the OpenShift Kubernetes platform, such as AI/ML, data services, and more. He brings almost 20 years of experience in topics of key interest today: hybrid cloud, AI/ML, data analytics, and software-defined infrastructure. Prior to Red Hat, he held some great posts at VMware and Cisco, among other companies. He brings all that expertise together for us this morning in his session, top considerations for accelerating the AI/ML lifecycle in the cloud-native era. You know, the rapid pace of the move to the cloud is prompting a lot of folks to ask for more self-service. In particular, Abhinav tells us that data scientists and ML engineers say that better access to modeling tools, data, and compute resources would let them more rapidly build, scale, reproduce, and share with peers as well as software developers. So today, Abhinav's going to give us a roadmap on how to get there. He's going to share an overview of today's AI/ML and cloud architecture market. He'll discuss in detail containers, Kubernetes, and DevOps powered by the Red Hat OpenShift hybrid cloud platform, and how that accelerates data science workflows. And just to show it's not just theory, we're going to get some real-world examples. So a jam-packed session from Abhinav; I'm really happy to have him here with us. Before I hand it to him, a quick reminder: you can download his slides, and I really recommend you do; just hit that big red button on the view screen. You'll see the Red Hat team has also shared some other great assets, available for you right now, no registration required. And any questions, we're here to help; just type those into the Ask a Question box. So with all that, Abhinav, let me turn it over to you to tell us about top considerations for accelerating AI/ML in the cloud-native era.

Yeah, thanks a lot, Vance. I'm happy to be here, and I'm looking forward to the session. As Vance mentioned, this is the goal for today, and these are the key topics I'll be spending the next few minutes on. We'll start off with an overview of the market, talk about the design, architecture, and execution challenges, and then briefly cover how containers, Kubernetes, and DevOps can help you speed up AI/ML. Then we'll talk about how the Red Hat portfolio, powered by OpenShift, together with our ISV ecosystem, can help you operationalize AI/ML. And at the end, we'll look at the real-world success stories we see out there.

In terms of the business, what we're seeing is traction across a lot of different industry verticals, be it healthcare, financial services, telco, insurance, automotive, government, and so on. All kinds of organizations are using the power of AI/ML to achieve their business goals and objectives, build out differentiated services, and gain a competitive advantage. At the end of the day, that's helping all these organizations be more competitive and agile, make more money, save costs, and so on. Now, AI/ML has been around for a very long time.
The history goes back to 1956, but it was not until recently that AI/ML became real. The key reasons are the abundance of compute power, the availability of lots of data, and the open source AI/ML toolchains and frameworks. Those three things together are making AI/ML more real now than in the past.

A key topic I want to introduce here is the lifecycle and the key personas involved. Before we get into how cloud-native technologies can help, it's really important to understand the AI/ML lifecycle and the personas involved in it. As you see on the screen, first, at the business level, the goals are set: these are the key goals. The next key step is to gather and prepare a lot of data from various sources, which could be internal or external. That task is primarily done by a persona called the data engineer. The next step in the process is the data scientist, who takes all that data and builds out the machine learning models: develops the model, tests the model, trains the model, and makes sure the model is making the right predictions. Once the data scientist is done building the model, the next step is to get it integrated into the app dev process. That's where they work primarily with the app developers to make sure the machine learning model they built gets rolled into the software application. Ultimately, the application is rolled out into a production deployment, and the process of inferencing starts. Inferencing means the machine learning capability is now part of that application: as it sees new data, it starts making predictions. One key thing about the last step in the process, and it's extremely important, is being able to monitor and manage the models, because as the models see new data, they may start to make wrong predictions, especially if they haven't seen that kind of data in the training phase. That's why we have the loop going back to the top: to retrain the model so it continues to perform well as time goes on. At the bottom of the slide, alongside the data engineer, the data scientist, and the app developer, there is a new persona we're starting to see in a lot of organizations, called the machine learning engineer. Think of a machine learning engineer as a software developer who also has some of the skill sets that data scientists have, so they can work with both the data scientists and the application developers to help get these models rolled out into a production deployment. And at the bottom is IT operations: their job is to provide the supporting infrastructure and application services that all these personas need to get their jobs done at a fast pace.
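To make the data scientist's part of that lifecycle concrete, here is a minimal sketch of the train-evaluate-serialize step, using scikit-learn purely as an illustration; the dataset, the column names, and the file paths are all hypothetical:

```python
# Minimal sketch of the data scientist's step: train a model,
# check it on held-out data, and serialize the artifact that the
# ML engineer and app developer will integrate downstream.
import pandas as pd
import joblib
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical output of the data engineer's gather-and-prepare phase.
df = pd.read_csv("prepared_training_data.csv")
X, y = df.drop(columns=["label"]), df["label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))

# The serialized model is the hand-off artifact that crosses the
# boundary from data science into the app dev process.
joblib.dump(model, "model-v1.joblib")
```

The serialized file at the end is the hand-off point the lifecycle revolves around: it is what gets integrated into the application, rolled out for inferencing, and eventually retrained when monitoring flags a problem.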
The next key thing I want to talk about is the conceptual architecture you really need to make all this real, at a high level. At the top of the slide are the personas we talked about on the previous slide and the tooling, the software, they need to get their jobs done, with some examples shown. It could be a framework like TensorFlow or PyTorch; data scientists love to use Jupyter notebooks; and there are programming languages like Python, R, and so on. You also have to serve the model into production, and that's where something like Seldon is used. So there is a large toolchain, and you have to have an architecture for which tools each persona will use in each phase. The layer below that is the data pipeline and the data services, because data is one of the key components of the architecture. You need a pipeline and a data management strategy; that's where technologies like Kafka and Spark come in, and the data could live in a SQL database, a NoSQL database, or a data lake. The bottom line is that you need a data pipeline and a data management strategy in place, and there are a lot of tools and data services your personas will need access to.

To run this whole toolchain, you need a hybrid multi-cloud platform with self-service capabilities, because at the end of the day your data scientists, data engineers, and app developers should not have to depend on IT on a daily basis to fulfill provisioning requests. They should be in a self-service mode so they can get their jobs done at a fast pace. Some of these tasks are extremely compute-intensive, so your hybrid cloud platform needs integration with hardware-assisted compute acceleration, the likes of GPUs, FPGAs, and TPUs, so those tasks can be sped up in a seamless way without the different personas we saw on the slide having to do a lot of configuration to make it all work. This whole solution should then be supported in a consistent way on all kinds of infrastructure: physical infrastructure, on-premises in your data center, one or more public clouds, or the edge. What we're seeing is that organizations are starting to adopt a distributed architecture, where they may gather and prepare the data at the edge, and do the inferencing at the edge, but build the models and the software applications in one or more public clouds or in their own data center. And the key thing here, as I mentioned earlier, is the persona we want to focus on: the data scientist. A data scientist should not have to spend a lot of time getting infrastructure configured, because if they do, they're spending less time building the machine learning models that provide the differentiation for the business and the capabilities you need to be competitive with a differentiated offering. What they need is simplified access to the modeling tools, the data, and the compute resources; the ability to share their work seamlessly with colleagues and application developers; and the ability to roll their work out quickly, without a lot of back and forth to get it finalized.
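As one illustration of what that self-service, accelerator-aware provisioning can look like on a Kubernetes platform, the sketch below uses the official Kubernetes Python client to request a GPU-backed training pod. This is a sketch under assumptions, not a prescribed OpenShift workflow: the namespace, image, and resource sizes are hypothetical, and `nvidia.com/gpu` is the resource name exposed by NVIDIA's device plugin. On OpenShift, a data scientist would more likely get this through a notebook spawner or a template, but the underlying request is the same idea.

```python
# Sketch: a data scientist (or a tool acting on their behalf) asks
# the cluster for a GPU-backed training pod, with no manual IT ticket.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="train-fraud-model",  # hypothetical job name
        labels={"team": "data-science"},
    ),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="example.com/ds/trainer:latest",  # hypothetical image
                command=["python", "train.py"],
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "4", "memory": "16Gi"},
                    limits={"nvidia.com/gpu": "1"},  # one GPU for this job
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="ds-workspace", body=pod)
```

The point of the sketch is the division of labor: the platform owns scheduling, drivers, and the device plugin, while the data scientist only declares what the job needs.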
So there are a lot of execution issues that organizations may run into. We see that the talent shortage continues to be a big gap in operationalizing AI/ML. The next key challenge is the lack of readily usable data, because the capability you build is only as good as the data it's trained on. If you don't have good data, or enough data, the predictive capability of your machine learning models is going to be only so good. So having the right kind of data is a challenge as well. The next key challenge is the lack of collaboration between teams, because if the teams we saw on one of the previous slides aren't able to interact and collaborate, it slows down the process; we're seeing that a lot of projects can actually fail if the process is not automated and there are a lot of manual, siloed operations. And the last key thing: if your personas, the data scientist, the app developer, the data engineer, have to wait on IT operations to provision the infrastructure, the software toolchain, and the access to data they need, that can also stall the deployment. That's where we believe cloud-native technologies like containers and Kubernetes, along with DevOps operational practices, can solve a lot of these challenges.

At this point, I want to introduce the value of containers, Kubernetes, and DevOps for the AI/ML space. The value is extremely similar to what organizations are seeing on the traditional software development side of the house, where building software at a fast pace is a key need. As we saw earlier, the machine learning lifecycle is an extension of that: first you gather and prepare the data and build out the model, and then you work with the application developers to roll those models out as part of the application. So the agility, portability, flexibility, and scalability that these technologies and operational best practices bring to the table are extremely valuable for AI/ML workloads as well: I should be able to experiment quickly, share my work, and scale out my deployment. All of these are key in the AI/ML space too.

That's where I want to introduce the OpenShift Kubernetes platform. It has all the capabilities we saw on the previous slide and a lot more. It's built on the value of containers and Kubernetes and more, because at the end of the day the cloud-native technologies by themselves may not do the job for you: what you really need is a tightly integrated, enterprise-class platform built on containers, Kubernetes, and a lot more to get the job done. One key value OpenShift brings to the table, in addition to the value of containers and Kubernetes for AI/ML workloads, is simplicity. We have done a lot of integrations with the ISV ecosystem on the AI/ML toolchain side of the house, which help you simplify the deployment and lifecycle management of the containerized AI/ML toolchain. That brings a lot of simplicity, takes a lot of risk out of the equation, and adds security as well, because everything you're doing is fully automated and there is less risk of manual error.
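One way to see why containers map so naturally onto this lifecycle is that the trained model itself typically becomes a small containerized service that application developers call over HTTP. Here is a minimal sketch of that seam using Flask; the route, payload schema, and port are hypothetical, and a production setup would more likely use a dedicated model server such as the Seldon option mentioned earlier:

```python
# Sketch: wrap the serialized model in a tiny HTTP service. This is
# the piece that gets containerized, versioned, and deployed on the
# platform alongside the rest of the application.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model-v1.joblib")  # artifact from the training step

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()
    features = [payload["features"]]  # assumed shape: {"features": [120.5, 0.3, 14]}
    prediction = model.predict(features)[0]
    return jsonify({"prediction": int(prediction)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Because the model travels as an ordinary container image, the same DevOps pipeline that builds and promotes the application can build and promote the model service, which is exactly the agility and portability argument.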
OpenShift provides portability as well, because the solution is supported on-premises and in the cloud, self-managed or consumed as a service, across all kinds of locations and all kinds of clouds; you see a few of those logos at the bottom here. OpenShift also helps with the collaboration aspect of the solution, because we're able to extend DevOps to the entire machine learning lifecycle through the integrated DevOps capabilities in OpenShift. And the last key thing: it has a lot more capabilities in terms of log analytics, monitoring, and integration with all the key storage and networking capabilities, plus serverless capabilities, service mesh, and so on, so you can build applications at a fast pace on one comprehensive platform.

I want to bring back the picture we saw a few minutes ago. This is where OpenShift fits: it serves as that hybrid multi-cloud platform with the self-service capabilities. Then we have technologies in our application services portfolio like AMQ Streams, think of that as the Kafka you need, and Fuse for integrations; both of these can help you build your pipeline and data services. On top of that, we have capabilities like Decision Manager and Process Automation Manager, which can help you roll your models out into a production deployment, and OpenShift Application Runtimes, which brings a lot of runtimes to help you build cloud-native applications, because being able to build a cloud-native app is a key part of the AI/ML lifecycle. At the bottom of the stack are our infrastructure capabilities, all built on the foundation of secure Red Hat Enterprise Linux. We have capabilities on the storage side as well: massively scalable cloud-native storage based on Ceph, and storage for containers, which is where OpenShift Container Storage comes into play. A lot of organizations are also interested in building the solution on top of a programmable infrastructure-as-a-service layer, and that's Red Hat OpenStack; it could be the Red Hat Virtualization solution as well. Then we have a very broad ISV ecosystem that we work with, and we have built integrations and joint solutions with that ecosystem. We have Kubernetes Operator-based integrations with a lot of the AI/ML and DevOps toolchain, which make it easy to simplify and speed up the deployment and lifecycle management of all the key tools your data scientists, data engineers, and software developers care about. And we have built prescriptive reference architectures with our infrastructure partners and NVIDIA, which give you an AI-powered cloud in a box that you can roll into your environment to start working on your data science projects; it simplifies the deployment and lifecycle management of the full solution.
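To ground the pipeline layer for a moment: here is a minimal sketch of the kind of consumer a team might run against a Kafka (AMQ Streams) topic to score events as they arrive, using the kafka-python client as an illustration. The broker address, topic names, and event schema are hypothetical, and in practice this logic would run as a containerized service on the platform:

```python
# Sketch: consume raw events from one Kafka topic, score them with a
# previously trained model, and publish predictions to another topic.
import json
import joblib
from kafka import KafkaConsumer, KafkaProducer

model = joblib.load("model-v1.joblib")  # artifact from the training step

consumer = KafkaConsumer(
    "transactions-raw",  # hypothetical input topic
    bootstrap_servers="my-cluster-kafka-bootstrap:9092",  # hypothetical bootstrap service
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="my-cluster-kafka-bootstrap:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for message in consumer:
    event = message.value
    # Assumed event schema; a real pipeline would validate and transform.
    features = [[event["amount"], event["merchant_risk"], event["hour"]]]
    score = model.predict_proba(features)[0][1]
    producer.send("transactions-scored", {"id": event["id"], "fraud_score": float(score)})
```

Decoupling scoring from ingestion this way is what lets the inference side scale independently of the data pipeline.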
Red Hat also works in the open source community on AI/ML projects, and you see some of the key projects we're involved in here. Based on those projects, we have rolled out a community project called Open Data Hub. Think of it as a complete machine-learning-as-a-service platform that can be built out of Red Hat technologies plus open source tools and frameworks like TensorFlow, Jupyter notebooks, Kubeflow, Spark, PyTorch, and so on.

Red Hat is a strategic partner for AI/ML solutions with a lot of companies, and these are some of the real-world examples we can publicly talk about: organizations that have operationalized the AI/ML lifecycle on OpenShift and the broader portfolio. The first example I want to mention is HCA Healthcare. They built a data science and DevOps platform for machine-learning-powered intelligent applications that give physicians a five-hour head start in detecting and fighting a medical condition called sepsis. Sepsis is a response to an infection that turns the body's immune system against itself, and if it isn't detected early in the process, it can be fatal. The second is Boston Children's Hospital, with a similar use case: they built an AI-powered, data-driven diagnostic solution on top of OpenShift. Then BMW: they built an AI-powered data platform on top of OpenShift that is helping them reduce development time with faster and more accurate driving simulations and data analytics. The last one I want to mention on this slide is ExxonMobil, one of the largest oil and gas companies in the world. They built a data science and DevOps platform on top of OpenShift, and it has helped them build intelligent applications that optimize their business operations. We have a lot more examples across various industries that we can share with you; if you go to openshift.com/ai-ml, there are many more examples of organizations building out AI/ML capabilities on top of OpenShift.

At the end, I want to share the resources that can get you started on the value of cloud-native technologies for AI/ML, the partnerships we have in this space, and the real-world examples. All of these are available on openshift.com, and I'd also ask you to check out operatorhub.io. We also have a lot of videos on YouTube, both high-level value-proposition videos on cloud-native technologies for AI/ML and deep-dive technology videos.

So to summarize: AI/ML can bring a lot of business benefits, but there are execution challenges, as we've talked about in the last few minutes. Cloud-native technologies and operational practices, containers, Kubernetes, and DevOps, can give you agility, portability, self-service capabilities, flexibility, and so on. But by themselves, the technologies are not going to do the job for you; what you need is a comprehensive platform that has all these capabilities and a lot more as a packaged solution. That's where the value of Red Hat comes into play: our portfolio, built on OpenShift, provides a complete platform for all your AI/ML needs.
It allows you to leverage all the benefits of containers and Kubernetes, DevOps practices, and the ISV ecosystem we have in this space to help you speed up your AI/ML lifecycle. So thanks a lot, and I wish you all the best. If you have any questions, this would be a good time to take them. Thanks, and have a nice day.

Abhinav, fantastic overview. A fantastic look at two pretty hot technologies coming together, AI/ML and cloud native, and it was wonderful to see how Red Hat has put it together, with all the open source heritage they've got, in the way of integrating with third parties for a really great ecosystem. Fantastic session. Thanks a lot. I'm glad to be here. Yeah, and we're glad to have you here. And as you might expect, you've triggered quite a number of questions, so with your permission, let's get right to them. Yeah, sure.

So first off, Abhinav, one of the big themes in the comments reflects the fact that a lot of the public cloud companies have, in one flavor or another, come out with an AI offering, AI tooling, or an AI layer. So one of the questions here that caught my eye was: when should an organization look at using OpenShift for AI/ML versus some of these public cloud AI packages?

Yeah, that's a very good question. What we see out there is that if you need to do something quick and your data scientists have to get started with experimentation, some organizations start in the public cloud. But at the end of the day, if you really need a lot of flexibility with the toolchain and you don't want to be locked into a particular cloud provider, or if your organization has guidelines that part of the architecture has to be on-premises and part at the edge, that's where the value of OpenShift comes into play. We're also seeing that a lot of organizations use a wide variety of tools, as we saw on one of the slides, and all of those tools are not going to come from the same public cloud vendor. So if you're looking for that kind of flexibility as well, to be able to use the power of the ecosystem, that's where OpenShift and all the integrations we've done can provide a lot of value in building an AI platform on top of hybrid cloud.

Well, and frankly, the other thing that struck me, Abhinav, as you were going through the partner list and how you work with them: it does seem as though your relationship with them means you've removed a lot of the configuration of those third-party tools, to the point that it might even be plug and play, without a lot of coding going on. Is that a fair statement?

Yeah, absolutely. As we saw, it's a machine learning lifecycle, and you have a lot of people and a lot of different kinds of teams involved. So our job here is to bring a lot of automation into that lifecycle. The key focus we have is to work with a lot of ISVs and partners to make sure that all the key people you have, if they want to build out a platform with these ISV applications, get a very good experience deploying and lifecycle-managing those apps on top of OpenShift. And for a lot of companies, it can take a lot of time to build out the full solution, the software and the hardware pieces.
And that's where all the partnerships we have on the hardware side, and all the reference architectures we've built out, help speed up the deployment and the management of the complete solution.

Excellent, excellent. Abhinav, another theme that came through here was this idea that you're marrying the AI/ML lifecycle with the app, or DevOps, lifecycle, thanks to your OpenShift architecture. I wonder if you could talk a little bit about how those two don't always match up; sometimes they're serial, one after the other, and here it looks like you can do them much more in tandem and aligned, for maybe better applications.

Yeah, in this case, the thing is that all these teams have a dependency on each other. DevOps focuses on the app dev side of the house, and these days there's a term called MLOps: being able to get the data, build the model, and then have the model integrated on the app dev side of the house. So you have to combine both of these, machine learning ops as well as DevOps, and that's where we see a lot of synergy. All the best practices that organizations have learned on the DevOps side can add a lot of value on the AI/ML side of the puzzle as well.

Oh, fantastic, fantastic. And both at design time and at runtime; that was the other message I got from that discussion, Abhinav, right?

Yeah, absolutely, because it's a process, and there is a feedback loop in the process. You have to keep repeating the process to make sure that the models that get built and rolled out into production continue to make the right predictions, because if they start to make the wrong predictions, it can have a lot of business impact: if you give the wrong kind of recommendations to a user, you can lose a lot of customers. So you have to have automation in place across the complete lifecycle, from gathering and preparing the data to building out the machine learning models.
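To make that feedback loop concrete, here is a minimal sketch of a check that could run on a schedule: score the current model against freshly labeled production data and flag it for retraining when accuracy drops below a threshold. The file names, the threshold, and the retraining hook are all assumptions; real monitoring would typically track richer drift metrics as well:

```python
# Sketch: periodic model-quality check that closes the retraining loop.
import pandas as pd
import joblib
from sklearn.metrics import accuracy_score

ACCURACY_FLOOR = 0.90  # assumed service-level target

model = joblib.load("model-v1.joblib")
recent = pd.read_csv("recent_labeled_production_data.csv")  # hypothetical path
X, y = recent.drop(columns=["label"]), recent["label"]

accuracy = accuracy_score(y, model.predict(X))
print(f"accuracy on recent production data: {accuracy:.3f}")

if accuracy < ACCURACY_FLOOR:
    # In a real pipeline this would trigger the retraining job,
    # for example by creating a Kubernetes Job or a pipeline run.
    print("model below threshold -- flagging for retraining")
```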
Yeah, really important, really important. You know, another topic here, Abhinav: I know we talk a lot about cloud native, but your example customers, this question points out, must have a lot of on-premises or hybrid architecture. So the question simply says: can the speaker give some examples of how customers implementing OpenShift for AI/ML were able to bring their on-premises or legacy assets into a cloud-native, OpenShift environment?

Yeah, absolutely. We see a lot of these examples. In some cases, the customer is gathering data from a lot of devices; it could be a lot of test cars out on the streets. You have to move all that data to be able to build your models, do all the training, and so on. We have customers doing the training in the public cloud, where they put the training data in the cloud, run OpenShift on top of it, and do all the modeling there, but the app dev happens on-premises, perhaps for compliance reasons. And then, once they have the app built, they have to roll it back out to the edge. The common theme here is that they should not have to change their infrastructure and platform; if the platform is the same, it actually helps them speed up the process. And that's where OpenShift has been able to provide a lot of value, because customers get a consistent set of tooling, and consistent security as well, and that's a big one: I set my security policies once on OpenShift, and I can make sure that across the lifecycle I have the same set of policies in place. So our customers get a lot of consistency and agility, and they have to learn only a single toolchain on the platform side for the entire machine learning lifecycle.

Excellent, excellent. I see we've got time for one or two more questions. Abhinav, you mentioned the importance of the framework, and Red Hat has certainly put a lot of thought into that. So the question here says: can Red Hat help us with either a reference implementation or other guidance to get to the correct framework, or are we on our own?

That's a very good question. We have consulting services that can help a lot of organizations. For most of the real-world deployments I showed, the customers can use our consulting services to go right from the architecture phase, to building out the design and the solution, to learning how to operate it. So we have a lot of capabilities there.

Fantastic, fantastic. You know, Abhinav, I see time has just flown by; we could easily spend another half hour with you, but I'm afraid we have to close. Before you go, refresh for us again the best ways people can go forward with OpenShift and AI/ML. You had some options, and they all sounded pretty good, but let's just do a quick reprise of those.

Yeah, sure. The best place to start is openshift.com/ai-ml; that has a lot of good links to the reference architectures, the ecosystem, the success stories, and so on. We also have a YouTube channel for OpenShift: if you go to YouTube and search on OpenShift and AI/ML, there's a specific playlist there, so that would be a good place to start too. And then we have an ebook on data pipelines on OpenShift and Kubernetes, which would be another good starting point.

Fantastic, fantastic. And thanks to your team, Abhinav; I'm pretty sure we've got most of those links right here in your breakout room. So for all of you attending, if you check out those links under the view screen, you'll see a lot of those assets available for you right now. With that, let me thank Abhinav Joshi, senior manager of OpenShift product marketing for Red Hat, for a great session about how OpenShift is bringing its technology and partnership approach to AI/ML to streamline your adoption and also make self-service easier. It's been a great session, Abhinav. Thank you very much, and I appreciate you answering so many questions, too. Thanks a lot. Our pleasure. And again, those assets are available under the view screen, but one more note I want to make here: take a look at Abhinav's slides. You can download them with that big red button, and at the end of the slide deck you'll see a terrific Red Hat for-more-information resource slide. With the slide download, all those links will be live and will take you directly to the Red Hat website for some other great resources on cloud, AI/ML, and OpenShift. Thanks again, everyone.