And the Intelligent Data Summit continues, this time with Red Hat and another great view on the state of AI/ML trends. To bring that to us, let me introduce Abhinav Joshi, Senior Manager, OpenShift Product Marketing at Red Hat. Abhinav, welcome. Hey, thanks a lot, Vance. I'm glad to be here. We're really glad to have Abhinav with us too. He focuses on workloads for the OpenShift Kubernetes platform, which these days increasingly includes support for AI/ML. He has almost 20 years of experience in hybrid cloud, machine learning, and data analytics, so we're going to get a terrifically holistic view of cloud, Kubernetes, and AI/ML this morning. Abhinav's session is entitled "Fast-Track AI from Pilot to Production with a Kubernetes-Powered Platform." We're going to hear how Red Hat can let you fast-track AI/ML projects with Kubernetes and DevOps, powered by the OpenShift hybrid cloud platform. We'll also get a look at the ISV ecosystem Red Hat has built, as well as some great customer examples that show how real this is. Just a quick reminder: you can download Abhinav's slides by hitting the big red button right under the view screen. You'll also see that he and his team have brought some other valuable resources; just click the link there and you'll be able to download them right away. And if you have any questions, just type them into the question box. Abhinav, after all that, it's time to hear from you about fast-tracking AI. Thanks again for coming. Thanks a lot, Vance. Let's do it. That was a great introduction from Vance, so I'll jump directly into the agenda for today. These are the key topics we'll cover: we'll start with an overview of the AI market, and then we'll move into the desired architecture and the execution challenges that we're seeing out there.
After that, we'll briefly touch on the value of containers, Kubernetes, and DevOps for taking AI from pilot to production. Then we'll dive into the value of Red Hat OpenShift, the broader portfolio, and our ISV ecosystem for operationalizing AI/ML. And finally, we'll talk about the success we're seeing in the field and the key reasons some of these customers have been successful with AI/ML. First things first: data is a critical business asset, and leveraging the power of AI/ML can help you turn it into data-driven insights and achieve key business goals: improving the customer experience, gaining competitive advantage, increasing revenue, saving cost, and automating business processes. All of these are extremely important these days, especially with the COVID-19 situation going on. The more customers we talk to, the more we hear that AI/ML is top of mind for them, and that they're looking for ways to fast-track AI from pilot to production. That's the key topic for today. These are some of the ways businesses are using AI/ML, and you can see good examples across different industry verticals: healthcare, which is extremely important these days, financial services, telcos, insurance, automotive, and we're also seeing a lot of traction in the federal government sector, among other key verticals. One thing to note is that AI/ML has been around since 1956, but it wasn't until recently that the momentum really picked up. The key reasons are the available compute power, the large amounts of data being generated, and the availability of open source machine learning and deep learning tools and frameworks. Those three things together are making AI/ML much more real now than it was before.
One more concept I want to go over is the AI/ML lifecycle and its key personas, because this is not a project that a single team executes; it involves a full lifecycle and a lot of different personas, and all of these people have to collaborate. What you see at the top of the screen is the typical lifecycle of an AI/ML project. At the business level, the goals are set. After that, you have to gather and prepare the data. Then you build out the machine learning models: you train and test them to make sure they will make the right recommendations once deployed in a production environment. Once the models are built, the next key step is to integrate them into your software development process, because in the real world these models get consumed as part of a software application. Once you've built your application, you deploy it, and then begins the task of inferencing, where your model sees new data and starts making predictions. The last step is to continuously monitor and manage the machine learning and deep learning models to make sure they keep making the right recommendations. This matters because as the models start to see new data, their predictions may degrade, so you have to continuously retrain them so that the right business decisions can be made and your customers stay happy with the recommendations. And throughout this process, there are all kinds of personas involved in the lifecycle.
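The monitor-and-retrain step just described can be sketched in a few lines. This is a minimal illustration, not Red Hat tooling; the sliding-window idea and the 0.8 accuracy threshold are assumptions chosen only for the example:

```python
# Sketch of the monitor-and-retrain loop: watch the model's accuracy on
# new (live) data and signal a retrain when it degrades. The threshold
# value and window contents are illustrative assumptions.

def needs_retrain(predictions, labels, retrain_threshold=0.8):
    """Return True when live accuracy drops below the threshold,
    signalling that the model should be retrained on fresh data."""
    correct = sum(1 for p, y in zip(predictions, labels) if p == y)
    accuracy = correct / len(labels)
    return accuracy < retrain_threshold

# Healthy window: 9 of 10 predictions correct, so no retrain is needed.
healthy = needs_retrain([1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
                        [1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
# Drifting window: only 6 of 10 correct, so a retrain is triggered.
drifting = needs_retrain([1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                         [1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
```

In practice this check would run continuously against labeled production traffic, which is exactly the ongoing task the speaker assigns to the monitoring phase of the lifecycle.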
You have the business leadership, mainly responsible for setting the goals. Then the data engineers, whose key job is to gather and prepare the right data, do the visualization, and make sure the data is good before passing it on to the data scientists, who are mainly responsible for building, training, and testing the machine learning models, and also for monitoring the models in production. Then you have the software developers, whose main job is to work with the data scientists to take the machine learning models, integrate them into application development, and roll them out into production. IT operations is involved throughout, because they have to work with all these personas to make sure everyone has the resources they need, when they need them, which is extremely important for speeding up the entire process. And we're starting to see a new persona called the machine learning engineer. Think of this as a person who sits between the data scientists, the app developers, and IT operations, with a main focus on helping roll AI into production. The data scientists may be focused on perfecting the machine learning model, but they may not be as skilled at getting models rolled out into production at a fast pace, or it may not be part of their job, and the same goes for continuously monitoring models in production. That's where the machine learning engineer comes into play: as the glue between the data scientists and the software developers, making sure the whole lifecycle operates smoothly, models get deployed into production at a fast pace, and retraining happens on an as-needed basis.
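The handoffs across these personas can be sketched as one end-to-end pipeline: prepare, train, evaluate, and then a promotion gate that decides whether the model goes to production. This is a toy illustration, assuming a made-up one-feature "threshold" model and an arbitrary 0.9 quality bar; none of the names here are real Red Hat or library APIs:

```python
# Minimal sketch of the lifecycle stages described above:
# gather/prepare -> train -> evaluate -> deploy gate.
# The "model" is just a learned threshold over one feature.

def prepare(raw):
    """Data engineering: drop bad records, keep (feature, label) pairs."""
    return [(x, y) for x, y in raw if x is not None]

def train(data):
    """Data science: 'learn' a threshold as the midpoint of class means."""
    pos = [x for x, y in data if y == 1]
    neg = [x for x, y in data if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def evaluate(threshold, data):
    """Validation: fraction of records the threshold classifies correctly."""
    correct = sum(1 for x, y in data if (x >= threshold) == (y == 1))
    return correct / len(data)

def lifecycle(raw, min_accuracy=0.9):
    """ML engineering: only promote a model that clears the quality gate."""
    data = prepare(raw)
    model = train(data)
    acc = evaluate(model, data)
    deployed = acc >= min_accuracy  # the 'pilot to production' gate
    return model, acc, deployed

raw = [(0.1, 0), (0.2, 0), (None, 1), (0.8, 1), (0.9, 1), (0.3, 0)]
model, acc, deployed = lifecycle(raw)
```

Each function corresponds to a persona's handoff point, which is why collaboration matters: a failure in any stage, bad data, a weak model, or a missing deployment gate, stalls the whole chain.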
To realize this lifecycle, you have to operationalize a solution architecture, and there is a certain set of capabilities you need to make it all real. At the top is all the tooling that your data scientists, data engineers, and software developers need to gather and prepare the data, build the machine learning models, do the software development, and so on. These days there is also the concept of machine learning operations, MLOps, which applies DevOps practices to the ML lifecycle and helps speed it up. So those are the tools you need. The next key thing is your data. Data has a big role to play, and you need to build out your data pipeline and the key data services required in each phase of the project. That's where things like Apache Kafka come into play for stream ingestion, and you may have databases, both SQL and NoSQL, and you may build out a data lake to store all your data and serve it as needed in the different phases of the project. The next key layer is being able to host all these toolchains on a hybrid, multi-cloud platform with self-service capabilities, because the last thing you want is these personas waiting on IT to provision the resources and applications they need to get their jobs done. They should also get a consistent experience with these software tools whether on-premises, in the cloud, or at an edge location. And because a lot of these tasks are very compute-intensive, hardware-based compute acceleration comes into play; an example is NVIDIA GPUs.
This hybrid multi-cloud platform with self-service capabilities should have seamless integrations with compute-acceleration hardware, so that all these tasks can be sped up as needed without the data scientists and data engineers spending a lot of time making it work. All of this has to be seamless. At the bottom is the infrastructure stack. As I mentioned a couple of minutes back, the toolchain and the software-defined hybrid cloud platform with self-service capabilities should work seamlessly regardless of whether you're running on bare-metal servers on-premises, on virtual machines, in a private cloud, in any of the public clouds, or at an edge location. So at a very high level, those are the key capabilities you need to make AI/ML real, along with the necessary people and process transformation, which is extremely key as well. One of the reasons we highlight this is because, at the end of the day, what do the data scientists care about? Data scientists are one of the key personas in the AI/ML lifecycle. What they really need is self-service: a portal where they can access all the tools they need to build out the models and reach the different data sources. Once they're done with their modeling job, they should be able to share the work and collaborate with colleagues and software developers to roll the models out into production, and they should be able to make tweaks quickly and build new models, because, as I mentioned earlier, building models and rolling them out into production is a very iterative and compute-intensive task.
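The stream-ingestion pattern mentioned above, Kafka feeding a data lake that then serves each phase of the project, can be mimicked in a few lines. Here `queue.Queue` stands in for a Kafka topic and a plain dict stands in for the data lake; this is a sketch of the pattern only, and a real pipeline would use an actual Kafka client against a broker:

```python
import json
import queue

# A stand-in for a Kafka topic: producers publish JSON events and a
# consumer drains them into a tiny "data lake" keyed by event type.
topic = queue.Queue()

def produce(event_type, payload):
    """Publish one event onto the (simulated) topic."""
    topic.put(json.dumps({"type": event_type, "payload": payload}))

def consume_into_lake(lake):
    """Drain the topic and land each event in the lake by type."""
    while not topic.empty():
        event = json.loads(topic.get())
        lake.setdefault(event["type"], []).append(event["payload"])

produce("clickstream", {"user": 1, "page": "/home"})
produce("clickstream", {"user": 2, "page": "/pricing"})
produce("sensor", {"id": 7, "temp_c": 21.5})

data_lake = {}
consume_into_lake(data_lake)
# data_lake now serves downstream phases: feature prep, training, etc.
```

The point of the pattern is that producers and consumers are decoupled: data engineers can land raw events continuously while data scientists read from the lake on their own schedule.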
At the end of the day, they want to care as little as possible about the infrastructure, as long as it provides flexibility, portability, scalability, and agility. To them, the infrastructure should be an invisible thing at the bottom that just works. All of this is good, but there are several execution challenges that organizations have to solve for, and what follows is based on what we're seeing in our customer base. The first is the shortage of skills: the lack of the key talent you need to make all this real. We believe automation can partly help you address the talent challenge. The next is the lack of readily usable data. You have to collect a lot of data, but it has to be good, meaningful data with all the key attributes you need, and that can be a daunting task, because machine learning models are only as good as the data used to build and train them. Next is the lack of collaboration between the various teams we saw on the earlier slide: if those teams aren't collaborating, projects stall. There are statistics out there showing that more than 80% of the models that get built never make it into production, because of lack of collaboration between teams and siloed operations. You have to solve for that challenge as well, and that's where we believe DevOps capabilities and a Kubernetes-based platform can help. The last challenge is the unavailability of the infrastructure and software tools that data scientists, data engineers, and software developers need.
If they can access all of this in a self-service way, that goes a long way toward speeding up the whole process. We believe cloud-native technologies like containers and Kubernetes, and operational best practices like DevOps, can help solve a lot of these challenges. That brings me to the value of containers, Kubernetes, and DevOps for fast-tracking AI projects from pilot to production. The values you see on this slide are not very different from what we see in a typical software development project; the same values apply directly to the machine learning lifecycle. They provide the agility you need, the portability to build once and deploy anywhere, the ability to share with the team and the software developers, and the flexibility and scalability to respond quickly by automating compute resource management and using the self-service capabilities you get with containers and Kubernetes. Portability is extremely important: you should be able to develop a model once and deploy it anywhere, on-premises, at the edge, or in public clouds, and provision and scale up, down, in, and out as and where needed. All that flexibility and scalability helps speed up the AI/ML lifecycle. But at the end of the day these are concepts, and concepts alone are not enough. What you really need is a Kubernetes platform that has all these capabilities and more. That's where the value of the Red Hat OpenShift Kubernetes platform comes into play. It provides all the key capabilities you get with containers, Kubernetes, and DevOps best practices, and on top of that we've done a lot of value-added integration that can further speed up AI/ML projects.
The first key thing is that we've done a lot of integrations with the key ISVs on the AI/ML side, which helps you speed up the deployment and lifecycle management of the key tools your data engineers, software developers, and data scientists may be using. All of this is done with Kubernetes Operators, to automate the deployment and lifecycle management of the AI/ML toolchain. The next key value proposition is the portability and consistency you can achieve by running your AI/ML toolchain and intelligent app development on top of OpenShift, because it has both cloud-hosted and self-managed options that can be consumed in a data center, in a public cloud, or at edge locations of your choice, in a seamless way. The third key thing is that OpenShift also includes DevOps capabilities: pipelines with Tekton, support for Jenkins and Kafka and so on, and serverless with Knative. It gives you a complete platform that helps you extend your DevOps practices across the entire machine learning lifecycle and enables collaboration across the different teams participating in the process. And the last one is that we've added the capabilities you'd expect from a typical enterprise platform: the monitoring and automation you need on a Kubernetes platform, log analytics, serverless capability, VM capability, container capability, all the key things you typically want. All of this is part of OpenShift, and all of it is based on open source upstream projects, so there is no vendor lock-in if you go with the Red Hat OpenShift Kubernetes platform to run your AI/ML workloads and intelligent applications.
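In a CI/CD pipeline like Tekton or Jenkins, the "roll models into production" step often reduces to a promotion gate: compare the candidate model's metrics against the model currently serving, and promote only on improvement. A minimal sketch of such a gate follows; the metric names, the tolerance, and the latency rule are assumptions for illustration, not a prescribed OpenShift mechanism:

```python
def should_promote(candidate, production, tolerance=0.01):
    """Promote only if the candidate beats production by more than
    `tolerance` on accuracy without badly regressing on latency.
    A pipeline task (Tekton, Jenkins, ...) would run this as its gate."""
    better_accuracy = candidate["accuracy"] > production["accuracy"] + tolerance
    acceptable_latency = candidate["latency_ms"] <= production["latency_ms"] * 1.2
    return better_accuracy and acceptable_latency

production_model = {"accuracy": 0.91, "latency_ms": 40}
candidate_model = {"accuracy": 0.94, "latency_ms": 45}

promote = should_promote(candidate_model, production_model)
```

Encoding the gate as code is what lets DevOps automation, rather than a manual handoff between data scientists and operations, decide when a model ships.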
On top of OpenShift, we have capabilities in the rest of our portfolio that help with the other pieces of the architecture shown on this slide. For building out the data pipeline and data services, we have Red Hat AMQ; think of it as Kafka on Kubernetes. For helping you roll AI/ML models out into production decisions, we have solutions like Process Automation Manager and Decision Manager. For building software applications with AI/ML capabilities, we have the Red Hat OpenShift application runtimes. And at the bottom of the stack, for the software-defined infrastructure, we have the secure and trusted Red Hat Enterprise Linux as the container host, and for storage we have OpenShift Container Storage as well as Ceph storage, which can help you with the data lake and provide the container storage capabilities you need across the entire lifecycle. What you see here is the depth and breadth of the ISVs we're working with on integrations and reference architectures. A lot of the software ISVs on the AI/ML side already have software integrations with OpenShift using the Operator Framework, to speed up the deployment and lifecycle management of the key AI/ML toolchain your data scientists and data engineers may be using. We've also built integrations and reference architectures with infrastructure partners that give prescriptive guidance on how to design and deploy a full AI/ML solution at a fast pace, because we've done a lot of work building out best practices and capturing them in those reference architectures.
On top of that, we participate in open source community projects, because open source is at the heart of Red Hat, and everything we do on the AI/ML side is done in the open. We work on a lot of the projects you see here: TensorFlow, Jupyter notebooks, Kubeflow, PyTorch, Spark, and so on. Those are some of the key upstream projects we participate in. Based on these, we've also built some downstream projects, and the one I want to highlight is the Open Data Hub. Think of it as a machine-learning-as-a-service platform based on OpenShift, Ceph Storage, Kafka, JupyterHub, Spark, and so on. What it showcases is how you can build a complete AI/ML architecture using open source tools and technologies. We also contribute to OperatorHub.io, the home of many community Operators built in coordination with different ISVs. Both the Open Data Hub and OperatorHub.io show that you can do a lot with open source technologies and speed up your projects on the AI/ML front. The last thing I want to touch on is the key wins we've had in this space. We work with a lot of organizations across different industry verticals and help make them successful, and the ones you see here are public success stories on the AI/ML front. On the healthcare side, HCA Healthcare built a data platform to help save lives, and they're now using that platform to fight sepsis as well as COVID-19. Not too long ago they did a public webinar where they talked about how they used OpenShift to build the AI/ML and data platform that helps them fight sepsis and COVID-19.
On the automotive side, we have customers like BMW Group, who have used OpenShift to build a data platform to improve the customer experience; that platform is also helping them speed up their autonomous driving projects. All of these links are real, and you can learn more by clicking on them. On the financial services side, we have RBC and Discover Financial Services, who have spoken at KubeCon and at OpenShift Commons gatherings about their use cases and how they've used OpenShift to speed up their AI/ML projects. The last one on the slide is in the oil and gas space, a famous company, ExxonMobil, which has been able to democratize data science across all phases of oil and gas exploration. We have a lot more deployments in the field, but these are some of the key wins and real-world examples I wanted to show on the screen, and all of these organizations have achieved real-world business benefits. Okay, and the last key thing is the resources that can help you get started fast-tracking AI projects from pilot to production. We have a website with links to everything I talked about. We also have an e-book: if you're in the beginning phases of your project, there's an e-book we just built on the top considerations for building production-ready AI. It's short, about eight pages, so go through it. And we have a lot of videos as well, both high-level videos and detailed solution-architecture videos, that show you how to take AI from pilot to production using Red Hat OpenShift and the broader portfolio.
To summarize: as we saw at the beginning of the talk, AI can bring a lot of benefits to a business, but there are execution challenges that can stall projects and have a business impact. Cloud-native technologies like containers, Kubernetes, and DevOps have the potential to give you the agility, self-service capabilities, portability, flexibility, scalability, and automation you need to speed up AI/ML projects. But at the end of the day, what you really need is a full Kubernetes-powered platform, such as Red Hat OpenShift, that helps you leverage all the key benefits of containers, Kubernetes, DevOps, and the key ISV ecosystem to speed up AI/ML projects and the delivery of AI-powered applications into production, so that you get all the benefits you expect from an AI/ML solution. With that, thanks a lot for listening. Please feel free to check out the key resources listed at the bottom of the screen that can help you get started, and if you have any questions, feel free to reach out to me; that's my email on the slide. That concludes my slides, so let me turn it back to Vance and see if we have any questions. Fantastic, fantastic. Abhinav, a really great and well-thought-out discussion of this whole AI/ML ecosystem, let's call it, with so many different stakeholders and technologies that can help us; a really, really great look. Thanks a lot, Vance. I truly appreciate it. You know, you make a good point, especially toward the middle and end of your session, Abhinav, about the idea that the whole goal of an AI/ML project should be that it's production-ready and actually gets deployed and helps the business. A lot of folks these days spend an awful lot of time talking about the front-end development or the model design.
Let's extend this to production-ready. What are some best practices Red Hat has learned to help make sure all that front-end work and investigation actually does turn out to be production-ready? Yeah, that's a very good question, and based on all the deployments, here's what we've learned. The first key thing is that you need sponsorship from the top levels of the organization, because if you don't have sponsorship and leadership doesn't see the value in the project, the project can stall. The next key thing is to identify the key stakeholders on the business side who can give you feedback as you execute and who are part of the project, because at the end of the day they are the key stakeholders, especially the people who will be using the solution. The example I want to highlight is HCA Healthcare: they partnered with doctors and nursing staff right from the beginning of the project, because at the end of the day the AI-capable software app they built is used by the nursing staff, who have to have confidence in the solution being built. The third key thing is training people and getting them comfortable with the solution, because you may have the best software in the world, but if the people are not trained and you haven't done the people and process transformation, that can stall your project as well. And the good news is that based on everything we've learned from the many deployments in the field, we have very mature consulting services that can help on AI/ML projects and take you from pilot to production at a very fast pace. So we have the capabilities in terms of the software, the ecosystem, and the key consulting services you need to make AI/ML real. Great answer.
In fact, I think another question here wants to tap into your brain on the idea of how to do AI projects better. It says: I'm really impressed with how Red Hat has studied how AI/ML projects move across different personas. What are some of the best features in OpenShift that support a handoff in the AI/ML lifecycle? Yeah, so we have a lot of key capabilities in OpenShift; it's a lot more than basic containers and Kubernetes. We've integrated a lot of DevOps capabilities: we provide Tekton natively in OpenShift as a CI/CD pipeline, which can make it easy to roll AI/ML models into production. At the same time, we have integrations with app-dev capabilities, so that using OpenShift runtimes you can build the software apps in a seamless way. We also provide integrated automation capabilities, monitoring capabilities, and log analytics. Those are the key capabilities that can help you across the entire machine learning lifecycle and speed up the process of taking AI models into production. We also have a lot of capabilities on the services side. We have data scientists and folks who have been there and done that, including the folks who support the internal AI/ML solutions we run at Red Hat. So as part of our services, we're able to bring in the right people who speak the right language and can talk to all the key personas in the organization. And in the different success stories I showed on the slide, our consulting team has been a key part of all those projects. The bottom line is that we can bring in all the key capabilities customers need to make an AI/ML solution real. So beyond simply getting from pre-production to production, you actually have support services or consulting services, Abhinav, that will help people design a pilot correctly?
Yeah, the pilot, and also rolling the pilot out into a production environment. We have a number of success stories there as well, where our team helped customers make an AI/ML solution real, right from the concept phase all the way through deployment and, if need be, ongoing management. Fantastic, fantastic. I know that some of these steps can be a bit intimidating, depending on how well people are staffed or how much they know about DevOps, so this has been great so far. Abhinav, I know you've given a lot of thought not just to the lifecycle but to the stack of technology you showed, and there's a question here about that: can the speaker talk a little bit about the roadmap for the future to fast-track AI projects? Yeah, that's a very good question, and there are different aspects to it. Our goal is to continue to drive innovation in the open source community with projects like Kubeflow, the Open Data Hub, and so on, so that organizations can use the power of open source to make AI/ML real. The next key part of our strategy is to continue to work with the ISV ecosystem and build out the key software integrations that simplify the deployment and lifecycle management of the toolchain your software developers, data scientists, and data engineers need to make AI/ML real. The third key thing is to continue to add capabilities to our products and make sure that all the value of DevOps we talked about can be extended to the entire ML lifecycle; having all those key capabilities is a key part of our strategy. The fourth is to continue to mature our services offerings, because as we said, it's not just products; you have to bring those capabilities as well, to help organizations throughout the project. And the last one is that we continue to work with our hardware partners as well.
That way we can bring complete end-to-end solutions to market that include both the software and the hardware. At a very high level, those are the key things we're focused on. That's a lot to be focused on, so thank you, Abhinav; that was really fantastic. You know, I see time is just about up, but before you go, Abhinav, we'd like to have you give folks a next-step opportunity. I can tell from the job titles of the people attending with us today that we've got a great group of AI/ML professionals, and we've also got folks who have been working in DevOps and Kubernetes, and maybe even OpenShift, for years. Maybe give us a suggestion for how we could bring these two kinds of professionals together with a next step from Red Hat. Yeah, what I would say in terms of a next step is to check out the resources listed at the bottom of the page, because there we have a lot of good examples, and whatever phase of the journey you're in, there's a link that can help you. And then feel free to reach out to us; I have my email on the slide as well. If you have any questions, reach out and we can help you make AI/ML real in your environment at a fast pace. Yeah, we love that, we love that. And in fact, you mentioned partners before; just a quick note here. We're proud to say, Abhinav, that we've got several of your partners here with us this morning: folks from H2O.ai, Cloudera, and MemSQL as well. So that's another great way to bring together all the knowledge you presented for us this morning, I think. Yeah, absolutely, definitely agree on that. Excellent, excellent. Abhinav Joshi, Senior Manager, OpenShift Product Marketing at Red Hat. We're really glad to have had you here. Thanks again for bringing together the worlds of AI/ML and cloud-native development. It's really been a fantastic, eye-opening session, actually, for me.
And I could tell from the questions that it really tapped into a big news trend going on in the intelligent data sector. We appreciate your time. Thanks again. Yeah, thanks a lot, Vance. Abhinav mentioned quite a number of resources, so let me quickly review. We've got some of them right here in the breakout room: click the link you see below and you'll get directly to that asset. We've also got a copy of the slides; the big red button will get you those. And there's so much going on at Red Hat in the AI/ML and intelligent data space that we didn't have room for everything, so here's a slide that will take you directly to the Red Hat website. Once you download Abhinav's slides, the links will be live, and clicking any of them will take you directly to that asset. Thanks again, everybody.