Hi everyone. This is Abhinav Joshi. I'm the Product Marketing Lead for AI/ML on OpenShift at Red Hat, and today we'll be talking about how you can harness the power of AI/ML with the Red Hat portfolio of technologies and integrated solutions. Feel free to interrupt me if you have any questions. This is more of a strategic discussion, so we won't go deep technical today, but if you have deep technical questions, keep them handy and we'll address them as much as we can. If not, we'd definitely like to do a follow-up where we can go deeper into the technical details of our solutions and address all of those questions there. Before we get into the meat of the discussion, I want to start with a summary. What we're seeing is that AI/ML-powered intelligent software applications are helping our customers achieve their key business goals and objectives, and that's happening across all kinds of industry verticals. At the same time, there are a lot of execution challenges, and these have to do not just with the various tools and technologies you have to put together, but also with the people and process transformation required to make AI/ML real. If you're not able to execute on those people, process, and technology changes, a lot of projects stall. This is the same as with digital transformation and cloud-native application development: if you don't focus on people and process transformation in addition to technology, it can negatively impact the return on the investments you've made. And finally, the good news is that Red Hat can help. We have the capabilities to help you speed up your AI/ML initiatives.
That will let you deliver intelligent software applications faster, differentiate in the market, and make your customers happy. So this is the agenda for today. First we'll talk about the potential of AI/ML and its challenges. Then we'll talk about why containers and Kubernetes for AI/ML: as you may know, containers and Kubernetes provide great value in speeding up software development projects, and the same value applies to AI/ML projects as well. Then we'll jump into why Red Hat, walk through our portfolio, and share some of the success stories we've had in this space. At the end we'll cover next steps: where we go from here in terms of the next level of discussion to help make AI/ML real in your environment. So let's first talk about the potential of AI/ML. We always like to start with this slide to level-set on the different terms and technologies; you may have heard of AI, ML, DL, and so on. At the most basic level, the term artificial intelligence is mainly used by the business community: it's about how machines can imitate intelligent human behavior, making predictions on behalf of humans and helping with business decisions. The term machine learning is mainly used by the technical community, and the main idea is giving computers the ability to learn without being explicitly programmed to do so. Deep learning is a subset of machine learning where you use layers to progressively extract higher-level features from the raw input; use cases for deep learning include computer vision, image recognition, video recognition, and so on. Is that how you see AI, ML, and DL as well?
Okay, so in terms of key goals and objectives: why would you make a big investment in your AI/ML projects? In partnering with Red Hat, these are the key things we can help you achieve with your AI/ML initiatives. AI/ML can help you serve your customers better, it can help you be more competitive, it can help you increase revenue, and it can help you reduce cost. The question for you is: do you see the value AI/ML brings to the table the same way? I'd like to open it up here and see if anything is missing in terms of the top-level goals and objectives you're trying to achieve with AI/ML. Now, moving one level down from the business goals: to increase revenue, save cost, and serve your customers better, you have to work on specific initiatives in your lines of business. What you see on this slide are some of the top initiatives we're seeing across different industry verticals where AI/ML is becoming a key part of the program. In healthcare, being able to diagnose a disease quickly, before it negatively impacts the patient, is key, and we're seeing a lot of Red Hat customers starting to use AI/ML to speed up diagnosis and treatment for patients and to improve clinical efficiency.
In financial services, customers are implementing AI/ML to help with fraud detection, targeted offers, predicting who's going to default on payments, and finding the customers whose credit limits should be increased so they can spend more on their cards and the bank earns more in fees and revenue. In automotive, there's a lot of buzz: autonomous driving is becoming a big thing, along with predictive maintenance, quality control, and so on. So across all the industry verticals we're seeing a lot of traction. What are you seeing in terms of the top projects where you're looking to benefit from AI/ML? I'd like to open it up for discussion at this point, so we get a good idea of the key initiatives across your lines of business where you want to implement AI/ML. Now, the data you see here comes from customer research and from third-party analysts. What we're seeing is that enterprises, and even governments, are all investing in platforms for AI/ML. Why now? AI as a field has been around since 1956, but now we have an abundance of data (data roughly doubles every 18 months), dense computing power, and the rise of the open source machine learning tool chain. All of this is helping make AI/ML real. The $13 trillion figure you see here comes from a 2018 McKinsey study, which estimated that AI/ML has the potential to deliver $13 trillion in global economic activity by 2030.
So AI/ML can help businesses be more successful and generate more revenue, as the figure on the right-hand side shows. We've also heard from our friends at 451 Research that 48% of the more than a thousand customers they interviewed said their current AI infrastructure will not be able to meet the demands of their AI projects. The reason we share these numbers is that AI/ML is real, and you have to think holistically about your solution stack: not just the AI/ML tool chain layer, but the cloud platform and infrastructure layers as well. That same 451 Research study (we cite the source at the bottom of the slide) also reported that you have to think holistically about infrastructure across the phases of making AI/ML real: data preparation and management in the initial phase, then model training, and then, once your model is built, putting it into production and doing inferencing. Customers report that all of these phases are compute-intensive, depending on the use case, so you have to think about your infrastructure platform in a very holistic way when executing AI/ML projects. Are you thinking the same way about the solution architecture you need for the different phases of your AI/ML projects? Now let's briefly talk about the AI/ML life cycle and the key personas involved. A lot of you may be familiar with this, but the goal of this discussion is to get us all on the same page about what the AI/ML life cycle is and which personas are involved in it.
You start with setting the goals: what are the goals of my project? Once you've set the goals, the next step is to gather and prepare the data, so you have plenty of data to build machine learning algorithms that can make correct predictions when those algorithms, most commonly known as models, get deployed into production. Once the data is gathered and prepared, the next step is to develop the machine learning models: you shortlist a few models, train them, test them, and from those pick the one or two you want to deploy into production. That's where the next step comes in: deploying the machine learning models into the app dev processes. After that, you implement the intelligent application powered by the model in production and start the inferencing phase, where your model makes predictions based on the new data it sees. Finally, you have to continuously monitor and manage your model, because as it starts to see new data, you may run into situations where it makes wrong predictions because it hasn't seen that kind of data before. That's the loop you see at the top: you continuously retrain the model on the new data it experiences in production, to keep the model fresh and making the best predictions possible. This is how we see our customers thinking about the AI/ML life cycle. Do you see any similarities or differences with how you're thinking about it in your environment? Okay, now let's talk about the key personas responsible for the different phases of the life cycle.
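The train, deploy, monitor, retrain loop described above can be sketched in a few lines. This is a deliberately toy example, not anything from the Red Hat stack: the threshold "model", the drift cutoff of 0.9, and the synthetic data are all invented for illustration.

```python
def fit_threshold(data):
    """Toy 'model': the decision threshold is the midpoint between the
    largest negative example and the smallest positive example."""
    neg = max(x for x, y in data if y == 0)
    pos = min(x for x, y in data if y == 1)
    return (neg + pos) / 2

def predict(threshold, x):
    return 1 if x > threshold else 0

def accuracy(threshold, data):
    return sum(predict(threshold, x) == y for x, y in data) / len(data)

# 1. Gather and prepare data, 2. develop/train the model.
history = [(x, 1 if x > 5 else 0) for x in range(11)]
threshold = fit_threshold(history)          # 5.5

# 3. Deploy, then monitor: in production the data distribution shifts.
fresh = [(x, 1 if x > 8 else 0) for x in range(11)]
if accuracy(threshold, fresh) < 0.9:        # drift detected
    history += fresh                        # 4. loop back: retrain on
    threshold = fit_threshold(history)      #    old plus new data
```

The point is only the shape of the loop: a model that was accurate at training time degrades on fresh data, a monitoring check catches it, and retraining on the combined data set closes the loop.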
Like in any other project, the key business goals are set by the business leadership at the topmost level. For gathering and preparing the data, there's a persona called the data engineer, and you may have those as well; data engineers are responsible for gathering and preparing the data. After that comes the role of the data scientist, who is mainly responsible for developing the machine learning models and for working with software developers to deploy those models into the application development, or DevOps, processes. That's where you see the role of the app developer, or software engineer if you will, whose job is to work very closely with the data scientists, take those models, build the AI-powered intelligent applications around them, and get them deployed into production. They also have to work hand in hand with the data scientists in the monitoring and management phase, because data scientists have to know how the model is behaving in production and which parts of the model have to be retrained. And then there's IT operations, whose goal is to be involved throughout the life cycle, because for IT operations all the personas you see here are the customers: they have to make sure the data engineers, data scientists, and app developers get the best experience possible. We're also seeing the rise of a persona called the machine learning engineer. Think of it as a hybrid between a data scientist and an app developer, but more focused on getting models deployed into production by working very closely with the software developers, because the data scientists may be busy building the next set of models, deep in the weeds, without the time to work with app developers on a daily basis.
So that's where we're seeing the machine learning engineer persona come up a lot. My question to you: how similar or different is your thinking about the personas that will play a key role in your AI/ML life cycle? Okay, now that we've talked about the personas and the life cycle, the next thing to get on the same page about is the conceptual architecture: to execute on AI/ML projects, there are various software and infrastructure pieces you need in your machine learning architecture. At the topmost level you have the tooling and frameworks for machine learning and deep learning, for gathering and preparing data, and for DevOps to get the models deployed into production. That's where we see tools like TensorFlow, Jupyter Notebooks, Python, and, say, Tekton or Jenkins for DevOps. Below that you have the data sources: you need data services to host the data throughout the machine learning life cycle. That's where we see things like traditional SQL databases, NoSQL and NewSQL databases, data lakes, object storage, and so on. Below that, you have to run this whole machine learning and DevOps tool chain, and the data sources, and connect them in a uniform way on a hybrid multi-cloud platform with self-service capabilities. Those self-service capabilities in the hybrid multi-cloud platform are very important.
That way, all the personas we talked about on the previous slide, the data engineers, the data scientists, and your application developers, get self-service capabilities to do their jobs without depending on IT on a daily basis to provision infrastructure, because a lot of these tasks are repetitive and can be compute-intensive as well. They don't want to go to the IT operations team hourly, or a few times a day, with new provisioning requests. The next layer down is compute acceleration. A lot of machine learning model development, and certain production use cases like image detection or video inferencing, can be very compute-intensive, and you need the insights fast. So your hybrid multi-cloud platform with self-service capabilities needs to integrate seamlessly with the infrastructure layers and provide the necessary compute acceleration; that's where we see things like NVIDIA GPUs, FPGAs, and TPUs. Finally, at the bottom layer, everything we've talked about has to run consistently across your different infrastructure footprints, be it physical bare-metal servers, a virtual private cloud in your data center or at a hosted facility, the public clouds, a hybrid combination of private and public, or the edge. Your data scientists, data engineers, and app developers then get a consistent experience regardless of where they build the model, build the software application, and deploy the solution, so they don't have to reinvent the wheel when they move from one stage of the project to another. My question to you: does this make sense? What do you see missing here at a high level?
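To make the self-service plus compute-acceleration idea concrete: on Kubernetes (and therefore OpenShift), a GPU is requested declaratively through the extended resource name `nvidia.com/gpu`, which the NVIDIA device plugin exposes on GPU nodes. Below is a minimal sketch of such a Pod spec built as a plain Python dict; the pod name and container image are made-up placeholders, not anything from the slides.

```python
import json

# Minimal Pod spec asking the Kubernetes scheduler for one NVIDIA GPU.
# "train-job" and the image name are hypothetical placeholders.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "train-job"},
    "spec": {
        "containers": [{
            "name": "trainer",
            "image": "example.com/team/tf-train:latest",
            # The extended resource "nvidia.com/gpu" is how GPUs are
            # scheduled; the device plugin wires the device into the pod.
            "resources": {"limits": {"nvidia.com/gpu": 1}},
        }],
        "restartPolicy": "Never",
    },
}

print(json.dumps(pod, indent=2))
```

The self-service part is that a data scientist can submit a spec like this themselves (for example with `oc apply` on OpenShift) within quotas that IT operations set once, instead of filing a ticket per experiment.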
Or what would you change here to align with how you're thinking about the conceptual machine learning architecture you want to operationalize in your environment? Your input will be extremely valuable as we move to the next phase of the discovery journey, work with your technical team, and come back with a Red Hat solution proposal to help you make AI/ML real in your environment. Okay, so we've talked about the life cycle, the personas, and the architecture you have to put in place. But none of this is straightforward. As you've probably experienced in all your projects, there are always execution challenges you have to solve so you can make AI/ML real and achieve the business goals and objectives that led to your investments in the AI/ML space. The first key challenge we see customers facing is talent shortage. It's really hard to find talent these days, especially data scientists; there are very few out there. The next one is data acquisition and preparation. It's hard to find readily usable data that can be handed directly to the data scientists; customers still have to spend time collecting and preparing the data first. The next is lack of collaboration across teams. As you saw on the previous slide, there are a lot of different personas involved, and to make AI/ML real at a fast pace, all these people have to collaborate like a well-oiled machine, because lack of collaboration slows things down and can stall your AI/ML projects and the delivery of the AI/ML-powered intelligent applications.
And then finally there's having to wait on the IT admin or IT operations teams: for data scientists to execute their data science workflows, for app dev projects, or for data engineers to use their data preparation tool chain, having to wait on infrastructure resources is a challenge. A related challenge is putting together the AI/ML software tool chain so experiments can be repeated and shared consistently, and so the outcomes of the data science experiments can be handed to the app dev teams in a consistent way with consistent results. Not being able to achieve that consistency is a big challenge too. The good news is that we can help solve a lot of these challenges, and that's what we'll talk about in the next few slides. My question to you: does this align with the key challenges you're seeing in your environment? Do you see anything missing, or are some of these more critical for you right now than others? This would be a good time to have that discussion. Okay, now that we've talked about the challenges, the next topic is the value of containers and Kubernetes. We're seeing a lot of our customers, and the industry in general, building their AI/ML projects on top of containers and Kubernetes, because these technologies are helping customers make AI/ML real and accelerate their AI/ML projects. These are the key values that containers and Kubernetes bring to the table to help you accelerate the delivery of AI-powered intelligent software applications.
The value that containers and Kubernetes have brought to the software development world applies directly to your machine learning model development as well: the value proposition around going faster; portability, where you build once and ship across all your sites to achieve consistency; the flexibility to provision the environments needed for your AI/ML projects on an as-needed basis; and the ability to scale your AI/ML modeling tasks up and down on a containers- and Kubernetes-based hybrid cloud platform. We're seeing customers across different industry verticals get these benefits. And why is this important? At the end of the day, what do data scientists care about? They care less about the infrastructure platform than about how you're helping them go faster. What they want is a self-service portal where they can access the machine learning tool chain and the data sources the data engineers have made ready for them; to go into that tool chain and perform their machine learning modeling tasks; to work closely with the software developers and get the models deployed into production; and a way to monitor the inferencing tasks. Their expectation is that all of this seamlessly integrates with the compute hardware acceleration and with infrastructure resources like compute, network, and storage. All they care about is a seamless experience with their machine learning tool chain and access to data.
That way they can focus on machine learning modeling and getting models deployed into production, rather than worrying about the infrastructure platform and working with IT teams on an hourly basis to fulfill provisioning requests. If they have to spend time on that, your AI/ML projects slow down and the business suffers, because your competition will bring new features to market faster than you. How does this align with how you're thinking about the needs of your data scientists and their dependence on other teams, especially IT infrastructure, for resource requests? Now let's talk about why Red Hat and how Red Hat can help in this space. All of this is based on the AI/ML deployments we've helped customers with, and on the capabilities in our portfolio. First, we can help you accelerate your AI/ML projects. We've learned a lot from the early deployments out there, and I'll share some of those examples in the next few slides; we're continuously learning and making customers successful by helping them operationalize a containers- and Kubernetes-based hybrid cloud platform and host their AI/ML software tool chain on top of it. Second, we have a comprehensive portfolio beyond just the containers and Kubernetes platform, one that can meet the needs of the entire stack we talked about a few minutes back and complete the AI/ML architecture. Third, we have several key partnerships at the strategic level that make your experience of deploying and lifecycle-managing the AI/ML tool chain on top of Red Hat OpenShift and the broader portfolio very seamless.
Finally, Red Hat is built on open source. We continuously work in the open source community and are seen as a leader and trusted provider in the open source space. Everything we do is open source, so if you decide tomorrow to manage things on your own, there's no lock-in: you're free to manage your environment yourself because it's all open source software. Open source is also driving a lot of innovation, not just the cost savings of paying for a subscription instead of licenses; because of open source, the rate of innovation has gone up considerably. So these are the key values we bring to the table. To be specific, we have a lot of customers we've made successful. HCA Healthcare is able to diagnose sepsis faster, giving physicians a head start on the treatment plan for the patient. A lot of financial institutions are leveraging our portfolio for fraud detection so they can be trusted financial institutions for their customers. In oil and gas, ExxonMobil has been able to democratize data science and make oil and gas exploration very efficient. The Israeli Ministry of Defense has a use case to help researchers go faster so they can achieve their key mission goals and objectives at a very fast pace. And we have a lot more examples across different industry verticals; I brought up a few of these to show that we have credibility in this space and are helping customers across industries make AI/ML real.
Speaking of HCA Healthcare, which I mentioned a few minutes back: their solution gives clinicians a five-hour head start in treating sepsis, and that's helping save a lot of lives. Nothing can be more important than saving lives, especially in these tough times when it's what everybody is focused on. And BMW, which I also mentioned, has connected cars, where users can call the concierge service right from the car to get help with the vehicle, find a local resource wherever they're driving, or get customer support very fast if they run into an accident. Our solutions play a key role in helping BMW deliver this ConnectedDrive capability. Now, we saw this slide a few minutes back where we talked about the different layers in the stack. What you see here is that the Red Hat portfolio can help you up and down the stack to make AI/ML real. We start in the middle with Red Hat OpenShift, the Kubernetes platform that serves as the hybrid and multi-cloud platform with self-service capabilities. Then we have capabilities in our middleware portfolio that can meet the needs of the machine learning and DevOps tool chain, plus data services like Data Grid, again part of the middleware portfolio. Then we have the infrastructure portfolio at the bottom: it starts with Red Hat Enterprise Linux, and we have storage offerings in OpenShift Container Storage, Ceph storage for data lakes, and then the OpenStack and virtualization platforms as well. As you can see, we have capabilities up and down the stack to help make AI/ML real for you.
Now, speaking of OpenShift: it's the base of our solution, and it's the most mature enterprise Kubernetes platform out there. It provides all the benefits we talked about around why containers and Kubernetes for AI/ML. Beyond those, we provide a lot more, because there are several Kubernetes distributions out there and it can be confusing which ones are more real than others. We continuously work with a lot of ISVs in the AI/ML space on deep integrations that simplify the deployment and lifecycle management of the AI/ML tool chain; that's a key differentiator for us. The second key thing is portability: we work with all the major cloud providers, IBM Cloud, Google Cloud, Microsoft Azure, Amazon, and so on, so you have the flexibility to deploy and lifecycle-manage your machine learning tool chain and your AI-powered intelligent applications across on-premises data centers and the clouds in a very consistent way. That consistent experience comes from OpenShift serving as the common hybrid cloud platform on top of all these infrastructure underlays. OpenShift also comes in self-managed and cloud-hosted options, so if you decide you don't want to be responsible for day-to-day infrastructure management, you can go with one of the cloud-hosted options we have on the various cloud providers. The next key thing is collaboration.
Because of the integrated DevOps tool chain we have with OpenShift, it helps you not just build a containerized AI/ML tool chain but also lets your data scientists work collaboratively with your application developers, data engineers, IT operations, and so on to deploy machine learning models into production through the AI-powered intelligent applications your software developers are focused on. This lets you extend your DevOps tool chain across the entire machine learning life cycle, bringing a lot of automation into your machine learning projects and speeding up the delivery of the models and the AI-powered intelligent applications. And finally, we can't emphasize this enough: we continue to work across different projects, Knative for serverless functions on OpenShift, the Istio service mesh, Prometheus for monitoring, Ceph for storage, and RHEL CoreOS, which serves as the immutable container host. We take all these projects on top of Kubernetes and do the integration, so you get all the key capabilities you need without manually combining the various parts and pieces to assemble an enterprise-class Kubernetes platform. Our middleware portfolio provides a lot of capabilities in the machine learning tool chain as well: Fuse, AMQ Streams, and Kafka can help with data ingestion and event-driven architecture, and there's 3scale API Management, Data Grid, and so on. We also have Process Automation Manager and Decision Manager, which can help you accelerate the deployment of machine learning models into production.
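The "deploy the model into production" step above usually means putting the trained model behind an API that applications and pipelines can call. Here is a minimal stdlib-only sketch of that idea; the threshold "model", the `/predict` path, and the JSON schema are all invented for illustration, and in the stack discussed here this role would be played by a proper serving layer (for example a Seldon deployment fronted by 3scale-managed APIs).

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

THRESHOLD = 5.5  # stand-in for a real trained model's parameters

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and score it with the "model".
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        payload = json.dumps(
            {"prediction": 1 if body["x"] > THRESHOLD else 0}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Serve on an ephemeral local port and call it once as a client would.
server = HTTPServer(("127.0.0.1", 0), PredictHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
req = Request(f"http://127.0.0.1:{server.server_port}/predict",
              data=json.dumps({"x": 7}).encode(),
              headers={"Content-Type": "application/json"})
response = json.loads(urlopen(req).read())
server.shutdown()
```

Once the model is an HTTP endpoint like this, it slots naturally into the DevOps tool chain: it can be containerized, versioned, rolled out, and monitored like any other application component.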
In other words, we have a broad middleware portfolio that makes it easy for you to operationalize your machine learning models and helps with the full pipeline for model serving once a model is operationalized. Next is the software-defined cloud infrastructure. That's where we have Red Hat OpenStack Platform: a lot of customers use it as their infrastructure-as-a-service, and OpenShift and the rest of the portfolio can run directly on top of OpenStack, with its GPU integrations, so you have the flexibility to get both containers and virtual machines if need be for your AI/ML projects. The same is true for Red Hat Virtualization: some customers run OpenShift on top of RHV as the virtual infrastructure platform, and RHV also has NVIDIA GPU integrations, so you can speed up your AI/ML projects, especially where compute acceleration matters. And if you're deploying OpenShift on bare-metal servers, we have NVIDIA GPU integrations there as well, so your data scientists and app developers get GPU acceleration for machine learning modeling and inferencing tasks in a very seamless way. Then on the storage side: to host all these containers, the containerized machine learning and app dev tool chain, you need storage, and we have OpenShift Container Storage, which plugs seamlessly into OpenShift. Through the integrations we've done, it becomes very easy to deploy and lifecycle-manage storage for your OpenShift projects.
And it also helps you scale seamlessly as your data needs grow, so you can scale your data as fast as your business. We are also continuing to work in the open source communities; Red Hat is a big proponent of open source. We are working with projects like TensorFlow, Spark, Jupyter, PyTorch, and Kubeflow, and working with NVIDIA on open source projects as well. We take all these different projects and then work on some downstream projects too. You may have heard of Open Data Hub: think of it as a reference architecture, a complete machine-learning-as-a-service platform that's built on OpenShift, our storage, some of the middleware, and open source community projects like Apache Spark, TensorFlow, and Jupyter notebooks, to give you a complete platform to do your machine learning based on open source projects. It's a community project; some parts of it are supported by Red Hat and some are based on the upstream open source communities. We'd be happy to talk more about Open Data Hub in the follow-up meetings that we may have. Strategic partnerships are also very important to us in the AI/ML space, because your data scientists, your app developers, and your data engineers mostly care about the tool chain they need to do their job, and anything we can do to make their experience easy is very important to us. So we're working with a number of different leaders on the AI/ML tool chain, up and down the stack, to make it very easy for you to deploy that tool chain and lifecycle-manage it as well. That's where you see things like the IBM Cloud Pak for Data on top of OpenShift; it's already supported, and it brings a lot of value to the table.
And then we continue to work with vendors like H2O.ai, CognitiveScale, Seldon, Dotscience, the database vendors, and so on, and with SAS as well. We are working with all these key players to make sure that your data scientists, app developers, and data engineers find it very easy to use OpenShift to deploy and lifecycle-manage the AI/ML and AppDev tool chain. That takes a lot of the infrastructure barriers out of the equation, so they can focus on their machine learning modeling tasks instead of spending time fixing infrastructure issues. And here are some quotes from key leaders at the different companies we work with, like NVIDIA, SAS, and our own IBM, where we are very tightly integrated on the Cloud Paks that IBM has brought to market. So, to summarize what we talked about: different organizations are operationalizing AI/ML projects to help the business go faster and achieve key business objectives, be it being more competitive, improving customer satisfaction, increasing revenue, reducing cost, or being more secure. AI/ML is playing a key role in those business benefits, but it comes with challenges as well, across people, process, and technology transformation; the challenges you see in any app-dev project apply to your AI/ML projects too. And we are confident, based on our experience, our broad portfolio, our open source leadership, and the strong ecosystem we are building in the AI/ML space, both on the software side and on the infrastructure side, that we can help you accelerate your machine learning projects and help you with the faster delivery of AI-powered intelligent software applications. And with that, I want to move on to the next steps.
So what we would like to do is get your sponsorship to have this discussion with your technical teams, across your data science teams, your app developers, and IT operations, to have a more in-depth technical discussion, do more discovery, talk about the technical capabilities of our solution, and see how Red Hat can help you make AI/ML real. We also have a lot of collateral that's publicly available. If you go to openshift.com/ai-ml or to opendatahub.io, you'll find a lot of good material out there: customer testimonials, links to talks from our customers, reference architectures, and also the integrations we have done with different ISVs. All that information is available on our website. We also have a YouTube channel where we start with the very basic concepts of why OpenShift for AI/ML, then talk about the portfolio, as well as talks from the various partners we have in the AI/ML space. So our goal here is to help you accelerate your AI/ML projects and the delivery of AI-powered intelligent applications at a very fast pace. With that, that's the end of our discussion today. If you have any other questions, this would be a good time to open up and talk about them, and see how we can help you make AI/ML real for your business. So thanks a lot and have a nice day. Take care. Bye.