Hi everyone, welcome to our talk today. Hope everybody is doing great. This is Abhinav Joshi, Director for Product Marketing at Red Hat. I have over 20 years of industry experience in various roles on both the customer side and the vendor side, in key areas such as virtualization, cloud computing, data analytics, and AI/ML. I've been with Red Hat for over four years now, and I manage a team of awesome product marketing managers who focus on driving workload-focused conversations and on content marketing. Joining me today is my friend Matt Atkins from NVIDIA. In the next few minutes, we will talk about how you can accelerate your AI/ML projects with enterprise-grade MLOps powered by open-source technologies. Over to you, Matt, to introduce yourself and get us started.

Thanks, Abhinav, and thanks everyone for joining us. I'm Matt Atkins with NVIDIA. I've been with NVIDIA for about three years now. Prior to that, I spent a lot of time in the open-source community and infrastructure software. I've done everything from an architect role to a sales role and everything in between. Again, thanks for having us here; I'm excited to talk to you about our collaboration with Red Hat.

As everyone knows, NVIDIA has been in the AI space for quite some time now. We obviously got our start in graphics processing, but as the world has evolved, a very similar type of compute has become the engine of the AI space. Instead of putting video out, it's taking video in. Parallel computing is at the heart of everything we do, and it's something that GPUs do quite well. As we've studied the space and seen it grow, we've seen a number of different trends. No longer are AI projects limited to just data scientists, MLOps engineers, that sort of thing. They really touch every single piece of an enterprise. We've seen that as folks begin to adopt AI, it can affect the business in so many different ways and build real competitive advantages. The early adopters have seen huge gains in their businesses, with use cases spanning everything from back-end data center workloads to consumer-facing experiences such as suggested carts in retail. Next slide.

Not only is this not specific to one part of a business, we see the trends across all industries as well: mapping genomes in healthcare, risk detection in financial services, fraud detection in insurance, and telco, where we are at the heart of everything powering 5G, even far out to the edge. Automotive is a great example because we're present all the way from autonomous driving, back to the machines that build the cars, to the data center, and everywhere in between.

Again, this is what MLOps is all about. It's very similar to a typical DevOps play you'll have seen before, but with the ML aspects it really does take an army. Every single line of business needs to work in lockstep to provide the data and get everybody on the same page, working toward the same goal and expecting the same outcome. Lack of data is not the problem, but the data can be siloed and not always easily accessible for operations and IT admins. Oftentimes, especially when training models, it can be stuck back with the data scientists and may not be seen or easily accessed by other lines of business.
To go a little bit deeper into that, as I mentioned, this flow chart will look very familiar to anyone who knows traditional DevOps. But with MLOps, there are different expectations, and everyone needs to be in lockstep through every piece of the lifecycle, from developing a model, to training it, to running it in production for inference. Every single line of business needs to touch those stages. It usually starts with a traditional line of business: they have some need, and that's pushed out to the data engineers for traditional model training. These steps are typically done on very purpose-built, high-performance computing platforms and may not intersect with other lines of business. As the model matures, data scientists will typically be more hands-on, but the line of business still needs to be involved to make sure the model is trending in the right direction and isn't following tangential paths, so that it achieves the ultimate goal when it goes out into production. By that time, it reaches the app developers and IT operations. Again, these workloads typically run on very specific hardware that the IT admins may not have access to, and if the application brings certain hardware requirements, those might be new things for IT admins. We at NVIDIA realize that not everybody is accustomed to working with GPUs. And honestly, that's really at the heart of what we do in our partnership with Red Hat: making GPUs easily consumable and accessible for anyone across your enterprise.

So again, when I refer to the parts of the stack that are unique to the ML platform: the infrastructure layer is pretty typical across any sort of enterprise. It could be physical, virtual, as far out as the edge, or as accessible as the public cloud. What's unique in the ML stack is the acceleration layer, which is where NVIDIA resides. We accelerate traditional compute as well as networking; with our recent acquisition of Mellanox, we've brought new components to the table to break down even more bottlenecks. Where the initial bottlenecks were around CPU-only compute, the next were around I/O throughput, that sort of thing. That's exactly where the GPU comes into play. From the next level up, very popular in the enterprise today, is everything around containerization. We really need strong partners in this area, like Red Hat, to provide an intelligent orchestration layer that can maintain all of these heterogeneous platforms and complex workloads across the enterprise. And on top of that sit the typical data lakes, SQL databases, that sort of thing. I'll be the first to say those are not an NVIDIA specialty, but they're going to be at the heart of every enterprise compute platform, so they need to be handled. They don't necessarily need to be accelerated by a GPU, and that's where something like OpenShift comes into play, to dynamically manage those types of workloads. And then NVIDIA partners a lot with the open-source community to make sure that the tools and containers that are specific to AI run as efficiently, and as accelerated, as expected on top of these platforms. That's why we've partnered with communities like TensorFlow, Jupyter, and Python, and we even have our own repository, free for all NVIDIA customers, called NGC. All of these containers and pre-built models are tested on OpenShift and available to the public.
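To give a concrete feel for what "accelerated as expected" means in practice, here is a minimal Python sketch of the kind of sanity check a data scientist might run inside an NGC TensorFlow container; the container image reference and tag are illustrative assumptions, not something stated in the talk.

```python
# Minimal GPU sanity check, e.g. run inside an NGC TensorFlow container
# (image such as nvcr.io/nvidia/tensorflow; the exact tag is hypothetical).
import tensorflow as tf

# List the GPUs the container can see; on an OpenShift node managed by
# the GPU Operator this reflects the nvidia.com/gpu resources granted.
gpus = tf.config.list_physical_devices("GPU")
print(f"Visible GPUs: {gpus}")

# Run a small matrix multiply, placing it on GPU:0 if one is available
# and falling back to the CPU otherwise.
with tf.device("/GPU:0" if gpus else "/CPU:0"):
    a = tf.random.normal((1024, 1024))
    b = tf.random.normal((1024, 1024))
    c = tf.matmul(a, b)
print(f"Result computed on: {c.device}")
```

If the device printed is a GPU, the container's CUDA stack and the cluster's GPU scheduling are wired up end to end.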
Thanks, Matt. To make the MLOps architecture real, all the personas involved in the AI project have to collaborate throughout the project and operate as one team aligned on business priorities. The line-of-business managers are focused on maximizing revenue and lowering cost. They are a great source of information for data scientists and developers to get an accurate understanding of the business needs, and models can be developed and trained faster when the line of business is involved and providing guidance. For example, consider a recommendation engine developed pre-COVID using customer data generated before we all started working from home. Buying habits have since changed dramatically in many industries, so recommendations based on pre-COVID buying preferences would probably predict a very different result during COVID and now post-COVID. The marketing department might not know what the new preferences are, but they know the recommendation algorithms need to be updated with more current data.

Now, the AI practitioners, such as the data engineers, data scientists, and software developers, want self-service access to software tools, data, and high-performance computing infrastructure at the location of their choice, so that they can get their job done quickly without having to depend on IT operations for all the day-to-day requests. Being able to automate repetitive tasks, have consistency, develop once, quickly deploy anywhere, and autoscale as needed is top of mind for these personas. The AI practitioners would benefit from a modern cloud-native development platform powered by open-source technologies such as containers and Kubernetes, with integrated DevOps capabilities, to achieve their goals. And their friends in IT operations are responsible for providing and maintaining this cloud platform and the required data resources. They have to ensure that these important IT resources are always available, highly scalable, secure, and easy to manage. From a cost-efficiency perspective, they would prefer enhancing existing infrastructure over ripping and replacing it with something new that may cost a lot of time and money.

Now, going back to the architecture slide: NVIDIA and Red Hat have been working jointly for several years to enable open-source-powered MLOps for many organizations globally. All this open-source software across the Red Hat and NVIDIA portfolios is fully tested, secure, scalable, and interoperable with technologies from the partner ecosystem. So you still maintain flexibility in your architecture while getting a very prescriptive and fully tested solution. It also comes with professional expertise to help with the much-needed people and process transformation required to make the initiative a great success.

Here are some of the many organizations globally that have accelerated the delivery of AI-powered intelligent applications by deploying an open-source-powered MLOps architecture. All of them have benefited from enterprise-grade open-source technologies from NVIDIA, Red Hat, and other providers to achieve the desired business outcomes. Now let's hear directly from the head of AI at Turkcell on what enterprise-grade open source means to them in order to operationalize AI projects at scale and achieve key business outcomes. Speaking of this organization, they operate within Turkey and also internationally.
So Turkcell currently serves more than 48 million customers with a wide range of communications and digital service offerings. Life is changing, technology is changing. It is an era of moving away from ready-made tools to the open-source world. Open source is good because it's a community-contributed environment. With open source, you have flexibility; you can easily manage and monetize your resources and your components. Using open source, you also have the opportunity to get support from the community all over the world, and this is important for us. We think that creating an AI solution, creating an AI platform, is not something you can achieve just inside the company. You should build the ecosystem, inside the company and outside of it. Inside the company, you should have a strong team with experts, developers, product managers, business people, and infrastructure folks; AI is nothing without enough infrastructure. Outside of the company, it is the ecosystem: startups, universities, academics, some big companies, platform providers, infrastructure providers. You should embrace all these elements inside your solution, inside your architecture, inside your business strategy. Open source is the future of technology, the future of the world, and we see open source as a very critical part of our system. We converted all our models to open source, and our current models' performance is 30% higher than before. Red Hat means to us reliability, customer focus, innovation, and of course, open source.

Now let's dive a little deeper into their goals, their challenges, what they built, and the business outcomes they're achieving. They had several challenges in taking the AI project from pilot to production. First, the legacy infrastructure platform got in the way of quickly rolling out AI-powered applications and digital services. Second, approaching the AI project as a monolith rather than as a cloud-native initiative also slowed progress, and it required them to invest a lot in expensive resources, impacting the cost of the project. And third, the lack of automation and orchestration of important tasks across the various personas also negatively impacted the time to roll out new digital services for their customers.

To solve these challenges, they built a scalable hybrid cloud platform powered by enterprise-grade open-source software. This helped them democratize data science, as the AI practitioners now have self-service access to the tools and the data needed to build AI capabilities as they need them. This cloud platform is based on containers and Kubernetes and includes integrations with technologies such as JupyterLab and NVIDIA GPUs to help speed up the important modeling and inferencing tasks. It helped them transition from a slow monolithic architecture to a modern cloud-native architecture, which also brought consistency, security, scalability, and high availability across the entire lifecycle. The platform also includes DevOps capabilities to automate the MLOps lifecycle and speed the delivery of AI models and the associated intelligent applications into production anywhere, consistently and repeatably. Now, going back to the architecture slide that we've already shown you a couple of times, and layering on the software stack that Turkcell deployed: this is their high-level MLOps architecture stack.
It shows the various enterprise-grade open-source technologies used up and down the stack, including the ones from NVIDIA and Red Hat. You may recognize many of these from your own environment, or from evaluations you've done as part of your due diligence in designing your own solution. Now, let's get to the benefits. With this solution, Turkcell has been able to double the speed at which they can roll out new and differentiated AI services to the market. At the same time, they have been able to achieve operational efficiencies and save up to 70% on the costs associated with developing and delivering these services. Finally, they have been able to make the AI playground available to developers and data scientists from across the organization, so they can innovate at their own pace without having to depend on IT operations for every small request.

While innovative organizations such as Turkcell and several others have been able to take best-of-breed open-source software from various vendors and put it all together in a stack to enable MLOps, we believe that mass adoption across organizations and use cases will require the open-source community to think in terms of an integrated AI platform that is pre-tested, certified, fully supported, and almost ready-made for AI practitioners, so they can start using it instantly to develop AI capabilities rather than spending months trying to configure the infrastructure resources and the software tools. This is where Red Hat and NVIDIA have been collaborating and have brought a new open-source solution to the market. At this point, I'm going to bring back Matt to tell you more about it and wrap things up for today.

Abhinav, I really appreciate you running through that use case with Turkcell; it's always one of my favorites to hear. I just want to dig in a little more on the partnership we have forged between ourselves and Red Hat. While we've been partnering and working together to some extent for over 10 or 12 years, it really reached a new high in 2019 with the release of our NVIDIA GPU Operator. Now, I don't want to get too deep in the weeds, as I know this can become an excessively technical discussion, but if you're not familiar with the Operator Framework, an operator is basically a way for third parties to be able to communicate with Kubernetes, and GPUs very much fall into that category of third-party resources. The way GPUs used to communicate with Kubernetes was through a series of plugins: we had a plugin for our drivers, our container runtime, DCGM for monitoring, and so on and so forth. Everything we had that needed to run on top of a Kubernetes cluster was a series of plugins. This was obviously a very manual process. In a former life, I lost a few weekends of my own just making sure that GPUs were properly added to an upstream Kubernetes cluster, and not only that, but that they could be found within the cluster once they'd been added. With the release of the GPU Operator in 2019, those days are gone. All of our plugins are now encapsulated within the GPU Operator, so the user only needs to install the GPU Operator on a node with a GPU and it's automatically onboarded into that OpenShift cluster. Customers can manage it simply through their OpenShift dashboard; it's as easy as a right click. Best of all, this is fully open source and available to all GPU customers today.
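As a concrete illustration of that onboarding, here is a minimal sketch, assuming kubeconfig access to the cluster, of how you might use the kubernetes Python client to confirm that nodes are advertising the nvidia.com/gpu resource once the operator is installed. It's a hedged example, not an official NVIDIA or Red Hat tool.

```python
# Sketch: verify that nodes advertise the nvidia.com/gpu resource after
# the GPU Operator is installed (assumes kubeconfig access to the
# OpenShift cluster).
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    # Allocatable is a plain dict of resource name -> quantity; nodes
    # without the operator's device plugin simply lack the key.
    gpu_count = node.status.allocatable.get("nvidia.com/gpu", "0")
    print(f"{node.metadata.name}: {gpu_count} allocatable GPU(s)")
```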
It's still at the forefront of our focus, and something we're adding functionality to on an almost monthly basis. The latest additions are node feature discovery and DCGM for monitoring. Node feature discovery allows you to have a heterogeneous data plane of servers with or without GPUs, even with different types of GPUs, and have them all properly labeled, along with MIG control as well. MIG, or Multi-Instance GPU, is basically a form of virtualization for GPUs that allows one GPU to be split up into as many as seven consumable instances within containers (there's a short sketch of consuming a MIG slice at the end of this section).

The GPU Operator is open source, and in fact a lot of the software that NVIDIA puts out is open source; it's a primary focus for us. However, as I mentioned earlier, in the flow of a typical MLOps lifecycle there are many lines of business and individuals within an organization who need to touch the project and the software. Ultimately, when it lands on the IT admins, they have to deal with things like governance and other concerns that typical data scientists don't like to even think about. They have regulations around certain versions and may even be running an inference workload for upwards of three to five years. So it's very important to them that enterprise support is offered, along with long-term release cycles. While that may mean not always having the flexibility to constantly update to the latest and greatest versions, this stability is something they've been asking for for a long time. So everything we have, everything we integrate with Red Hat and OpenShift, is available within a suite of software that we call NVIDIA AI Enterprise. Everything you see in bold on this diagram is open source and free for anyone to try. The support aspect is what we offer for enterprise customers looking for that next level: direct engagement with NVIDIA and Red Hat, fully supported on Red Hat OpenShift. These are the things a typical IT administrator sees as value. I fully appreciate that most data scientists are working on their own workloads, and that's totally fine; all the open-source projects, containers, libraries, and pre-trained models you need to get started are available free and open source. But as these workloads get handed over to an IT administrator, these are the types of things they value: we've simplified deployment, we provide real-time one-to-one access to support, and it puts a nice bow on their MLOps lifecycle.

So in summary, NVIDIA and Red Hat have done a ton over the past 10 to 12 years to make AI ready for enterprises to adopt on a global scale. This isn't something that needs to be done in silos; we're making it something a typical IT administrator can handle. No longer are GPUs a specialty where you have to know exactly how to update them or how they operate with Kubernetes. NVIDIA and Red Hat have done a ton to automate this process and make deployment predictable and scalable every time, whether it be private cloud, public cloud, or edge. We collaborate in all these spaces to make everything run as predictably and performantly as it should.
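Here is the promised MIG sketch: a minimal, hedged example of requesting a single MIG slice for a container using the kubernetes Python client. It assumes MIG has been enabled through the GPU Operator with the mixed strategy, so slices appear as extended resources such as nvidia.com/mig-1g.5gb (an A100-style profile); the image tag and pod name are illustrative.

```python
# Sketch: request one MIG slice for a container. Assumes MIG is enabled
# via the GPU Operator and slices are exposed as extended resources in
# the "mixed" strategy; the 1g.5gb profile is an A100-style example.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="mig-demo"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda",
                image="nvcr.io/nvidia/cuda:12.2.0-base-ubi8",  # hypothetical tag
                command=["nvidia-smi", "-L"],  # print the visible MIG device
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/mig-1g.5gb": "1"}
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Scheduling-wise, the slice behaves like any other extended resource: Kubernetes places the pod only on a node advertising a free mig-1g.5gb instance, which is exactly the kind of fractional GPU sharing MIG enables.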
So for next steps, where can you learn more? We have a few links included within the deck. You can start with our solution brief. In the second link, you'll find a very similar but longer and more in-depth presentation to the one Abhinav and I went through today, which we delivered with two of our product managers; it's now available on demand as part of NVIDIA GTC, which took place last March. And if you're more hands-on, we also have NVIDIA LaunchPad. This is an offering available in several colocation facilities, where we have OpenShift clusters set up and ready, with ready-made demos, so folks can come in and try building a model and running it themselves. So with that, I'll wrap things up. If you have any questions, feel free to reach out to us directly, or we can field them in person in Boston. All right, thanks everyone. Have a good rest of the day and a good show. And if you have any questions, we are here for you.