Hi, I'm Peter Burris, and welcome to another CUBE conversation from our fantastic studios in beautiful Palo Alto, California. Today we're going to be talking about what infrastructure can do to accelerate AI. Specifically, we're going to use a burgeoning relationship between DDN and NVIDIA to describe what we can do to accelerate AI workloads by using higher-performance, smarter, and more focused infrastructure for computing. To have this conversation we've got two great guests here: Kurt Kukine, who is the Senior Director of Marketing at DDN, and Darren Johnson, who is the Global Director of Technical Marketing for Enterprise at NVIDIA. Kurt, Darren, welcome to theCUBE.

Thanks for having us. Thank you very much.

So let's get going on this, because this is a very important topic, and I think it all starts with the relationship that you two have put forward. Kurt, why don't you describe it?

Sure. What we're announcing today is DDN's A3I architecture, powered by NVIDIA. It is a full rack-level solution, a reference architecture that's been fully integrated and fully tested to deliver an AI infrastructure very simply and very completely.

So if we think about why this is important, AI workloads have clearly put special stress on the underlying technology. Darren, talk to us a little bit about the nature of these workloads and why things like GPUs and other technologies in particular are so important to making them go fast.

Absolutely. As you probably know, AI is all about the data. Whether you're doing medical imaging or natural language processing, whatever it is, it's all driven by the data. The more data you have, the better the results you get. But to drive that data into the GPUs, you need great I/O, and that's why we're here today: to talk about DDN and the partnership, and how to bring that I/O to the GPUs on our DGX platforms.

So if we think about what you described, it's a lot of small files, often randomly distributed, feeding high-profile jobs that just can't stop midstream and start over.

Absolutely. And if you think about the history of high-performance computing, which is very similar to AI, the I/O really is just that: lots of files that you have to get there with low latency and high throughput. That's why DDN's nearly 20 years of experience working in that exact same domain is such a good fit. You get the parallel file system, which gives you that throughput and that low latency, and just helps drive the GPUs.

So you mentioned HPC and 20 years of experience. It used to be that with HPC you'd have a scientist with a bunch of graduate students setting up some of these big, honking machines. But now we're moving into the commercial domain. You don't have graduate students running around; you don't have that very low-cost, high-quality labor. You have a lot of administrators, good people, but with a lot to learn. How does this relationship actually start bringing AI within reach of the commercial world? Kurt, why don't we start with you?

That's exactly where this reference architecture comes in. A customer doesn't need to start from scratch. They now have a design that allows them to implement AI quickly. It's easily deployable, and we've fully integrated this solution.
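To make the I/O point above concrete, here is a minimal sketch of what "driving data into the GPUs" looks like from the framework side: a PyTorch data loader reading many small image files from a shared file system mount and feeding them to a GPU. The mount point /mnt/a3i/training_images, the batch size, and the worker count are illustrative assumptions, not details of the announced architecture.

```python
# Minimal sketch: keeping a GPU fed from a shared, high-throughput mount.
# The path /mnt/a3i/training_images is hypothetical; any POSIX path exposed
# by the parallel file system would be consumed the same way.
import torch
from torchvision import datasets, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Many small image files, read directly from the shared namespace.
dataset = datasets.ImageFolder(
    "/mnt/a3i/training_images",                      # hypothetical mount point
    transform=transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ]),
)

# Several loader workers issue concurrent reads so storage latency stays hidden
# behind compute; pinned memory speeds up the host-to-GPU copy.
loader = torch.utils.data.DataLoader(
    dataset, batch_size=256, shuffle=True, num_workers=8, pin_memory=True
)

for images, labels in loader:
    images = images.to(device, non_blocking=True)
    labels = labels.to(device, non_blocking=True)
    # ... forward/backward pass on the GPU would go here ...
```

The only storage-specific detail the data scientist sees here is the path; the latency and throughput characteristics discussed above live underneath that mount.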
DDN has made changes to our parallel file system appliance to integrate directly within the DGX-1 environment, which makes it even easier to deploy and to extract the maximum performance without having to run around, tune a bunch of knobs, and change a bunch of settings. It's really going to work out of the box.

And NVIDIA has done more than just the DGX-1; it's more than hardware. You've done a lot of optimization of different AI toolkits, et cetera. Talk a little bit about that, Darren.

Yeah. Going back to the example I used, where in the past it was researchers doing HPC, what we have today are data scientists. Data scientists understand PyTorch, they understand TensorFlow, they understand the frameworks. They don't want to understand the underlying file system, the networking, RDMA, InfiniBand, any of that. They just want to be able to come in, run their TensorFlow, get the data, get the results, and keep turning that crank, whether it's on a single GPU, nine DGXs, or as many DGXs as you want. This solution brings that to customers much more easily, so those data scientists don't have to be system administrators.

So a reference architecture makes things easier, but this is about more than just those commercial deployments. There's also the overall ecosystem: new application providers, application developers. How is this going to impact the aggregate ecosystem that's growing up around the need to deliver AI-related outcomes?

Well, one point Darren was getting at is that one of the big effects comes as these ecosystems reach the point where they need to scale, and that's somewhere DDN has tons of experience. Many customers are starting off with smaller data sets. They still need the performance, and a parallel file system is going to deliver it. But as they grow, going from one GPU to nine DGXs demands an incredible amount of scalability, both in the performance they need from their I/O and, most likely, in capacity. That's another thing we've made easy with A3I: scaling that environment seamlessly within a single namespace, so people don't have to deal with a lot of tuning and turning of knobs to make this work really well and drive the outcomes they need as they're successful. Because in the end, it's the application that's most important to both of us, not the infrastructure. It's making the discoveries faster, processing information out in the field faster, analyzing the MRI faster, helping the doctors, helping anybody who's using this make faster, better decisions.

Exactly. And just to add to that, in the automotive industry you have data sets that run from 50 to 500 petabytes, and you need access to all of that data all the time, because you're constantly training and retraining to create better models and better autonomous vehicles. You need the performance to do that, and DDN helps bring it to bear. With this reference architecture, it's simplified. You get the value-add of NVIDIA GPUs, plus their ecosystem of software, plus DDN. It's a match made in heaven.

Darren Johnson of NVIDIA, Kurt Kukine of DDN, thanks very much for being on theCUBE. Thank you very much. Thank you very much.

And I'm Peter Burris. Once again, I'd like to thank you for watching this CUBE conversation. Until next time.
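As a companion to the framework-level view described above, where the data scientist writes ordinary TensorFlow against a directory on the shared namespace and lets the framework spread the work across whatever GPUs are present, here is a minimal sketch. The path, the ten-class model, and the hyperparameters are illustrative assumptions only, not part of the announced architecture.

```python
# Minimal sketch: a data scientist's view of training against a shared namespace.
import tensorflow as tf

# One GPU or all the GPUs in a node: MirroredStrategy replicates the model across
# every local GPU; scaling out to multiple DGX nodes would use
# MultiWorkerMirroredStrategy with the same data path.
strategy = tf.distribute.MirroredStrategy()

# The training data lives on the shared parallel file system (hypothetical path);
# TensorFlow just sees a directory of class-labeled image folders.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "/mnt/a3i/training_images",          # hypothetical mount point
    image_size=(224, 224),
    batch_size=256,
).prefetch(tf.data.AUTOTUNE)

with strategy.scope():
    # Toy model assuming ten classes; any real model would slot in here.
    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 255),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

model.fit(train_ds, epochs=5)
```

Nothing in the script names the file system, the network, or RDMA; whether it runs on one GPU or all the GPUs in a DGX, the code the data scientist writes stays the same.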