Announcer: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation.

Stu: Hi, I'm Stu Miniman, and welcome to a CUBE Conversation. I'm coming to you from our Boston-area studio, and we're going to be digging into some interesting news regarding networking and some important use cases these days in 2020; of course, AI is a big piece of it. So, happy to welcome to the program, first of all, one of our CUBE alumni, Kevin Deierling. He's the senior vice president of marketing with NVIDIA, part of the networking team there. And joining him is Scott Tease, someone we've known for a while but a first-timer on the program, who's the general manager of HPC and AI for the Lenovo Data Center Group. Scott and Kevin, thanks so much for joining us.

Kevin: Hey, it's great to be here, Stu.

Scott: Yeah, thank you.

Stu: All right, so Kevin, as I said, you've been on the program a number of times, first when it was just Mellanox; now, of course, it's the networking team, and there are some other acquisitions that have come in. If you could just set us up with the relationship between NVIDIA and Lenovo, and there's some news today that we're here to talk about too. So let's start getting into that, and then Scott, you'll jump in after Kevin.

Kevin: Yeah, so we've been a long-time partner with Lenovo on high-performance computing, and that's the InfiniBand piece of our business. More and more, we're seeing that AI workloads are very, very similar to HPC workloads, so that's been a great partnership that we've had for many, many years. And now we're expanding that: we're launching an OEM relationship with Lenovo for our Ethernet switches. With our Ethernet switches, we really take that heritage of low-latency, high-performance networking that we built over many years in HPC, and we bring that to Ethernet.
And of course that can be used with HPC, because frequently in an HPC supercomputing environment, or in an AI supercomputing environment, you'll also have an Ethernet network, either for management or sometimes for storage. Now we can offer that together with Lenovo. So it's a great partnership. We talked about it briefly last month, and now we're coming to market and will be able to offer this broadly.

Scott: Yeah, Kevin, we're super excited about it here at Lenovo as well. We've had a great relationship over the years with Mellanox, with NVIDIA Mellanox, and this is just the next step. We've shown in HPC that the days of just taking an Ethernet card or an InfiniBand card, plugging it into the system, and having it work properly are gone. You really need a system that's engineered for whatever task the customer is going to use it for. We've known that in HPC for a long time. As we move into workloads like artificial intelligence, where networking is a critical aspect of getting these systems to communicate with one another and work properly together, we'd love, from an HPC perspective, to use InfiniBand, but most enterprise clients are using Ethernet. So where do we go? We go to a partner that we've trusted for a very long time, and we selected the NVIDIA Mellanox Ethernet switch family. We're really excited to be able to bring that end-to-end solution to our enterprise clients, just like we've been doing for HPC for a while.

Stu: Yeah, well, Scott, I'd love to hear a little bit more about that customer demand and those usages. You think traditionally, of course, of supercomputing; as you both talked about, that move from InfiniBand to leveraging Ethernet is something that's been talked about for quite a while now in the industry. But for AI specifically, could you talk about what the networking requirements are? How similar is it? Is it 95% the same architecture as what you see in HPC environments?
And also, I guess the big question is, how fast are customers adopting and rolling out those AI solutions, and what kind of scale are they getting them to today?

Scott: Oh yeah, there are a lot of good things we can talk about there. I'd say in HPC, the thing we've learned is that you've got to have a fabric that's up to the task. When you're testing an HPC solution, you're not looking at a single node. You're looking at a combination of servers, storage, management, all these things that have to come together, and they come together over an InfiniBand fabric. So we've got this nearly purpose-built fabric that's been fantastic for the HPC community for a long time. As we start to do some of that same type of workload, but in an enterprise environment, many of those customers are not used to InfiniBand. They're used to an Ethernet fabric, something they've got all throughout their data center. What we wanted to find a way to do was take a lot of that rock-solid interoperability and pre-tested capability and bring it to our enterprise clients for these AI workloads. Anything with high-performance GPUs means lots of inter-node communication, worries about traffic and congestion, abnormalities in the network that you need to spot. Those things happen quite often when you're doing these enterprise AI solutions. You need a fabric that's able to keep up with that, and the NVIDIA networking is definitely going to be able to do that for us.

Stu: Yeah, well, Kevin, I heard Scott mention GPUs here, and that kind of highlights one of the reasons why we've seen NVIDIA expand its networking capabilities. Could you talk a little bit about the expansion of the portfolio and how these use cases really highlight what NVIDIA helps bring to the market?
Kevin: Yeah, we like to really focus on accelerated computing applications, whether those are HPC applications or, now, workloads becoming much more broadly adopted in the enterprise. One of the things we've done is tight integration at a product level between the GPUs and the networking components in our business, whether that's the adapters or the DPU, the data processing unit, which we've talked about before, and now even the switches here with our friends at Lenovo. We're really bringing that all together, but most importantly at a platform level, and by that I mean the software. The enterprise has all kinds of different verticals that they're going after, and we invest heavily in the software ecosystem that's built on top of the GPU and the networking. By integrating all of that together in a platform, we can really accelerate the time to market for enterprises that want to leverage these modern, sort of cloud-native workloads.

Stu: Yeah. Please, Scott, if you have some follow-up there.

Scott: Yeah, if you don't mind, Stu, I'd just like to say, you know, five years ago, the roadmap that we followed was the processor roadmap. We could all tell, to the week, when the next Xeon processor was going to come out, and that's what drove all of our roadmaps. Since that time, what we've found is that the items making the radical, revolutionary improvements in performance are attached to the processor, but they're not the processor itself. It's things like the GPU, and it's things like, especially, the networking adapters. So trying to design a platform that's solely based on a CPU and then jam these other items on top of it no longer works. You have to design these systems in a holistic manner, where you're designing for the GPU and you're designing for the network.
And that's the beauty of having a deep partnership like we share with NVIDIA on both the GPU side and the networking side: we can do all that upfront engineering to make sure that the platform, the system, the solution as a whole works exactly how the customer expects it to.

Stu: Kevin, you mentioned that a big piece of this is software now. I'm curious about an interesting piece that your networking team has picked up relatively recently, Cumulus Linux. Help us understand how that fits into the Ethernet portfolio, and would it show up in the kind of applications we're talking about?

Kevin: Yeah, that's a great question. You're absolutely right, Cumulus is integral to what we're doing here with Lenovo. If you look at the heritage that Mellanox had, and Cumulus as well, it's all about open networking. What we mean by that is we really decouple the hardware and the software, so we support multiple network operating systems on top of our hardware: for example, SONiC, or our Onyx, or DENT, which is based on SwitchDev. And Cumulus, who we just recently acquired, has been on that same axis of open networking, so they really support multiple platforms. Now we've added a new platform with our friends at Lenovo, and they've adopted Cumulus. It is very much centered on the enterprise, and really a cloud-like experience in the enterprise: it's Linux, but it's highly automated. Everything is operationalized and automated, and as a result you get sort of the experience of the cloud, but with the economics that you get in the enterprise. So it's kind of the best of both worlds: the network analytics and all of the things the cloud guys are doing, but fully automated and for an enterprise environment.

Scott: Yeah, I just want to say a few things about this. We're really excited about the Cumulus acquisition here.
When we started our negotiations with Mellanox, we were still planning to use Onyx. We love Onyx; it's been our InfiniBand NOS of choice. Our users love it, our architects love it, but we were trying to lean toward a more open, kind of future-facing approach as we got started with this, and Cumulus is really perfect. It's a Linux, open-source-based system, and we love open source in HPC. The great thing about it is we're going to be able to take all the great learnings that we've had with Onyx over the years and consolidate those inside of Cumulus. We think it's the perfect way to start this relationship with NVIDIA networking.

Stu: Well, Scott, help us understand a little more. What does this expansion of the partnership mean? If we talk about the full solutions that Lenovo offers in the ThinkAgile brand, as well as the hybrid and cloud solutions, is this something that's just baked into the solution? Is it a resale? What should customers and your channel partners understand about this?

Scott: Yeah, so any of the Lenovo solutions that require a switch to perform the functionality needed across the solution are going to show up with the networking from NVIDIA inside of them, for a couple of reasons. One is that even for something as simple as solution management for HPC, the switch is integral to how we do all of that, how we push all those functions down, how we deploy systems. So you've got to have a switch, a NIC, and a connectivity methodology that ensures we know how to deploy these systems no matter what scale they are, from a few systems up to literally thousands of systems; we've got something we know how to do. Then, when we're selling these solutions, like an SAP solution for instance, the customer is not buying a server anymore; they're buying a solution, they're buying functionality.
We want to be able to test that in our labs to ensure that that system, that rack, leaves our factory ready to do exactly what the customer is looking for. So any of the systems that come from us pre-configured and pre-tested are all going to have NVIDIA networking inside of them.

Kevin: Yeah, you mentioned hybrid cloud, and I think that's really important. That's really where we cut our teeth, first in InfiniBand, but also with our Ethernet solutions. Today we're driving a bunch of the big hyperscalers as well as the big clouds, and as you see things like SAP or Azure, and now Azure Stack coming into a hybrid environment, it's really important that you have a known commodity there. We're built into many of those different platforms with our Spectrum ASIC as well as our adapters, so now the ability for NVIDIA and Lenovo together to bring that to enterprise customers is really important. It's a proven set of components that together form a solution. And that's the real key, as Scott said: delivering a solution, not just piece parts. We have a platform with software, hardware, all of it integrated.

Stu: Well, it's great to see this; you've had an existing partnership for a while. I want to give you both the opportunity: anything specific you've been hearing in the customer demand leading up to this? Is it people that might be transitioning from InfiniBand to Ethernet, or is it just general market adoption of the new solutions that you have out there?

Scott: Yeah, I'll tell you what. Okay, Kevin, you go ahead and start.

Kevin: Okay, so what we've seen is that there are different networks for different workloads. InfiniBand certainly is going to continue to be the best platform out there for HPC, and often for AI. But as Scott said, the enterprise frequently is not familiar with that, and for various reasons would like to leverage Ethernet. So I think we'll see two different cases.
One where there's Ethernet alongside an InfiniBand network, and the other for the new enterprise workloads that are coming, very AI-centric, modern, sort of cloud-native workloads. You have all of the infrastructure in place with our Spectrum ASICs and our ConnectX adapters, now integrated with GPUs, and that will be able to deliver solutions rather than just components. And that's the key.

Scott: Yeah, I think, Stu, a great example of where you need that networking, like we've been used to in HPC, is when you start looking at deep learning and training, scale-out training. A lot of companies have been stuck on a single workstation because they haven't been able to figure out how to spread that workload out and chop it up like we've been doing in HPC; they've been running into networking issues, because they can't run over an unoptimized network. With this new technology, we're hoping to be able to do a lot of the same things that HPC customers take for granted every day: workload management, distribution of workload, chopping jobs up into smaller portions and feeding them out to a cluster. We're hoping we're going to be able to do those exact same things for our enterprise clients, and it's going to look magical to them, but it's the same kind of thing we've been doing forever, with Mellanox in the past and now NVIDIA networking. We're just going to take that to the enterprise, so I'm really excited about it.

Stu: Yeah, well, there's so much flexibility. We used to look at how it would take a decade to roll out some new generations. Kevin, if you could just give us the latest speeds and feeds. If I look at Ethernet, did I see that this goes from N-gig all the way up to 400 gig? I lose track a little bit of some of the pieces. I know the industry as a whole is driving it, but where are we with general customer adoption of some of the speeds today?

Kevin: Yeah, indeed.
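[Editor's note: Scott's description of chopping a job into smaller portions and feeding them out to a cluster is the heart of data-parallel scale-out. As a rough sketch only (not NVIDIA's or Lenovo's actual software; all names here are illustrative), the decomposition looks like this in plain Python, with threads standing in for cluster nodes:]

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Stand-in for the per-node work (e.g., one shard's training step).
    return sum(x * x for x in chunk)

def scatter(data, n_workers):
    # Chop the job into roughly equal portions, one per worker.
    size = (len(data) + n_workers - 1) // n_workers
    return [data[i:i + size] for i in range(0, len(data), size)]

def run(data, n_workers=4):
    # Threads stand in for cluster nodes here; in a real deployment each
    # chunk would go to a separate machine over the network fabric.
    chunks = scatter(data, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(process_chunk, chunks))
    # Combine the partial results, analogous to a gradient all-reduce.
    return sum(partials)
```

[The decomposition itself is easy; what makes scale-out hard in practice is exactly what Scott describes: the scatter and the combine step cross the network on every iteration, so an unoptimized fabric becomes the bottleneck.]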
Kevin: We're coming up on the 40th anniversary of the first specification of Ethernet, and we're about 40,000 times faster now: 400 gigabits versus 10 megabits. So yeah, we're shipping today at the adapter level at 100 gig and even 200 gig, and at the switch level, 400 gig. People sometimes ask, do we really need all that performance? The answer is absolutely. The amount of data that the GPU can crunch in these AI workloads, these giant neural networks, means it needs massive amounts of data. And then as you're scaling out, as Scott was talking about, much along the lines of InfiniBand, Ethernet needs that same level of performance, throughput, latency, and offloads, and we're able to deliver.

Stu: Kevin, thank you so much. Scott, I want to give you the final word here: anything else you want your customers to understand regarding this partnership?

Scott: Yeah, just a quick one, Stu. We've been really fortunate in working really closely with Mellanox over the years, and with NVIDIA, and now with the two together, we're just excited about what the future holds. We've done some really neat things in HPC: we were one of the first to water-cool an InfiniBand card, we were one of the first companies to deploy a Dragonfly topology, and we've done some unique things where we can share a single IB adapter across multiple users. We're looking forward to doing that same exact kind of innovation inside of our systems as we look to Ethernet. We often think that as speeds of Ethernet continue to go higher, we may see more and more people move from InfiniBand to Ethernet, and I think having both of these offerings inside of our lineup is going to make it really easy for customers to choose what's best for them over time. So I'm excited about the future.

Stu: All right, well, Kevin and Scott, thank you so much. Deep integration, customer choice, important stuff. Thank you so much for joining us.

Kevin: Thank you, Stu.
Scott: Thanks, Stu.

Stu: All right, I'm Stu Miniman, and thank you for watching theCUBE.