Hey everyone, welcome back to the show floor at Mandalay Bay. It's Dell Technologies World 2023. Lisa Martin here with Dave Vellante. This is our second day of coverage, but you know that because you've been watching since last night with our Cube After Dark. We've had great conversations so far. You probably saw in the last hour Michael Dell was here. We've got great guests; a couple of our alumni are back with us. We're going to be talking about trends in networking and the impact of generative AI and AI on networking. Please welcome Hassan Seraj, head of software products and ecosystem at Broadcom. Drew Shulke is back with us, one of our alumni, VP of Product Management at Dell. Guys, it's great to have you on the Cube. Thank you for joining Dave and me.

Thanks, Lisa. Thanks, Dave. Great to be here.

Let's start by just kind of describing your roles. Hassan, we'll start with you, and then Drew, we'll get to you, so the audience really gets that context.

Absolutely. My name is Hassan Seraj. I lead product management for our software portfolio in our networking division at Broadcom. This is the division where we build merchant silicon, which is leveraged by some of the largest clouds, also some of our ODM and OEM partners, and eventually consumed by enterprises, service providers, and mid-market accounts.

Excellent. Drew, give us your rundown.

So, Drew Shulke, vice president of product management for our Infrastructure Solutions Group. Within that, my focus is on two areas: primary storage, and then connectivity, which is the remit that I'm here to talk about today, and I'm excited to talk about all the developments that we have going on in networking.

You build 5G radio frequency components? Is that you?

No, no.

Too bad; we talked earlier about the big deal with Apple and Broadcom.

We did, we did.

Okay, so what is the state of networking, Hassan? And then Drew, maybe you can weigh in from your OEM's perspective, but let's start at the silicon.
I mean, just a little bit of history on networking. Networking started off with very vertically integrated systems, right? A lot of companies offered solutions based on proprietary hardware, proprietary silicon, and proprietary software. But a lot of the change started with the hyperscalers, who said, look, we are frustrated with having to deal with all of this. They called networking the SUV of the data center because of the amount of power: networking was consuming thousands of watts of power, and it was hindering workload placement. Networking is just 10 to 15% of the spend in a data center, but it can get in the way of the most vital optimizations. So the goal was to move networking to a model along the same lines as compute, where you buy an x86 from AMD or Intel; that's the way you buy merchant silicon. You have open hardware, you have open source software, and open APIs. And what has happened over the last, I'd say, five to seven years is that networking has moved from these vertically integrated, closed systems to open networking. What the hyperscalers started, they now have at massive scale, and now broader enterprises, service providers, and even mid-market accounts are starting to enjoy the benefits of all this.

And you're talking about really the silicon layer of the stack, correct?

Correct.

And so the hyperscalers really looked to merchant silicon providers like you to do this, and it sort of forced those standards. Is that correct?

That is correct.

Okay.

So basically, when they are taking the silicon from us, that's what we are providing, but at the end of the day, the systems are built with our partners like Dell, who will also sometimes put the software stack and orchestration on top and sell it to the end customers.
So the hyperscalers may leverage what we call ODMs, original design manufacturers, but Dell is now taking this same technology: they will sell it to hyperscalers, but then they will also take it to the broader market.

So, Drew, okay, now go up the stack. You're consuming this for your customers. I've also heard a number of people talk about how it's not just about the CPU anymore; it's about the surrounding components, the connectivity. That's really fundamentally what we're talking about here, but what does this mean for you as an OEM and ultimately your customers?

Yeah, what's going on in this portion of networking is very analogous to other things that have come out of the hyperscale community, because the hyperscalers have to solve problems at massive scale. When you think about networking in particular, getting networks to run efficiently and performantly at massive scale is a very, very hard problem to solve, but if you solve it, boy, there are a lot of people that are really interested in it. You've got a stack that's very, very hardened and you know it can run at scale. This is just another dynamic that has happened here: in the particular case of Microsoft, they made heavy investments in their network, turning it into an open source project called SONiC, which is a very hardened stack that, while designed for a hyperscaler, has a number of attributes that are starting to become very, very relevant for large enterprises as well as smaller service providers.

Like what? What are those salient points?

Well, a big thing would be this whole idea of multi-tenancy. Even enterprises are basically service providers today, if you think about it: a large bank having to serve thousands of internal customers, thousands of apps. Having a network that can deliver that level of segmentation and granularity, and be easily managed, is incredibly attractive.
I remember in 2011, right after the first re:Invent, I wrote a post that not only did the IT vendor community need to compete with Amazon, but the customers as well: the buyers have to become service providers, exactly the point that you just made.

Absolutely.

It took a while to get that sort of scale and operating model, but we're finally here, I think. Talk a little bit, from both of your perspectives, about the nature of the partnership. Here we are in the Broadcom booth at Dell Technologies World. We always hear a lot about the partner ecosystem and how strong it is, but Hassan, give us your perspective on Dell and the partnership and the impact it's delivering to customers, and then we'll get...

Absolutely. I would say this partnership is a full-stack partnership. As I said, one of the things that we wanted to do was simplify networking systems. If you go back five years, people were buying these big chassis, large systems, and what we have done over the last five to ten years is double the performance and density in silicon every two years. So now we have a 51.2 terabit chip that we call Tomahawk 5, from which you can get 512 ports of 100 gig in a few hundred square millimeters, right? So huge density, huge performance, but also very low power. Now 100 gig takes less than one watt of power; ten years ago, this used to be ten watts of power. What this means for end customers is that systems which were four, five, six RUs are now packed into one RU. Very simple systems, and that is something we are partnering with Dell on, right? They have a complete portfolio of those systems, and this also makes software very easy to write for these systems, because a lot of complexity comes from software.
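The density figures quoted here check out with simple arithmetic. A quick back-of-the-envelope sketch, using only the numbers from the conversation (not datasheet values):

```python
# Sanity check of the density claim: a 51.2 Tb/s switch chip
# (Tomahawk 5 class) can expose 512 ports of 100 GbE.
chip_bandwidth_bps = 51.2e12   # 51.2 terabits per second
port_speed_bps = 100e9         # 100 gigabit Ethernet per port

ports = int(chip_bandwidth_bps / port_speed_bps)
print(ports)  # → 512
```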
When you have multiple chips and you're trying to make them interoperate, that's where the complexity comes in. And what Drew mentioned about SONiC, an open source software: how do you take something and contribute to it so that it's consumable by the broader enterprise? Those are the two ideas that we have partnered on.

One of the sort of narratives during the cloud ascendancy was that network traffic was moving from north-south to east-west, and the whole industry responded to that. What's the state of that trend with respect to AI? Will it change again? Is it now north-south and east-west? How should we think about that?

Yeah, it's interesting. As this AI phenomenon has really started to take off, and we were actually talking about a deal with an AI provider right before we came on here, that east-west dynamic is certainly very, very present in AI, especially if you're a large AI provider. So there are a ton of parallels between what's already been going on in that service provider space and what these AI networks need. Maybe it's just serendipitous that this is all coming together here, but certainly beyond that internal service provider use case for large enterprises, we see tremendous upside in terms of AI providers and the kind of networks that we're collectively working to provide to our customers.

One of the things Michael Dell said yesterday in his keynote was that for businesses that are not using AI today, you're already behind. When customers come to you to solve challenges with respect to the impact of generative AI and AI on networking, what are some of those challenges, and how do Dell and Broadcom together help customers eliminate them so that they can not just dabble in AI but really use it as a business driver?

So I would say, in this case, we have come full circle.
If you look at AI networking, it's different in the sense that with AI workloads, there are very few flows, but they're very large flows, and they all start at the same time and can run for weeks or months. So there are very specific requirements on how to handle all of this. You need to have perfect load balancing, you need to have very good congestion control, you have to have failover mechanisms, and you need to be able to manage very large clusters. Before, compute was all about CPUs; now these networks are about GPUs. And some of these hyperscalers are building clusters which are 32,000 GPUs wide, so you need to be able to scale as well. That is what they're coming to us and asking for. One thing they're saying is that networking is once again getting in the way: Meta was just at the summit, and they talked about how these workloads are spending 50% of their time in networking. So we are trying to make sure that we solve all of these problems around congestion control and failover so that we can improve the job completion times for these AI workloads. And one other thing I'll add: everybody is now on the same page that they don't want vertically integrated systems here either. They want an open ecosystem so that it can innovate, and they want solutions based on Ethernet rather than anything else. That's something we are working very closely with Drew and the team on, and like he talked about, there are deals starting to happen not only with hyperscalers but with other enterprise accounts as well.

I see tremendous promise for AI from an operational aspect. For those of us living in IT, when something breaks or something goes wrong, it's like everybody attempts to prove themselves innocent and find the guilty party, right?
It's just a reality of the world, but the ability for us to take AI and ML and have that as a back end on all the observability engines we have, in terms of what's going on in the environment, so that root cause and attribution of where the problem is can become a real-time activity: this is something we're doing across all of Dell at an ISG level, and we certainly plan on extending it into the networking space.

Can we unpack the characteristics of that workload you just talked about, Hassan? You said very few flows. So what are we talking about here? Just streams of high-bandwidth data going across this internal network, right? And you said it's really important to have load balancing, congestion control, and failover mechanisms. So what does that mean? That you've got to have lots of paths, in case one fails you've got to be able to reconnect on another one, and these have to be big pipes? Can you describe that in a little more detail? And then I'm interested in going up the stack and what that means for how you architect the system beyond just the silicon.

No, absolutely. First of all, bandwidth is important, right? This is why I talked about our 51.2 terabit chip. But the other thing is you need to have what we call perfect load balancing. You need to have multiple paths, and you need to be able to spray packets evenly across them, because if you have few flows and all of them are going on just a couple of paths while the other paths sit empty, you're going to have congestion on those paths. That is why load balancing is extremely important. The other thing is you cannot have flows waiting in the pipeline, so you need mechanisms where the receiver can signal to the sender, "I can take your entire flow," right? That's where some of the congestion management mechanisms come into play. And like I said, when failover happens, it has to happen very, very quickly, right?
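The load-balancing point can be illustrated with a toy model. This is purely an illustrative sketch, not how any switch actually implements it: with only a few large flows, classic per-flow hashing can leave most paths idle, while per-packet spraying spreads the load evenly.

```python
import random

NUM_PATHS = 4
FLOWS = 2            # AI training traffic: very few, very large flows
PACKETS_PER_FLOW = 1000
packets = [(flow_id, seq) for flow_id in range(FLOWS)
           for seq in range(PACKETS_PER_FLOW)]

def per_flow_hash(pkt):
    # Classic ECMP-style hashing: every packet of a flow is pinned to one path
    flow_id, _ = pkt
    return flow_id % NUM_PATHS

def per_packet_spray(pkt):
    # Packet spraying: each packet may take any available path
    return random.randrange(NUM_PATHS)

def path_load(chooser):
    counts = [0] * NUM_PATHS
    for pkt in packets:
        counts[chooser(pkt)] += 1
    return counts

print("per-flow hashing:", path_load(per_flow_hash))    # → [1000, 1000, 0, 0]
print("per-packet spray:", path_load(per_packet_spray))  # roughly even across 4 paths
```

With two flows and four paths, per-flow hashing congests two paths and wastes the other two, which is exactly the scenario Hassan describes; spraying uses all four.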
And you need to be able to switch paths very quickly, otherwise the job completion time suffers. And all of this has to happen at scale, right? Some enterprises are probably building clusters of 1,000 GPUs, but with LLMs, large language models, you need cluster sizes of 32,000 GPUs. So you need to be able to scale as well.

And Drew, your swim lane is Dell's networking portfolio. Does that bleed into the stuff we heard this morning about AI servers? Are they cousins? Are they, you know?

Great question. Yeah, so we have a complete hardware portfolio, and back to the bandwidth comment: we really need to give Broadcom credit here for the pace at which they've been allowing the entire industry to increase bandwidth. If I think about just what we've done in the past five years, and we're now having conversations around 800 gig switches and things like this, it's really phenomenal what they've done with silicon. But we want to have a comprehensive portfolio of switch options to meet the customer where they need to be from that bandwidth perspective. And then we've got the network operating system that's going to run on those switches, which we've talked about: bringing it from the open source community, hardening it, supporting it. And then we want to add value on top of that in terms of automation features, telemetry features, insight and observability features. As we've democratized the network, that's where a lot of the value add is moving at this point in time. I don't want to say customers take us for granted, but they do take for granted that the switch is going to work, that the NOS is going to pass the packet to the right place, that we've designed the network with those multiple paths that you talked about.
And at that point in time, it's like, okay, how am I optimizing this over time and making sure that it's up and running? That's where we're focused from a software perspective, adding that value.

And where are the flows going? Am I generally now back inside my own data center? Am I connecting out to the clouds? What is the normal state?

It's primarily within the four walls of the data center: that's where the heavy work of computing the recommendation happens, and the output is a relatively small component of that. So a lot more east-west pipes than north-south pipes in these kinds of models.

Absolutely. The other thing I would say, based on the feedback that we get from Dell, is that we have very rich telemetry embedded in our silicon as well. Because when you have this kind of scale, failover is extremely important, right? Detecting failures and handling failover. It's very important to have very rich telemetry, and that is something we have built into our silicon. We also want to make sure that we are exposing it. We are sending this data out, but we also send out the data that matters, so it's very filtered data, so that the top-level applications, when they are getting this data, are able to give meaningful insights very, very quickly to the end customer, right?

Take us out here, Drew, and maybe a comment from Hassan as well. What does the future look like? You talked about some of the value add that's coming for customers, but give us a sneak peek into your crystal ball for Dell and Broadcom. When can customers expect those value adds you talked about?

Oh, I mean, because it's software, expect us to deliver something just about every single quarter, right?
I mean, we've gone through an evolution, and we're very proud of our hardware products, but most of our development resources are sitting at the software layer right now. So expect to see us make slow, steady, constant progress, working with Broadcom, pushing this out every quarter.

Excellent. We'll be keeping our eyes on that.

You've just got to figure out how to charge for that, right? Because you guys...

We've got that figured out. We've got that figured out, Dave. Don't worry.

We will keep our eyes on Dell and Broadcom. Guys, thank you so much for joining us on the program today. It was a pleasure having you talk about the trends in networking, the impact of generative AI, and the depth of the partnership and the value you're going to deliver to customers. Thank you so much.

Thank you. Thank you so much.

We want to thank you for watching. Stick around. Co-COO Chuck Whitten joins Dave and me next. He's going to break down his keynote from yesterday. We're going to really unpack multi-cloud by design, and we're going to talk with Chuck about what makes Dell unique for this moment in tech. Lisa Martin, Dave Vellante, we'll be right back.