Hey everyone, it's theCUBE Live at Dell Tech World. Day three of our coverage of the 2023 event from Mandalay Bay. Lisa Martin, Dave Vellante here. We have been coming to you since Monday night, all day yesterday, all day today, and we're here talking a lot about innovation, Dave. There's been a ton of talk about innovation. You've got two alumni back with us, Dell and Broadcom, going to be talking about the latest gen of the Broadcom 9600 family. Please welcome Kim Leigh, Principal Performance Architect at Broadcom. Great to have you back, Kim. Yeah, thanks for having us. Seamus Jones joins us as well, Director of Server Technical Marketing Engineering at Dell. Thanks so much, guys, for joining us. Of course, happy to be here. Yeah. So let's talk about the latest generation of the Broadcom 9600. Kim, give us the technical lay of the land, because there are some significant performance numbers that have been achieved. Yeah, and I was actually really proud to be a part of this. This came about five or six years ago. I'm part of the architecture team, and we decided that we were just going to blow up all the previous norms. We weren't going to do incremental improvement. We were going to do a big, massive improvement in performance, and we really did that. And we did that with a couple of different technologies: using offload engines, we doubled the memory, we doubled the host bandwidth, using PCIe Gen 4 and 24-gig SAS. So it's been really exciting. Seamus, talk a little bit about Dell's involvement there and how you've been a facilitator or an enabler of that. Absolutely. Our partnership with Broadcom has gone on for multiple years, and really the development of PERC 12 as a generational improvement has made a dynamic impact on our 16th generation of servers, which we just launched last year. Really the exciting part is that customers' implementations are seeing the benefit from those performance gains that Kim was talking about.
And our partnership, between the testing and the performance gain that we saw in that testing, meant that we're able to deploy it in all sorts of different use cases. So when you're at an event like this and you see the main stage, there's a lot of very high-level messaging about how technology's going to affect the world, and we get into some of the product stuff in the keynotes. But I like these conversations because now we're going inside, right? Because you've got to have deep technology. It's not just the microprocessor; it's all the surrounding components, the other pieces that you guys are building in to make this a reality. Kim, you and I have talked about the changes we've seen over the last 10, now even 12 years, where everything was designed for spinning disk, and now you guys have worked together for years and years, and that completely changes the game. We've been looking forward to this era for a long, long time. Can you take us through that progression and how it's manifested itself in these products? Do you want to take this? You want me to talk to it? When you look at the full portfolio of products, even the ones that were launched around the XE9680 this past week, the performance within those incorporates the PowerEdge RAID controller. So it's appropriate that you talk about digging in under the covers, right? Because under the covers of every PowerEdge system is the capability to integrate this Broadcom controller, this PowerEdge RAID controller. And as a consequence, even for workloads like AI, these exciting new workloads, people almost take it for granted that these controllers exist and the performance that's associated with them, but they enable the rest of the architecture, even things like IO against the disk, right? Critical elements within the server drive performance.
So if you look at how drive types have been changing over time, we look at the integration of accelerators and the adaptation of these RAID controllers to get even faster, and the partnership that we have with Broadcom has really enabled that. What have been the enablers underneath? I mean, we've talked about the balance, but take us inside: what are the fundamental principles, the design pillars, if you will? What are the key aspects that we need to know about? From a performance standpoint, we actually had three different pillars. One of them was what we call the happy-path IO. So that's your bandwidth, that's your IOPS; that's when everything's going well. The second thing is that we looked at latency, and that's really important for a lot of different things like logging and telemetry, where it's really important to keep that latency super low. We're talking about sub-10-microsecond latencies. And then the third thing is something called performance under rebuild. What that is is we're trying to hide when something does go wrong, when a drive fails and you have to undergo a rebuild. And we've actually been able to, in many cases, completely hide that from the end user, in terms of both latency and performance. So, when systems were designed for spinning disk, it was always that write performance that would kill you. How has that changed with flash and NVMe? Is it the same, just faster? Or has the bottleneck shifted? Help us understand some of the architectural constraints. So I've been doing this now for 24 years. And when I first started out, I remember that hard disks could do maybe 100 IOs per second, maybe 200 if you had a really fast one and you really pushed it hard. And there was this one day where I had this epiphany. We were playing around with, remember when they first came out with these little USB flash disks?
We were laughing about it: if we plugged a bunch of these things in, because they were built on flash, we would have this really fast performance, and then we went, uh-oh, now the game has completely changed. So before, the drives were really slow; now the drives are so fast, and there came a moment in time where we really had to go back and say, how do we redesign these? And what we did was, NVMe right now is our design leader. We went out, we looked at what NVMe drives are capable of. We went out, we asked our customers, we said, hey, what kind of performance do you need? What kind of applications are you running? So we didn't do this in the ether. None of this was designed just in our own little lab. We talked to customers. And you know, one of the big concerns that customers have these days is they said, look, the capacity of these drives is growing so fast, it's increasing at rates we've never seen. We're looking at... I mean, they're up to 15 terabytes now. Right now. NVMe on flash. Absolutely. And even up to 40 terabytes, very, very large, coming in the next 12 months. And what's happening is that the performance of these drives is not increasing at the same rate that the capacity is. So you end up with a bit of a gap, and that's where our performance under rebuild comes into play, because a drive cannot rebuild itself any faster than it can write to itself. So the write performance of that drive is a key indicator of how fast you can rebuild. And that's what our customers are saying: hey, look, I'm really concerned, what can we do about this? Give us a reference point as to what a rebuild time in, say, a hard disk configuration would be compared to where you guys are at today. Can you make a comparison? I mean, I've seen rebuild times on hard disks take, I don't know, weeks. It's 97 times. So if you look at it, 97 times faster with the PERC 12 than the previous generation. I mean, usually you know how it works.
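The point that "a drive cannot rebuild itself any faster than it can write to itself" is simple arithmetic: a lower bound on rebuild time is just capacity divided by sustained write throughput. A minimal sketch with illustrative numbers (these are generic figures, not Dell or Broadcom measurements):

```python
# Illustrative sketch: a rebuild can complete no faster than the replacement
# drive can be written, so a lower bound on rebuild time is simply
# capacity / sustained write throughput. Numbers below are generic examples.

def min_rebuild_hours(capacity_tb: float, write_mb_per_s: float) -> float:
    """Lower bound on rebuild time in hours, ignoring host I/O contention."""
    capacity_mb = capacity_tb * 1_000_000  # TB -> MB (decimal units)
    return capacity_mb / write_mb_per_s / 3600

# A 15 TB NVMe SSD sustaining ~2,000 MB/s writes vs. a 15 TB HDD at ~200 MB/s:
print(f"SSD lower bound: {min_rebuild_hours(15, 2000):.1f} h")  # ~2.1 h
print(f"HDD lower bound: {min_rebuild_hours(15, 200):.1f} h")   # ~20.8 h
```

This is why the capacity-versus-performance gap they describe matters: as drives grow faster than their write speeds, the numerator climbs while the denominator lags, and rebuild windows stretch unless the controller hides the rebuild from the application, as discussed here.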
Generation-on-generation improvements, you see a 20 to 30% performance improvement, not 97 times the performance. And the key thing is that, look, none of us want drives to fail, right? But ever since the days of spinning disk, drives do fail, and you need to plan for that redundancy. That's why we have RAID in the first place, right? But if you're able to plan for not just rebuild times being 5.5 times faster, but then, while under load, 97 times better performance for your application, it means the net-net of that result is that the end user doesn't see a performance impact while that drive has failed and you're having to replace it and rebuild. Historically, before PERC 12, a drive would fail, you'd send off for a replacement from Dell, we'd send one to you directly within a four-hour SLA depending on your service contract, you'd replace it, and then you'd rebuild for however long that might take based on the size of the drive. Which could be weeks or... Depending on the size of the drive, or the full RAID set, maybe. A single drive wouldn't necessarily be weeks, but if you were to rebuild the entire RAID set, it could be quite substantial. Seamus, you had me at 97x performance improvement. Yeah, two orders of magnitude. Right. The customer feedback must be pretty amazing. Share some of that with us. Yeah, absolutely. This past week, during Dell Technologies World, the feedback from customers has been: look, not only are we seeing this performance impact in 16G across CPU, memory, DDR5, as well as PCIe Gen5 (the PERC 12 itself is a Gen4 device), but we're able to see it within the drive types that are attached. And these RAID controllers enable that performance improvement across the entire ecosystem of the server. So it's a really exciting use case. They're seeing it for things like healthcare, right? Government applications, as well as financial and fintech types of customers.
So those high-frequency, low-latency applications where they're doing things like database queries that might have a real impact on their business. Is there an application for this product for spinning disk still? Are they still relevant? Absolutely. What's the delta impact there? So we definitely designed it with hard disks in mind as well, because in terms of cost per gigabyte, you really can't beat spinning media still. So it's really great for a little bit colder kind of data, but we're still there to protect it, to make sure that it's still highly available. In terms of performance, we can still 100% enable hard drive performance, and not only that, but we can accelerate the write performance of hard drives: we've seen two and three X, just by doing some optimizations in the write caching algorithms that we have on our controllers. Go ahead. That being said, though, the cost of flash has come down dramatically in the past 12 months. I was going to ask you, because we certainly called that crossover ("correctly called" is maybe an oxymoron), but high-spin-speed, high-performance disks and flash, that was clear. We thought it was going to happen sooner, and our premise was it doesn't even have to cross over, because you get so many other advantages with flash, things like data sharing and space-efficient snapshots, on and on. But to your point, Seamus, are we almost there, where that crossover is? We're getting there. We're definitely getting there, and we're seeing that customers, because of the performance improvements and the price per gig of that flash optimization, and the capability of this PERC 12 to do hardware-based RAID from the RAID controller on those flash drives, that's something that historically customers might not have adopted. They might have thought, I'll do a direct attach on an NVMe drive and not have the benefit of RAID redundancy, and run the risk for their applications.
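The write-caching acceleration Kim describes for hard drives rests on a classic idea: acknowledge writes into cache immediately, then flush them sorted by logical block address so the head moves in one sweep instead of seeking randomly. This toy model illustrates the concept only; it is not the actual PERC or Broadcom caching algorithm:

```python
# Toy model of why controller write caching speeds up HDD writes: cached
# writes can be flushed sorted by LBA (an elevator-style sweep), cutting
# total head travel versus servicing writes in arrival order.

def flush_order(cached_writes):
    """Return cached writes sorted by LBA for a single sweep across the disk."""
    return sorted(cached_writes, key=lambda w: w["lba"])

def seek_distance(writes):
    """Total head travel (in LBA units) for a given service order."""
    return sum(abs(writes[i]["lba"] - writes[i - 1]["lba"])
               for i in range(1, len(writes)))

incoming = [{"lba": 900}, {"lba": 10}, {"lba": 500}, {"lba": 20}, {"lba": 880}]
print(seek_distance(incoming))               # arrival order -> 2720 units of travel
print(seek_distance(flush_order(incoming)))  # sorted order  -> 890 units of travel
```

Even in this tiny example the sorted flush cuts head travel by about 3x, which is the same order as the "two and three X" write acceleration mentioned for spinning media.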
Whereas with this, we're making sure that they get peak performance. They're able to take full advantage of those NVMe drives, and they're able to deploy across their full estate within their environment. Seamus, can you expand on, you mentioned healthcare, but some of the industries that are really primed to be significantly impacted by the power and the performance of this technology? If you look at the applications this is most appropriately used for, things like SQL, databases, CRM, ERP, those apply across a multitude of industries. Within healthcare specifically, it's critical data, absolutely critical data. It needs to be accessed without lag. I mean, just imagine if you went to the doctor and they couldn't bring up your file because, oh, I can't query the database quickly enough, right? That's a direct impact for a user. And when we look at the rest of the use cases, I mean, fintech, those are high-frequency traders. Oftentimes they're not necessarily using accelerators; they're actually using larger disk profiles, and more of them. So that makes this RAID controller even more important. You know, one interesting thing, he's talking about transactional performance: with this new generation, we're seeing an eight to 10x improvement over our previous generation for those kinds of applications. So transactional performance is extremely important for a lot of different applications in different environments. And from an end-user standpoint, how many times have you been on the phone with a call center and you've heard, oh, my system's really slow today? Exactly. Yeah, because you know they're doing reads. They're trying to read a record, right? That's exactly it, that happens all the time. And they're trained now to blame the system, right? Yes, they are, yeah, absolutely. So latency, again, latency has just become so important.
And one of the things that we've been able to do is, we actually have these caching engines. They're hardware engines that handle all the write caching. So as these write I/Os come into our controller, we're able to write those down in between five and 10 microseconds, and then write those out optimally at a later time, or even just re-sort them. And so with these great latencies, we're even lower than the best enterprise NVMe drives. We can beat that, and we can make it consistent. So for those SLAs that he was talking about, it's really critical, for messaging and logging and things like that, where you have to maintain that consistency. You can't have these 99.99th-percentile tail latencies that are 10 and 20 seconds, which we had seen in some of the previous generations, where we were like, well, why are we having these latencies? And so we've been able to take care of that. We can assure 99.99th-percentile latencies, up to the max IO rate, of less than 10 microseconds now. Sub-10 microseconds, wow. So the knee of the curve is not going to... Unbelievable. People want consistent performance. Yes. You know, they can deal with that as long as it's within some kind of boundary. Yeah, so, absolutely. Yeah. Guys, in our final minute or so here: a tremendous amount of innovation and improvement, a lot of benefits and real-world applications, as you talked about, Seamus. Absolutely. What's next for Dell and Broadcom? I'd love to get both of your perspectives on that. Oh, there's a lot. Let's put it that way. There's a lot going on under the covers. We can talk about roadmaps in customer sessions, but there's a lot of innovation happening with Dell and Broadcom, everything from NICs right through to silicon on the board, right through to the new RAID controllers. I mean, we're doing planning on future generations. We have a strong, long-term partnership with Broadcom.
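The emphasis on 99.99th-percentile tail latency, rather than the average, can be made concrete with a small sketch. The latency values here are synthetic, not measured controller data; the point is only that a handful of multi-millisecond stragglers barely move the mean but dominate the tail percentile that SLAs care about:

```python
# Sketch: why p99.99 tail latency matters more than the mean for SLAs.
# Latencies are synthetic (microseconds), not measured controller data.
import random

random.seed(0)
# Mostly fast I/Os, plus a few rare multi-millisecond stragglers.
latencies_us = [random.uniform(5, 10) for _ in range(100_000)]
latencies_us += [random.uniform(10_000, 20_000) for _ in range(20)]

def percentile(samples, pct):
    """Nearest-rank percentile of a list of samples."""
    ordered = sorted(samples)
    rank = max(1, int(round(pct / 100 * len(ordered))))
    return ordered[rank - 1]

mean = sum(latencies_us) / len(latencies_us)
print(f"mean   = {mean:.1f} us")                             # still looks healthy
print(f"p99.99 = {percentile(latencies_us, 99.99):.0f} us")  # exposes the stragglers
```

The mean stays near the fast path, but p99.99 lands squarely on the stragglers, which is exactly the behavior described: a generation that occasionally spikes can advertise good averages while still breaking latency-sensitive workloads like logging and messaging.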
And the other thing that I did want to mention quickly is that within government applications, the use of a hardware root of trust means that all the component parts within the server are able to validate and certify through the RAID controller, so that if the devices are tampered with, you can still ensure a stable platform. And if there's anything that's manipulated or changed, from factory right through to customer deployment, it won't even boot the system. So it's an additional security feature that's embedded into the RAID controller, and that's a key critical element for the rest of the pieces within the server. And every industry will benefit. Every industry, especially fed, especially government deployments. Where can the audience go to learn more? So you can go to the Broadcom website right now, broadcom.com, and then you can go into the NVMe SSD section. And there's all kinds of information. And of course, Dell also has some dedicated reports. We have the Tolly report, where we tested over 60 different data points. And so customers can look and say, for my application, what kind of performance can I expect? Excellent. Seamus, Kim, thank you so much for coming on here, sharing the innovations that Dell and Broadcom are continuing to deliver that are making a global impact. We appreciate your insights. Thank you so much. Our pleasure. For our guests and for Dave Vellante, I'm Lisa Martin. After a short break, Dave and I are going to be joined by two CUBE alumni talking about what's driving storage in 2023. We're going to be talking about cyber threats and how Dell is helping customers overcome a lot of challenges, and more on PowerStore. We'll see you after a short break.