Cloud has changed the way that organizations think about hardware. Customers are dealing with so much data that they can't rely on processing power alone anymore to meet their performance requirements. Intel, AMD, NVIDIA and others have made notable strides in performance, along with more advanced forms of memory. But modern system performance relies on other hardware components that have seen tremendous innovation in the last couple of years. We're talking about things like NICs, RAID controllers and accelerators that need to be in sync to optimize application performance, eliminate bottlenecks and keep accelerating system performance. Importantly, how those components communicate with each other becomes increasingly vital. In fact, some believe that we're moving from a CPU-centric to a connectivity-centric world. Hello, this is Dave Vellante. theCUBE has been covering hardware performance for years, and we're pleased to go deeper into the topic with a series of programs made possible by Dell. The series provides more in-depth coverage of this subject, and in the first installment we look at the generational differences of Dell hardware with various upgraded connectivity components like storage controllers and network interface cards. The question is, does hardware matter? We're initiating deeper coverage of a technological shift that will provide performance headroom for the next generation of modern applications. Not just general-purpose systems like ERP, but emerging workloads that inject AI, machine learning and massive amounts of data into applications. We believe that we're at the cusp of a renaissance in systems design, and we're pleased to bring you an in-depth look at the future. First up, theCUBE was recently at Dell Technologies World and Red Hat Summit in Boston covering those live events.
Now we sat down for extensive conversations with a number of top execs, including Jeff Clarke, who's the vice chairman and co-COO of Dell Technologies, Red Hat's CEO Paul Cormier and Red Hat's CTO Chris Wright, and Accenture also weighs in. We had the chance to ask all of them, point blank, does hardware matter? And we got some really interesting responses. Let's see some clips from those conversations. Welcome back to Las Vegas. We're here in the Venetian Convention Center. My name is Dave Vellante, I'm here with my co-host John Furrier, and you're watching theCUBE's live coverage of Dell Tech World 2022. Great crowd, I would say 7,000, maybe even 8,000 people when you add in all the peripheral attendees. Jeff Clarke is here, he's the vice chairman and co-chief operating officer of Dell Technologies. Great to see you face to face, man. Hi guys, good to see you again. Does hardware still matter? And if so, why? Of course hardware still matters. Explain why. Well, last time I checked, doesn't the software stuff work on the hardware? Exactly. Doesn't the software make hardware calls to exploit the capabilities we built into the hardware? Of course it does. Listen, does hardware matter anymore with all the cloud? Hardware totally matters. I mean, the cloud tried to convince us that hardware doesn't matter, and it actually failed. And the reason I say that is because if you go to a cloud, you'll find hundreds of different instance types that are all reflections of different assemblies of hardware. Faster IO, better storage, certain sizes of memory. All of that is a reflection of applications needing certain types of environments, for acceleration, for performance, to do their job. Now, I do think there's an element of we're decomposing compute into all of these different sorts of accelerators, and the only way to bring that back together is connectivity through the network.
But there's also SOCs when you get to the edge, where you can integrate the entire system onto a pretty small device. I think the important part here is we're leveraging hardware to do interesting work on behalf of applications, and that makes hardware exciting. And as an operating system geek, I couldn't be more thrilled, because that's what we do. We enable hardware, we get down into the bits and bytes, and poke registers, and bring things to life. There's a lot happening in the hardware world. You're saying it matters. Yeah, and infrastructure isn't static anymore, I mean, now that you can do infrastructure as code, right? I was at the Dell Summit last time, and Red Hat is a huge partner of Dell now, right? Which was much more partnered with VMware, but I think the whole ecosystem is opening up, and even the hardware providers are looking at this in a much more nimble way, but yes, it's very much part of the conversation. We did a survey, I think around last August or something, and one of the questions was around, where do you want your security, right? Where do you want to get your DevSecOps security from? Do you want to get it from individual vendors, right? Or do you want to get it from the platforms that you're using and deploying? Great question, what did they say? The majority of them are hoping they can get it built into the platform. You're going to see hardware innovation out at the edge, software innovation as well. The interesting part about the edge is that, obviously, RHEL made Red Hat. What we did with RHEL, back when the world was just standalone servers, was a lot of engineering work to make every hardware architecture just work out of the box, right? And we did that with an open source development model, so embedded in our psyche and our development processes is working upstream, bringing downstream, 10-year support, all of that kind of thing. So we lit up all that hardware.
Now we go out to the edge. It's a whole new set of hardware innovation out at the edge. We know how to do that. We know how to make hardware innovation safe for the customer. And so we're bringing it full circle, and you have containers embedded in Linux and RHEL right now as well. So with the edge, we're actually bringing it all full circle, back to what we've been doing for 20-plus years on the hardware side, even as a big part of the world goes to containers and hybrid and multi-cloud. So that's why we're so excited about the edge opportunity here. That's a big part of where hybrid's going. You can see that hardware is still a hot topic for many of these top tech leaders. By the way, you can see the complete interviews with all of these individuals, and many more of the guests we had at Dell Tech World and the Red Hat Summit. All you've got to do is follow the links on this page. Next up, let's have a conversation with Jas Tremblay, who's the general manager of the Data Center Solutions Group at Broadcom. Broadcom is a company that designs and builds many of the connectivity components that are part of modern systems architecture, and it's an enabler of many of the performance improvements we'll discuss in detail. Welcome, Jas, to the program. Hey, Dave, thanks for having me, really appreciate it. What are the trends that are driving the shift we talked about earlier, from a CPU-centric world to one that's connectivity-centric? If you look at the digital universe, it's growing at about a 23% CAGR. So over the course of four to five years, you're doubling the amount of new information. And that poses two key challenges for the infrastructure. The first one is you have to take all this data and, for a good chunk of it, store it, be able to access it and protect it. The second challenge is you actually have to go analyze and process this data. And doing this at scale, that's the key challenge.
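Jas's growth math checks out as rough compound-growth arithmetic. Here's a minimal sketch; the ~23% CAGR is the figure from the conversation, and the rest is the standard doubling-time formula:

```python
import math

def doubling_time(cagr: float) -> float:
    """Years for a quantity to double at a given compound annual growth rate."""
    return math.log(2) / math.log(1 + cagr)

def growth_factor(cagr: float, years: float) -> float:
    """Total growth multiple after `years` of compounding at `cagr`."""
    return (1 + cagr) ** years

cagr = 0.23  # ~23% annual growth of the "digital universe", per the conversation
print(f"Doubling time: {doubling_time(cagr):.2f} years")      # ~3.35 years
print(f"Growth over 4 years: {growth_factor(cagr, 4):.2f}x")  # ~2.29x
```

Note that a strict 23% CAGR actually doubles the data in a bit under three and a half years, so the "four to five years" framing in the conversation is on the conservative side.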
And we're seeing these data centers getting a tsunami of data, and historically they've been CPU-centric architectures. What that means is the CPU is at the heart of the data center, and a lot of the workloads are processed by software running on the CPU. We believe that we're currently transforming the architecture from CPU-centric to connectivity-centric. And what we mean by connectivity-centric is that you architect your data center thinking about the connectivity first. And the goal of the connectivity is to take all the components inside the data center, the memory, the spinning media, the flash storage, the networking, the specialized accelerators, the FPGAs, all these elements, and use them for what they're best at to process all this data. And the goal, Dave, is really to drive down power and deliver the performance so that we can achieve all the innovation we want inside the data centers. So it's really a shift from CPU-centric to bringing in more specialized components and architecting the connectivity inside the data center. We think that's a really important part. Okay, am I right, Jas, that from an architectural standpoint, because latency is so important, you're essentially trying to minimize the amount of data that you have to move around, and actually bringing compute to the data? Is that the right way to think about it? Well, I think there are multiple parts of the problem. One of them is you need to do more data transactions. For example, for data protection with RAID algorithms, we need to do millions of transactions per second, and the only way to achieve this with minimal power impact is to hardware-accelerate them. That's one piece of investment. The other investment is, you're absolutely right, Dave, shuffling the data around the data center. In the data center, in some cases, you need to have multiple pieces of the puzzle, multiple ingredients, processing the same data at the same time.
And you need advanced methodologies to share the data and avoid moving it all over the data center. So that's another big piece of investment that we're focused on. Talk a little bit more about the disruptive technologies, or the supportive technologies, that you're introducing specifically to support this vision. So the first one is, I'll take an enterprise workload, a database. If you want the fastest-running database, you want to utilize local storage and NVMe-based drives. And you need to protect that data, and RAID is the mechanism of choice to protect your data in local environments. And there, what we need to do is really just do the transactions a lot faster. Historically, the storage has been a bit of a bottleneck in these types of applications. So for example, with our newest generation product, we're doubling the bandwidth and increasing IOPS by 4x, but more importantly, we're accelerating RAID rebuilds by 50x. And that's important, Dave. If you are using a database, in some cases you limit the size of that database based on how fast you can do those rebuilds. I wonder if we could take an example of scaling with a large customer. For instance, obviously, hyperscalers, or take a company like Dell. I mean, they're a big company, a big customer. Take us through that. So take a company like Dell that's very focused on storage, from storage servers through their acquisition of EMC. They have a very broad portfolio of data center storage offerings. Scale, to them, from a connected-by-Broadcom perspective, means that you need to have the investment scale to meet their end-to-end requirements. All the way from a low-end storage connectivity solution for booting a server, all the way up to a very high-end all-flash array or high-density HDD system. So they want a partner that can invest, and has the scale to invest, to meet their end-to-end requirements. So Dell is a great company to work with.
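To see why rebuild speed can cap practical database size, here's a back-of-the-envelope sketch. The drive capacity and rebuild rates below are hypothetical illustration values, not figures from the tests; only the 50x ratio echoes the conversation:

```python
def rebuild_hours(capacity_tb: float, rebuild_rate_mb_s: float) -> float:
    """Estimate hours to rebuild one failed drive at a sustained rebuild rate."""
    capacity_mb = capacity_tb * 1_000_000
    return capacity_mb / rebuild_rate_mb_s / 3600

# Hypothetical numbers: a 7.68 TB NVMe drive rebuilt at 100 MB/s vs 5,000 MB/s (50x)
slow = rebuild_hours(7.68, 100)
fast = rebuild_hours(7.68, 5000)
print(f"Baseline rebuild: {slow:.1f} h, accelerated: {fast:.2f} h ({slow/fast:.0f}x faster)")
```

The longer a rebuild runs, the longer the array is exposed to a second failure, which is why administrators size databases around worst-case rebuild windows.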
We have a long-lasting relationship with them, and the relationship is very deep in some areas, for example server storage, and it's also quite broad. They are adopters of the vast majority of our storage connectivity products. How unique is the Broadcom model? What's compelling to your customers about that model? If you look at some of the things we talked about from a scale perspective, how data centers throughout the world are getting inundated with data, Dave, they need help. And we need to equip them with cutting-edge technology to increase performance, drive down power and improve reliability. So they need partners that, in each of the product categories you partner with them on, can invest with scale. That's, I think, one of the first things. The second thing is, if you look at this connectivity-centric data center, you need multiple types of fabric. And whether it be cloud customers or large OEMs, they are organizing themselves to be able to look at things holistically. They're no longer product companies; they're data center architecture companies. And so it's good for them to have a partner that can look across product groups, across divisions, and say, okay, this is the innovation we need to bring to market, these are the problems we need to go solve. And they really appreciate that. And I think the last thing is a flexible business model. For example, within my division, we offer different business models, different engagement and collaboration models with technology, but there's another division that, if you want to innovate at the silicon level and build custom silicon, like many of the hyperscalers or other companies are doing, is just focused on that. So I feel like Broadcom is unique from a storage perspective in its ability to innovate, the breadth of its portfolio and the flexibility of its collaboration models to help our customers solve their customers' problems. Jas, great stuff. I really appreciate you laying all that out.
Very important role you guys are playing. You have a really unique perspective. Thank you. Thank you, Dave. Next, our colleague David Nicholson takes a deeper dive into some of the performance data and the independent labs that performed these tests. Over to you, David. Thanks, Dave. Now, if we're truly going to explore this question, does hardware matter, we need objective data to evaluate. Where do you get objective data? You get it from independent testing labs. So in a moment, we're going to be talking to two independent testing labs that ran performance tests, and we'll then review the results of those tests. Joining me is Aaron Suzuki, founder and CEO of Prowess. Aaron, welcome. Thank you. Thanks so much for having me. Absolutely. Thanks for joining us. So let's dive right in. Tell us about Prowess. Prowess has been around for quite a while. We've been serving the technology industry from the very beginning, almost 20 years. We've always been able to bridge the gap between the story of the product and what it actually does. And a lot of times there's a pretty fundamental disconnect between what engineering says and what marketing wants to claim. And so this is sort of how we got down this road of getting into testing and validation of products, such as we do today quite extensively. That's really what we're focusing on right now, this idea of your independence as a lab. And in this particular case, it's a series of tests that you've done using Dell hardware combined with Broadcom cards. So talk a little more about that concept of independence and what it means. Yeah, you know, it's important to us that we stay vendor-agnostic, platform-agnostic. And there are a lot of things happening concurrently in the industry. A lot of people want to get a lot of work done really fast. And most customers are not vendor-exclusive. In fact, we're not sure we know of any. We always try to keep this objective point of view.
That is to say that we don't allow our customers to buy results when we're doing quantitative testing. We really are out there trying to come up with a story or a narrative. And that really seemed to be the missing link in all of this: on one side there are the quantitative houses that do traditional benchmark testing, and on the other extreme there are system integrators and agencies. The agencies would really do the narrative, and the system integrators would build out a solution, but they wouldn't be able to tell you how it would perform. And so reconciling those two things really became challenging. So having a source that can give you insight that goes beyond just transactions per whatever unit of time, and finding some of these metrics in between that are more relevant to people's jobs, was really the inspiration for creating this unique practice that we call Prowess Labs. Aaron, thanks for joining us to talk about Prowess today. My pleasure, thanks for having me. In addition to the tests that were run at Prowess, other tests were run at Principled Technologies, a second independent testing lab. Joining us now is Mark Van Name, co-founder of Principled Technologies. Welcome, Mark. Thanks, Dave. It's great to be here. Tell us the story. Sure, here's the short form. We're in our 20th year. We created the company with the idea that the best way to sell products is to tell the truth, which may not seem radical, but often is. And so we started out trying to prove the advantages of products. Have you had challenges over the years maintaining that independence? What do those conversations look like when maybe someone is trying to nudge you? How do you deal with that? From the beginning, we give away the entire methodology, what we now call the science behind the report, for every engagement we do. So we don't just say, hey, we tested this.
If you go to most review sites, they'll tell you a little tiny bit about what they did and then give you their conclusions. We make available, in a separate document attached to the report, the complete methodology, the system information, the software information, everything about what we did, the detailed steps, so that if you have the right hardware, software and expertise, you can reproduce what we did. This means you don't have to trust us. You can verify it, and it puts our work out on display for everybody. Mark, thank you so much for spending time with us and telling us a little about what your company does and how you do it. Well, thanks for having me. Now let's take a deep dive into the tests themselves. To do that, I'm joined by Kim Leyenaar from Broadcom. Kim, welcome. Hey, thanks for having me. Kim, tell us about yourself. What do you do at Broadcom? So I am a performance architect here at Broadcom. I've been with them for 15 years, and I'm in my 22nd year of working in storage performance. So what is the overall theme of what we're looking at in these tests? Why are we running these tests to begin with? Well, when we design these products, we have an idea in mind of how well they're going to perform, but it's really critical that we verify they can actually perform to the expectations we have for them. Using an independent lab, as we have here, allows us to do that and convey that information to our customers. So let's be very specific about what's actually being tested. What is the specific hardware being tested in this case? What are we looking for? So we're trying to focus a little bit more on the storage component, but storage doesn't act in a vacuum. There are a lot of other components that are critical to making sure that the storage runs well. For instance, the PCIe slots, the processing capabilities, as well as the memory. Let's talk about test number one. Tell me what we did in test number one.
So the first test was focused on transactional database performance. And what we were comparing here was the Dell PowerEdge R740xd to the new PowerEdge R750. In the previous generation, our storage controller was attached to SATA drives, and in the R750 instantiation we updated that to NVMe drives. So it clearly shows the advantages of going from a SATA environment to an NVMe environment. Well, let's take a look at some of those results, because I want to get your input on exactly what they mean. I'm seeing increases in new orders per minute performance. In one case, with eight drives, we see a 7X increase. With 16 drives, a 14X increase. We go to log disk writes, and we see a 5.6X increase with eight drives. Going to 16 drives, we see a 13.5X increase. I'm assuming that this would be with two controllers instead of just one, going to 16 drives. For log disk reads, a 1.6X increase with eight drives and a 9X increase with 16 drives. Rebuild times, which are obviously important, going from the R740xd to the R750 with the addition of the RAID card: 4.45X faster in one case, 5.25X faster in the other. Kim, what do these numbers mean, and why are they important to people who are using server technology? Well, I think we can both agree that those are pretty impressive improvements in performance. What we're testing here is a TPC-C-like benchmark, an industry-standard benchmark that's been around for well over 20 years, even in the form that it's in right now. And what it does is measure the transactional performance capabilities of the server. So this is a lot more holistic than just testing the storage, because it's actually testing the memory, it's testing the CPU, and it's testing the storage too. And one of the reasons why we focused in on the database performance and the log performance is because there are a lot of different components that go together in a SQL-based server environment in order to generate good performance.
And those are both very critical components, especially the log writes. Especially today, we've got to make sure that we have very low-latency log writes along with really high performance. And of course, you mentioned the reduction in rebuild time. One of the benefits of RAID is high availability, so your storage can keep going even if you have a drive failure. And what this shows is that we can get our customers' databases back online even faster than we could before. So, Kim, what does this mean in the real world? What does this performance translate into, in terms of things that people care about? Well, the reality is that it translates into more transactions per second from our customers' databases, so that our enterprise businesses can actually get more work done. It's really important to make sure that they are able to realize the benefits of the entire system. So by exercising the database, we're actually showing what the entire Dell R750 is capable of doing, especially compared to the previous generation, and it provides an incentive and a reason for our customers to move up to the newer technologies. Makes sense. So, Kim, walk us through Test 2. What was it all about? So Test 2 was a little more focused on just the storage. Unlike the first test, where we were testing transactions, Test 2 was really just focusing on the IOPS and bandwidth capability. Again, we're testing the Dell PowerEdge R740xd and comparing that to the Dell PowerEdge R750. But what is different between these two is the Gen 3 versus the Gen 4 storage infrastructure. In particular, we're using a Gen 3 PLX switch in the R740xd, as opposed to the Gen 4 PLX switch in the R750. In both cases, we're testing 24 NVMe drives. What we're doing is really just trying to see how far we can push it. How many IOPS can this system handle?
So not only did we scale up the number of drives that we were testing simultaneously, we also scaled up the number of cores that we were testing, to see how well that correlated to end-user performance. Well, let's take a look at those results. Test 2 shows an ability to process more outgoing storage requests, up to 2.1x the raw IOPS. These are random read tests, and we're looking at up to 12.3 million IOPS from 24 NVMe drives. In the sequential read testing, we're seeing similar gains, up to 2.2x the concurrent throughput in gigabytes per second. That peaks at 53.2 gigabytes per second with 24 NVMe devices. Kim, these are impressive numbers, but what do they actually mean? What they mean is that the protocol for Gen4 PCIe is working. It's exactly what we were expecting. And in fact, it's even a little bit more than what we were expecting. We were anticipating a doubling of performance, and where in the computer server world do you see a doubling of performance so easily? So by using these Gen4 switches along with these Gen4 NVMe drives, we're seeing a fantastic, amazing improvement in the ability to move data within the server. So what does that mean for customers? For our customers, we all know that data's coming at us faster than ever before. We're storing more and more data every single day. So this shows the ability to actually double the intake capabilities of these servers. Kim, let's talk about Test 3. First, walk us through the parameters of the test. So it's similar to Test 2, but what we've done is introduce not only NVMe, but mix that up with SAS and SATA drives. Why is it important to test a mix of devices? Well, with today's servers you actually have a large choice of drives. Once upon a time, you really just had one. You had spinning hard drives. Maybe they were SAS, maybe they were SATA.
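The roughly 2x gains track PCIe's published per-lane rates. Here's a quick sketch using the standard signaling rates and the 128b/130b encoding shared by PCIe 3.0 and 4.0; this is spec arithmetic, not data from the tests:

```python
def pcie_lane_gbps(gt_per_s: float, encoding: float = 128 / 130) -> float:
    """Usable GB/s per lane: transfer rate (GT/s) x encoding efficiency / 8 bits."""
    return gt_per_s * encoding / 8

gen3 = pcie_lane_gbps(8.0)   # PCIe 3.0: 8 GT/s  -> ~0.985 GB/s per lane
gen4 = pcie_lane_gbps(16.0)  # PCIe 4.0: 16 GT/s -> ~1.969 GB/s per lane
print(f"Gen3 x16: {gen3 * 16:.1f} GB/s, Gen4 x16: {gen4 * 16:.1f} GB/s "
      f"({gen4 / gen3:.1f}x)")
```

A Gen4 x16 link carries roughly 31.5 GB/s versus Gen3's 15.8 GB/s, so moving both the switch and the drives to Gen4 is what makes a near-exact doubling of deliverable bandwidth plausible.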
But nowadays we've added to the mix: we have SATA SSDs, we have SAS SSDs and we have NVMe drives of all different types. And these all have different kinds of storage characteristics that fit well with different kinds of applications, such as VDI or, you know, backup or cold storage and things like that. So it allows our customers to mix and mingle the drive types that fit their particular storage needs. Understood. Let's look at the test results. For Test 3, where we tested a mix of media, we saw a 2.4X increase in random read performance for 4K IOPS when moving from the R740xd to the R750. For random 4K writes, we see about a 1.3X increase in performance. For sequential reads, we see a 2X increase in performance. And finally, for sequential writes, we see a 1.34X increase in performance. So Kim, once again, help us understand why these numbers matter, especially in a real-world context. Well, because in the real world we actually have a lot of different applications, oftentimes running simultaneously on a single server. So what this shows is the ability to, you know, layer on performance as the applications need it, and to do so in a very granular way, where you can add just enough capacity or just enough performance to achieve the targets for your particular environment. Kim, is there an economic factor involved when people are deciding whether to run SAS or SATA or NVMe devices? Yeah, there absolutely is. The different kinds of devices have different costs associated with them. And generally what we talk about is the dollars, or the cents, per gigabyte, depending on what makes sense. And being able to design for that, I mean, that's one of the toughest things right now for IT administrators, trying to balance cost with performance. And this really allows them to fine-tune that. So Kim, when we look at all three of these tests together, what's the overall message here?
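The dollars-or-cents-per-gigabyte trade-off Kim describes reduces to simple division. A sketch with hypothetical list prices and capacities; none of these numbers come from the tests or from Dell:

```python
# Hypothetical drive options: (name, capacity in GB, unit price in dollars)
drives = [
    ("SATA HDD", 8000, 160),
    ("SATA SSD", 1920, 170),
    ("NVMe SSD", 1920, 260),
]

for name, capacity_gb, price in drives:
    cents_per_gb = price / capacity_gb * 100
    print(f"{name}: {cents_per_gb:.1f} cents/GB")
```

In practice an administrator weighs these per-gigabyte costs against the IOPS and latency each tier delivers, which is exactly the balancing act described above.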
The overall message is that we understand it's very difficult for our customers to balance their performance requirements with their costs, especially with today's budgets. What this shows is that you can double your performance just by going from Gen 3 to Gen 4 in the new Dell R750 architecture. So it's a big benefit for our customers. So is it fair to say, Kim, that hardware matters? Hardware absolutely matters. It is absolutely critical in the decision-making that goes into designing your servers for your particular environment. Kim, thanks for helping us understand these test results. Well, thank you so much for having me. Thank you, David. Great to see some of the performance benchmark data in action. Finally, for deeper insights on just why this all matters to Dell's customers, let's talk to Shannon Champion. She's the Vice President of Product Marketing at Dell Technologies. Welcome, Shannon. Thank you. Glad to be here. Yeah, it's always great to collaborate with you. Shannon, you've had a pretty impressive career. You've got this killer combination: an engineering degree, multiple engineering degrees actually, combined with a business education. You've worked as a semiconductor engineer, a quality engineer, a product manager, a product marketing exec, et cetera. And you now have responsibility for a variety of hardware- and software-led infrastructure at Dell. How have you seen hardware evolve over the years? Well, first of all, thank you. I appreciate that intro, Dave. Yeah, it's been a fun journey. I think there are two things: there's product-led evolution and there's customer evolution, and those go hand in hand. If you think about the technology from a hardware perspective, it's become more advanced and more specialized, and the diversification of chip architectures is really what's driving that. It's gone from general-purpose CPUs to GPUs and other purpose-built accelerators.
And with all that specialization, obviously more and more software is required to really knit it together. We believe Dell is uniquely positioned to do that. You know, with cloud and software-defined and hyperconverged, why specifically does hardware still matter? Well, if you know anything about Dell, you know we are driven by a customer-first mindset. So I'm going to go back to that customer evolution I talked about. From a customer perspective, purchase decisions used to be more about feature function, right? Like how much compute, memory and storage can you pack in, and get the best performance characteristics? Of course people still care about this. And almost every customer, if you look at the widespread surveys that have been done and the industry projections, is still going to be making data center infrastructure purchases for the foreseeable future. But more and more, these sorts of traditional hardware capabilities are table stakes. What customers are making purchase decisions on are the software-driven capabilities that provide the differentiation to allow them to do more with less. So with that comes a refocusing of where IT adds value for their organizations. We know maintaining and managing infrastructure is not what differentiates companies and makes them stand out from the crowd. So that's what this whole notion of IT transformation is all about. Our customers are pulling us into a broader set of problems, and their purchase criteria are moving away from hardware feature function to differentiated solution and software value decision-making, with more focus on how they can drive business value beyond the infrastructure. So it's really the combination of hardware with software that optimizes and delivers the best outcomes, and the tighter the link we can create between them, the more seamless the experience for customers. Gotcha.
And I mean, this is more important than ever with the push toward digital transformation, and everybody's trying to get digital right. Now, thinking about Dell as a company and its broader strategy, the majority of revenue comes from what most people would think of as hardware. But as Jeff Clarke often points out, the vast majority of Dell's engineers are software engineers. Can you explain how that dynamic works, and what role hardware plays in that equation? If you think about IT transformation, infrastructure is the enabler of that transformation. But infrastructure needs to be smarter, easier, more automated and more secure, and that's done with software. And our software engineering focus is nothing new. I think, Dave, we were together five years ago talking about the latest version of HCI on the 14th generation of PowerEdge servers. And at that time, we were talking about how our hardware platform engineers were working with the software engineers to design software-defined storage capabilities into the PowerEdge platform. So we are not new to this. We've been looking at ways we can use software to exploit the underlying hardware features and capabilities, and do that in a differentiated way, because it delivers value for customers. And honestly, they're willing to pay a premium for that. I remember that well, 14G, now 15G, and soon we're going to be talking about 16G. Can you give me an example of where hardware differentiation has created value for your customers beyond, you know, what a straight software-only solution running on generic white boxes might bring? Yeah, I have a couple of examples. The first is easily VxRail, right? VxRail, our jointly engineered HCI system with VMware, provides full-stack integration of hardware and software for consistent operations in VMware environments. When you think about the evolution of infrastructure, VxRail is actually a cool story.
When it was introduced six years ago, its scalability and performance drove rapid adoption, mainly in the data center. But customer demands have evolved, and they wanted to extend that operational efficiency to a broader and broader set of workloads, not only in the data center but in the cloud and at the edge. So VxRail grew, and its portfolio today has maximum flexibility. You can choose the best platform to meet performance, storage, graphics, IO and cost requirements, with a range of processor types, NVMe drives and graphics cards. So it really is the most configurable HCI system to meet any workload demand.

And we recently introduced some new node types. That's hardware-based, right? VxRail dynamic nodes and satellite nodes. Our customers and partners are really excited about these. The dynamic nodes, as you know, add the capability to scale compute and storage independently and extend to primary storage like PowerStore, and the satellite nodes are single nodes for the edge. So that's all hardware stuff, but the secret to VxRail really is more about the software, so I'm going to go back there. VxRail HCI System Software is what makes VxRail more seamless and simple than any other HCI system. And when managing your environment is easier and more automated, and your workloads can stay up and running by leveraging that intelligent lifecycle management, customers pay attention. So again, it's that combination of hardware and software. For VxRail customers, it's how we're delivering that truly curated experience, as we like to call it, that they can't get anywhere else.

Last question. Anything else you want to bring into the discussion before we close?

Yeah, two things actually. I have another good example of hardware differentiation and how it creates value for customers, and this one is based upon PowerStore. PowerStore inline data reduction uses Intel QuickAssist technology to perform hardware-accelerated compression.
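As a side note for readers, the data-reduction effect being described can be illustrated with a short software sketch. PowerStore does this work in dedicated QuickAssist hardware; the snippet below uses Python's software zlib purely to show the concept of a reduction ratio, and the sample data is hypothetical:

```python
import os
import zlib

def data_reduction_ratio(data: bytes, level: int = 6) -> float:
    """Compress `data` and return the reduction ratio (original size / compressed size).

    Illustrative only: real arrays offload this to hardware so the CPU
    keeps its cycles for storage IO, but the ratio math is the same.
    """
    compressed = zlib.compress(data, level)
    return len(data) / len(compressed)

# Highly repetitive data (logs, zeroed blocks, databases) compresses very well...
repetitive = b"ERROR timeout on volume vol-01\n" * 4096
# ...while already-random (or already-compressed) data barely reduces at all.
random_blob = os.urandom(128 * 1024)

print(f"repetitive: {data_reduction_ratio(repetitive):.1f}:1")
print(f"random:     {data_reduction_ratio(random_blob):.2f}:1")
```

This spread is why vendors quote an average ratio across workloads rather than a single number: the achievable reduction depends heavily on the data type, as the interview notes.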
So it's basically handling data reduction in hardware. We offload the compute-intensive work of compression and conserve the CPU cycles for storage IO tasks, which saves application and storage processing time, cycles and cost. So it's a more consistent way to do storage efficiency and leverage PowerStore's advanced inline compression. It's always on, and it doesn't compromise performance or other services. With this hardware-differentiated approach to inline data reduction, PowerStore customers get an average four-to-one data reduction across all their workloads without compromising performance or services. And honestly, a lot of times we see them achieving twenty-to-one or more, depending on the data type. So yeah, I just wanted to throw out that other example.

The last thing I'll say is we just launched a trifecta of storage innovation at Dell Technologies World. We have over 500 new high-value software enhancements that bring out the best in our storage hardware platforms, across PowerStore, PowerMax and PowerFlex. So I encourage folks to go check that out and let us know what you think.

Yeah, we can put a link to those in the show notes. I was there at Dell Tech World, and it was actually quite amazing. Shannon, thanks so much for coming on and sharing your insights. Really appreciate it.

My pleasure.

We're seeing how system technology and architectures are changing. The CPU is no longer the dominant actor in the system. We're seeing the value shift to other components, like NICs, accelerators and storage controllers, which are becoming more advanced, more integrated and increasingly important. These innovations will power the future of system design, along with the CPU of course, but also the GPU and the NPU, call it the XPU. The conclusion is that hardware absolutely does matter. In fact, it matters more than ever, and the future will bring massive changes that we'll be here covering.
Thanks for watching this episode of Does Hardware Matter made possible by Dell. You can watch extended versions of all these discussions with other source materials on demand. Tune into theCUBE and watch for future updates in the series. This is Dave Vellante and we'll see you next time. Thank you.