Previous generations of vSAN were unable to fully exploit hardware innovations, particularly in networking. New test data shows that VMware's vSAN 8.0 Express Storage Architecture, or ESA, as delivered with VxRail, can show dramatic performance improvements when using 100 GbE network cards. In this special CUBE conversation, we analyze new test results that show 100 GbE cards are the preferred option for customers that want the highest performance and the most cost-efficient hyper-converged architecture. And with us to drill into this data is Bill Leslie. He's the Director of Technical Marketing Engineering at Dell Technologies. Hello Bill, great to have you on.

Yeah, appreciate that, Dave. So when vSAN was first architected, around 2012 when it came to market, we were still dealing with the predominance of hard disk drives in the market. We were still dealing with that inflection from one gig to 10 gig networks. NVMe drives were something that was way down the pike. Over the last decade, VMware has continued to enhance vSAN. And with this new iteration, vSAN Express Storage Architecture, or ESA as we'll refer to it here today, they really re-architected how things were done to take advantage of the latest generation of hardware technologies. Of course, Dell is delivering those through PowerEdge and as part of our VxRail platforms in HCI. And this testing that we did really showcases just how important those newer-generation technologies are to the ESA performance levels that you can get to with VxRail and with vSAN.

In a lot of ways, that first generation of vSAN, the OSA or original storage architecture, was using the same type of technology learning from technologies like VMAX: using a very fast performance tier for that write caching layer inside of vSAN, and then de-staging to a capacity tier of disks. And what we were finding is that caching tier was really the limiter to the performance.
And we'll talk about that a little bit in terms of changes, but it really meant that a lot of customers would stick with RAID 1 inside of vSAN, because that was how you could optimize performance with cache drives, rather than doing something like a RAID 5 or a RAID 6. And we've done a great job of telling customers, hey, to get the best performance, go with RAID 1 in your VxRail and your vSAN experiences. ESA really just changed the entirety of that game.

Let's talk about what you actually tested here, Bill. We're showing that you created two VxRail vSAN ESA clusters using identical hardware, except for the networking. So you varied the networking, where one cluster used a Broadcom 25 GbE solution while the other cluster used a 100 GbE configuration. What was the objective here, Bill? Was everything else identical? The software, the policies, et cetera? What were you trying to achieve?

Really, what we wanted to do was isolate the network in the testing. With ESA, we know that they have a minimum requirement of 25 gig networks, but we really felt, because of some of the performance testing that we had done on the OSA design point, that we were going to blow way past that with the 100 gig networks. So we held everything constant except for the networking in this testing. It's the same version of software with VxRail. It's the same hardware components. We didn't vary the drives. We only moved the networking from the 25 gig cards to the 100 gig cards, just so that we could isolate that down and see what type of differential really existed inside the potential of our cluster, and specifically the nodes of VxRail. Any change in performance that we're going to show you today is directly attributable to the change in network configuration.

All right, so before we look at the results, what about the test bench and the workloads? Can you share the background there, so we're grounded in that?
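As rough context for why the NIC is the one variable worth isolating, the raw line rates of the two configurations can be compared with simple arithmetic. This is a back-of-the-envelope sketch, not a figure from the test itself:

```python
# Back-of-the-envelope ceilings for the two NIC options under test.
# These are raw link speeds in decimal units; real-world throughput is
# lower due to protocol overhead, but the 4x ratio is the point.

def line_rate_gb_per_sec(gigabits: float) -> float:
    """Convert a link speed in Gb/s to GB/s."""
    return gigabits / 8

for speed in (25, 100):
    print(f"{speed} GbE theoretical ceiling: {line_rate_gb_per_sec(speed):.3f} GB/s")
# 25 GbE -> 3.125 GB/s, 100 GbE -> 12.500 GB/s
```

That 4x raw headroom is what the like-for-like test design is probing.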
Well, obviously Dell and EMC have a great heritage of doing performance testing and profiling. You mentioned Symmetrix and VMAX, now PowerMax. We've been at this for a long time. There's a standard set of tests that we run across all of our products internally. The same teams on VxRail were actually originally trained on those Symmetrix platforms. We've run a gamut of simulated workloads, mirroring various OLTP or database-style workloads ranging from 4K up to 512K block sizes. We also do some relational database testing in here. So if you see OLTP, that's a database workload of a certain block size. If you see RDBMS, that's going to be your relational database construct. And what we really want to see is what happens to the various characterizations of reads, writes, sustained writes, and sustained reads, to see where those anomalies might be and where the real performance bottlenecks are in the system, so that our sales teams are making sure that our customers have systems that are going to meet their performance requirements at the end of the day.

So let's look at the results. The blue line here is 25 GbE and the green line is 100 GbE. And you can see where the knee of the curve hits for each line and the steepness of that latency degradation. And so Bill, add some color here, please.

So one of the things we like to express on these is: what's the workload we're testing? What's the block size, 22KB? What's the read and write mix? That way our customers can go and replicate these tests on their own with the same like-for-like configuration. You don't want to hold anything back. In this case, we're using RAID 6. Now, I mentioned that there were major improvements in vSAN ESA to the RAID architecture. RAID 5 and RAID 6 actually perform better with compression enabled than OSA does in RAID 1. So we wanted to really show this with those data services turned on.
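To see how quickly a small-block OLTP-style workload can saturate a link, it helps to convert the line rate into an IOPS ceiling at a given block size. The arithmetic below is illustrative only; it ignores protocol overhead, replication traffic, and RAID 6 write amplification, none of which were quantified in the discussion:

```python
# Roughly how many IOPS it takes to fill a link at a given block size.
# Simplified: treats the NIC line rate as the only limit.

def saturation_iops(link_gigabits: float, block_bytes: int) -> float:
    """IOPS at which payload traffic alone reaches the link's line rate."""
    link_bytes_per_sec = link_gigabits * 1e9 / 8
    return link_bytes_per_sec / block_bytes

BLOCK = 22 * 1024  # the 22KB block size called out for this chart
for link in (25, 100):
    print(f"{link} GbE fills at roughly {saturation_iops(link, BLOCK):,.0f} IOPS")
```

Running this puts the 25 GbE payload ceiling in the low hundreds of thousands of IOPS at 22KB, which is consistent with a knee appearing on the 25 GbE curve well before the 100 GbE one.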
We really wanted to change that paradigm where people think, I can't use those data services with VxRail or with vSAN. And so here we've isolated all of that. And you did a great job, Dave, of describing this. That inflection point in that blue curve is really hitting the theoretical limits of the 25 gig networking. And what we see on the 100 gig line, the green line there, is performance far exceeding the right side of that chart, where the performance of the 25 gig networks ended. Now that's important, because what we really see here is near a 50% gain on this particular workload that you get with that 100 gig networking. That's untapped potential if you're not deploying the right networks with vSAN ESA to really harness all of that potential.

Because you're able to use RAID 6 relative to the previous original vSAN, you're now closing that price gap. Even though you're moving toward 100 GbE, and we're going to talk about that later, you're really getting the most cost-efficient but still higher-performance benefit.

And kudos to VMware on this. By re-architecting vSAN the way that they did, changing the way that they process things down the stack, and with great information on the VMware site, core.vmware.com, from some of the vSAN tech marketing team, you can really understand how they re-architected it so that they could tap into this potential with these new hardware technologies that are on the market today.

On this next slide we're varying the block size and increasing the percentage of writes relative to the previous slide, to a higher write mix. Writes are always the more challenging for I/O. And you're seeing a less steep curve for the green line, and even more of an advantage: the 25 gig config again spikes dramatically, while the 100 gig config, the green line, spikes much later and more smoothly. Bill, take us through this data.
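The knee-of-the-curve shape in these latency charts is the classic signature of a queueing system approaching saturation. A minimal sketch of that behavior, using a textbook M/M/1 response-time formula rather than anything specific to vSAN's internals:

```python
# Textbook M/M/1 mean response time: R = S / (1 - U), where S is the
# service time and U the utilization. As U approaches 1.0, latency
# grows without bound -- the "knee" seen as a link nears line rate.

def mm1_response_ms(service_ms: float, utilization: float) -> float:
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_ms / (1 - utilization)

for u in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"utilization {u:.2f} -> latency {mm1_response_ms(0.5, u):5.1f} ms")
```

A faster link pushes the utilization, and therefore the knee, further to the right for the same offered load, which is the shape difference between the blue and green lines.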
We've got that steep curve again. You're seeing basically the same top-end limit being hit, that wall of the 25 gig networking, but in this particular case you're now seeing the newer-generation Intel processors start to shine through, pushing that curve way out to the right on the 100 gig networking. You're talking not quite a doubling of the overall performance level that you get. Now remember, we held everything constant. We didn't change the processors, we didn't change the memory, we didn't change any of the drives that were in this. This is truly showcasing what potential is left in that node by going to the 100 gig networking. And that really was the eye-opener for us: just how radically different performance can be with the right networking with VxRail and these vSAN ESA nodes.

All right, that's the only change. So these are Broadcom NICs, correct?

Correct, both sets are Broadcom NICs. We used the 25 gig and the 100 gig so that we could isolate that in a testing scenario.

Now, the previous two results that we've shown are reflective of more real-world situations that customers might see on a regular basis. But what we like to do, and I'm glad you guys did this, when doing benchmarks, what I call top-end benchmarks, is put the pedal to the metal and see what the system is capable of doing. Bill, please explain the intent of this test and how we should interpret these results.

Yeah, what we really wanted to push was that envelope of the largest block sizes. Again, going back to where previous generations may have had challenges with those cache drives inside of vSAN, what was this going to lead us to with the largest block sizes? That was where we usually ran into performance challenges, and where people would start to consider alternate technologies for those very large-block, high-throughput type workloads.
And what we see here, and I'm not gonna go through each set of these, but you're seeing a near doubling across the board in all of these types of workload throughput potential on the 100 gig networking framework and the Broadcom cards that we're utilizing inside these nodes. And I've mentioned numerous times the write aspect of this. That was where the biggest performance challenges presented themselves in the vSAN OSA architecture. And when we can start to double that throughput level on the writes, it's really showing that we're harnessing all of the power of the technology that's in these platforms.

How should customers think about this if they had to go in front of their CFO and justify an expenditure?

Dave, the best answer is real-world cases. I was at a customer meeting in January, right when we brought these out. It was the first time he'd heard about it. And he said, I should get a quote on that, I'm thinking about buying some new VxRail 8.0; we've been using vSAN for a long time. He came back right away and said, guess what, it was 10% less expensive on just the VxRail node side of that equation, because we're eliminating that expensive cache tier of storage.

Now, the 100 gig portion of this, that's what you mentioned: well, how much more expensive really is it? And on the networking part of this, when you're comparing against numerous all-NVMe flash drives being inside of these systems, the networking pays for itself in less than the cost of a node. It does require an upgrade to all of the cabling, getting 100 gig switching for that environment. But when you're able to showcase just how much potential you can unleash with these, you need half the number of nodes in an overall design point. It pays for itself very quickly in that equation. We tell our customers, our account teams: go put the two side by side. You're gonna see that advantage right out of the gate.
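The node-count argument above can be sketched with simple ceiling arithmetic. All numbers below are hypothetical placeholders, not Dell pricing or sizing figures; the point is only the structure of the sizing math:

```python
import math

# Hypothetical sizing exercise: if 100 GbE roughly doubles the usable
# per-node throughput (as the large-block results suggest), a fixed
# performance target needs about half the nodes. All figures invented.

TARGET_IOPS = 1_000_000   # hypothetical cluster-wide performance target
PER_NODE_25G = 100_000    # hypothetical per-node ceiling on 25 GbE
PER_NODE_100G = 200_000   # hypothetical ~2x ceiling on 100 GbE

nodes_25g = math.ceil(TARGET_IOPS / PER_NODE_25G)
nodes_100g = math.ceil(TARGET_IOPS / PER_NODE_100G)

print(f"25 GbE design:  {nodes_25g} nodes")   # 10 nodes
print(f"100 GbE design: {nodes_100g} nodes")  # 5 nodes
```

With half the nodes, the per-node premium for 100 GbE NICs and the switching upgrade is weighed against the full cost of the nodes that are no longer needed, which is the "pays for itself" framing used here.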
This is a conversation to have early with the networking team, to make sure that the upgrade to the 100 gig networks happens at about the same time you're doing that VxRail one. And we know not all customers can do that, Dave. So what we do allow is, if customers want to change the networking over to 100 gig after that first deployment, maybe let their networking teams catch up, they can do that. But without a doubt, that 100 gig is going to let them take advantage of all of the performance that you want to get out of your VMware and vSAN environments.

Is this a game changer? And if so, why?

The short answer is yes. I think all customers should evaluate what their performance requirements are. What's the technology that's gonna get you to the outcome that you need? In this particular case, we're showing that you can do more and more with hyper-converged than you ever thought you could. This isn't just, throw a little bit of VDI on there, put maybe a tier-two or tier-three app on there. We're seeing real-world customers deploy very performant databases and other types of performance environments with vSAN ESA. And we're really expanding that use case, where you're getting all the value that VxRail brings to bear with the automation for VMware environments, getting the operational efficiencies that you drive by having us offload a lot of the validation of different firmware stacks and software stacks together. We're doing all that testing for our customers so that they don't have to go and read which versions are compatible with which, test it out in a lab, and then go forward. We're really offloading a lot of that burden, and there's more value that can be tapped into by making the decision to move to VxRail for more and more of those workloads that might have a performance requirement with them as well.

Excellent, Bill. I think this is a really interesting development that we're going to be watching closely.
Appreciate your time, and thanks for sharing all this great data.

Dave, thank you for having me here today.

You're very welcome. If you want to dig into the white paper that Bill and his colleagues wrote, we're going to put a link in the show notes and you can check it out yourselves. Thanks for watching.