Live from Las Vegas, it's theCUBE, covering Dell EMC World 2017. Brought to you by Dell EMC.

Well, welcome back to Dell EMC World 2017. We're live here at the Venetian in Las Vegas, day one of the three-day show. We had Michael Dell out on the keynote stage earlier today. We also had David Blaine, the world-famous magician. Pretty interesting performance. He went at it with an ice pick. We'll get into that later, but it was interesting. Keith Townsend, John Walls here. We're also joined by Itzik Reich, who is the CTO of XtremIO at Dell EMC. Itzik, thanks for being with us. It's good to see you, sir.

Thank you very much. All the way from Tel Aviv, and great to be here.

So, your sweet spot of the company is giving birth to a new baby today, XtremIO X2. Tell us about that, what spawned it, and what the response has been. What have you developed?

I think in order to understand X2, you need to start at the beginning, with X1. In November 2013, I was at my class reunion, meeting my ex-girlfriend, and we launched X1. And X1 became, within two quarters, the largest-selling all-flash array in the world. So, from nowhere to the largest all-flash array, at least in terms of units sold to the market, according to both Gartner and IDC. And it was both a huge burden and a huge success for us. A success, because from nobody we became the number-one leader, and a burden because we didn't have the life cycle to normally mature a product, right? You mentioned being a father. I'm a father to two lovely daughters. One of them is six years old, one of them is five. And the young one is starting to show some signs of being a really clever person. And I'm afraid that somebody will tell me, oh, she can skip the first grade, right? Because skipping a grade has some social aspects associated with it. So, we've been really busy trying to harden XtremIO X1, making it super stable.
Today, we're already at about five nines in the market, but it was also time to refresh the product and come out with something new. So, our life cycle wasn't the traditional year or year and a half for refreshing the product. It took us longer to come out with X2, and this is what we announced today.

So, what's new with X2?

The first thing is the ability to come out with denser drives and denser configurations. In X1, you could put up to 25 drives inside a DAE. In X2, you can put up to 72 drives per DAE, right? And you can scale, just like before, up to eight X-Bricks. So, huge capacity, which you need for the vast majority of the use cases out there that are not just VDI or just a single database. Today, XtremIO can fit pretty much every transactional workload, including virtualization workloads. You just need a lot of capacity for thousands of VMs. So, that's one thing. The other thing is we improved the performance of the X2 array, and the magic story there is that because of the thousands and thousands of customers that we have out there, we really got good insight into the workloads they are running. And what we found is something very interesting. The majority of those customers are running workloads with very small block sizes. So, in storage, every IO that you write to the system has a block-size characteristic. And we found that the majority of them are using very, very small block sizes. And we wanted to improve the performance at those block sizes, the IOPS and the latency. And we also wanted to make sure that it's actually more economical, cheaper than the very expensive drives like the new NVMe drives that are out there. So, different design goals: making it faster and also making it cheaper in different dimensions. So, we came out with a new feature called Write Boost. In a nutshell, Write Boost will give you 80% better latency for pretty much every workload that is out there.
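The workload analysis he describes, looking at the distribution of IO block sizes across a fleet of arrays, can be sketched in a few lines. This is an illustrative example with a synthetic trace, not XtremIO's actual telemetry pipeline; the function name and the trace data are assumptions for the sketch.

```python
from collections import Counter

def block_size_histogram(io_trace):
    """Count IOs per block size (bytes) and return each size's share of the total."""
    counts = Counter(io_trace)
    total = sum(counts.values())
    return {size: counts[size] / total for size in sorted(counts)}

# Synthetic trace: mostly small 4 KiB / 8 KiB IOs, a few large 64 KiB ones,
# mirroring the "majority of workloads use very small blocks" observation.
trace = [4096] * 70 + [8192] * 20 + [65536] * 10
hist = block_size_histogram(trace)
for size, share in hist.items():
    print(f"{size // 1024:>3} KiB: {share:.0%}")
```

On a trace like this, 90% of IOs are 8 KiB or smaller, which is exactly the regime a small-block latency optimization targets.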
So, with that, small block sizes versus big block sizes, why is that important? You know, we're at a conference where we're talking a lot about digital transformation. As we teased John earlier, he's a sports guy, he doesn't do Legos. Help us understand the value of that data type.

Sure, sure. We'd love to think about digital transformation, but at the end of the day, you're a customer, you have a database. You run a query or queries against that database. If it's a very large database, there are thousands, maybe even millions of queries every day. Those queries take time for the end user to get a response. So, let's assume you run a monthly report, and this report normally takes nine hours to generate. If I can shrink the report crunching time to two hours instead of nine, that means I provide better value for the business itself, right? One of the stories we have is a financial customer in the Middle East. They need to generate a report every month starting at midnight, because this is when they lock in their reports, up until eight o'clock in the morning. Why eight o'clock? Because this is when the employees start to come to work. And for every hour they exceed that eight-hour generation window, they get fined by the government. So if I'm saving this customer four hours, and they are not getting fined by the government for generating the report, that's the true value for the customer itself. Those things are important. People tend to think about performance numbers just in terms of IOPS, where the real magic number is latency. How quickly can you serve the query, whether it's a database application, a VDI VM, or just a generic web server running on a virtual machine? Those are the important things today.

So, transactional apps, big deal. We've learned a lot about virtualization and cloud computing to date.
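The arithmetic behind the nine-hours-to-two-hours story is worth making explicit: for a serial batch job, wall-clock time is roughly query count times per-query latency, so cutting latency cuts the report window proportionally. The query counts and latencies below are invented for illustration; only the nine-hour and two-hour figures come from the interview.

```python
def report_runtime_hours(queries, latency_ms):
    """Wall-clock hours for a serial batch of queries at a given per-query latency."""
    return queries * latency_ms / 1000 / 3600

# Hypothetical monthly report: 3 million queries.
baseline = report_runtime_hours(queries=3_000_000, latency_ms=10.8)  # 9.0 hours
improved = report_runtime_hours(queries=3_000_000, latency_ms=2.4)   # 2.0 hours
print(f"baseline: {baseline:.1f} h, improved: {improved:.1f} h")
```

The same query count at a fraction of the latency finishes well inside the midnight-to-eight window, which is the fine the customer avoids.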
Are these transactional apps running in virtualized environments, or are we still relying on big, heavy bare-metal workloads?

Yeah, it's a good question. At least from my experience, I would argue that anywhere between 70 and 80% of the customers out there went fully virtualized. They're running their entire application stack either on ESX or on Microsoft Hyper-V, so they are fully virtualized. Some customers are still running their workloads on traditional physical servers, right? Even ESX, at the end of the day, runs on a physical server underneath the kernel itself. But yeah, the majority of them are already there in terms of virtualization.

So what are customers really excited about when it comes to feature sets for XtremIO X2 versus XtremIO version one?

Right, amazing question. So, performance, we've already discussed: 80% better latency. That's not something you get just from using better CPUs. Intel's Moore's law is basically dead, right? They don't give you 200% performance between generations. So we wanted to do something else to solve the same problem. The other thing is quality of service. We are not shipping it in GA yet, but that's coming soon: the ability to give a specific VM a specific IOPS cap and a latency cap, and also the ability to burst to more IOPS if needed for a couple of minutes. Quality of service is the noisy-neighbor problem, right? Somebody generates too much noise, you want them to be quiet. That's what quality of service is. The other thing that we've announced is native replication. We finally have our own replication that can replicate from one XtremIO to another, but it's not a traditional replication. The unique thing about XtremIO was always CAS, the content-addressable storage architecture. People typically think about it as a deduplication feature, but in fact we don't have a feature called deduplication.
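The "IOPS cap with a short burst allowance" he describes is the classic token-bucket pattern. Here is a minimal sketch of that idea; the class name, parameters, and injectable clock are assumptions for illustration, and this is not the actual XtremIO QoS implementation.

```python
import time

class IopsLimiter:
    """Token-bucket IOPS cap with a burst allowance (illustrative sketch only)."""

    def __init__(self, iops_cap, burst, clock=time.monotonic):
        self.rate = iops_cap        # sustained IOs per second
        self.burst = burst          # bucket capacity: extra IOs allowed in a burst
        self.tokens = float(burst)  # start with a full bucket
        self.clock = clock
        self.last = clock()

    def allow(self):
        """Return True if one IO may proceed now, False if it must be delayed."""
        now = self.clock()
        # Refill at the sustained rate, never beyond the burst capacity.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # the "noisy neighbor" is throttled here
```

With a cap of 100 IOPS and a burst of 10, a quiet VM can fire 10 IOs back to back, after which it is held to the sustained rate, which is exactly the quieting effect he describes.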
We analyze the data as it goes into the system, and we give a unique SHA signature to each one of those blocks. And if the SHA signature already exists in the system, we dedupe the block, but it's not a feature per se. That's why deduplication is so fast on XtremIO. So, up until now, the CAS architecture was only applicable to writing data into the array itself. Now it's also applicable to replicating the data. So, for example, if you have a data reduction of five to one, which is very common in virtualized use cases (many VMs from the same template, and so on), you now need to replicate four times less data from the source to the destination target. That's a very, very big thing, because you need to replicate more and more data, but the 24-hour window hasn't changed. God didn't upgrade time with a service pack; it's still 24 hours per day. So this is super important for us and we're very excited about it. And the other thing, again, is the larger, denser configuration of the array itself, so for the customer, the drive cost of the XtremIO array can be up to two-thirds cheaper. So it's cheaper for them to put their workloads on XtremIO, rather than really picking out just the database that needs all the performance in the world. So we can really become a true enterprise array with those features.

It seems like it's got to be a constant chase for you, though, right? You're looking for higher performance. You're looking for lower costs. You said you've just gained an 80% increase in your performance capabilities. And now people are going to be looking at you over the next year and saying, okay, what's next? Where are the gains to be had in the next generation of technology? Just in terms of philosophically approaching that, what do you do?

Yeah, yeah, it's another good question. I actually gave a briefing about it just earlier.
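The content-addressing idea he describes, where a block's SHA signature is its identity and a repeated signature means the block is never stored twice, can be sketched as follows. This is a toy model to show the mechanism, not XtremIO's implementation; the class and the 8 KiB block size are assumptions.

```python
import hashlib

class ContentStore:
    """Toy content-addressable block store: identical blocks are stored once."""

    def __init__(self):
        self.blocks = {}         # SHA-256 digest -> block data (unique blocks only)
        self.logical_writes = 0  # every write the host issued

    def write(self, block: bytes) -> str:
        self.logical_writes += 1
        digest = hashlib.sha256(block).hexdigest()
        # If the signature already exists, the block is deduplicated "for free":
        # no second copy is stored, only the reference.
        self.blocks.setdefault(digest, block)
        return digest

    def reduction_ratio(self) -> float:
        return self.logical_writes / len(self.blocks)

store = ContentStore()
for _ in range(5):                # five identical 8 KiB VM-template blocks
    store.write(b"A" * 8192)
store.write(b"B" * 8192)          # one unique block
print(store.reduction_ratio())    # 6 logical writes, 2 unique blocks -> 3.0
```

The replication point follows directly: if only unique signatures need to cross the wire, a 5:1-reduced dataset sends a fifth of the logical data, which is how the fixed 24-hour window keeps fitting growing datasets.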
So the first thing you need to do as an industry, not just as Dell EMC, is lower the cost of the drive itself to be even cheaper than a mechanical drive. That's not there today, right, versus the hybrid mechanical drive. But you can get a more economical drive if you apply data reduction on it. So if you're five times cheaper, because the data that gets ingested into the array gets deduped and compressed and thin-provisioned, then you can be on par with the mechanical drive. So first we want to be on par, if not cheaper. We want everybody to move to SSDs, and we were the first all-flash array in the Dell EMC portfolio. So that's the first thing. The second thing is to really get better insight into your application workloads. Today people analyze things like IOPS and latency, but what does your application really think? Where are the queues in the application stack itself? How can you find them in the storage subsystem itself? So we are on a journey there with our reporting mechanism. A year and a half ago, we started a new project to completely change the reporting mechanism of the web UI, the interface of the XtremIO array. And today you can really drill down into pretty much every aspect for which, up until now, you had to purchase third-party software to analyze your workload for you. So, things like histograms, IOPS, block size, read and write latency per block, so you can really understand your workload. We also give you something like anomaly detection. So we can tell you: every week this application behaved fine, but on that Friday, for some reason, the response time wasn't that good. You should go ahead and check it out. Maybe there is a bottleneck in the application, maybe there was a bottleneck in the storage layer, so you can actually find it out. But I would argue the long-term goal is something bigger, and that's a vision, right?
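The "every week this application behaved fine, but on that Friday the response time wasn't good" check is, at its simplest, outlier detection on a window of response times. Here is a minimal sketch using a z-score threshold; the function, threshold, and sample data are assumptions for illustration, not the array's actual algorithm.

```python
from statistics import mean, stdev

def flag_anomalies(daily_latency_ms, z_threshold=2.0):
    """Return indices of days whose response time spikes far above the weekly norm."""
    mu, sigma = mean(daily_latency_ms), stdev(daily_latency_ms)
    return [i for i, v in enumerate(daily_latency_ms)
            if sigma > 0 and (v - mu) / sigma > z_threshold]

# Mon..Sun average response times in ms; Friday spikes.
week = [1.1, 1.0, 1.2, 1.1, 4.8, 1.0, 1.1]
print(flag_anomalies(week))  # -> [4]  (Friday)
```

A real system would use longer baselines and per-metric thresholds, but the principle is the same: the array already has the histogram data, so it can point you at the day worth investigating.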
I'm not announcing anything yet, but it's really the ability to merge or combine the software-defined world, the hyper-converged mechanism, with traditional arrays, although all-SSD is not that traditional. Maybe you can have a denser configuration with a very smart DAE, but the performance aspect of it will not be derived from the DAE where it actually stores the data, but from virtual machines that you can spin up and down in a cloud-like fashion that bring you all the performance you need. That, I think, to me is the holy grail: really merging between the worlds, because there isn't one perfect answer, right? The software-defined guys will tell you everything should go to software-defined storage. We will tell you everything should go to all-flash arrays. But really, the truth is always in between. And this is really one of the directions we are approaching. But for now, I'll let you enjoy X2. How about that?

It was a good day for you. And don't let that five-year-old skip a grade, either. I think that's a good idea too. There you go, there you go. Thanks for joining us.

Thank you, thanks.

Back with more here on theCUBE. We're live in Las Vegas at Dell EMC World 2017.