From the CUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. Hi, I'm Stu Miniman, and welcome to a special CUBE Conversation. We're going to dig a little bit into the history as well as talk about the modern storage environment. Happy to welcome back to the program one of our CUBE alumni, someone I've actually known for many years; we worked together for a number of years. Scott DeLandy, he is the technical director of Dell's Storage and Data Protection Division, of course with Dell Technologies. Scott, great to see you.

Stu, it's so awesome to see you guys. Thank you for the opportunity to come and chat with you. Today we've got some really exciting stuff that we want to go through, and I know you and I are probably going to have a little bit of an issue, because when we get together we always want to reminisce about the things we've done and the stuff we've gotten to work on, as well as the cool stuff that's happening in technology today.

Everybody buckle in, because this is going to be cool. Unfortunately, we're only a few miles away from each other in person, but of course in these times we have to do it remotely. Still, you and I can walk side by side down memory lane for a little bit.

Yes, absolutely.

As I hinted, you and I both worked at a company that many people will remember. I always worry, Scott, that the younger people will say, "EMC, who are they?" Back when I started at EMC in 2000, you talked about Prime and DEC and some of the other companies here in Massachusetts that had been great and then been acquired, or things that happened. You've had an even longer tenure at what is now Dell EMC, of course after the mega merger a couple of years ago. So talk a little bit about your journey, and we're going to be talking about PowerMax, which of course is the continuation of the long legacy of the Symmetrix platform.

Yeah, it's crazy.
So I hit 30 years with EMC, and now with Dell, back in July. It's been an amazing three, now going on three-plus decades of being able to work with amazing technology, incredibly talented people within the organization, as well as some of the best and brightest when it comes to users and customers that actually deploy the technology. So it's been a tremendous ride, and I'm not planning on slowing down anytime soon. Let's just keep going, man.

Yeah, you talk about decades; God, it felt like 2020 has been a decade unto itself. But we talk about that history: Symmetrix really created the standalone storage business, created a lot of technologies that helped drive a lot of businesses out there. Bring us up to PowerMax. Where does that business fit in the portfolio? Got any good stats for us as to adoption here in 2020?

Yeah, I mean, you kind of said it. When Symmetrix was originally introduced, and that was one of the earlier generations of the architecture of what we now know today as PowerMax, a lot has changed with respect to the platform in terms of the technology, the types of environments that we support, the data services that we provide. So it's been, again, three-plus decades of evolution in terms of the technology. But the concept of external storage, buying and deploying compute separate from the storage infrastructure, that was an unheard-of concept back in 1990 when we first introduced Symmetrix. This month, September, is actually the 30-year anniversary of when we first introduced that platform. And lots of things have changed, right? It started as a mainframe platform and then evolved into mainframe and open systems. Then we started looking at the adoption of things like client-server, and then environments became virtualized.
And throughout that entire history, Symmetrix, and now PowerMax, has really been one of the core tenets in terms of leveraging the storage infrastructure to make a lot of those evolutions happen: the types of applications, the types of operating environments, and the entire ecosystem that goes around supporting an organization's applications and helping them run their business. Where PowerMax comes into play today, it is still considered the gold standard when it comes to high-end technology, providing the reliability, the automation, the data services, the rich functionality that has made the platform the success it still continues to be. One of the things that blows my mind: if you look at just the last earnings call, from last month or a couple of months ago now, the PowerMax business is still growing; it grew at a triple-digit rate for that quarter. And you look at what's happening from a technology standpoint: external storage has been a pretty stable segment of the infrastructure business, but we're still able to see that type of growth. Just talking to users and hearing how much they continue to love the platform, how they continue to rely on the types of things we're able to provide for their applications, for their businesses, there's a tremendous amount of trust that's been built up with respect to that platform. It's cool to be a part of that and to hear those types of things from the people that actually use the products.

Yeah, one of the big changes during my time in the portfolio there, Scott, was of course the real emergence of server virtualization with VMware. I'd actually started working with VMware when I was at EMC, ahead of the acquisition. And then once the acquisition happened, there was a long maturation of storage in a VMware environment.
We kind of look back and say we spent a decade trying to fix and make sure that storage and networking could work well in those virtual environments. So we've got VMware going on, and I understand you've got some news on that front: that constant cadence of always making sure the storage and the virtual environment work very well together. So why don't you bring us up to date on the news?

Yeah, it's pretty exciting. We are announcing some new software capabilities for the platform, as well as some new hardware enhancements. Basically there are three focuses. The first is tighter integration with VMware, specifically by introducing new support for VVols and changing the way we deploy and support VVols within the platform. We're also introducing new cloud capabilities: being able to take your primary storage, your PowerMax system, and extend it to leverage cloud deployments. So you can consume the capacity a little bit differently and support some really interesting use cases in terms of why somebody might want to take their primary tier-one storage, connect it, and move some of those data sets into a cloud provider. And the third part is some really innovative things happening around security, really around being able to provide additional support for data protection, especially for things like encrypted environments, while still preserving the efficiencies we've built into these storage platforms. So those are the three big things. There's a lot of other, what we would call giblets, also associated with the launch, but those are really the big-ticket items that I think people are talking about.

Well, let's drill in a little bit there, Scott. So if we take the cloud piece there: of course, we understand Dell and VMware partner very closely together.
VMware very much is driving that hybrid and multi-cloud deployment out there. When I talk to some of the product teams, there's that consistency of deployment: say you take a VxRail with VMware VCF, and that's a similar environment to what I could do in Google Cloud or Azure. How do those cloud solutions that you talk about fit into that overall discussion?

Well, when you look at something like VVols, right? VVols is a bit of a change, a newer way of connecting into an external storage platform. One of the things we're trying to solve with VVols is providing better granularity in terms of the storage and the capacity being consumed at the individual VM level, but also being able to plug into the VMware ecosystem, so that even though you have an external storage device connected into that environment, the way it gets managed, the way it gets provisioned, the way you set up replication, the way you recover things is completely transparent, because all of that is handled through the VMware software that sits above it. It seems like a trivial exercise to just plug in a storage system and away you go, but there's heavy lifting required to support that, because you've got to, in some cases, make changes to the things you're doing on the backend storage side, as well as work with the ecosystem provider, in this case VMware, to make changes so that they can support some of the functionality and some of the rich data services you're able to provide under the covers.

I'll give you a great example. One of the things we have the ability to do today is, when we plug into a VMware environment with a PowerMax, we can support up to 64,000 devices. Just try and get your head around that. 64,000 devices: what does that even mean? It sounds like a lot. Is that just a marketing number, and nobody would ever get to that level in terms of the number of devices you would have to support?
But one of the technical challenges we wanted to solve is that when you deploy a virtual machine, each individual virtual machine consumes minimally three VVols, and sometimes dozens and dozens of VVols, especially if you're looking at doing things like copies or making snapshots of it. So the ability to scale to that large number of VVols, and to support that in a single storage system, is very powerful for our users, especially folks out there that are looking to do massive levels of consolidation, where they really want to collapse the infrastructure down. They want as few physical things to manage as possible, which means you're spreading hundreds, thousands of these virtual machines into a single piece of infrastructure. So scale really does matter, especially for the types of users that would deploy a PowerMax in their environment, because of the things they're trying to do from an IT perspective, as well as the things they need to do to support their businesses.

Yeah, well, Scott, absolutely. Scale is such an important piece of the overall discussion today, and it means different things to different people. It could mean you're massively scaling out like the hyperscalers; there's also the edge discussion of small scale but lots of copies. Talk to me about scale when it comes to those mission-critical applications, and think about the solutions and data services you're talking about. Of course, EMC Symmetrix really helped create that category with things like SRDF and TimeFinder back in the day. So what are you hearing today? What's most important for a mission-critical application?

Really excellent point. It really comes down to automation, right? Think of some of these large environments: we have users out there today that will have tens of thousands of virtual machines running in a single system.
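To put Scott's numbers together: at a minimum of three VVols per VM, the 64,000-device ceiling bounds how many VMs a single array can consolidate. A rough back-of-the-envelope sketch; the 64,000 limit and the three-VVol minimum come from the discussion above, while the larger per-VM footprint is an illustrative assumption:

```python
# Rough consolidation estimate: how many VMs fit under a 64,000-VVol ceiling?
MAX_VVOLS = 64_000  # stated PowerMax device limit in a VMware environment

def max_vms(vvols_per_vm: int) -> int:
    """Upper bound on VMs a single array can host at a given VVol footprint."""
    return MAX_VVOLS // vvols_per_vm

print(max_vms(3))   # bare minimum (config, swap, data VVols per VM): 21333
print(max_vms(12))  # assumed heavier footprint once snapshots/clones pile up: 5333
```

Even at the heavier assumed footprint, that leaves headroom for thousands of VMs per system, which is why the device count matters for the consolidation use case Scott describes.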
And for the ability to manage those, you can't find enough human beings, let alone keep up with all the changes that happen in that environment. It's just something that cannot physically be done in a manual way. So having that environment as automated as possible is really important. But it's not just automation, it's being able to automate at scale, right? If I have 10,000 VMs and I want to make a change in the environment, going through and making those changes VM by VM by VM is incredibly impractical. So being able to plug into the environment and having hooks or APIs into the interfaces that sit on top of it, that's where a lot of the value comes in. It's really that automation, because, again, tens of thousands of VMs, 64,000 devices, cool stuff, but you're not going to manage those individually. So how do you take that infrastructure and literally make it invisible to everybody around it, so that when you have something you want to do, you just worry about the outcome? You don't worry about the individual steps required to get to that outcome.

Yeah, so important, Scott. When PowerMax first came out, I got to talk with some of the engineers, and the comment I made is that we've been talking about automation for decades. Scott, you probably know better than most: in some of the previous generations, automation would be discussed, but this is different. What they really said is it's so much about machine scale; we've gone beyond human scale. Humans could not keep up with the amount of changes and how we do things, and it's not just some scripts that you build. There really is that kind of machine learning built into what we're talking about. The other thing we've talked about for a long time, and it has always been critical in your space and you've touched on it before: security.
So give us the discussion of security in PowerMax and how that fits into companies' overall security standards.

Well, at a very high level, I can confidently say there is a heightened level of awareness around security, especially for the types of applications and the types of data we would typically support within these platforms. So it is very much a top-of-mind discussion. And one of the things people are looking at in terms of how to protect that data is that it needs to be encrypted, right? We've been doing encryption for many, many years. We first introduced that through a feature called D@RE, which is data at rest encryption, which allows us to encrypt at the individual drive level. So if that drive was ever physically removed, either to be serviced or because someone just lost the drive, you wouldn't have to worry about that data being out in the wild and accessible to somebody, because there was an encryption key, and unless you had that key, you could not access that data. For many, many years that became a check-the-box requirement: you cannot put your gear in my data center unless I can be assured that the data stored on that system is encrypted. What's changing now is that just encrypting the data on the array is no longer good enough for some environments. The data needs to be encrypted from the host: from it being written by the application, all the way through the server, the memory, the networks, the controllers, right to the backend storage. So it's not just encrypting the data at rest, but encrypting the data end to end.
And one of the challenges you have is that when you're writing encrypted data to a storage platform, especially an all-flash storage platform, one of the data services that provides a lot of value is the ability to do data reduction, through a combination of things like data deduplication, compression, and pattern recognition. There's all this cool stuff that happens under the covers, and we will typically see a three-to-one, four-to-one data reduction for a particular application. But when that data is encrypted, you no longer get that efficiency. It won't dedupe, it won't compress. So that changes the economic paradigm, if you will, as you look at these external storage devices.

So we've been talking to customers. We had one customer in particular come to us, a large insurance company, and one of their biggest customers came to them and said, our new policy is that all of our employee data has to be encrypted end to end. As they looked at how to address that requirement, they quickly realized that to do it they would need to increase the amount of storage they have three to four X, because the data they were getting really high deduplication and compression against, they would no longer get that for. So we looked at: what are ways we can preserve the data efficiencies, the data reduction, on the storage side while still meeting the requirement to encrypt that data? One of the new features we're introducing within PowerMax is the ability to do end-to-end encryption while still preserving the efficiency. I can turn encryption on all the way at the host level and write that data into the PowerMax. The PowerMax has access to the encryption keys that are on the host, and it has the ability to decrypt that data inline, so there's no bump in the wire, there's no performance impact.
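The economics Scott is describing are easy to demonstrate: well-encrypted data is statistically indistinguishable from random bytes, so compression gains nothing. A minimal sketch, where random bytes stand in for ciphertext and the repetitive record is purely illustrative:

```python
import os
import zlib

# Toy stand-ins: a repetitive "application" data set versus random bytes,
# which statistically resemble well-encrypted ciphertext.
plaintext = b"policy_record:customer=ACME;status=active;\n" * 4096
ciphertext_like = os.urandom(len(plaintext))

def reduction_ratio(data: bytes) -> float:
    """Original size divided by compressed size (higher is better)."""
    return len(data) / len(zlib.compress(data))

print(f"plaintext reduction:  {reduction_ratio(plaintext):.1f}:1")
print(f"ciphertext reduction: {reduction_ratio(ciphertext_like):.2f}:1")
```

The repetitive data reduces far beyond the three-to-one figure cited above, while the ciphertext stand-in hovers around one-to-one, which is why decrypting inline before applying data reduction matters.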
We apply the data reduction to it and then re-encrypt the data as we write it out to the back end. So it's a hugely important feature for IT organizations that are just now getting their heads around this emerging requirement: it's not just the data at rest that needs to be encrypted, it's the data end to end, through the entire process. It's a big challenge, and it really is one of the innovations we're pushing in order to meet that requirement for the set of users that see this either as something they need today or as an evolving requirement where they want to put infrastructure in place. So even if they're not doing it today, but they see that maybe a couple of years down the line it's something they're going to need, they have the ability to enable that feature on the storage itself.

Well, Scott, 30 years of innovation driving through this. First of all, I hope, if you haven't planned it already, you get one of those Symmetrix refrigerators that I saw back in the day. Wheel that out to the parking lot of where we used to work; sign of the times that what used to be a bar is now an organic sushi place, but have a socially distant gathering to celebrate. But give us a little look forward. 30 years, and you're not resting on your laurels, always moving forward. So what should we expect to see from PowerMax going forward?

Two things. Number one, the person that came up with the idea of what we internally refer to as the V-fridge was an absolute genius. I would just say that person was a genius. The second thing, in terms of what we see going forward: one of the top-of-mind discussions for a lot of users is cloud, right? How do I have a cloud strategy?
I know that I have applications I'm going to continue to need to run in what we'll call a quote-unquote traditional data center, just because of the sensitivity of the application and the predictability I need around it. I need to control that, and I have the economics in place where that becomes a really cost-effective way of supporting those types of workloads. But that said, there are other ways to consume storage infrastructure that don't require me to buy a storage system and deploy it in a data center I own. So users want to be able to explore that as an option, but they want to really understand the right use case for it.

So one of the things we're also introducing within PowerMax, and we expect there to be a lot of interest and definitely a solid uptick in adoption, is the ability to connect a PowerMax into a cloud. This could be a Dell ECS platform, it could be Amazon S3, it could be Microsoft Azure. So there's a lot of flexibility in terms of the type of cloud connectivity that we can support. But as we looked at what we wanted to do, we didn't want to just connect into a cloud, because that doesn't mean anything by itself. We needed to understand the right use case. When we talked to a lot of our users, they had their storage systems, and what they were doing was using a lot of capacity for things like snapshots: creating point-in-time copies of their applications for a variety of reasons. Doing those for database checkpoints, doing those to support test and development environments, doing those because they wanted to make a copy and do some sort of offline processing against it. A very mature, very well-established concept of making copies called snapshots. Now, when we talk to some users, we have some out there that are very heavy consumers of snapshots.
In some cases, 25 to 30% of the storage they're using is being consumed by snapshots. And the requirement was: hey, I create these snapshots, and maybe I'll use them within the first couple of days or couple of weeks, but then I want to keep those snaps without keeping them on my primary tier-one storage. If I could offload them to another type of storage that's more cost-effective, that lets me consume it on demand, and that frees up those resources so I can use the capacity I already own for other things growing in the environment, that would be something I'd be interested in. So we heard that requirement.

And from a product management standpoint, when you look at developing new products and new capabilities, there are three things you always want to do. Number one, you want to identify the requirement: what is the use case, what is the problem you're trying to solve? You want to make sure you understand that really well, and you build technology designed to address it in a very efficient way. Number two, you want to make it easy to deploy. We don't want to create an environment that's very fragile, where you need specialized skills to go in there and deploy it. It's literally firing up the application, putting in the IP addresses for the S3 storage you want to connect to, and away you go; your setup is done. Really, really simple setup. But the third thing, and really one of the more important things, is the user experience. Is this something bizarre? Is it managed as a vApp? Is it something where I have to click into another application and fire up another screen? No: you want to take the management of that data service and build it right into the platform itself.
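The capacity case behind this use case is simple arithmetic. A small sketch, where the snapshot share matches the 25 to 30% figure above and the fraction of snaps old enough to offload is an assumption:

```python
# Illustrative only: estimate primary capacity freed by offloading aged
# snapshots to cloud object storage (S3, ECS, Azure).
def reclaimable_tb(total_tb: float, snap_fraction: float, offload_fraction: float) -> float:
    """Capacity freed when a portion of snapshot space moves off the array.

    snap_fraction:    share of total capacity consumed by snapshots (0.25-0.30 above)
    offload_fraction: share of those snaps past their working window (assumption)
    """
    return total_tb * snap_fraction * offload_fraction

# A hypothetical 1 PB array where 30% is snapshots and 80% of snaps
# are past their couple-of-weeks working window:
freed = reclaimable_tb(1000.0, 0.30, 0.80)
print(f"{freed:.0f} TB of tier-one capacity freed")
```

On those assumed numbers, roughly a quarter of the array comes back for growing workloads, which is the appeal Scott describes.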
So with the cloud snapshot capability we're introducing, that's exactly what we're doing. We've identified a solid use case that we know a lot of customers are going to be very interested in, in understanding what they can do with it and what type of new flexibility it can provide. Number two, we've made it super, super simple to deploy. As a matter of fact, it's included with the PowerMax: you buy the PowerMax, and that software capability is included with the platform, so there's not even an additional licensing charge. And number three, from an ease-of-use perspective: I create a snapshot, and I have the option of whether that snapshot lives on the array that created it, or whether I push it off onto that provider, whether it's an ECS in my data center or something sitting in Amazon AWS. Really easy to deploy. And what we plan to do is take this capability, which we've narrowed down to a very specific use case to make sure we have a clear idea of the benefits and why users would want to deploy it, and look at other opportunities to expand it as the capability matures and as we start to see adoption really take off.

Well, Scott, great to catch up with you. Thanks so much for helping us look down memory lane, as well as look at the new pieces today and where we're going in the future. So nice to talk to you.

Stu, always a pleasure, thanks a lot. Great to talk to you again, as always, and hopefully we can do this again sometime soon, maybe in a real physical sort of setting where we're not separated by a couple of counties, and without having to go to the West Coast and come back, but actually in the same physical location.

Definitely, we all hope for that in the future, that we can get everybody back together.
In the meantime, we have all the virtual coverage. Be sure to check out thecube.net and of course all the CUBE Conversations linked on the front page, as well as shows like VMworld that we alluded to. I'm Stu Miniman, and thank you for watching theCUBE.