Extracting the signal from the noise, it's theCUBE, covering VMworld 2015. Brought to you by VMware and its ecosystem sponsors. Now your host, Stu Miniman.

Welcome back to VMworld 2015 here in San Francisco. This is SiliconANGLE TV's live broadcast of VMworld 2015. I'm Stu Miniman with wikibon.com. Happy to have on this segment, talking about the future of software-defined storage, hyper-convergence, and everything there, Christos Karamanolis, who's the CTO and principal engineer in the VMware storage group. Christos, first time on theCUBE, thank you for joining us.

Yes, thank you for having me here.

All right, so the buzz over the last year, one of the hottest topics inside the VMware ecosystem, has been this whole Virtual SAN. VVols, of course, has had quite a bit of activity too. Can you first set up for us, what's your role inside VMware? How long have you been there?

Sure, I've been a long-timer at VMware. I've been with VMware for almost 10 years now, and for most of this time I've been working on storage and availability products. The last few years, I've been working on Virtual SAN specifically. I was one of the original architects of the product and one of the people who had the original idea. And most recently, in the last few months, I've had a wider role. I'm now the CTO of the business unit, with responsibility for technical insight and roadmap for a range of products, not only vSAN but also our availability products, with the core storage features included in this.

Yeah, so Christos, Charles said, if I remember right, there are 500 engineers inside the storage unit. Ten years ago, I'm curious, how many were in that group?

Oh, we were a handful. You could always walk down the hallway to the engineer you needed to deal with. So yes, it has been a very big change in that respect. Even so, in the engineering teams we still maintain the mentality of a small company, a startup ethos, where everybody works closely with everybody else.
And even though now we're distributed, we organize our projects in such a way that teams are very agile and work very closely together.

Yeah, I think everybody that watches this space knows that VMware's always had a lot of storage pieces and interaction, back to what happened with SRM, or Storage vMotion when that came out. But the role has become a lot more front and center when you talk about what's happening with VVols and Virtual SAN. Can you just give us your personal journey and insight into that transition?

Actually, this goes back many years, probably sometime around 2009, when we started thinking a little more fundamentally about what storage is, how the industry is evolving, and what we see VMware's role being in this new world. And we made an explicit decision that we need to drive the narrative, to drive the industry in the direction we believe is best for our customers, current and future. So our vision around storage from back then, 2009, when we actually shared a white paper with many partners, was twofold. On one hand, we wanted to introduce a management model for storage that is much more application-centric. A model where the owner of the application, the administrator, can specify at a high level, in the form of policies as we call them, what they want from the storage, without necessarily having to know all the gory details of the hardware or the implementation details of every individual vendor's products. You say what you want, not how to do it, and then the storage platform should be able to automatically configure and provision your storage so that you get the quality of service, the properties you want for your application. That is one side, and it led us to a number of projects and features, ranging from Storage Policy-Based Management to Virtual Volumes and a number of data protection solutions around that.
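The policy-driven model described here can be sketched in a few lines. This is a purely illustrative sketch; the class, field names, and placement rule are assumptions for this example, not VMware's actual Storage Policy-Based Management implementation:

```python
from dataclasses import dataclass

# Hypothetical sketch of application-centric, policy-based provisioning:
# the admin states *what* they want (a policy), and the platform decides
# *how* to place data to satisfy it.

@dataclass
class StoragePolicy:
    failures_to_tolerate: int = 1   # how many host/disk failures to survive
    stripe_width: int = 1           # stripes per object, for performance

def replicas_needed(policy: StoragePolicy) -> int:
    """With simple mirroring, surviving N failures requires N + 1 copies."""
    return policy.failures_to_tolerate + 1

def provision(policy: StoragePolicy, hosts: list) -> list:
    """Place one replica per host; the platform, not the admin, picks where."""
    copies = replicas_needed(policy)
    if copies > len(hosts):
        raise ValueError("not enough hosts to satisfy the policy")
    return hosts[:copies]
```

For instance, `provision(StoragePolicy(failures_to_tolerate=1), ["esx1", "esx2", "esx3"])` would place two mirrored copies on two hosts, with no admin involvement in the placement decision.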
On the other hand, we also decided that we should really give our customers a storage platform that implements that vision in the best possible way. That was the genesis of Virtual SAN. Essentially, Virtual SAN is VMware's own storage platform, and it follows a certain architecture. We decided that a hyper-converged architecture is the best way to go, because it meets, in the best possible way, the requirements of our customers: requirements for streamlined, simple procurement, deployment, configuration, and operational management of their storage infrastructure, and to do that in a way that does not require specialization, that doesn't require one to be an expert in any specific vendor's products or even to know the gory details of the storage hardware. Instead, we want to offer customers a way to manage storage the same way they manage their compute infrastructure today, the compute resources, and now with NSX also the network resources. A unified model where they can manage their clusters that provide all the fundamental services they need for their applications.

Yeah, I think Charles Fan had a good way of looking at it. He said, we don't think of a vSAN cluster; it's just a vSphere cluster that uses vSAN. So it's a very different operational model. We know the growth of the virtualization admin gets highlighted here every year, and we see record numbers of attendees. Talk a little bit about this: is it a major shift, or just a continuation and expansion of what we've been seeing from vSphere over the last decade?

I would like to differentiate here, since I'm an engineer at heart, between the technology and the product. The vSAN storage platform has been designed as a generic storage platform. And here at VMworld we have a number of sessions where we actually talk about that and stress some of the advantages of that approach.
Now, for the specific product we have released and are supporting now, we decided to take a certain packaging approach, if you wish, which is to make the product very easy to manage by essentially making the storage cluster the same as your compute cluster. That sounds like a very simple idea, but it has tremendous benefits, starting from the fact that we don't need to introduce new management abstractions. You don't have to configure and provision your storage and then decide which host has visibility to which datastore. All those fencing and zoning techniques that you are probably very familiar with yourself are exactly the kind of complex management operations we try to eliminate. Moreover, by putting this simple constraint on the product, we allow management to be done with simple extensions to existing management abstractions, workflows, and even APIs that are extremely common among our customers, who are used to writing scripts or code to automate the management of the infrastructure. So with Virtual SAN, we have added a few new APIs and extended a few existing APIs, so that for the vSphere admin this is a natural extension of managing their compute clusters.

Yeah, that resonates, because you think back to what's happened in storage in the last 15 years. There were many attempts to do what we called storage virtualization: let's put a layer of abstraction in there and try to help clean it up. Well, storage is pretty complex, and while virtualization from a compute standpoint has delivered huge benefits, from a storage standpoint there were usually real limits: I couldn't leverage the functionality underneath it, and true heterogeneity underneath was difficult. You're not trying to virtualize storage here at all, I don't think, but you've really helped simplify what's happening, and you're leveraging the platform that you have. Is that a fair statement?
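The "storage cluster is the compute cluster" constraint can be sketched as storage simply being a property of the existing cluster object, rather than a new management abstraction. The class and method names here are illustrative assumptions, not the actual vSphere API:

```python
# Hypothetical sketch: storage as a capability of the existing compute
# cluster, mirroring how vSAN is enabled per vSphere cluster rather than
# managed as a separate storage object with its own zoning/masking.

class Cluster:
    def __init__(self, name, hosts):
        self.name = name
        self.hosts = list(hosts)
        self.vsan_enabled = False

    def enable_vsan(self):
        # Every host's local disks join one shared datastore visible to
        # all cluster members: no LUN zoning, no fencing, no separate
        # storage cluster to define or map to hosts.
        self.vsan_enabled = True
        return f"vsanDatastore-{self.name}"

cluster = Cluster("prod", ["esx01", "esx02", "esx03"])
datastore = cluster.enable_vsan()
```

The point of the sketch is the shape of the API: enabling storage is one operation on an object the admin already manages, so existing cluster automation scripts extend naturally.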
From a customer's perspective, yes it is. From a technology perspective, there are some complexities there, obviously, but that is the whole point. Having worked on some of those early storage virtualization products myself, what we're trying to do is hide all the complexity that we used to expose to administrators and handle it in an automated way, where the options are the obvious ones. Because we have certain constraints, we have the clusters, we have certain types of hardware, we can afford to do some of those things automatically now. That, in addition to the extensive hardware compatibility list and certification process we have, allows us to deal with a broad range of hardware without having to expose the core decisions of how that hardware is configured up to the administrator. So, as you pointed out very well, for the administrator this is not really about storage; it is about the data consumption needs of their applications, and those are exactly the abstractions we're exposing upstream to the application administrator.

Yeah, thanks, it's good to break down the technology versus the packaging. One of the frustrations I've had when people look at this market is they tend to say, okay, the first version comes out, it's shrink-wrapped and shipped as here's the SKU and here's the sheet metal, and they go, oh okay, hyper-convergence, it's a box. But with hyper-convergence as a trend, the box is the least interesting piece. It's super important to have the stack and the hardware compatibility list, to have tested that out. If we simplify that, it's a huge savings, because operationally we know how things break. But I want to ask you, with your CTO hat on, what do you see as the vision? This solution is good today, but it's not the end.
Where does this journey take us, and what's the vision going forward?

This is the few-billion-dollar question, I guess. I see two directions there. On one hand, today we have a platform whose management, as we discussed, is centered around the management of your compute clusters. And those compute clusters, those management abstractions, exist in vSphere today because they are the core around which we do distributed resource scheduling, around which we deploy features such as HA, DRS, and vMotion. And why do we have those? Because applications today are the so-called monolithic applications. They do not natively have the ability to be fault tolerant, to be highly available, to tolerate and control resource changes themselves. This is why vSphere has been so successful: we add all these business continuity features to applications that had no idea about such concepts when they were originally designed. Now we're moving gradually towards a world of cloud-native applications, third-platform applications, whatever you want to call them, where the application by definition is more aware of the infrastructure, and the scalability, distribution, and even fault tolerance features are natively integrated into the application. So the need for things like DRS or HA is very different, or may not even exist, for some of the new applications. However, now we see these applications having scalability requirements which exceed the current limits of vSphere compute clusters, which are up to 64 nodes, as you understand. So one set of challenges and opportunities I see ahead of us is how to deal with storage infrastructures that can meet the demands of those applications. How can we extend a platform like Virtual SAN to manage infrastructures that span thousands, perhaps tens of thousands, of physical hosts, with applications that are even distributed across geographical locations?
So one set of challenges is management of storage infrastructure at very large scale. And we have a few interesting ideas; I had the opportunity to talk to customers about those today at a couple of events. On one hand, what we are exploring as we speak, with a few prototypes in the lab, is new management models where we collect and process a lot of data about the physical infrastructure and about the application workloads that run on that virtual infrastructure. We store the data, we process it, and through that processing and the analytics we run on it, we give users a holistic view of their infrastructure, allowing them to zoom in on the areas of interest. Those areas may have to do with problems, where we help them troubleshoot and decide on the right remediation actions, or simply with awareness of how the application is doing, how it is evolving, and what trends they should be aware of, so they are prepared in terms of investment in hardware, infrastructure, and so on. So that is one dimension, and I'm very excited that we have some really cool ideas there. The other dimension has to do with the consumption of storage. I said all these nice things about fine-grained policy-based management, where an application gets the quality of service it requires without the administrator having to do any fine-grained configuration of physical hardware. Well, we want to take this model beyond traditional virtual machines with a virtual SCSI disk to a model where applications that use other abstractions, perhaps file systems, or native block protocols like NVMe, or perhaps even object storage like S3 and similar types of storage, can really take advantage of a single platform with a unified management model, along the lines of what I described a few seconds ago, but still be able to consume different types of storage and manage them with the same approach. So that is the other thing.
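The analytics idea sketched here — collect infrastructure data, process it, and surface trends so the admin can plan ahead — can be illustrated with a toy capacity forecast. The function name and the linear-extrapolation rule are assumptions for this example, not a description of VMware's prototypes:

```python
# Hypothetical sketch of trend-based awareness: given daily samples of
# used capacity, fit a simple linear trend and estimate how many days
# remain until the datastore is full, so hardware investment can be
# planned before a problem occurs. Purely illustrative.

def days_until_full(samples, capacity):
    """samples: used GB per day, oldest first; capacity: total GB.
    Returns estimated days until full, or None if no growth trend."""
    if len(samples) < 2:
        return None  # not enough history to extrapolate
    daily_growth = (samples[-1] - samples[0]) / (len(samples) - 1)
    if daily_growth <= 0:
        return None  # flat or shrinking usage: no exhaustion forecast
    return (capacity - samples[-1]) / daily_growth
```

Real systems would use far richer models, but the design point stands: the platform turns raw telemetry into an actionable signal rather than leaving the admin to eyeball per-host counters.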
Offer to the applications, for example containerized cloud-native applications, file systems, distributed file systems, that solve some of the critical problems we know they face: image management, shared data volumes, and so on.

All right, well, Christos, I feel like I'm looking back at the year-two summary I did on Server SAN, and one of the critiques I gave of current solutions is that they typically serve the same applications that sat in your traditional SAN or NAS environment. It's not the modern applications, not the cloud-native, hugely scalable architectures. You laid out a bunch of the challenges there. From a technology standpoint, do you think the growth of those applications and the maturity of this solution set will match up pretty well? What's the outlook?

Yes, that's a good question, and it's what we are all debating here, but I believe at a high level we have the building blocks for the technologies that are required. I believe we have the ability to scale to infrastructures of thousands of physical hosts. We have the ability to provide storage, even a certain model of storage with high availability ensured by the platform, for cloud-native applications. Where I think the biggest challenge is, where things really make a difference, is the model of managing those infrastructures. And that is something which is a little subjective, something we have to develop in an iterative fashion, jointly with customers, to see what the right model is, because nobody quite knows these things today. The few software development teams that have built such applications so far are either very sophisticated or they build applications for very specific environments.
I think the challenge and the opportunity for companies like VMware is to develop a management model that allows and helps many different software organizations from different companies take advantage of these new ideas without having to reinvent the wheel from scratch.

All right, well, Christos, really appreciate you taking the time. I know you've been talking a lot this week, as have all of us, trying to keep our voices through the final sprint here. Lots to look forward to in the maturation and growth of this really important trend.

Thank you for having me here. It was a great opportunity to talk with you, and I appreciate it.

Awesome, thank you for watching. We'll be right back, wrapping up day three here over the next couple of hours, with SiliconANGLE TV's coverage of VMworld 2015. Thanks for watching.