From Las Vegas, it's theCUBE, covering EMC World 2016, brought to you by EMC.

And welcome back inside theCUBE here at the Sands Expo, Las Vegas. I'm John Walls along with Stu Miniman, and we continue our coverage here on theCUBE of EMC World 2016 by talking with Nevin Sharman, who is the Director of Product Management at ScaleIO. And Nevin, thanks for joining us on theCUBE.

Thank you. First time. It is my first time in all these years.

So, welcome to theCUBE. I don't know what's taken us so long to have you on, but we certainly appreciate your stopping in today. Tell us a little bit, if you would first, about your direct responsibilities at ScaleIO and your core competency.

Sure, sure. So I joined ScaleIO probably a little more than a year ago, and I lead the ScaleIO product management team. And what that really means is I own the roadmap for ScaleIO going forward, right? So the core of what I'm trying to do is: what is ScaleIO trying to do in the future, and how does that meet customer demand, customer workloads, what customers are trying to do with ScaleIO? That's really the priority of my job, to make sure ScaleIO is going where customers want to go.

Yeah, so Nevin, it's interesting. We've been watching ScaleIO since it was an independent company.

Yeah.

And what bucket it falls into, and how it's productized, is a little bit of an interesting story. So maybe you could talk about some of the different ways that you look at the packaging and deployment of the system.

Sure, sure. So ScaleIO is very interesting in the sense that, unlike other technologies, ScaleIO was born as a software-only technology, right? What this gives us is tremendous flexibility in terms of the hardware consumption model. So initially, when EMC bought ScaleIO, the only go-to-market we had was software-only.
What that meant was, you know, our customers would generally go buy their own hardware, put ScaleIO on it, and there we go. But going forward, what we're also doing, as David Goulden mentioned already this morning, is actually putting ScaleIO in a lot of other things. Why? Just to ease the consumption of ScaleIO. So the first way we did it was the VxRack System, right? If you look at the VxRack System under the covers, it runs on ScaleIO. And then going forward, you'll probably hear about new announcements coming, and there are a lot of other things powered by ScaleIO. So what we're really looking at is software-only, right? We're also looking at a fully engineered appliance, the VxRack System. And we have something called the VxRack Node that's kind of an in-between option for those exact customers. So that's really how we see things.

The other thing I'm wondering if you can help educate our audience on is, it has that kind of hyper-converged feature, the very ScaleIO architecture, but it can also be just ScaleIO software-defined storage, you know, without the compute. So can you explain where those two fit?

Sure, sure. So ScaleIO architecturally, like you mentioned, can be run in what we call two-layer mode, which is traditional ScaleIO storage, or you can run it hyper-converged, right? The reason we designed it that way was that customers ultimately want flexibility, right? It's never either-or. And what we see with our customers is roughly half of them go with a storage two-layer mode and the other half stay with hyper-converged. And the real question becomes why customers choose one or the other. And the answer really has more to do with internal processes and operations than with the technology itself. We built ScaleIO so that it really does not matter in terms of the functionality; it's pretty much the same.
But what it gives you is that if you're more of a traditional enterprise and you have a siloed organization, where you have an app-dev team, a server team, a network team, a storage team, you don't necessarily need to break that infrastructure, that process you already have. And we see bigger customers like Citi, who talked about it this morning, going with a two-layer approach, whereby they get all the value of ScaleIO as block storage based on x86 architecture, right? But they haven't moved their applications there yet. And then we have other customers who've hyper-converged everything into this one building block of both compute and storage. And at the end of the day, it's really about customer choice, and we see that as something we're going to carry on as long as ScaleIO is around.

Okay, so it's not necessarily the size of customer, it's the use cases. Talk about service providers maybe, because I've talked to some people that just, architecturally, have compute and storage scale very differently for service providers versus some of the enterprises.

Exactly. So usually what we hear about is, when you have compute and storage in the same block, as you're scaling up, you may have applications that need more compute than storage, and vice versa. So that definitely plays a role. One of the great things about ScaleIO is we can actually circumvent that, because there's nothing preventing customers from mixing the modes, right? You could traditionally be hyper-converged, and what you find out is, oh, my application just needs more storage performance and not compute. You can actually add in just the storage portion without the compute, and that works totally fine.
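The two deployment modes and the mixing described above can be sketched in code. This is an illustrative model only, not the ScaleIO API; the node names and capacities are made up. It borrows ScaleIO's real role terminology: a node contributing local disks runs an SDS (data server), a node consuming volumes runs an SDC (data client), and a hyper-converged node runs both.

```python
# Illustrative sketch (not the ScaleIO API) of two-layer vs. hyper-converged
# deployments. "SDS" = storage server role, "SDC" = client/compute role.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    roles: set = field(default_factory=set)  # {"SDS"}, {"SDC"}, or both
    capacity_tb: float = 0.0                 # raw capacity offered by the SDS role

def cluster_capacity(nodes):
    """Only nodes running the SDS (storage) role contribute capacity."""
    return sum(n.capacity_tb for n in nodes if "SDS" in n.roles)

# Two-layer: storage and compute live on separate nodes (the Citi-style model).
two_layer = [Node("stor-1", {"SDS"}, 10.0), Node("stor-2", {"SDS"}, 10.0),
             Node("app-1", {"SDC"}), Node("app-2", {"SDC"})]

# Hyper-converged: every node is both client and server.
hci = [Node(f"hci-{i}", {"SDS", "SDC"}, 10.0) for i in range(4)]

# Mixing modes, as in the interview: grow a hyper-converged cluster with a
# storage-only node when an application needs capacity but not compute.
hci.append(Node("stor-extra", {"SDS"}, 10.0))

print(cluster_capacity(two_layer))  # 20.0
print(cluster_capacity(hci))        # 50.0
```

The point of the sketch is that capacity and compute scale independently: the storage-only node raises capacity without touching the compute side.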
And in terms of service providers, what we see is those guys are very conscious about cost, about making sure they're delivering at a very good cost, you know, dollar per gig or dollar per CPU. And actually we see some of them looking at hyper-converged as well, because at that point their capex is even lower: at the end of the day your footprint is cut in half and you're utilizing your servers a lot more, right? So I think it's a little bit of either-or, and it really depends on what the customer values, what their internal processes are, and how willing they are to go with a different paradigm.

You know, if you talk about where storage is today, obviously, with so much data coming in, right, from so many areas, how do you maintain speed, making sure that I can extract the information that I need in a fairly reasonable amount of time, or even quicker, right? Because I need it now, now, now. I mean, bandwidth, you've got to deal with that.

Sure, sure. I think ScaleIO is a little bit different architecturally compared to traditional arrays. Traditional arrays tend to be dual-controller arrays, and at some point that's all the performance you're going to get. So even if you put all-flash drives in the back, you would hit a limit and there's no way around it. ScaleIO takes a completely different approach, where all the I/Os are parallelized across the entire cluster, right? You can just add another node if you ever need more capacity or more performance, and architecturally, inherently, it's not dependent on any kind of hardware controller anywhere in the data path, right?
And that's actually really the secret sauce of ScaleIO: it really will allow you to go from three nodes to a thousand nodes, and you will get linear performance all the way through, right? And that's really fundamentally what we wanted to solve, and I think we've done a pretty good job.

Yep, so, you know, definitely scale-out architectures; we're seeing that in EMC's Isilon product.

Yeah.

Talk about applications: what applications are a good fit, and maybe are there certain applications that just wouldn't be great?

So I think the key use case we actually think about ScaleIO for is actually something Wikibon has mentioned recently. It's really about true private cloud, and what does true private cloud mean? It was the notion Wikibon brought up: how do you have a true private cloud when you have traditional monolithic infrastructure underneath, right? What you're really saying in that notion is, look, I can have a service portal, pay-as-you-grow, all these things, but at some point I'm limited by my underlying infrastructure. So I think ScaleIO changes that. We're really saying, look, if you really want a true private cloud, where you want to be like the Amazons of the world, the Googles of the world, whereby you start with three nodes and you keep adding nodes, you get the flexibility and the agility that David Goulden talked about this morning. That's really the key use case for ScaleIO. And on top of that, it's an always-on architecture. What I mean by that is, at the application level, you have ScaleIO that it gets volumes from, and underneath it's different hardware. And as your hardware refreshes, it's completely oblivious to the upper-level application, and there's no migration, none of that, right? And you're never bound by frame boundaries.
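The linear-scaling claim above comes down to spreading I/O evenly with no central controller. Here is a toy illustration of that idea, not ScaleIO's actual data layout: a volume is split into fixed-size chunks placed round-robin over the storage nodes, so every node serves an equal share of the traffic and each added node contributes its full bandwidth (the 500 MB/s per-node figure is an arbitrary assumption).

```python
# Toy model of parallelized, controller-free scale-out I/O (not ScaleIO's
# real layout): chunks are striped round-robin, so load is even and
# aggregate throughput grows linearly with node count.

def place_chunks(num_chunks, nodes):
    """Map chunk index -> node name, round-robin across the cluster."""
    return {chunk: nodes[chunk % len(nodes)] for chunk in range(num_chunks)}

def aggregate_bandwidth(nodes, per_node_mb_s=500):
    # With no dual controller in the data path, throughput is simply
    # the sum over the nodes.
    return len(nodes) * per_node_mb_s

nodes = [f"node-{i}" for i in range(3)]
layout = place_chunks(12, nodes)
print(aggregate_bandwidth(nodes))   # 1500
nodes.append("node-3")              # scale out by one node
print(aggregate_bandwidth(nodes))   # 2000 -- linear growth
# In the 3-node layout, each node serves an equal share of the 12 chunks:
print(sum(1 for n in layout.values() if n == "node-0"))  # 4
```

Contrast this with the dual-controller array described in the interview, where throughput is capped by the controllers no matter how many flash drives sit behind them.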
If you want to grow your application, just grow, just add a node. And I think that's really the use case there: a true infrastructure-as-a-service is what ScaleIO is really built for.

So, at Wikibon, when we laid forth the true private cloud, I could definitely see how the flexibility and agility of a scale-out architecture lends itself towards that. Can you talk about some of the operational models? Obviously you talked about monolithic storage, but how does the operational model change with ScaleIO?

With all SDS, right, at the heart of it, when we start digging around and talking to customers, it comes down to this: what SDS provides you is really agility. Agility in terms of your capex model, agility in terms of your opex model. And it really is around, look, it's the same x86 hardware that's in every machine out there. Nobody's doing their own system-on-chip or ASIC anymore. So it's pretty much driven by the same hardware; the hard drives are the same exact thing, we're not going and creating any hard drives. So what we see is, physically, even traditional arrays versus ScaleIO, it's the same thing. But what it changes is the buying model: you don't have to right-size your application or your storage for three years down the line, right? You just buy however many nodes you need for your workload today. And then growing is a different operational model, whereby, you know, Citi goes from nothing to up and functional in 34 days, including procurement, right? So that's just much more powerful in terms of how you turn that cash and get revenue a lot quicker than with traditional arrays.
And second, there's the notion that if you never have to migrate, you never have to have a siloed approach, and that adds tremendous operational cost savings, because you can have a lot of different applications running on the same infrastructure, and you can just keep adding to that and growing that rather than managing silos of infrastructure.

Can you talk about what management frameworks and maybe DevOps tools this plugs into?

So, ScaleIO is also built with the notion that we want to be agile, we want to be flexible. So everything we do is based on REST APIs, right? The whole overall stack can be managed via scripts; nobody has to physically go touch anything. So I think that's how you really tackle it. When we talk to the largest customers, that tends to be true. When you ask them, hey, how do you deploy it? It tends to be: we have an automated script, either, you know, the vRealize Suite, OpenStack, or something homegrown, like Ansible or Chef or Puppet. And we have all the richness in our REST API that customers can just use those to bring up a cluster, manage the cluster, move things around. So it's all automated and all scripted.

Okay, great. You talked about Citi being one of the showcase customers. Any other customers that are presenting at the show, or customers you could talk about?

I don't think anybody else is presenting, but you can go to the ScaleIO website, and we do have collateral with actual names and really big customers that are around. But again, Citi's the only one presenting this year. Hopefully next year we'll have more customer showcases as well.

Well, Nevin, before you take off, I happened to notice in your LinkedIn profile, you said a third geek, a third business, and a third TBD.

Yeah.

How close are you to deciding what that third TBD is going to be?

Every time I'm ready to commit, you know, my wife usually tells me, no, that probably doesn't suit you.
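The "all automated and all scripted" workflow can be sketched like this. Everything here is a placeholder: the gateway URL, the endpoint paths, and the payload fields are hypothetical and are not the documented ScaleIO REST API. The sketch only shows the shape of the idea, that every lifecycle step reduces to a scripted HTTP call, which is what tools like Ansible, Chef, or Puppet would wrap.

```python
# Minimal sketch of REST-driven storage automation. The URL, endpoints, and
# fields below are illustrative placeholders, NOT the real ScaleIO API.

import json

GATEWAY = "https://gateway.example.local/api"  # hypothetical gateway endpoint

def build_request(action, **params):
    """Describe one REST call as (method, url, json-encoded body)."""
    return ("POST", f"{GATEWAY}/{action}", json.dumps(params, sort_keys=True))

# A scripted bring-up: add a storage node, then carve out a volume for an app.
steps = [
    build_request("addNode", ip="10.0.0.21", roles=["SDS"]),
    build_request("createVolume", name="app-vol-01", size_gb=512),
]

for method, url, body in steps:
    # A real playbook or script would POST these via requests/curl and check
    # the responses; here we just show the calls that would be made.
    print(method, url, body)
```

Because the cluster exposes everything this way, "bring up a cluster, manage the cluster, move things around" becomes a list of such calls in version control rather than hands-on console work.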
So I'm hoping that TBD will remain there for life, because that's what gets me excited about ScaleIO and everything else. It's just new opportunities. And, you know, I like to keep it that way.

I think it's just like your offering: it's nice to be flexible and to leave options open for down the road.

Exactly.

Nevin Sharman, thanks for being, oh, I'm sorry.

Yeah, I was just, if you could bring us home with the 2.0 announcement; it was a couple months ago, I believe, just for those that haven't seen it.

So I think with 2.0, the key core messaging we talked about was public cloud agility with private cloud resiliency. What 2.0 really did was take all the value and goodness of ScaleIO and bring it up to another level for enterprises. And what we mean by that: we had full LDAP/AD integration, we had support for a lot more component-level validation, end-to-end checksums. Those are all key things that enterprises really cared about, and we brought it to that notion, right? And again, as you will see ScaleIO evolve, we're going to keep making sure that we're covering more and more use cases and doing more, such that at the end of the day customers don't have to manage around it, right? So that's our end goal: how do we serve the customer best?

So what are you hearing from the market? It has been out two months, two and a half months, something like that.

It's a great reception, right? Again, let's go back to Citi: they talked about 2.0 specifically and how it brought them pretty much mainline with every other enterprise thing we've seen. And with 2.0, we also added a lot of support for OpenStack; that's one of the big things that customers talk about. With that, we became upstream with Liberty and had Ubuntu support as well.
So, given those things, we have a lot of traction with 2.0, and hopefully we can just continue on that momentum.

Very good. Well, thank you for joining us. And again, good luck with that last third.

Thank you so much. Well, let me know if you have any advice, you know, if you have a good lesson.

No, I think you're doing fine. Leave your options open. That sounds great. Nevin, we appreciate that. Back with more here from EMC World 2016 here at the Sands Expo on theCUBE, right after this.

It's always fun to come back to theCUBE.