Live from Las Vegas, it's theCUBE, covering HPE Discover 2017, brought to you by Hewlett Packard Enterprise. Welcome back everyone, we're here live in Las Vegas for SiliconANGLE's theCUBE. It's our coverage of HPE Discover 2017, our seventh year covering HP Discover, now HPE Discover. I'm John Furrier, my co-host Dave Vellante. Our next two guests: Paul Miller, vice president, software-defined and cloud group marketing at HPE, welcome back to theCUBE, a CUBE alumni, and Denny Yeo, system administrator at BYU, Brigham Young University. Guys, welcome to theCUBE, welcome back. Welcome to theCUBE. So tell us, what's your experience at Vegas so far? What's the take here from your perspective on what's happening at the show, your takeaway? A lot of exciting technology with HPE, some things that I wasn't aware they were doing, and I'm very impressed, very impressed. Like what, what are the things? One of the things I was just telling Paul is their memory-driven computing with genomic research. I'm with the College of Life Sciences, specifically, at Brigham Young University, and we have people doing research in that area, mapping the human genome, for example; we've got people doing DNA analysis and so forth. So for me, that was really exciting. The Meg Whitman keynote, really redefining compute, the vision and the messaging. Hybrid cloud, obviously the center of the action. How does that fit into the portfolio, with hyperconverged still on fire? I mean, IT is just getting more automated, but it's more scalable infrastructure. Yeah, so we see, you know, our mission in our organization is to drive software-defined everything, right?
And hyperconverged is all about software-defining and making virtualization environments easy, and the simplicity, and the SimpliVity architecture, which is built on rich data services, will enable us to take software-defined storage to the next level, to make it super, super scalable and extensible, and give customers that resilience that they need, the inline dedupe and compression, all those great technologies. You'll see us, you know, push really hard in the hyperconverged space. As you say, it's on fire, and I can tell you, the sales are on fire, the sessions here are on fire, standing room only for every SimpliVity session, hands-on labs booked beyond capacity with people loving and learning the technology, but we're not stopping there. We're going to take that same technology and embed it in our Synergy offering. So just think about the ability to compose and recompose highly scalable, software-defined storage for enterprise applications at enterprise scale, and then you'll also see it be a key part of our technology in the new stack. So a lot of cool things; the sessions are really hot and on fire, as you say. So Paul, if we go back to like the 2009 timeframe, it was converged infrastructure. HP, at the time, kind of coined the term, but essentially it was some compute, some storage, and some networking kind of screwed together and, you know, pre-tested and pre-engineered, and all good, but it's really evolved dramatically. And when you think hyperconverged, you think software-defined, software-defined everything. It's kind of what Synergy was all about, fluid pools of infrastructure; we heard you guys talking about that last Discover. So tell us, help us understand SimpliVity and how that fits in that portfolio. Okay, so yeah, the whole convergence thing was all about static building blocks, right? You built them, you deployed them, but they were really static. What we're trying to get to is fluid pools of everything.
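Paul cites inline dedupe and compression as the rich data services underneath SimpliVity's architecture. As a conceptual illustration only, not SimpliVity's actual implementation, here is a toy content-addressed block store in Python: each incoming block is hashed, duplicates are detected inline before they hit storage, and only unique blocks are compressed and kept (the class name and fixed block size are made up for the sketch):

```python
import hashlib
import zlib

class DedupStore:
    """Toy content-addressed block store: inline dedupe plus compression."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}      # sha256 digest -> compressed block bytes
        self.logical = 0      # bytes the client wrote
        self.physical = 0     # bytes actually stored

    def write(self, data: bytes) -> list:
        """Split data into fixed-size blocks; store each unique block once."""
        digests = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            d = hashlib.sha256(block).hexdigest()
            self.logical += len(block)
            if d not in self.blocks:          # inline dedupe check
                comp = zlib.compress(block)   # inline compression
                self.blocks[d] = comp
                self.physical += len(comp)
            digests.append(d)
        return digests                        # "recipe" used to read back

    def read(self, digests: list) -> bytes:
        return b"".join(zlib.decompress(self.blocks[d]) for d in digests)

store = DedupStore()
payload = b"A" * 4096 * 10 + b"B" * 4096     # highly redundant data
recipe = store.write(payload)
assert store.read(recipe) == payload         # lossless round trip
```

With redundant data like this, only two unique blocks are ever stored, so `physical` ends up a tiny fraction of `logical`, which is the effect the capacity-savings guarantees discussed later trade on.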
So think about SimpliVity being a fluid pool of storage that you can compose and recompose for different workloads. And in our overall portfolio, the biggest advantage we have, like with the Synergy product, is the ability for a customer who needs the scalability and resilience of SAN today to be able to, at the time they're deploying an application, compose it for that workload. But if I want software-defined, because I may need some lower cost basis, I can, at time of deployment, at time of provisioning, deploy it there. So we see this being a very complementary strategy, where now we have composability from software-defined all the way up to the largest SAN-type storage architectures. All right, Denny, let's get into your situation. So can you help us paint a picture of what's going on in your shop? What are the challenges that you're having? What are the drivers that are affecting your IT decisions? And take us through what you're doing with infrastructure. Absolutely. So before we got into hyperconverged, we were essentially like everybody else who had not been exposed to hyperconverged. We had the traditional server stack: you've got compute nodes, you've got fabric, you've got storage nodes, and then you've got the fabric for them to communicate. And when you have problems, you get the finger-pointing, right? And so that was really frustrating. And then of course you've got a hypervisor and all that put in place in the mix. It was frustrating. And supporting that, the OpEx was a little bit challenging, because, for example, my systems engineer would have to stay sometimes after hours, after five, to start doing things: patching, upgrading, you name it. And sometimes, too, way after midnight. That was a problem we were trying to minimize. The other challenge that I had in my shop was backup. We had a backup window during the weekend that we could not meet. At some point in time, the RTO and RPO weren't sufficient.
And so we had to look at a different strategy. Disaster recovery, that was like something unachievable, it's like out there. Hey, can you meet your backup windows? I mean, forget about disaster recovery, right? So in summer 2014, I went to a VMware user conference, stopped by the SimpliVity booth, and they asked me if I knew about the technology. I didn't, so they spent some time explaining it to me. And after that, they asked me if I had just a little bit more time so that they could do a demo for me, a demonstration. During the demonstration, the engineer basically did a failover from California to either Boston or New York. It was in seconds, 22 seconds if I remember correctly. And then he says, well, that simulated a disaster, and so you fail over, and if the disaster is now all over and averted, you want to fail back, right? To your primary location. And he did that, again in seconds. I was blown away. I was sold. It reminded me of 2005, when I saw vMotion from VMware. Right, everybody went, wow. Game changer, isn't it? Yeah. And so I thought to myself, it's like that movie: I gotta get me one of these. And so I asked them to come over and visit us on campus, do a deeper dive on the technology, so that we could ask questions back and forth. They did, and then we decided to do a proof of concept. So we did that in late 2014, and after the proof of concept, we were convinced that was the technology. So you had to make sure it was real? Yes. You did the proof of concept. Sorry, go ahead. No, please, continue. So I had the unique situation where, after I had acquired SimpliVity and was running it in production, a competitor, I'll just put it that way, came in and asked us if we would consider doing a POC with their product. And we're like, you know what, I've already bought this. And they said, not a problem. We would let you try our product, and if our product is superior, we want to swap out those SimpliVity boxes. So I thought, what do I have to lose?
So I had the opportunity to run both hyperconverged technologies side by side. As we were thinking how best to really test which one works, which one's superior, or if they're essentially the same thing, we had an engineer suggest, why don't we simulate a drive failure? Start pulling out drives. And so we did. We started pulling out drives, and I had three nodes with SimpliVity, and on the other side I had four nodes in a box. After we pulled out the sixth drive, the other technology failed. We couldn't recover data, basically; we would have to send it to a data recovery center. SimpliVity was just, you know, business as usual. It kept going, no sweat. Because you had it replicated? Is that right? Not yet, we hadn't replicated it. It was in the Federation, all synchronous. So it's their technology, right? It's the RAIN and RAID architecture. And that's the thing, it's the RAIN architecture that protected us. So we were able to pull the sixth drive and it was still continuing. It threw up a lot of flags and alerts, so we knew there were failures, down to the node level, as opposed to just the drive level. But that little experiment basically proved to us that we bought the right thing. It validated our acquisition. So you did the bake-off. That's awesome, right? So what did you say to the other guys when they came back? Well, we asked them first: help, your box is not responding, help. They threw up their hands in the air. It's your fault. Yeah, here's the answer. You got finger-pointing. Here's the answer. You love this, right? The answer is, you know, you can't just pull out the drives, you've got to time them. You can't just willy-nilly yank them, you've got to time them. See, that's the tornado that's coming down, or the earthquake that's happening, or the flood. Yeah, how do you time those? How do you time those? So we decided, look, take your product back.
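Denny credits the RAIN (redundant array of independent nodes) architecture, layered over RAID, for surviving the pulled drives: the protection is at the node level, not just the drive level. A minimal, hypothetical Python sketch of that node-level replication idea (not SimpliVity's real placement logic; the class and replica count are made up) shows why pulling every drive in one node still leaves every block readable:

```python
import random

class Cluster:
    """Toy RAIN-style cluster: each block is replicated on two different
    nodes, so data survives drive loss and even whole-node loss."""

    def __init__(self, nodes=3, drives_per_node=4):
        # node -> drive -> {block_id: data}
        self.disks = {n: {d: {} for d in range(drives_per_node)}
                      for n in range(nodes)}
        self.failed = set()   # (node, drive) pairs that were "pulled"

    def write(self, block_id, data):
        # place two replicas on two distinct nodes (node-level protection)
        for n in random.sample(list(self.disks), 2):
            d = random.choice(list(self.disks[n]))
            self.disks[n][d][block_id] = data

    def pull_drive(self, node, drive):
        self.failed.add((node, drive))

    def read(self, block_id):
        for n, drives in self.disks.items():
            for d, blocks in drives.items():
                if (n, d) not in self.failed and block_id in blocks:
                    return blocks[block_id]
        raise IOError(f"block {block_id} lost")

c = Cluster()
for i in range(20):
    c.write(i, f"block-{i}".encode())
for d in range(4):          # "pull" every drive in node 0: whole-node loss
    c.pull_drive(0, d)
# every block is still readable from its surviving replica on another node
assert all(c.read(i) == f"block-{i}".encode() for i in range(20))
```

Because replicas always land on two distinct nodes, losing any single node, however many drives it held, can never take out both copies of a block.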
We're happy with SimpliVity. We'll keep it. This is a huge issue. I mean, Hurricane Sandy, which happened in New York. Oh yeah. That was a game changer for a lot of the folks we talked to on theCUBE. You don't know when this is going to come, and literally, this disaster recovery thing has to be part of the plan. And that's really the key. Now that you have SimpliVity as part of HPE, what's your world like now with HPE and SimpliVity? It's too soon to tell, really, honestly. But after the keynote yesterday, I'm pretty convinced that SimpliVity is in good hands. And only time will tell, right? So I want to just sort of summarize the story, because we were throwing around all kinds of buzzwords, RPO, RTO. But basically you had a problem with your backup window; that's where this all started. You weren't meeting your backup windows. You really didn't even have an adequate disaster recovery plan. So RPO is recovery point objective, essentially a measure of how much data you're going to lose, right? And then RTO is recovery time objective, the time it takes you to get your applications back up and running. And of course, nobody wants to lose any data, but there's always some exposure. If you want to spend a billion dollars, maybe you can minimize that to near zero, and I presume you didn't spend a billion dollars on this. But those were the drivers. So you essentially solved your backup window problem, and at the same time, you got disaster recovery out of the box. Is that correct? Yes. So backup is in seconds, right? To do a backup takes only a few seconds, like six seconds and so forth. We bought an additional node, put it in a remote site, and replicate to it. Now we can fail over to that node and run only mission-critical apps, and when everything's good in the primary location, we can just fail back. And that gives you your disaster recovery. Now, your RPO is what? I mean, what's the? Seconds. Oh, seconds.
Yeah, seconds. Okay. Yes. Your RPO is down to seconds. That is impressive. Okay, so you're at risk of losing seconds of data, which is not the end of the world necessarily in your world. And your RTO is minutes? About there. Tens of minutes kind of thing? No, no, no. Minutes. Just minutes. Minutes, under 10 minutes. Under 10 minutes, yes. Okay. Yeah, we're not as huge as some other data centers; we're the College of Life Sciences. And you're not financial services. Right. So now, what has been the reaction from your user base? I mean, do they even know? They don't know. It is completely transparent to them. We are now able to do maintenance work during the work day, business hours. We can upgrade, we can patch. They have no clue that this is all going on in the background, which is great, because now my systems engineer does not have to work after five, hardly ever. So is this why you bought the company? Absolutely. We looked at them all. Right. And I mean all of them. And we did similarly: we brought them into our labs, we did failover, we did scalability. And that's another huge advantage of the SimpliVity platform, built and designed for scalability and compression, because system utilization is very, very important. And you know, SimpliVity had a really great marketing tool that we're continuing: their guarantees. Guaranteed 90% capacity savings. Guaranteed the failover time, a terabyte of VMs in under three minutes. So we're carrying on those guarantees. But what those guarantees actually did was really highlight the architectural advantages that SimpliVity designed in. They took a different approach, right? A lot of people started at, I'm going to simplify the VM management layer. They said, no, I'm going to make the most robust virtualization data services platform in the world. And that's where we really see the core advantage. And again, we looked at them all. We put them through their paces.
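Dave's definitions can be made concrete with a little arithmetic: RPO measures the window of potential data loss (the time since the last good backup), while RTO measures how long the applications are down before they're restored. A small illustration with made-up timestamps, roughly matching the seconds-scale backups and under-ten-minute failover described above:

```python
from datetime import datetime, timedelta

def rpo(disaster_time: datetime, last_backup_time: datetime) -> timedelta:
    """Recovery point achieved: how much data (measured in time) is lost,
    i.e. everything written after the last good backup."""
    return disaster_time - last_backup_time

def rto(disaster_time: datetime, service_restored_time: datetime) -> timedelta:
    """Recovery time achieved: how long the applications were down."""
    return service_restored_time - disaster_time

# Hypothetical timeline: backups every few seconds, failover to a remote node
disaster  = datetime(2017, 6, 6, 14, 0, 0)
last_bkup = datetime(2017, 6, 6, 13, 59, 54)
restored  = datetime(2017, 6, 6, 14, 8, 0)

print("RPO:", rpo(disaster, last_bkup))   # 6 seconds of data at risk
print("RTO:", rto(disaster, restored))    # apps back in 8 minutes
```

The "spend a billion dollars" remark is the classic trade-off: pushing both numbers toward zero (synchronous replication, hot standby sites) costs steeply more than accepting a few seconds of RPO and minutes of RTO.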
And nobody came close to SimpliVity on scalability, availability, and disaster tolerance. But what does this mean for your other customers now, spanning across your portfolio? Obviously there's different categories, campus and different use cases. But for the other use cases, with the composability vision, how does this fit into hyperconverged overall? Yeah, so we have multiple customers now who are running hyperconverged and composable in the same shop. Where they want to have just virtualization and a simple, easy deployment, whether it be for ROBO sites or for different work groups, drop in SimpliVity, up and running in minutes. There are other use cases where they need the high performance of bare metal, or they want to move into containers on bare metal, and that's where Synergy plays out. We have people like, you saw, DreamWorks using Synergy for rendering, right? You need bare metal, you need the power. They can compose and recompose for different movies that they do, different animations. They really love that. We were talking about a genomics research company we're working with; they're using it for bare metal as well, HudsonAlpha. They're driving bare metal, but they also have hyperconverged for the developer community. They say, hey, I just need to build a couple of applications: log in, self-service, get your work done on a few VMs, and then when they're done, they'll move that research onto bare metal. So a lot of different use cases across the board. Well, what I love about that, John, is it's horizontal infrastructure that can support multiple workloads and multiple applications, which is kind of infrastructure nirvana for a practitioner, right? I mean, having that single platform that you can throw multiple apps and workloads at is, I mean, we've not had that in the industry before. I know, I know. And building it on OneView makes things easy for our customers to manage across the board.
So yeah, we're seeing, I mean, what's interesting about where I think we're heading is we're not only working with IT leads, but now developers are starting to become part of the core customers who we're talking to. You guys are really, really checking the boxes on making IT easier, and as it shifts to cloud and hybrid, this is the kind of thing you want, out-of-the-box experiences, and literally here, disaster recovery out of the box is a good trend. Paul, thanks so much. I know you guys have a hard stop and you've got to roll to another appointment. Denny, thanks so much for sharing your story. Love that story, a real practitioner on the ground, on the front lines, doing the bake-off, the SimpliVity story, great job. Thanks so much for sharing. It's theCUBE, with more live coverage from HPE Discover after this short break. Stay with us.