Live from the Mandalay Bay Convention Center in Las Vegas, it's theCUBE, covering VMworld 2016. Brought to you by VMware and its ecosystem sponsors. Now, here's your host, Stu Miniman.

Hi, welcome back to theCUBE. I'm Stu Miniman, here with my co-host for this segment, Marc Farley. We're at VMworld 2016 here in Las Vegas. It's been five years since we've been in Vegas, and a lot changes in five years. Pat Gelsinger this morning was talking about five years from now; they expect that to be about the crossover point when public cloud becomes the majority. From our research, we think flash capacities will outstrip traditional hard disk drives within five years.

So, the two guests I have for this segment: Brian Biles, CEO of Datrium. It's been a year since we had you on, when you came out of stealth, and I'm really excited because you brought a customer along. We love having customers on. Down from Alaska, within sight of Russia maybe, it's Ben Craig, CIO of Northrim Bank. Thank you so much for coming.

Thanks for having us.

All right, so we want to talk a lot to you, but real quick, Brian, why don't you give us the update on the company: what's happened in the last year, where you are with the product and customer deployments?

Sure. Last year when we talked, Datrium was just coming out of stealth mode, so we were introducing the notion of what we were doing. Starting around mid Q1 of this year, we started shipping and deploying. Thankfully, one of our first customers was Ben, and our model of convergence is different from anything else you'll see at VMworld. I think hearing Ben talk about his experience, his deployment philosophy, and what changed for him is probably the best way to understand what we do.

All right, so Ben, great lead-in.
To start, can you tell us a little bit about Northrim Bank: how many locations you have, your role there, how long you've been there, a quick synopsis?

Sure. We're a growing bank, one of three publicly traded, publicly held companies in the state of Alaska. We recently acquired Residential Mortgage after acquiring Alaska Pacific Bank, so we have locations all the way from Fairbanks, Alaska, where it gets down to negative 50, negative 60 Fahrenheit, down to Bellevue, Washington. And to be perfectly candid, what's helped propel some of that growth has been our virtual infrastructure and our virtual desktop infrastructure, which is predicated on us being able to grow our storage, and that ties directly into what we've got going on with Datrium.

Wow, that's great. Can you talk about what you were using before, and what led you to Datrium? Going with a startup is a little risky, right? I thought CIOs buy on risk.

Well, as a very conservative bank that serves a commercial market, yeah, risk is not something we buy into a lot, but it's also what propels some of our best customers to grow with us. In this case, we had a lot of faith in the people who joined the company from an early stage. I personally knew a lot of the team from sales, from engineering, from leadership, and that got us interested. Once we got the hook, we learned about the technology and found out it was really, dare I say the word, the unicorn of storage we'd been looking for.

The reason is that we came from array-based systems, and we went through the same evolution a lot of customers did. We started out with an iSCSI EqualLogic system. We evolved into a Nimble solution, the hybrid era of arrays, if you will. And we found that as we grew, we ran into scalability problems.
As soon as we started tackling VDI, we found that we immediately needed to segregate our workloads, obviously, because servers and production VDI have completely different read/write profiles. As we grew our VDI infrastructure and ran into its limitations, we had to consider upgrading all of our processors, all of our solid state drives, all of the things that helped that hybrid array support our VDI infrastructure. And it's costly. So we did that once, and then we grew again, because VDI was so darn popular within our organization. At that time, we caught wind of what was going on with Datrium, and it totally turned the paradigm on its head for what we were looking for.

What was it about the Datrium solution that solved the read/write balance issue there for VDI?

When we ran out of capacity with our EqualLogic, we had to go out and buy a whole new member. When we ran out of capacity with our Nimble, we had to go out and buy a whole new controller. When we run out of capacity with the Datrium solution, we can literally go out and get commodity solid state drives, plop one more into our local storage, and improve our performance by a multiplier. That's huge.

So the big difference with Datrium, and these are my words, and I'm probably going to screw this up, Brian, so feel free to jump in any time...

All right, go for it.

In my opinion, Datrium starts out with a really good storage area network appliance, then basically takes away all of the UI in front of it and sticks it out on the network for durable writes. Then they move all of the logic, all of the compression, all of the deduplication, even the RAID calculations, onto software, what I call a hyperdriver, that runs at the hypervisor level on each host.
So instead of being bound by the controller doing all the heavy lifting, it's now done by a few extra processors and a few extra gigs of memory out on your servers. That puts the data as close as humanly possible, which is what hyperconvergence does, but it also has this very durable back end that ensures your writes are protected. So instead of having to span my storage across all of my hosts, I still get all the best parts of a durable SAN, and all the best parts of high performance by bringing that data closer to where the hosts live.

That's why Datrium enabled us to grow our VDI infrastructure literally overnight. Whenever we ran out of performance, we'd just pop in another drive and go, and the performance is insane. We just finished writing a 72-page white paper for VMware, and we did our own benchmarking using Iometer. We were able to use our secondary data center resources because they were frankly somewhat stagnant, and we knew we'd get the most level testing possible. We found that we were getting insane amounts of performance and insane amounts of compression. To quantify it: we were getting 132,000 IOPS at a little over a gigabyte a second, with 2.94 milliseconds of latency. That's huge.

One of the things we always used to compare when it came to performance was IOPS and throughput. Whenever we talked to any storage vendor, they were always comparing IOPS and throughput. We never talked about latency, because latency was really network-bound, and no storage vendor could do anything about that. But by bringing the brains closer to the hosts, Datrium solves that problem. So our latency, which was around 25 milliseconds on a completely unused Nimble SAN, is now 2.94 milliseconds. What that translated into was about a 3x performance increase. When we went from EqualLogic to Nimble, we saw a multiplier there.
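As a rough sanity check of the benchmark numbers quoted above, the IOPS and throughput figures are consistent with a common mixed-workload I/O size. The 8 KiB block size is an assumption (the interview doesn't state the Iometer access spec); the 132,000 IOPS, "a gig a sec," 25 ms, and 2.94 ms figures are from the interview.

```python
# Back-of-the-envelope check of the quoted Iometer results.
IOPS = 132_000
BLOCK_BYTES = 8 * 1024  # assumed 8 KiB per I/O (not stated in the interview)

throughput_gb_s = IOPS * BLOCK_BYTES / 1e9
print(f"throughput: {throughput_gb_s:.2f} GB/s")  # ~1.08 GB/s, "a little over a gig a sec"

# Latency went from ~25 ms (network-bound array) to 2.94 ms with host-local reads.
latency_factor = 25 / 2.94
print(f"latency improvement: {latency_factor:.1f}x")  # ~8.5x lower latency
```

Note that the latency itself dropped roughly 8.5x, while the end-to-end workload improvement quoted is about 3x; real jobs spend time on more than storage waits.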
When we went from Nimble to Datrium, we saw a 3x multiplier, and that translated directly into being able to send our night-processing staff home earlier, which means fewer FTEs, larger maintenance windows, and faster performance for all of our branches. So I went on for a little bit there, but that's what Datrium's done for us.

Right, and just to amplify that: part of the approach Datrium is taking is to assume that host memory of some kind, flash for now, is going to become so big and so cheap that at some point reads will just never leave the host, and we're trying to make that point today. We've increased our host density since last year to 16 terabytes of raw flash per host. With inline dedupe and compression, that can be 50 to 100 terabytes effective. So we have customers doing fairly big data warehouse operations where the reads never leave the host. It's all host-flash latency, and they can go from an eight-hour job to a one-hour job.

In our model, we sell a system that includes a protected repository where the writes go; that sits on a 10-gig network. You buy hosts with flash that you provision from your server vendor. We don't charge extra for the software we load on the host that does all the heavy lifting: the RAID, compression, dedupe, cloning, what have you. It does all the local caching. So we encourage people to put as much flash and as many hosts as possible against that repository, and we make it financially attractive to do that.

So how is the storage provisioned? They're not LUNs, right?

It all shows up, and this is one of the other big parts that is awesome for us, as one gigantic NFS datastore. Now, it doesn't actually use NFS; it just presents that way to VMware. But previously we had about 34 different volumes.
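The 16 TB raw and 50-100 TB effective figures quoted above imply a data-reduction range; the sketch below reproduces that range. The 16 TB is from the interview, while the roughly 3x-6x reduction ratios are an assumption chosen to match the quoted effective capacities.

```python
# Effective host-flash capacity under inline dedupe + compression.
raw_tb = 16  # raw flash per host, from the interview

# Assumed reduction ratios that reproduce the quoted 50-100 TB range.
for ratio in (3.1, 6.25):
    effective_tb = raw_tb * ratio
    print(f"{ratio}x reduction -> {effective_tb:.0f} TB effective per host")
```

Actual reduction depends heavily on the workload; VDI clones typically dedupe far better than pre-compressed data.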
And like everybody else on the planet who thin-provisions, we had to leave a buffer zone, because we'd have developers who would put a VMware snapshot on something, apply patches and then forget about it, fill up the volume, bring the volume offline, and panic ensues. So imagine that 30 to 40% of buffer space times each one of those volumes. Now we have one gigantic volume, and each VM has its performance and all of its protection managed individually at the VM level. That's huge, because you no longer have to set protection and performance at the volume level; you can set it right at the VM.

So you don't even see storage? You never have to log in to the appliance at all?

You do. Serverless. Yeah. Storageless. Storageless, rather. Storageless is what we were talking about, from an admin standpoint. It's all through the vCenter interface.

And because all the writes go off-host, the writes don't interrupt each other and the hosts don't interrupt each other. We've actually gone to a lot of lengths to make sure that happens. So there's isolation, host to host. That means if you want to provision a particular host for a particular set of demands, you can. You could have VDI next door to a data warehouse, and the level of intensity doesn't matter to either one. It's very specifically enforceable by host configuration, or by managing the VM itself just as you would with VMware. So it gives us a lot more flexibility than we'd typically get with a hyperconverged solution, which has very static growth and performance requirements.

Yeah, so when you talk about hyperconvergence, the number one, number two, and number three thing we usually talk about is simplicity. You're a pretty technical guy; you've obviously understood this for a while. Beyond the EqualLogic and Nimble and how you scaled those, can you speak to the day-zero experience and the ongoing experience?
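The headroom argument above can be put in rough numbers. The 34 volumes and the 30-40% per-volume buffer are from the interview; the 2 TB average volume size and the 15% pooled buffer are assumptions purely for illustration.

```python
# Rough illustration of headroom reclaimed by pooling many thin-provisioned
# volumes into one large datastore.
volumes = 34                 # from the interview
vol_size_tb = 2.0            # assumed average volume size
per_volume_buffer = 0.35     # midpoint of the quoted 30-40% buffer
pooled_buffer = 0.15         # assumed headroom for one shared pool

total_tb = volumes * vol_size_tb
reserved_before = total_tb * per_volume_buffer  # buffer held on every volume
reserved_after = total_tb * pooled_buffer       # one shared buffer

print(f"reserved before pooling: {reserved_before:.1f} TB")
print(f"reserved after pooling:  {reserved_after:.1f} TB")
```

The point is statistical: one shared pool needs proportionally less slack than many small volumes, because a snapshot filling one VM's space draws from the common headroom instead of tipping over its own small volume.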
How much do you have to test and tweak and adjust things, and how much does it just work?

Well, this is one of the other reasons we went with Datrium. When it comes down to it, with a hyperconverged solution you're spanning all of your storage across your hosts, right? You're trying to make use of those resources. But we recently had one of our servers down for a little over 10 days because of a problem with its BIOS. We've been troubleshooting it; it just doesn't want to stay up. If we were in a full hyperconverged infrastructure and that host were part of the cluster, our data would have had to be migrated off of it as well, which is kind of a big deal.

I love the idea of having a rock-solid, purpose-built, highly available device that makes sure my writes are there for me, but that still gives me the elastic configuration I need on my hosts: to grow them as I see fit, and to work directly with my vendors to get the price points I need for each of my resources. So for our Oracle, Exchange, and SQL servers, we can put in some NVMe drives and it'll scream like a scalded dog; and for all of our file and print servers and IT monitoring servers, we can go with some Samsung 850 EVO drives, pop them into a couple of empty bays, and still crank out the number of IOPS we need. We can differentiate between those workloads at a very low cost point, but with the maximum amount of protection on that data. So that was a big selling point.

So you're using both NVMe and SATA?

We're actually going through a server refresh right now; it's all part of the white paper we just wrote. We decided to start with internal two-terabyte NVMe PCIe cards, and we have a 2.5-inch NVMe-ready bay on the front, but we also plumbed it to take solid state drives, so we have the flexibility to use those servers as we see fit in the future.
So again, it's a very elastic architecture, and it allows us to stay in control of what performance is assigned to each individual host.

So what apps beyond VDI do you expect to use this for? Are you already deploying it for other apps?

VDI is our biggest consumer of resources. Our users have come to expect instant access to all of their applications. Eventually we have the ability to move the entire data center onto the Datrium, and one of the things we're completing this year is the rollout of VDI to our remaining 40% of branches; 60% of them are already running VDI. After that, we'll probably take our core servers and migrate them over, and through attrition use some of our older array-based technology for test and dev.

So I can't let you go without asking a bit about your general relationship with VMware. How is VMware meeting your needs? Is there anything from VMware, or the storage ecosystem around them, that would make your job easier?

Yes: if they got rid of the vSphere web client, that would be great. I am not a fan of the vSphere web client at all, and I wish they'd bring back the C# client. I like to get that on the record, because I try to every single chance I get. You know, the truth is the integration between the Datrium and VMware is super tight. It's something I don't have to think about. It makes it easy for me to do my job, and at the end of the day, that's what we're looking for. I think the biggest thing a lot of the constituents at the Anchorage VMware User Group, I'm the leader of said group, are looking for is stability in product releases, and more attention given to QA on some of the recent updates at the hypervisor level.

Brian, I'll give you the final word: takeaways you want people to know about your company and your customers coming out of VMworld.
We're thrilled to be here for the second year, and thrilled to be here with Ben. It's a great, exciting period for us as a vendor; we're just moving into nationwide deployment. So check us out if you're at the show, and if you're not, check us out on the web. There are a lot of exciting things happening in convergence in general, and Datrium's leading the way in a couple of interesting ways.

All right, Brian and Ben, thank you so much for joining us. You know, Ben, I don't think we've done a CUBE segment in Alaska yet, so maybe we'll have to talk to you off-camera about that. We'll be back with lots more coverage here from VMworld 2016. Thanks for watching theCUBE.

You guys are good at this. Oh, you're good at this. Thank you.