Welcome to theCUBE. It's a wonderful Tuesday, and we're here talking to Craig Nunes, who's the VP of Marketing at Datrium. Good to be here. And Craig, you guys had an announcement today, and the announcement particularly refers to the opportunity to further converge not only hardware but, increasingly, operating environments, specifically bringing some of the Red Hat ecosystem over to the Datrium product set. So why don't you tell us what happened? Well, we've been building a great business with customers in the VMware environment. We debuted our new generation of convergence back last year. And as we were picking up customers in vSphere, we ran into a number of them who were saying, you know, God, this is awesome. I do have some Linux stuff going on. Can you guys help me out there? I can't seem to find a modern converged platform that really takes on both environments. And so that's precisely what we've done. We're announcing today that we've partnered with Red Hat to use their stack, Red Hat Enterprise Linux, and their full Red Hat Virtualization stack. Run that on our DVX, on our compute nodes, alongside vSphere servers. Beyond that, because we observe there's a lot of activity going on in the container space, and CI/CD is becoming something that more and more folks are moving to, we've also partnered up with Docker, and we're also going to provide bare-metal container support with a persistent volume plug-in for the platform. So this is all in one go. You now have, really for the first time, a modern converged system that can handle what you're doing today with vSphere, handle the Red Hat work you're already involved in and have been looking for a way to bring together, and then, more importantly, you're set up for where you're going with containers.
So when you say handle, Datrium has made some interesting decisions regarding how to solve some of the engineering problems associated with convergence. Take us through a little bit of what it means to handle. What were you doing on VMware that you're now doing on the Red Hat ecosystem, and will be doing as you move more closely toward containers? So in the world of converged infrastructure, of course, we started with packaging convergence with arrays and servers, and hyperconvergence came along to really bring storage into the x86 architecture, a super cool idea in principle. The challenge with that is, because storage is now part of your server, everything is stateful, everything's a storage node, and it's tougher to scale, it's tougher to service. And taking nothing away from the hyperconverged guys, it's great for a single use case, great for edge, but we're really aiming for what people are trying to get done in the private cloud data center. And so for that, we found that by separating the persistence, the durable capacity, from the IO processing on the server, we could provide this wonderful converged platform that scales, where you can use any server you like: you could bring your blades, you could use our own compute nodes, whatever. And it gives folks just a lot more freedom to get the job done. Servers are stateless, like they were with your arrays, but have all the benefits that you're desiring with a converged infrastructure. So we brought that to vSphere, and what folks have taken away is, wow, since everything runs local on the server in flash, it's faster than an all-flash array, sure, because there's no SAN, but it's all VM-based and brings all the simplicity you would expect from a hyperconverged platform, only at scale. And so what we're doing is taking that model to Linux and containers.
Now, one relatively new thing we did just recently, in addition to taking on VM consolidation and acceleration, is we built right in all the data management capabilities you would need for backup and instant recovery, disaster recovery, archive, compliance, search, analytics, copy data management, right into the platform. So really the virtualization guy, the DevOps guy or gal, whoever is running the applications, can not only run them, but protect them, share them, et cetera, from one cockpit, one UI. So we're really taking a whole load of stuff that folks have had to deal with and tossing that out for, you know, one very simple platform that scales as you grow. So you're bringing new services to the basic management console of Datrium and expanding that set of services across platforms. Exactly, that's right. So talk to us a little bit about how you see this evolving as the whole world of containers comes on. Because, you know, containers means more of them, new security models. Today, most communication takes place through the VM. When you start talking about adding the kind of storage flexibility, the data flexibility you guys are providing, it suggests that you've got some new ways of looking at containers. You're looking at some new stuff. Yeah, absolutely. Talk to us a little bit about that. Yeah, here is where, you know, a modern platform really is important. And again, not to knock hyperconverged, but, you know, five or six years ago when that was born, it was pretty cool to manage things at a VM level, right? The era of virtualization was hot and heavy. As we move into containers, you know, VMs are just not granular enough. And in fact, folks want to be able to manage at this per-container level. Arrays, you know, we're talking about LUNs there. Hyperconverged is gonna stop short at VMs.
And so what we're bringing folks is a way to manage, on the VM side, VMs, vDisks, the files that make up VMs, and individual container persistent volumes, so you can protect and share the way you need to. And what we do, because it's kind of a, you know, double-edged sword, you can manage everything at that level, but now you've got thousands and thousands of them. So we actually give you an opportunity to group those, in what we call protection groups. Think of it as a policy group. And you set it up around your applications and you set your policies per group. And through naming conventions, if you spin up a new VM or container, it's going to get included as a part of that group without you having to manually go in and assign it. So we're effectively putting the capabilities in so you can manage tens of thousands of objects very simply. And that is the world of containers, right? If you thought there were a lot of VMs, there's a whole lot more in the way of containers that'll be there. Well, one of the things that Datrium has done, correct me if I have this wrong, but I believe I got it right, is it facilitated the kind of any-to-any addressability between storage or compute resources and data resources, the various types of nodes that are in there. So you used to have all the data inside a particular server, and that kind of created some segmentation along those lines. And so in many respects, you created networks of resources that Datrium would manage in that way. Are you doing something similar now, as we think about containers, where you're literally describing a network of containers as part of that resource mix and being able to add things to that? Is that effectively what the group becomes? Yeah, the group of containers is completely independent of the servers that are hosting them. So you can literally group a collection of containers across all of your Linux servers and treat that in a special way.
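The naming-convention behavior Craig describes can be sketched in a few lines. This is purely a hypothetical illustration: the group names, wildcard patterns, and policy fields below are assumptions for the sketch, not Datrium's actual API.

```python
from fnmatch import fnmatch

# Hypothetical protection groups: each maps a shell-style name pattern
# to a set of policies. A new VM or container whose name matches a
# group's pattern is picked up automatically, with no manual assignment.
protection_groups = {
    "web-tier":  {"pattern": "web-*",     "snapshot_every": "1h",  "retain": "30d"},
    "ci-agents": {"pattern": "jenkins-*", "snapshot_every": "24h", "retain": "7d"},
}

def assign_group(object_name):
    """Return the protection group a newly created VM/container falls into."""
    for group, policy in protection_groups.items():
        if fnmatch(object_name, policy["pattern"]):
            return group
    return None  # unmatched objects carry no policy until a group covers them

# A freshly spun-up container lands in its group by name alone:
print(assign_group("web-042"))          # → web-tier
print(assign_group("jenkins-agent-7"))  # → ci-agents
```

The point of the pattern match is exactly what the interview claims: at tens of thousands of objects, per-object policy assignment doesn't scale, but a handful of name-based groups does.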
And so you've got great flexibility, and it's, like I say, it's something that is really intended to scale. We've got some very powerful search tools as a part of that, so if you do need to find things quickly, you can get it rolling. When it comes to containers, you know, it's all about speed, keeping up the pace. And part of what we bring to the party is great data reduction capability. So when you're doing development in, let's pick on a Jenkins development environment, you've got, you know, master and slave nodes, and you are collecting data as part of every object; all of that stuff has to move through the master. And the better you are at handling data efficiency, the faster your runtime is gonna be. We're observing about a 30% faster runtime for developers in that Jenkins environment, and capacity-wise, we're probably consuming 95% less capacity than you otherwise would have to in kind of your more traditional storage environment. So it is a 20-to-one reduction, because there are so many copies in development, and, you know, we can dedupe all of that away. So it's fundamentally a breakthrough for guys thinking about dev and test, DevOps, et cetera. So you talked about the capacity improvements that you get and the throughput improvements, but as you said, when we start going to containers, we increasingly start thinking about how fast we can add new function, how fast we can bring new capabilities together. And one of the things that we're fascinated about in this world, and you tell me if this is a benefit that you see, is that it dramatically accelerates the entire process of doing development. Four, five, seven, twelve times the speed in the development process. So not only do you get better runtime and dramatically better utilization of resources, but you're also accelerating the productivity of the people that are actually doing the work. Are you seeing that as well? Absolutely. In fact, there are two things going on here.
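As a quick sanity check, the two figures quoted here agree with each other: consuming 95% less capacity is exactly a 20-to-1 data reduction ratio. The 100 TB figure below is illustrative, not from the interview.

```python
# If dedupe eliminates 95% of the capacity you'd otherwise consume,
# you physically store only 5% of the logical data.
logical_tb = 100   # e.g. 100 TB of dev/test copies (illustrative number)
savings = 0.95     # "95% less capacity" from the interview

physical_tb = logical_tb * (1 - savings)        # 5 TB actually stored
reduction_ratio = logical_tb / physical_tb

print(round(reduction_ratio))  # → 20, i.e. the quoted 20:1 figure
```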
One is, as a part of the platform, when you clone a container, you know, you do that on your dev server or whatever, that clone is immediately available to all other servers in the cluster. There is no copying and moving around. It is immediately available, and the developers can just go. The other interesting thing is, in development environments, depending on the number of developers and executors involved, you can have problems maintaining the state that you desire. And so part of what we are doing here with these very efficient cloning capabilities is we can spin up a new environment for folks that has, you know, got pristine state, which means down the line, quality is better and you're not gonna thrash on those iterations in your QA cycle. So from, you know, from end to end, it's all faster: runtime, QA, the whole nine yards. So Datrium is also a new company? We began shipping in February of '16. We've had a great 2017. In fact, well, of course it was great. We had a wonderful fundraising round in December of '16, one of the largest of last year. And so that's really propelled us in the market, and we had a wonderful set of announcements just about a quarter ago with the data management capabilities, and we added these Datrium compute nodes. And just last quarter alone, our installed base, which had been, you know, already showing record adoption, grew a whopping 50% in a single quarter. And one of the most interesting statistics that- Sequentially or year to year? Sequentially. Sequentially. Sequentially. That is welcome. From the end of Q1 to the end of Q2, boom. And not only that, one out of every three of our customers already has multiple DVXs deployed. So that's a huge testimony to, you know, they're kind of liking what they've got. So yeah, so it's been a sprint. And like I say, we've been very kind of vSphere focused. Our founders are a couple of Diane Greene's early principal engineers at VMware.
But customer demand, customer is king, and they're looking for the same kind of capability in their Linux and container environments. So here we are. Hey, speed is important to infrastructure people too. Right on, yeah. So, Craig, thanks very much for joining us here on theCUBE. Once again, great to have Datrium talk a little bit about the announcement they made today, adding the Red Hat environment to the great work you've been doing in VMware and vSphere, and the future of how containers and related technologies start getting folded into that whole thing. Great results, good early start, keep it up. Thank you, all right, see you, Peter. Peter Burris, good to have you once again with theCUBE. We've been talking to Datrium about their new announcement. Craig Nunes of Datrium, Vice President of Marketing. Thanks for being here, Craig. My pleasure.