Live from San Francisco, California, it's theCUBE at VMworld 2014, brought to you by VMware, Cisco, EMC, HP, and Nutanix. Now here are your hosts, John Furrier and Dave Vellante.

Hey, welcome back. We're here live in San Francisco for VMworld 2014. This is theCUBE, I'm John Furrier with my co-host Dave Vellante. Our next guest is Andy Warfield, who's the CTO and co-founder of Coho Data. Andy, welcome to theCUBE.

Thank you. It's good to be on.

We love to have tech experts and tech athletes, as we say. You've been around the block, and you were one of the authors of Xen, the original hypervisor used on Amazon. Everyone knows how that started the beginning of this cloud revolution, so you're no stranger to technology innovation. I've got to get your take on where you think we are right now. Obviously a huge change from the original cloud days to how mainstream it is. What's your take on the state of the cloud meets enterprise IT, data center, on-premise, off-premise reality? It's really exploding. Is it truly being rolled out across the board?

Well, small question. There's a lot of stuff going on right now, and I think that nobody knows exactly how it's going to play out in a lot of regards. On the data center side, you've got these incredibly sophisticated, massive-scale environments; they're quite a different creature than a lot of the stuff that you see even here with large-scale VM deployments. I think the hypervisor itself, to a large degree, has played itself out as an isolation layer. We're starting to see it climb up the food chain in terms of things like containerization for apps. I think really a lot of the excitement that's happening, and is going to happen over the next three or four years, is going to be on the I/O side of things: networking and storage.
Yeah, so you're seeing, obviously, the revolution with software-defined networking, the shot heard around the world a few years ago when Nicira was acquired by VMware. Martin's still here; he'll be on theCUBE. Virtualization certainly abstracted away a lot of complexities, but now you have this containerization. So we want to get to the containerization question first: break that down. Where does it make sense? Certainly RESTful, stateless applications make a lot of sense, but there's still a lot of state involved in the data center.

Absolutely.

So is it really going to thread through that? What's your take in that area?

It's pretty complicated. First of all, containerization, I think even the container guys will say, is not a security technology, it's not an isolation technology, right? It's something that's really more about higher-level APIs, packaging, and deployment. So the big successes that you see with it are with things like CoreOS-style fleet-management deployments, where you're rolling out hundreds or thousands of containerized applications and using it to do upgrades, data migration, stuff like that. It's less clear that at a single-instance server it's as immediately valuable in a cloud-style virtualized environment.

So the future of the hypervisor is SDX, is that...

SDX. Software-defined everything?

Yeah, software-defined whatever: networking, storage, et cetera. Is that how you see it evolving? Or is that more narrow, a VMware sort of point of view?

Well, how do you mean narrow to VMware?

Like, that VMware is co-opting the software-defined term.

They're attempting to, anyway. Right.
If you look at the way that software-defined came around, before the virtualization people really took it over, right, if you look at software-defined networking, and before that software-defined radio, the term predates loads and loads of stuff. It was really about simplifying the hardware layer of systems and centralizing control in the software layer, right? So software-defined radio said: let's do a lot of the frequency modulation and so on for these radio devices in software, and make the devices themselves last a lot longer. Software-defined networking said: a lot of these protocols, right, this protocol-driven development that we've done on things like BGP and spanning tree and Ethernet, we've taken this protocol-based approach and we've built systems that kind of work at scale because they agree on some convergence properties. SDN goes: if you own the whole system, you can centralize that, because it would be a lot simpler, and you can simplify operational tasks like provisioning, right? So SDN, to me, is realizing a value that virtualization had already realized for CPUs: it's fast to provision the system, it decouples hardware and software, and it's easier to manage the life cycle. I think that as we go forward, this aspect of SDN, and SDX as you said, is going to be a big deal: decoupling the hardware from the software and really centralizing control is going to be something that takes advantage of convergence and lets us do some cool stuff.

And Coho is trying to take advantage of that, so software-defined storage that leverages SDN. Is that the right way to look at it?

Absolutely.

Can you talk a little bit more about that?

Well, one of the things that we saw really early with Coho was this weird sort of similarity in the way that some of the first round of PCIe flash devices, this newest round of flash devices, worked, right?
So flash has been around for 10 years. Flash people have been obsessed with durability problems on flash. It's the thing that haunts flash.

It's created a whole industry.

It's created a whole industry. You deal with wear-leveling.

Yeah, right. The wear-leveling stuff, to a large degree, is a solved problem at this point, right? Beyond making sure you don't lose your data, there's no value left to add. It's done; the card does it, and the vendors have sorted it out. They'll warranty the card for ten full rewrites a day, or five rewrites a day, for five years, right? It's just like, forget it. The thing that really struck us early on with PCIe flash in particular was that flash is very quickly going in a direction where it looks like the CPU did 10 years ago. When we started working on Xen, when VMware started to really make sense in data centers, the CPU was an incredibly expensive resource. It was expensive to buy and it was expensive to manage operationally. And the thing that you're about to see with flash is that it is so performance-dense per cost that it's going to outweigh the CPU. Flash, this resource that's falling in price by half every year and a half to two years, that is still very expensive on a dollar-per-gig basis relative to traditional storage but is incredibly capable, is something that's really, really upsetting to how we build data systems.

It's very expensive on a dollar-per-meg basis? Or did you say dollar per IO?

Sorry, sorry: expensive per gig, but cheap on a dollar-per-IO basis.

Yeah, right, okay. Or IOs per gig, if you want to think about it that way.

Much higher. Its strong point is dollar per IO.

Dollar per IO, absolutely. Okay, so you see these trends. So carry that through.

Okay, yeah, let me follow through with it. So we had one card. Originally we went out and presented to Fusion-io, and Fusion gave us a few cards. Really first-gen, this is like 2010, the fastest flash that you could buy.
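The expensive-per-gig, cheap-per-IO point can be made concrete with a back-of-envelope comparison. The prices and device specs below are hypothetical ballpark figures of that era, not anything from the interview; only the shape of the comparison matters.

```python
# Illustrative only: why flash is cheap per IO but expensive per gigabyte
# relative to spinning disk. All figures are hypothetical ballpark numbers.

def cost_per_gb(price_usd: float, capacity_gb: float) -> float:
    """Dollars per gigabyte of capacity."""
    return price_usd / capacity_gb

def cost_per_iops(price_usd: float, iops: float) -> float:
    """Dollars per delivered IO per second."""
    return price_usd / iops

# Hypothetical 2014-era devices.
flash = {"price": 3000.0, "capacity_gb": 800.0, "iops": 100_000.0}
disk = {"price": 200.0, "capacity_gb": 2000.0, "iops": 150.0}

print("flash $/GB:   %.2f" % cost_per_gb(flash["price"], flash["capacity_gb"]))
print("disk  $/GB:   %.2f" % cost_per_gb(disk["price"], disk["capacity_gb"]))
print("flash $/IOPS: %.4f" % cost_per_iops(flash["price"], flash["iops"]))
print("disk  $/IOPS: %.4f" % cost_per_iops(disk["price"], disk["iops"]))
```

With these sample numbers, flash costs dozens of times more per gigabyte but an order of magnitude or two less per IO, which is the inversion the conversation is pointing at.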
And we found that, well, we stuck it in a machine and we couldn't saturate the flash. We couldn't reproduce Fusion's numbers on a single card. And it was because the software stack in Linux at the time was getting in the way, right? Just driving the thing at the block-device layer was demanding enough that you couldn't saturate the flash. And the realization that we had off of that, after a bit of work, was that that one device could saturate a 10-gig NIC. So suddenly the idea that you're going to build a storage system with 10 of those devices behind a CPU and a network, the way that we've always built storage, doesn't make sense, right? That second card has no value to offer in terms of performance. And so the really, really challenging thing with this flash is that it is so demanding and so difficult to drive at speed, to really expose the value from, right? It looks a lot like the CPU did. And that's what took us to the SDN.

Okay, so the world needs help in making that resource more efficient and utilizing that asset better. So talk more about how you do that. Let's go to the secret sauce behind it.

Sure. The way to think about what we ended up doing is that we converged storage and the network. The OpenFlow guys have been doing all sorts of work on open standards and opening up what you can do with the switch. And from our perspective, they've created a bunch of really interesting APIs such that, irrespective of whether OpenFlow is deployed in an environment, whether you've done a full-scale SDN-based network, suddenly you can program the switch. Suddenly all of this really cool merchant silicon that's been there for five years is available to be developed on. And so with the realization that this flash was so high-performance that you were necessarily going to have to scale out, we incorporated the switch as the interconnect.
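The "second card has no value" claim falls out of simple arithmetic: a 10 GbE link carries at most about 1.25 GB/s, so a single card that can read faster than that already fills the pipe. The card throughput and overhead figure below are illustrative assumptions, not measurements from the interview.

```python
# Back-of-envelope check: can one first-gen PCIe flash card saturate a
# 10-gig NIC? Numbers are illustrative, not Coho's measurements.

def gbps_to_gbytes_per_sec(gbps: float) -> float:
    """Convert gigabits/s to gigabytes/s."""
    return gbps / 8.0

def link_payload_gbps(raw_gbps: float = 10.0, overhead: float = 0.06) -> float:
    """Usable payload after rough Ethernet/IP/TCP framing overhead (assumed 6%)."""
    return raw_gbps * (1.0 - overhead)

# Hypothetical 2010-era PCIe flash card: ~1.5 GB/s sequential read.
card_read_gbytes = 1.5
nic_gbytes = gbps_to_gbytes_per_sec(link_payload_gbps())

print("10GbE usable payload: ~%.2f GB/s" % nic_gbytes)
print("one card saturates the NIC:", card_read_gbytes >= nic_gbytes)
```

Under these assumptions one card more than fills the link, so a second card behind the same NIC contributes no extra delivered performance, which motivates spreading devices across the network instead of stacking them behind one controller.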
And so by using the storage system to program the switch, we're able to do things like make a single IP address, a single apparent NFS server, scale across hundreds of devices.

Okay, so that leads me to the next question. You mentioned NFS, and we're talking about VVols today. So what's the future? First of all, what do you make of VVols? Obviously it's a good thing, and it's still not clear it's here. How should the world be thinking about VVols? Is that going to be the standard mode of operation going forward? I guess yes, it's kind of obvious, but it's hard to get there. And what happens to NFS?

I think we'll see what happens. As an NFS provider to VMware specifically, obviously there's a huge amount of value available to us serving NFS to VMware. We get a lot of visibility into the VMs, and we get visibility, on top of that, into their data and so on and so forth. Whereas running VMFS on top of an iSCSI LUN or a fibre channel LUN, you don't have any of that visibility. VVols initially is a response to that: it's trying to put the block-based storage providers back in the game that the NAS guys kind of got out in front of. A lot of the parallelism and scale of access we were already getting with the SDN switch. So there are opportunities for us to win off of VVols, but we've actually managed to get a lot of that value without it.

So do VVols level the playing field? Or do they make the guys who can expose all those functions, at a per-VM granularity in a VVol world, even stronger? What do you think?

I think the traditional array vendors have been trying to catch up with what you can do with the virtualization layer for a long time, and this is another way for them to get a little bit more caught up, but they're still behind.

Yeah, well, the big question is will they catch up or will they fall further behind? It seems to be getting harder and harder for those guys.
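The "one IP address, hundreds of devices" trick amounts to steering each client flow, at the switch, to a different backend while every client targets the same virtual address. This is a minimal sketch of that idea in plain Python; it is hypothetical and not Coho's implementation, and the addresses, the hash-based placement, and the `steer` helper are all invented for illustration. A real controller would install a matching forwarding rule on the switch (e.g. via OpenFlow) instead of just returning an address.

```python
# Hypothetical sketch: hash-based flow steering behind one virtual IP.
# Not Coho's implementation; a real system would program switch flow rules.

import hashlib

BACKENDS = ["10.0.0.%d" % i for i in range(1, 5)]  # hypothetical backend devices

def steer(client_ip: str, client_port: int, backends=BACKENDS) -> str:
    """Deterministically pick a backend for a client flow.

    Hashing the flow identity keeps a given TCP connection pinned to one
    backend, while different connections spread across the pool.
    """
    key = ("%s:%d" % (client_ip, client_port)).encode()
    idx = int(hashlib.sha256(key).hexdigest(), 16) % len(backends)
    return backends[idx]

# Same flow always lands on the same backend; many flows spread out.
targets = {steer("192.168.1.10", port) for port in range(40000, 40100)}
print("100 flows spread across %d of %d backends" % (len(targets), len(BACKENDS)))
```

The deterministic hash is the key property: the switch can forward mid-connection packets without per-packet controller involvement, which is what makes a single apparent NFS endpoint scale out.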
The question I want to ask you is, we're going to have Martin Casado on. What should we ask him? This is like me throwing rocks that he can't throw back until the end of the show. The secret question. Martin's a good friend, I see him around town and stuff, and he's technical but he's very vocal. He's on the case, he's very involved in the community. So what would you ask him if you were here?

I know Martin, a super fun guy. We were doing our PhDs at the same time, so we know each other from Xen and earlier stuff like that. I remember Martin talking last year, actually, about how he thought that virtualizing the network was really going to be the way into virtualizing a lot of the rest of the data center. In fact, I think he said storage would be the next thing to virtualize after networking. My take on this is that unlike compute and the network, storage is the thing where if you screw it up, you're in trouble.

Yeah, deep trouble. The consequences aren't just financial penalties; they can be catastrophic.

Yeah, not to mention you're fired.

Yeah. I guess I would be really interested to know, especially from VMware's perspective, with Martin doing some really exciting stuff with the direction that NSX is going, how those directions are going to open up new opportunities for the people that are building storage systems. We're getting an enormous amount of value off of the switch, but none of that value is actually exposed through something like NSX, as an endpoint-based network virtualization technology. So as a simple example, I can saturate a 10-gig NIC. I'd like to plumb seven 10-gig paths across the data center. I'd like the tools to ask for that topology and to plumb those paths, and NSX as it's currently conceived doesn't let you do things like that.

That's a DevOps philosophy. When you think about what you just said, that's basically auto-provisioning, auto-configuration. Pretty complicated; it's not easy.

No. And it's not just virtual, it's physical as well.
It's both things.

So I've got to ask you the PhD question about the academic world. What's got you excited? You're out of the academic world now, running a company as CTO and co-founder. What are the hot things in academics that are translating quickly into business and entrepreneurship? There are a lot of really smart entrepreneurs out there who want to do more than the Y Combinator app. They want to do some heavy-duty science and engineering around DevOps.

Sure, that's a great question. I think one of the biggest things on this side is the realization that, especially for large-scale compute and data-processing applications, you need to move the compute as close as you can to the data. We don't have the APIs for that today. It's not NFS. It's not sticking your stuff in Hadoop inside a VM with 15 layers of indirection in between. It's: how do you build scalable, interesting, efficient computing systems that aren't so pie-in-the-sky that they don't work with anything we've built today? That's an interoperability question. Docker is a nice starting point on the roadmap for that concept. We brought up stateful applications versus stateless applications. It's a little bit complicated.

We've got some questions from the crowd. Tim Crawford, our favorite. Tim, good to see you out there. Tim Crawford asks: does VMware really need to venture into hardware? Is this an EVO:RAIL question? Or is that heading in the wrong direction? Will it be a distraction? What's your take on that question?

I don't know. I guess the broader bit around this is really the hyperconverged thing. VMware is under pressure from hardware appliance vendors that, at the end of the day, could replace the hypervisor with a different hypervisor in their offering. So EVO ends up as the counterpoint to that, as a packaged software offering. Hyperconvergence itself is initially a pretty naive way of thinking about things.
Insofar as a lot of these offerings seem to really, really want to chase a fixed ratio of compute, networking, and storage. And no one has that workload.

Yeah, it's all dynamic.

Yeah, it's super dynamic. No two customers are the same on this side, from our perspective. And so being able to scale all three things, and have different lifetimes for those three components in the data center, is super important.

Yeah, diversity of workload; you've got different application architectures. So that's another dimension. Just to clarify, for Tim's question, VMware would say, well look, we're enabling the ecosystem to do that. We're not getting into the hardware business. VMware is still a software company, right, Andy Warfield?

No, no, no, that's it. Unless they start running EMC. The one thing that I think is absolutely true of all these EVO or hyperconverged appliances is that they are absolutely responding to an operational-simplicity need. And that might be the thing that is really worth thinking about as you look at scaling the data center. It's not storage or hyperconverged, it's not managed network or virtualized software-based network; it is simplicity at the end of the day. And a lot of the vendors, especially the newer ones coming into both storage and networking, are actually putting an enormous amount of effort into that simplicity. Right?

Well, that's the abstraction issue. You want to abstract away the complexities to get to simplicity. That's the end game. Andy, great to have you on theCUBE. Really appreciate you coming on as our guest to break down some of the trends. Always great to have folks who are CTOs and co-founders, both technical experts and entrepreneurs. Congratulations, and we'll be watching you guys. This is theCUBE, live here at VMworld in San Francisco. I'm John Furrier with Dave Vellante. We'll be right back after this short break.