Hi everybody, we're back. This is Dave Vellante, and we're live from HP Discover in Las Vegas, here at the conference between the Venetian and the Palazzo. We're joined by Rick White, the CMO of Fusion-io. Rick, welcome to theCUBE.

Thank you. Thanks for having me.

And Stu Miniman is here with me, my colleague. Big event, HP's big customer event. You guys are obviously a big partner; it's no secret, and of course you don't make it a secret. HP doesn't always like to talk about the other brand, but they were actually on theCUBE today talking about Fusion openly. Tom Joyce was on here talking about your partnership.

Yeah, we're good partners. Actually, most of the world doesn't realize this, but when we first launched the company we were at a trade show. Some of you may be familiar with DEMO. We were up on stage at DEMO, and the server there was loaned to us by HP. John Kegel was one of their system architects, and he was up there on stage making sure everything was working. In fact, they were out of rental cars, so for David and me, John actually had to come over, pick us up, and give us a ride to our own presentation and conference.

You know, that's what I like about you guys. Those were the days, right? I mean, the humble beginnings.

No question. Actually, I remember those days. But no, HP's been there with us every step of the way. They've been a very good partner.

Yeah, well, we've seen a lot of action. I mean, you guys have had tremendous success, a great IPO last summer. You're really moving the needle. People are obviously paying attention to you now; nobody was paying attention to you four years ago. And then bang, all of a sudden, like a rocket ship. I mean, you had your first card in beta, I think, about four years ago, right? Maybe even more.

Yeah, that's about right. It was four and a little change. It was four years ago this last April.

And so you're leading the charge there.
You saw a number of other companies come out of the woodwork. Coincidentally, you guys started right around the same time EMC announced its enterprise flash drive inside their array, the Symmetrix at the time.

Yeah, and we never properly thanked them, either, because we were struggling to get funding. VCs were looking at us and going, ah, you know, this whole flash thing, this is great for laptops. It'll boot faster. You can drop it, drop a mobile device. But this isn't what the enterprise is about. The enterprise has a long-established history of reliability and infrastructure built around magnetic media, the disk drive. And at first we didn't get it either. We thought, okay, it looks great in a laptop, but back then a single SSD could cost $1,500. So I'm going to buy a $900 laptop and put a $1,500 drive in it, with one tenth the capacity of my hard drive? It made no sense to us. But when you looked at it against enterprise performance requirements, it made sense. And if we could move that performance into the server and eliminate some of the network bottleneck, it made a lot of sense. But we weren't getting quite the traction we wanted. And then, thank you EMC, they came out and said, hey, we're going to add this into our array. The next thing you know, we're getting term sheets from everyone.

Fantastic, right. And the whole market came flooding in. And then, of course, EMC is, you know, taking a look at some of your moves, and they came out with a PCIe card recently.

Yeah, they did.

And made a big acquisition, sort of, you know, validating things. I mean, XtremIO, $300-plus million, and it doesn't even have a product in the market yet.

No, that's right. Congratulations to them. That's fantastic. That's exciting for them. I love that.

We were talking earlier. We had David Scott on, who's the former CEO of 3PAR.
And I made the comment that, you know, he participated in what, if I recall, is probably the greatest wealth creation I can remember for startup companies in storage history: 3PAR, Compellent, Isilon, Data Domain, LeftHand, EqualLogic. I mean, I'm trying to go back, and I can't remember such a great run of successes. A lot of people think that flash is going to be potentially even bigger. I'm one of them.

So there's some reason for that. Think about the history of where we're at in processing. If you go back to, like, the late 80s, keeping a processor fed only took a certain number of drives. You could really get away with one, two, three, four drives and a disk controller inside the machine. But the next thing you know, we're coming into the mid 90s, and I think it was about '96, '97, we started seeing the first third-party RAID controllers come onto the market, and servers suddenly had drives right along the front. Remember that big transition?

They did; HP had a lot of those.

That's right, they did. And they had those drives right along the front, and, you know, you'd pop those two-and-a-half-inch drives in and out. They had RAID controllers built in, so we were building them into the motherboard. But guess what? Moore's Law. Processors got faster. So that's the mid 90s. Fast forward ten years, and to keep the same performance ratio between the CPU and the amount of IO you can feed into that CPU, you now needed something around 200 or 300 drives to keep the same ratio you had with 25 drives in '96. So you fast forward to 2006, and the next thing you know, you're seeing these large arrays of disk drives. And Moore's Law keeps going, so what are we going to do, start building vans that pull up next to the server? No, something had to give. And that's where flash steps in and takes over, not unlike what magnetic media did for paper.

Yeah, I mean, it's potentially that big. And, you know, you love that data-under-the-actuator problem.
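The drive-count arithmetic Rick sketches can be checked with a back-of-envelope calculation. The numbers below (an 18-month doubling period for CPU throughput, a handful of drives as the starting point) are illustrative assumptions, not figures from the interview; the point is that flat per-disk random IOPS against exponentially faster CPUs forces drive counts to grow exponentially:

```python
def drives_needed(base_drives, years, doubling_months=18):
    """Drives required to keep the same CPU-to-IO ratio after `years`,
    assuming CPU throughput doubles every `doubling_months` while a
    single disk's random IOPS stays roughly flat (seek-time bound)."""
    doublings = years * 12 / doubling_months
    return base_drives * 2 ** doublings

# A server fed by ~3 drives in the mid 90s needs on the order of
# 300 drives a decade later just to hold the ratio.
print(round(drives_needed(3, 10)))
```

Which is roughly the "200 or 300 drives" jump Rick describes between the mid 90s and 2006, and why the next doubling cycles made spinning disk untenable.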
It's the only mechanical thing left. But there's more to it than that, I think, and I wonder if you can comment on this. I mean, we see an enormous potential impact on productivity. In other words, companies are going to be writing applications that take advantage of this persistent device, this persistent medium on the other side of the channel, to build applications that drive productivity through the roof. Bottom-line impact.

It's interesting you say that. Not everyone sees it, but it's going to have a fundamental effect on the way software is architected. If you had the ability to go back in time, 10 years ago, 20 years ago, and say, hey, look, we're going to give you this memory-like performance, and it's going to be persistent, so you can have terabytes in a server, it absolutely would have changed the way file systems, OSs, and applications were written. It completely changes the game. That's why it was so important for us to get our SDK out, because we understand that a lot of software companies are going to be rewriting and rethinking what they're doing. If we can give them a bunch of libraries, free of charge, to take advantage of this, it's only going to accelerate the pace at which flash is adopted as a new memory tier.

Well, in my mind, you have to be the platform there, because then you can facilitate the development of those new applications, because they're game-changing. And so everybody focuses on the cost of flash: it's too expensive, it's too expensive. Well, yes, flash is more expensive than magnetic, but if I can drive bottom-line productivity, if I can increase revenue per employee dramatically, then I'll pay anything for that. So I think the way practitioners use applications that exploit flash is really what's going to be driving value. Yeah, Fusion-io
and other companies are going to make some money, and you guys had a great IPO and everything else, but the value that's going to be created at the other end of the value chain, the practitioner end of the spectrum, is going to dwarf anything you guys can create.

Absolutely. Actually, think about the impact RAM had on workstations and servers, and the productivity enhancements of having more and faster RAM; applications completely changed. In fact, being able to have enough memory to run a windowing system, like Windows or the Mac OS, flattened the learning curve at a time when a whole generation wasn't familiar with computers. I remember being younger; we didn't really grow up with a computer, and the internet, don't even talk about that. We didn't have a computer. My parents before me had no idea, right? A calculator was this newfangled machine. Suddenly you take a very complex system that used to have buttons on the front and blinking lights, right, that you would program from the front panel, then, hey, I've got command lines, and go all the way to a GUI, this graphical user interface, where anyone can move a mouse, double-click on something, and launch it. It changed everything. I didn't have to type C-colon, try to find the file path, and launch it. It made everything very intuitive and simple. And just that was made possible by having enough memory, enough RAM, in the system to run that type of application.

So what's going to be interesting is that now our lives are interconnected. We're carrying mobile devices, but we want applications, instances of our lives, running in the background in the cloud or in data centers around the world, always on, and we want to be able to access that anywhere. It's funny: I watched one of the members of my team, very young, fresh out of college, sitting there getting so frustrated. I'm thinking, wow, must be some bad news he's reading on his phone, here in the break room.
What's wrong, everything okay? "Ugh, it's been like 30 seconds and this application still won't download." I'm like, what? You're all wired up about that? He's like, yeah, I was looking at the top 25 free downloads. I'm like, seriously? You're upset because, 30 seconds in, a free application still hasn't downloaded? It's free. He's like, eh, if it doesn't finish soon, I'm just gonna quit.

Yeah, yeah. That's the mindset.

Or you click to see your family's photo on a social network. If that doesn't pop, if it spins, if it sits blank for a while, you get frustrated. Of course, it's because they charge you so much: it's free. You want free to be fast. You want services to be fast. We want to be interconnected; we want it all in the cloud. Well, guess what? Something has to give. Just like you had to have enough RAM in your PC to run a windowing system, you're gonna have to have a whole new type of memory, infrastructure, and applications to run this back end, so that you can seamlessly upload this, link this, share it with this person who gets to comment here. All of this. It's very complex. The world is going real time, people.

Yes. So, Rick, when you talk about massive sea changes in the storage world, the other thing we've been covering a lot, and I'm sure you've got some comments on this, is big data. If we look at Hadoop, Hadoop was not built for that big storage array you were talking about.

Not at all.

It was built for more commodity, Intel-type machines. So how do you see flash playing into that?

Oh, well, that's a key part. Flash gives Hadoop the ability to hold larger data sets and to serve them up faster. It actually plays a very key role. Otherwise, you've got to do a lot in RAM, right? Again, you can use disk drives, but it's all a matter of how fast you want the data. This is the same reason you don't see massive web companies using big shared storage arrays: the latency of getting that data when you have to go to disk.
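The latency point about web companies avoiding trips to a disk array comes down to the classic cache-with-fallback pattern. The sketch below is illustrative only: the store names are hypothetical and the timings are made-up round numbers (tens of microseconds for memory or flash versus roughly ten milliseconds for a random disk read):

```python
import time

DISK_LATENCY = 0.010    # ~10 ms per random disk read (illustrative)
CACHE_LATENCY = 0.0001  # ~100 us from memory/flash (illustrative)

backing_store = {"banner.png": b"<image bytes>"}  # the slow array
cache = {}                                        # the hot tier

def fetch(key):
    """Serve from the hot tier when possible; pay the disk penalty
    only on a miss, then keep the item hot for the next click."""
    if key in cache:
        time.sleep(CACHE_LATENCY)
        return cache[key]
    time.sleep(DISK_LATENCY)   # the round trip the interview warns about
    value = backing_store[key]
    cache[key] = value
    return value

fetch("banner.png")   # first request pays the disk penalty
fetch("banner.png")   # later requests are served hot
```

At web scale, every uncached click multiplies that two-orders-of-magnitude gap across millions of users, which is the "no one would click through that site" scenario described next.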
Imagine if you had information on your website, like your images, your banner ads, or anything like that, and you didn't have it stored, cached, hot, ready to roll. You had to go back to some disk and find it in a storage array. Can you imagine how painful that would be? Your user would never click through that site.

Never, never! Now put that at scale? No way. So it's a big deal. There's a nuance here, too, that I want to talk about, because a lot of people don't understand this about Fusion-io. First, they confuse it with a storage company. Then they confuse it with other flash companies. I mean, your strategy is completely different than, say, Violin, who's basically plugging into the existing infrastructure. Great strategy, love it, okay, good. Your approach is basically to eliminate the horrible storage stack, I would say, and use software, guys. And I think a lot of people don't understand that. So it's not just about the flash medium; it's about all the other infrastructure that we've built up in the storage protocols, and really changing the nature of IO. Talk about that a little bit.

You hit it on the head. The hardest part of talking about that is that it's really technical, right? It is really technical. Okay, what we do is virtualize our storage layer, and we integrate so that we appear to the host system as memory. We bypass all the legacy storage protocols, and we interface more like an IO memory subsystem than we do an IO controller. And so, it can be confusing, so people tend to say, hey, look, it's PCI Express, must be the same, you know? Right, and a Hyundai has four tires, so it must be the exact same as a Mercedes.

Yeah, and I think, again, the proof is going to be in the applications that are developed and the productivity impacts that those have.

And that's the key differentiator, and it really does drill down to our architecture.
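The "appear to the host as memory" idea can be loosely illustrated with an ordinary memory-mapped file: instead of going through read()/write() calls and a block-storage stack, the program updates persistent media with load/store-style slice assignment. This is only a sketch of the concept, not Fusion-io's actual interface, and the file name is hypothetical:

```python
import mmap
import os

PATH = "persistent_state.bin"  # hypothetical backing file

# Pre-size one page of "persistent memory".
with open(PATH, "wb") as f:
    f.write(b"\x00" * 4096)

# Map it and update it as if it were a byte array in RAM.
with open(PATH, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as mem:
        mem[0:5] = b"hello"   # a store, not a write() syscall
        mem.flush()           # push the change to durable media

# The store survived the unmap: the data is persistent.
with open(PATH, "rb") as f:
    data = f.read(5)
os.remove(PATH)
```

The contrast Rick draws next is architectural: a card that emulates a disk behind a storage controller cannot offer these memory-style semantics to applications, whereas one that presents itself as a memory tier can.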
Where other companies are really trying to take storage controllers, bridge them to PCI Express, and then have a driver interface from the host to that, we actually don't put embedded processors out there. We use the host processor, just like RAM does. Does RAM use an embedded processor? No. Could you imagine that? "Well, we put an embedded processor on our memory DIMMs so that we could offload the host CPU." Makes no sense. Do we put SRAM on these memory DIMMs, you know, as a cache, to buffer in between? No, we don't. The host CPU talks directly to memory using DMA engines, and we do the same thing with our flash. We don't go through all these legacy controllers in the storage stack. We don't bridge, we don't emulate. At the end of the day, we architected this to really be a persistent memory, which is why the SDK is possible. Other companies will talk about it and go, yeah, we're going to do something like that too. No, no you're not, because you don't interface with the host the same way we do. You're not integrated with the CPU, you haven't virtualized all those memory interfaces, and you're not running your data path on the host CPU like we are. You're using embedded processors and storage controllers, a very traditional system, and you're emulating what we do. Until you're ready to use a cut-through architecture, eliminate the storage protocols and all those controllers, and onload to the host, you can't do things like atomic writes and auto-commit memory. And these are going to be very important building blocks for software developers over the next two years.

What does the patent portfolio look like around those?

You know, if I comment and get it wrong, the legal department actually has a gentleman who played college basketball. He'll put me in a headlock and give me a noogie that really hurts. I know; he's done it before. But I believe we've talked about several dozen filings and hundreds of inventions.
We have a healthy portfolio.

Why can't somebody just copy what you're doing? You've got patent protection there, right?

They're welcome to try. It's taken us almost five and a half years to get here. It's a lot of hard work. Everything we're doing right now is step-upon-step innovation, and we've got great partners, some huge companies working with us. I mean, this list of API libraries, these are pieces that some of our biggest customers have been using in their data centers, saying, oh, wow, that's great, I'll re-architect my SQL to do this, or I'll re-architect my software to do this. We just decided to open it up to the world and make it generally available to all software developers.

So you talked about the SDK before. Talk about Microsoft Denali a little bit. Are there things in there that you can take advantage of?

Oh, absolutely. In fact, right here at HP Discover, we've got a great demo running with HP.

Describe that.

As far as the demo, we're showcasing data migration. Microsoft's added some really cool functionality where you can have a dozen servers migrating data between them across the network. It's all run natively, so you don't have to have that shared storage. You can have high-performance storage in the server and then let the application actually manage migrating its own data. So I don't have to have this whole abstraction piece where some third party is managing data migration for Microsoft SQL Server 2012; it does it itself. And that's the thing: Oracle does the same thing. A lot of different applications have the ability to manage their own data migration. There's no need to have a third party do it.

It's cleaner when the application does it.

Much cleaner, there's no doubt about it. But like you said, we put those abstraction layers in out of necessity. In the past, we did. We did, but you know what's interesting? We like to call it sucking an elephant through a straw. You have this really amazing, fast new media, right?
And then I'm gonna put it in a package and make it emulate really slow magnetic media. Then I'm gonna run it through the same infrastructure as the slow media. And I'm gonna run it over a network that can only handle so much bandwidth, and the only way to get more is, I don't know, I'll have eight connections, 16, 32 connections. Wow, look at all these connections coming into the server. Look at all my performance. Yeah, you're right: if you continue to scale out, throughput gets higher. But your latency is gonna get higher too. And this is where I'll just kind of end on this point: great, you can do a jillion IOPS. That means you can support a lot of users. Anyone can; just add more and more hardware. What really matters to a user is latency, how fast the response time is. Remember that young person I was talking about, downloading that app? He didn't care that they could serve up that app to a million people along with him. He just wanted to know how long it was gonna take, how quickly he was gonna get it. It was about response time.

I agree.

Not serving scale. It's all about latency and response time.

Rick, I was actually talking to a practitioner here at the show, and they were talking about doing VDI with one of your solutions, which was news to me. I mean, obviously flash is really important to VDI, but I always think about, you know, I need it for boot storms, but I need the disk behind it. So what am I missing?

Oh no, actually we've got quite a few customers using it for VDI. They're using it in virtualized environments: more VMs per server. What's interesting is they're also using us for IO-intensive applications. Generally you can't run those in VMs, right? Because if I've got the shared storage pool right here inside the host server, and I've got eight VMs sharing a couple of disk drives, every request is random. They're all sharing it. It's this IO blender.

But that's the ioTurbine acquisition.

Right, right.
So now, all of a sudden, we have the ability to manage not only the performance, but migrating that data between servers and supporting things like vMotion, which is why we acquired ioTurbine.

So give us an update on ioTurbine. How's the progress?

I'd love to. It's going great. Customers love it. It's a good product.

How about NAB? We were there; we did have theCUBE there. It was actually pretty cool; we were inside Intel's booth. It was a big, giant production, you know, not our little humble CUBE here. But anyway, you guys announced ioFX at the event. How's that going? You've got to update us on that.

Going well. The launch was fun. I mean, we had Rob Legato, right, Academy Award winner for Hugo, talking about his Hugo production, about using our ioMemory products for that production and its tight, tight window. We had the Woz there. We had Vincent Brisebois, who actually owns and really drove this product internally. And what's cool is all the studios we're working with, the different movies, the different production houses. I mean, we actually had Adobe, who's a great partner, rewrite parts of their Creative Suite to support ioMemory. Again, another partner saying, hey, let me take advantage of this. Let me treat this as a persistent memory tier, not as some emulated disk, right? Which makes no sense. I'm sorry, I hate to go off topic, but can you imagine if disk drives used the same primitives as tape? Tape used to use what, fast forward, rewind, play, or stop. Quite primitive, right? Okay, so imagine how absurd that would be on a disk drive. But we're going to use that same infrastructure now for SSDs? I don't know the last time I saw an SSD with a head that had to actually go out and seek anything, right? So it's time for new primitives. It's time to treat this media as what it is and take advantage of its benefits.
It's a new persistent memory tier, and it's exciting when innovative companies like Adobe see that and actually make architectural changes to their software to take advantage of it.

That was pretty cool. That was a cool show.

Yeah, that was a cool show. My first time at NAB.

All right, Rick White, hey, thanks for your time.

Thanks for having us.

Fusion-io, riding the wave. A lot of action in that space, and we really appreciate you coming on.

Thanks a lot, guys. Thanks for having me. Thank you. Take care.

We'll be right back, hold on, hold on, sorry about that, with more guests right after this. You're tethered.