Alright, great. So we're going to bring on Rob Peglar now. Hey, Rob. So, excuse me, everybody, I just wanted to get our next guest. Rob Peglar, who is a fellow at Xiotech and a friend of Wikibon and SiliconANGLE. Rob is a great technologist. He really has a good vision, understands the business, is plugged into the whole storage networking world. A very active member of the Storage Networking Industry Association. So Rob, come on in and get nice and close. Rob Peglar, everybody. Thanks, Dave. Rob, it's great to see you again. Thanks for coming on the Cube. My pleasure. Good to be here. It's good to be here at SNW. It's nice to have this out in California. Yeah, it is. It doesn't matter to you, because wherever it is, you're traveling. That's true. I live on airplanes. Unless they hold it in Missouri. You're right. You never know. Winding down day two, our final day here at SNW, Dave. So thanks for coming by. Yes, so Rob is, as I was saying to the audience, plugged into storage. Came by earlier. Really active in SNIA. Yeah, I'm on the board of directors of SNIA. And so forth. So let's talk about, I mean, we want to pick your brain about what's going on at Xiotech. But first we want to pick your brain about what's going on at the event here. Oh, yeah. You're a technology watcher. You're sort of a trend spotter. What are you seeing here that's interesting? Well, as you've probably seen from the other guests that you've interviewed, lots of people are very interested in cloud. They want to know more about it. There's a lot of material surrounding it, which is good. But in addition, there's material on solid state. There's still material on virtualization at this show. I mean, that's why people come to this show, it's really the content and the speakers. You know, I've been doing the tutorials for a decade now. And SNIA puts those together. But there's some really interesting stuff. You know, people are looking at paired storage, right? 
The combination of solid state and rotating. They're looking at several different types of hybrid approaches to storage. So it's a hybrid. So, hybrid, paired. What's the difference between hybrid and paired? So, yeah. So hybrid, you know, and there are some examples on the market now, is a single device with both solid state memory (Greg Schulz was telling me about this) and rotating media in a single drive. And then paired storage is the use of separate drives of solid state media and separate drives of rotating media, but together in a single tier, if you will, or as a single unit acting like one pool. Like a Nimble. Exactly. Yeah. You can think of it that way. Yep. Okay. So that's interesting. And we've seen some flash startups. Yeah. Everybody wants a piece of that Fusion-io action. Really, what? There are a lot of interesting techniques. You know, there are several flash-on-PCI-Express players now. And then there's, of course, the flash drive people, and we're involved in that space as well. But yeah, it's actually pretty exciting. We heard Brian Reagan tell us that you guys have had some success in applications. We have. Absolutely. And actually have caused some people to rethink some of their flash decisions. Is that right? It is right. You know, there is this camp that says, well, I have to have shared storage, right? Multi-tenant storage, if you will. Or, I've got to have non-shared storage. You know, it's almost a religious war between the two. And what we're doing for applications is fitting into both worlds. We can act as a non-shared device, directly attached to a server or a cluster of servers, with a Hybrid ISE. Or we can use it as shared storage as well: multi-tenant, lots of different applications, dozens and dozens of servers sharing the space. 
And we try on purpose to fit into both worlds, because you're going to see some applications like Microsoft, for example. They're telling their Exchange 2010 users to go back to direct attach, you know, after all these years of using shared storage. Microsoft has its head up its ass, I said. Well, there you go. Well, but it's sensible in a lot of use cases. It is. They're influential. Yeah. You know, I'll tell you, it doesn't always make economic sense, in my opinion. True. You have to study it. That's right. It really depends. It's one of those, it depends. If you want to share the storage across an application portfolio, you're going to save money doing that. At scale. That's right. If you're, you know, under a thousand seats, then maybe direct attach makes a lot of sense, right? So we're trying to fit our portfolio. Or if the politics of your organization are such that you don't want to get your storage from the storage guys. There you go. You're the application group and you want to maintain everything and manage it. And that makes sense too. It does. I mean, I know, in particular, one, you probably know them too, large financial institution in New York that approaches it that way. And they have more than a thousand seats. They do. In Exchange. A few more. That's right. And they have their head up their ass too, but for good reasons. Let's talk about VDI a little bit. Sure. We were at the TechTarget session. I think it was last week. Yeah, it was last week. Stu and I went. Okay. And our new friend Brian Madden was speaking. There you go. He's quite a character. He is. He is. Very knowledgeable in that space. And he was very pragmatic too. He was like, look, get virtual desktop out of your data center unless it really needs to be there, you know. And I thought that was pretty pragmatic for a guy whose whole business is virtual desktop. But what are you seeing in desktop virtualization? You're starting to see an uptake. 
What's Xiotech's, you know, unique value proposition? Yeah. So I'll take the uptake first. There are certain segments of the market where we are seeing an uptake, and it's pretty interesting. The first two segments we saw rapid growth in were education, both K through 12 and university. And the second segment was healthcare, hospitals in particular. And those were kind of the early adopters, if you will, of VDI. And they did it for all sorts of, you know, correct economic and technological reasons. Both of those verticals tend to be very thin on IT staff. You know, a school, a hospital, that's not what their mission is, right? I've got to educate students, or I've got to heal patients. IT is kind of an afterthought, right? So if they're thin on staff, they want a very simple deployment, and they want to be able to scale that out. You know, I've got 400 nurses coming on shift now. I've got to have a virtual implementation because I can't afford eight helpdesk people running around trying to fix problems. So those two led the adoption. Now we're seeing, I'd say, incremental, relatively slow but incremental progress in some of the enterprises. And again, you see it more in the larger ones, if I can justify it, like you said: do I have enough seats? Am I at scale? Can I do that? And then on the back side, the problem is, well, what does my storage infrastructure need to look like in order to support this? In order to survive boot storms? In order to support, you know, 1,000 people logging in at the same time? And then the other interesting and actually pretty cool dynamic is the concept of bring your own client, right? I don't care if you use an iPad or a PC or a smartphone or whatever. I'm going to give you a virtual desktop image. You provide your own client, whatever you like. I'll give you the desktop image and I'll either stream it to you or you can log into it remotely from whatever client device you like. 
And some companies are looking at that as, maybe I can save money by not providing clients to my own people, because they're going to have their own choices anyway. And then again, at the back end, you've got to worry about the storage. And that's why we've been very successful, not only with the rotating ISE product but with the Hybrid ISE as well, carrying these phenomenally concentrated workloads. And they're all random, right? We've talked about that before. They're all random. Can you talk about that a little bit more? So talk about sizing, the nature of the workload. Why is sizing so important in a virtual desktop environment? What are the nuances there? Yeah, that's a great question. So the sizing does turn out to be really important, and there are really two ways to do it. The traditional way is to size by capacity, right? How much space do I need per desktop? And there are various ways to play that game. You can have hypervisors dedupe the space down or try to shrink it. But what doesn't go away is the IO, right? How many IOs do you need per desktop on a sustained basis? And then, how many IOs do I need if, say, I'm going to have 200 people boot their machines at the same time, or log in or log off at the same time? So you're trying to survive those peaks, because what the users won't tolerate is, well, I've got my desktop here, and you know what? When I log on, it's actually three times as slow as if I had a disk inside it. So you know what? Give me the disk back, because I don't want to wait that long for my machine to log in. So you have to size by IOs, is the bottom line. Capacity is a very simple thing, but you size by IOs. So you talk to the different VDI providers, and there are three of them, of course. I've seen as low as five IOs per desktop. I've seen as many as 30 IOs per desktop. It really depends on who you talk to. 
But if you do the math now: gee, if I have 200 desktops at 30 IOs each, I need 6,000 IOs right now just to sustain this load. Can I do that in 3U? Or if I can't, how many units do I need? How many spindles do I need? And how does that map to my capacity requirement? Sometimes they diverge. They do. And the other thing is, I mean, the only thing server virtualization and desktop virtualization have in common is the word virtualization. Pretty much. A lot of customers go, hey, I got my server virtualization from VMware, so I guess I'll buy my desktop virtualization from them too. And that may or may not be the right decision, but that's not the right reason for the decision. And so desktop workloads are very write-intensive, aren't they? They can be. Much more so. They can be. Absolutely. And we're writing and updating, you know, the typical desktop workload. Sure. There is such a thing as a typical desktop. More so oftentimes than a server workload, again, depending. Right. So that's another unique thing. So my question is, why ISE in VDI environments? Yeah. What is special about the Xiotech ISE technology, and why is it such a good fit? Yeah. So boiling it down, the reason is really what we call IO density, right? If you're at the physical disk level, for example, there are lots of metrics. You say, how many IOs can I reasonably get out of an individual spindle, with good response time, response time that the users would tolerate? Well, if you talk to some people, maybe it's 150, or maybe even 180. But what we're doing with ISE technology is getting 300 even on a two-and-a-half inch drive, and 400 on a three-and-a-half inch drive. So that doesn't sound like a lot of difference per drive, but it adds up. Even at a 3U scale, that's thousands of IOs of difference just in that 3U platform. And if I have a rack of these, it's tens of thousands of IOs of difference. 
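The arithmetic Peglar walks through can be sketched as a quick calculation. The per-desktop and per-spindle figures are the ones quoted in the conversation; the helper names, the 20-drives-per-3U count, and the hybrid multiplier comment are my own illustrative assumptions.

```python
# Back-of-envelope VDI sizing by IOPS rather than capacity, using the
# figures quoted in the conversation: 5-30 IOs per desktop, ~150 IOs
# per conventional spindle, ~300 per 2.5" drive with ISE.

def required_iops(desktops, iops_per_desktop):
    """Sustained IOPS needed for a pool of virtual desktops."""
    return desktops * iops_per_desktop

def spindles_needed(total_iops, iops_per_spindle):
    """Drives required to service the load (ceiling division,
    ignoring RAID and cache effects)."""
    return -(-total_iops // iops_per_spindle)

load = required_iops(200, 30)       # 200 desktops at the high estimate
print(load)                         # 6000, matching the math in the text

print(spindles_needed(load, 150))   # conventional spindle rate: 40 drives
print(spindles_needed(load, 300))   # at the claimed ISE rate: 20 drives

# Per-enclosure density: assuming 20 drives in 3U (an illustrative
# figure, not a Xiotech spec), the 150-IO-per-drive delta becomes
# thousands of IOs per 3U, and tens of thousands across a rack.
drives_per_3u = 20
print((300 - 150) * drives_per_3u)  # 3000 extra IOPS per 3U
```

The same framing extends to the hybrid configuration discussed next, where serving hot reads from solid state multiplies the per-3U number several times over.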
And that's just with an all-rotating solution. When you go to the paired storage, the hybrid solution, that number goes up by a factor of four or five. Per 3U. Because you're right, the workload is mixed. I might have more writes than reads. But if I can satisfy some of those IOs, the most frequently occurring IOs, out of the solid state portion, especially reads, now I've got a chance to service that workload 10, 20 times faster than a rotating drive ever could, even using the RAID techniques that we have. Right. So the IO density, the sheer amount of IO horsepower that you can get in a given footprint, that's what makes us different. You guys have really taken a different approach here. And I know we've had Steve Sicola and Rich Lary. Oh, yeah. We've had them on here before. I think John's had Steve on. And yeah, he had Steve Sicola on with Alan. Okay, right. Yeah. You did one with Alan in Palo Alto. Right. And you've really come at it differently, haven't you? You're kind of trying to simplify the storage, from the standpoint of let's make the storage storage and let's let the application do its job. Right. And so you've come up with this set of APIs, you call it Cortex. Right. So what led you to that approach and that decision to go in that direction? That's a great question. So, you know, several years ago when we as a company began to even look at ISE technology, and then we acquired the group from Seagate that actually put it together, Sicola's group, we said, well, for years, literally decades, we've had the relatively fat controller and then relatively unintelligent bays and drives underneath it. And that design persists to this day. So we took a step back and said, is there a better way to solve the problem? We know that IO density needs to grow because the CPU horsepower is getting so large now. 
The CPUs are so fast that there's this divergence between what the CPUs can do and what the storage has traditionally been able to do. We need to close that gap. It's diverging now. These 12-core CPUs are very fast, very powerful. Now I need some IO to go along with that. So the question is, do you put controllers in the way, with a lot of feature function that could detract from the actual job of getting IO serviced? Or do you take a step back and say, I'm going to make the enclosure itself very intelligent, very reliable, so I don't have to change it out a lot, I don't have to worry about common mode failure, and then make that thing perform as fast as it can, but also let it attach directly to a server? So some people call it the disaggregation of the array, right? Taking the traditional roles of the controller away. You know, we still do some functions in the controller, like mirroring and copying LUNs, things like that, because we can take advantage of that in tiered storage. But some of the high-stack functions, you had it exactly right, Dave. The applications are doing that for themselves. They're replicating. They're, in some cases, deduping. They're taking their own snapshots, things like that. Like Microsoft Exchange, for example, it's able to replicate itself. All the databases have replication engines, right? So you take that, combined with the CPU horsepower, because now I've got CPUs that can handle that sort of thing. You know, 10 years ago I would never dream of doing host-based replication. It would be deadly slow and I'd steal too many cycles from the app. That wouldn't scale. Wouldn't scale. That's exactly right. Now I've got all these multi-core CPUs. 
I can take advantage of that and have a core to work on moving the bits of the replication, or doing dedupe, or doing compression, or doing snapshots, techniques like that, protecting its own data, and then let the storage array down at the back end service the IOs, which is the job it's supposed to do anyway. So for years we've seen function migrate out of the host to the SAN. Yeah, it went outboard. And now it's coming back. Of course we've talked about this. It's got persistent flash on the other side of the channel. And you're seeing some people like Fusion-io doing some really interesting things to try to eliminate storage protocols, which is quite clever. Yeah, absolutely. And it is clever. I mean, putting stuff on the memory bus, there's some application for that. And there are some drawbacks to it too, but in certain cases that works well. And then if you put flash inside a server, or some of it, and then couple it with very fast media out on a channel, or even directly attached, that's an interesting proposition. So with cloud, with application functionality, particularly around recovery, becoming more robust, with big data, the trends around big data, and we'll talk about that a little bit: do you think the days of the SAN, or the SAN as we know it, are numbered? Well, I'll give you a qualified no. There is, still to this day, and will be for the foreseeable future, a good set of use cases for shared storage. No question about it. Clustering is one example. People are still going to run clusters. Clusters still require some sort of shared mechanism between the nodes. Now, there are different ways to architect how that shared storage looks, right? Can I share storage literally inside a server? Maybe yes, maybe no. Do I need an external channel to do that? Maybe yes, maybe no. So there are different ways to do it. Well, it's an interesting problem if the industry could solve it. Yeah, absolutely. 
That would be, I mean, somebody would be the next Veritas if they could solve that problem. Well, it's an interesting thing, you know. What is the difference between memory and storage? What is the difference between bus and channel? Right? They're kind of blurring. That's actually a good thing. Yeah, right. I think so. Right, right. Good. How about big data? I mean, it's a hot topic. Yeah. I don't know if you guys are directly sort of planning stuff there, but it's getting interesting. It's getting interesting. It is getting interesting. What's your angle on that? I'll tell you what. We look with very great interest on some of the MapReduce techniques and Hadoop and things like that, which are taking these amazingly large quantities of data and trying to analyze it. Right? And of course, now you're in the world of IOPS, but you're also in the world of throughput. Mm-hmm. Right? And lots of these big data workloads are actually throughput-dominant versus IOPS-dominant. Right? So your ability to stream data matters, because what these MapReduce jobs want to do is read a lot of data at once, or as fast as they can, and then reduce it down to some smaller set and write it back out. And the HPC world, both commercial and academic, is kind of that way as well. They're much more throughput-dominant. So now I need something, be it channels or inside memory or whatever, that has this blazing fast throughput to feed these MapReduce problems that need that kind of scale. Mm-hmm. Because these guys are all about scale, right? Right. By definition, big data is about scale. So how does a traditional storage company, or any storage company, for that matter, play in that world? It's very software-based. It is. It's all open source. A lot of it is open source, right? A lot of it is open source, right? Cassandra. Absolutely. 
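The throughput-dominant framing can be made concrete with a quick calculation. The 115 TB figure is the average instance size from the Hadoop World survey mentioned in this conversation; the bandwidth figures and function name are my own illustrative assumptions.

```python
# Throughput-dominant sizing: for a MapReduce-style full scan, what
# matters is how long it takes to stream the dataset once, not how
# many small random IOs per second the storage can service.

def scan_time_hours(dataset_tb, mb_per_sec):
    """Hours to read a dataset once at a sustained sequential rate."""
    total_mb = dataset_tb * 1_000_000   # decimal TB to MB
    return total_mb / mb_per_sec / 3600.0

print(round(scan_time_hours(115, 1_000), 1))    # ~31.9 hours at 1 GB/s
print(round(scan_time_hours(115, 10_000), 1))   # ~3.2 hours at 10 GB/s
```

The order-of-magnitude gap between those two results is why aggregate streaming bandwidth, rather than IOPS, dominates the storage design for these workloads.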
Do you envision that you guys will actually put development resource into providing open source code? We have, through the Cortex API. That's actually all open source. It's all RESTful. It's based on the RESTful protocol. Do you see that extending potentially into big data? Oh, absolutely. Absolutely. So that's a contribution that you might make to the community, and by virtue of that contribution, maybe improve your leverage. Because imagine if you're an app that's doing data analysis. Today you can write code that allocates memory in a machine from the OS. What you can't do is allocate storage. You have to have some intervention to do that. I've got to give you a LUN. I have to map the LUN to you. Well, imagine if the big data applications could reach out and grab a chunk of storage. Maybe it's temporal. Maybe it's permanent. But they could get it. They could use it. And then they could release it. So this is the elasticity part of it all. And with a RESTful protocol, if you can talk to storage directly, which you can to ours through Cortex, that gives you a leg up. And it's all open. So I think that's really interesting. Interesting angle, Rob. Yeah. I think we were at Hadoop World in October. And Mike Olson gave a talk. There were about 900, maybe close to 1,000 people there. He did a survey. It was kind of an informal survey. But the average database instance among the attendees was 115 terabytes. And I was one of the only storage people there. And I said, oh, wow, there's got to be an opportunity here from this standpoint. So yeah. But it's different. It is different. It's not just conventional, here's a box, store a bunch of data, because that just won't fly. That won't. Right. Well, Rob, listen, thanks for stopping by theCube. Oh, my pleasure. Great to talk to you. Always a pleasure. Rob Peglar from Xiotech giving us his perspectives on SNW. We did a quick rundown. And pretty much wrapping up the day here. 
Here in sunny Santa Clara. Santa Clara. That's right. It's nice to be out in Silicon Valley. So Rob, thanks for stopping by. You bet.
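The elastic, application-driven storage allocation Peglar describes could look something like the sketch below. Only the existence of an open, RESTful Cortex API comes from the conversation; the endpoint path, field names, and client class are hypothetical illustrations, not the actual Cortex API.

```python
# Hypothetical sketch: an application grabbing and releasing a chunk
# of storage over REST, the way it would allocate and free memory.
import json

class RestStorageClient:
    """Builds the requests an application might issue to allocate,
    use, and release storage elastically (illustrative only)."""

    def __init__(self, base_url):
        self.base_url = base_url.rstrip("/")

    def allocate_request(self, size_gb, temporal=True):
        # Hypothetical POST to create a volume; 'temporal' marks
        # scratch space the app intends to release when done.
        return {
            "method": "POST",
            "url": f"{self.base_url}/volumes",
            "body": json.dumps({"size_gb": size_gb, "temporal": temporal}),
        }

    def release_request(self, volume_id):
        # Hypothetical DELETE to give the capacity back to the pool.
        return {
            "method": "DELETE",
            "url": f"{self.base_url}/volumes/{volume_id}",
        }

client = RestStorageClient("http://array.example.com/api")
req = client.allocate_request(500, temporal=True)
print(req["method"], req["url"])
```

The point of the design is that no administrator has to carve and map a LUN in between: the application itself requests, consumes, and returns the capacity.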