Okay, we're back. This is Dave Vellante, and I'm with Stu Miniman; we're with Wikibon.org, and this is SiliconANGLE.tv's The Cube. We're here at the Dell Storage Forum in Boston 2012, and we're here with Carter George. Carter is the executive director of Dell's storage strategy and came to Dell via an acquisition; we were talking at the top of the show about Dell's very aggressive acquisitions. He came from Ocarina, which is a storage optimization specialist. Carter, welcome to The Cube. Thanks, it's good to be here. I believe this is your first time on The Cube. It is, yeah. Hopefully we will see you many more times. So, as executive director of Dell storage strategy, you've got a growing portfolio that you're looking after. Tell me in a nutshell, what is Dell's storage strategy? Well, we're a company in transition. Dell is becoming more and more of a solutions provider rather than just a seller of products, and storage is really a key part of that. In storage, that transition is manifesting itself as a shift from being a reseller of other people's products to being a true storage company, where we create new IP and new solutions, and actually develop new storage products soup to nuts and bring them to market. And the core vision we have that frames all of those acquisitions and the other things we're doing is sort of a virtuous circle, where the first half is bringing high-end enterprise features and functions down to the mid-market customer: being able to do things that previously maybe only the Fortune 500 data center could have afforded to do. We want to bring those to a much broader audience. And as we succeed in doing that, we should also have created mid-range products that we can take and use to change the economics of enterprise storage. So, what's an example of some of those functions that you'd expect a Fortune 50 or Fortune 500 company to have that SMBs traditionally have not been able to access?
Well, for example, all of our storage architectures are scale-out, where we take industry-standard building blocks and allow you to group them together to make small, medium, large, and extra-large systems. So a small customer might buy just one at a very reasonable price point, while a large customer might buy seven or eight or ten of those, group them together, and be able to do a high-end thing for a fraction of what a traditional high-end enterprise system might have cost. Can you talk a little bit about integration? You've got all these assets that you've acquired, and oftentimes storage companies will acquire these assets and just let them be. Some companies run their storage acquisitions almost like a GE, like a portfolio. My sense is Dell has a different strategy there. Is that true? And can you talk about the integration? Very much so. Our ambitions are much more aggressive. We're really trying to take one plus one and get three out of it. When we look at these acquisitions, they're all building blocks toward this fluid data vision of the future that we have. We really looked out three to five years and asked, what kinds of problems are customers going to be facing in storage? You have the tremendous growth of unstructured data, you have cost pressures, all sorts of things. We asked, what does storage need to look like then? And we've set out to get the pieces we need to put that together. Starting this year and on into the future, you're going to see a Dell storage personality emerging, where the things we've acquired will start to look more consistent and compatible, more like a Dell thing rather than the standalone thing that we bought. So, you talk about a fluid data architecture and a fluid data vision.
Tell us what that is. It sounds like great marketing, but tell us why it's more than just great marketing. What's underneath the covers? Well, fluid data is our architectural vision of how storage needs to work together to solve customer problems. There are several tenets to it; that scale-out design is one of them. Right place, right time is another one, where we feel data needs to be able to move from one place to another, transparently, to get data to the right place at the right time without expensive storage admins or a lot of manual process. We really feel the building blocks need to work together, by policy and automatically, to do the things that need to get done. That's what fluid data is really all about. We like that fluid metaphor: water seeks its own level, it flows to the right place. It was a phrase that Compellent had used, and before we acquired Compellent we still liked that fluid idea, but we looked at alternatives. Wet data? No. Liquid data? It wasn't really working for us. So when we got Compellent, we really liked the phrase, and now we've extended it to talk about the whole picture. So when you think about fluid data, one more, if I may: when you think about fluidity, and because I met you when you were first at Ocarina, you think about moving data around the storage portfolio. A lot of times you've got to optimize the data, whether you compress it or de-dupe it, and then you've got to rehydrate it, and that's time intensive, it's overhead intensive. Is that part of the vision, where you can actually seamlessly move data throughout the storage portfolio without having to rehydrate? And is that part of the actual execution? Yeah, absolutely. I think two key concepts are tiering data to get it to the right place.
And that means both traditional tiering up and down inside a product set, but also horizontal tiering, where we move data from one product to another through a workflow. If you think about that, optimization becomes really important. If you're going to programmatically move data from one platform to another on a regular basis, you really need to make that very efficient. So de-dupe, compression, erasure coding, thin provisioning, these are all ways to get the data smaller and keep it small while you're moving it. And when I talk about a Dell personality emerging, you'll start to see some things be consistent and compatible across multiple products, and de-dupe and compression are among those. The Ocarina technology is being integrated into the file system, into the object store, into the backup target, into the EqualLogic components, and eventually you'll have that consistent and compatible de-dupe and compression on every Dell platform. That means we can move data in its most de-duped and compressed form without having to rehydrate it. Just to note, we've been talking a lot on The Cube about converged infrastructure for the last several months, a couple of years now. Take it away. Absolutely. So actually, Dave, I was wondering if I could take a slightly different direction. We talked about Ocarina and the de-dupe and compression. Most people really know EqualLogic, Exanet on the NAS side, and Compellent. The more recent acquisitions are RNA Networks and AppAssure. Can you frame up where those fit in the portfolio and just give us a little bit of data on what those are? Yeah, absolutely. So RNA is a very small company, one that probably not a lot of people have heard of, but it's become a very important part of our plans going forward. You know, we talked about tiering, and Dave mentioned converged infrastructure.
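The idea of moving data between platforms in its deduplicated form, without rehydrating it, can be sketched in a few lines. This is a hypothetical illustration, not Dell or Ocarina code: each "platform" is a content-addressed chunk store, a file is a recipe of chunk hashes, and migrating a file only transfers chunks the target doesn't already hold.

```python
import hashlib

CHUNK_SIZE = 4  # tiny fixed-size chunks for illustration; real systems use KBs

def chunk(data: bytes):
    return [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]

class ChunkStore:
    """A toy content-addressed store: one copy per unique chunk."""
    def __init__(self):
        self.chunks = {}   # hash -> chunk bytes, stored once (deduped)

    def put(self, data: bytes):
        """Ingest data; return the file's recipe (ordered chunk hashes)."""
        recipe = []
        for c in chunk(data):
            h = hashlib.sha256(c).hexdigest()
            self.chunks.setdefault(h, c)   # duplicate chunks stored only once
            recipe.append(h)
        return recipe

    def migrate(self, recipe, target: "ChunkStore"):
        """Move a file to another platform without rehydrating: only chunks
        the target is missing are transferred, still in deduped form."""
        sent = 0
        for h in recipe:
            if h not in target.chunks:
                target.chunks[h] = self.chunks[h]
                sent += 1
        return sent
```

For a 16-byte input with three repeated chunks, only two unique chunks ever cross the wire, and a second migration of the same file sends nothing at all, which is the efficiency the horizontal-tiering workflow depends on.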
RNA is a software platform that lets us extend our storage up into the application server. If I think of a platform like Compellent, it supports a very rich set of tiers inside the array. What we want to do is extend that tiering up into the application server, so that resources like flash or SSD in the application server can become logically part of the storage. We can move hot data right next to the CPU and access it there, while still having things like snapshots and replication that are driven in the storage stay consistent and accurate. So we're really excited about RNA, and you'll be hearing more about that through the course of this week. AppAssure is our first foray into the world of data protection, and that's a very important topic for most customers. As part of fluid data, we are forecasting a major convergence of what historically have been two separate and independent areas of data protection: backup software, and the whole area of snapshots and replication. We see these two things coming together into a single managed data protection environment, and AppAssure is the platform that we're going to build that around. Okay, great. So if we look specifically at converged infrastructure: obviously Dell's got the components, you've got servers, you've got networks, and you've got the storage pieces. But we really look at the orchestration and automation layer as the real killer piece of a converged infrastructure. So can you talk about how all these different software pieces and these acquisitions roll into something like the VIS orchestrator that you have for your converged offerings? Yeah, we do have orchestration and automation happening at the server layer with VIS, and we also have what we call workload-aware storage management being developed in Dell Storage.
And that will provide a framework that we can plug into VIS, but also into other server management tools like vCenter, for example, or the Microsoft management suite. So we'll have a consistent way that you can reach out and manage Dell storage, whatever orchestration layer the customer chooses. And that will include a Dell Storage API, let's call it, so that if a developer or a management tool writes to that API, it in turn will manage all the different Dell storage elements. You don't have to develop specifically for EqualLogic or for Compellent or for our file system for things like provisioning; you can write to one API, and that'll take care of it for the whole portfolio. Great, that kind of builds on IBM's recent announcement on PureSystems, where they called it "expert aware": understanding the workload and how it fits in the entire solution. It sounds like your workload-aware approach is along that same vein. Absolutely. I mean, today, when you provision storage, you're really trying to provision multiple things at the same time. Sure, I might need 10 terabytes of space, but I'm also trying to figure out: is that a fast 10 terabytes or a slow 10 terabytes? Is it a highly protected, fault-tolerant 10 terabytes, or just a pool of storage that I can throw stuff in? What we'd like to see is the storage infrastructure taking care of that for you. If you say, I need 10 terabytes for VMware or for Exchange or for a mission-critical database, there'll be templates there that help you solve for the protection level, the performance level, and the capacity. Carter, the flash market's pretty frothy right now.
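The "one API for the whole portfolio" plus workload templates idea can be sketched as a thin routing layer. Everything here is hypothetical, with invented class and template names, not Dell's actual VIS or Storage API: a caller provisions through a single interface, a workload template fills in performance and protection settings, and per-platform adapters translate the call.

```python
# Workload templates solve for tier/protection, not just capacity.
TEMPLATES = {
    "vmware":   {"tier": "ssd",  "replicas": 2, "thin": True},
    "exchange": {"tier": "sas",  "replicas": 2, "thin": False},
    "archive":  {"tier": "sata", "replicas": 1, "thin": True},
}

class Backend:
    """Adapter base class; one subclass per storage platform."""
    def create_volume(self, name, size_tb, settings):
        raise NotImplementedError

class EqualLogicBackend(Backend):
    def create_volume(self, name, size_tb, settings):
        # A real adapter would speak the platform's management protocol.
        return f"eql: {name} {size_tb}TB tier={settings['tier']}"

class CompellentBackend(Backend):
    def create_volume(self, name, size_tb, settings):
        return f"cml: {name} {size_tb}TB tier={settings['tier']}"

class StorageAPI:
    """Single entry point: one provisioning call works on any backend."""
    def __init__(self, backends):
        self.backends = backends

    def provision(self, platform, name, size_tb, workload):
        settings = TEMPLATES[workload]   # template picks tier and protection
        return self.backends[platform].create_volume(name, size_tb, settings)
```

The point of the design is that an orchestrator (VIS, vCenter, or anything else) writes to `StorageAPI.provision` once; adding a new platform means adding one adapter, not changing every caller.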
You just saw one of your competitors make a big acquisition, reportedly at $400 million and change, for a company that doesn't even have product out. You're seeing huge valuations for companies that don't have product, or, in the case of, say, Violin Memory, a reported $800 million valuation; you see Fusion-io. What's your take on the whole flash situation? Is it as game-changing as a lot of people believe, and what's Dell's angle? Yeah, we really believe it is. We expect that within three years, for most customers, most active data will have moved from disk to memory. Now, that doesn't mean all your data; the average customer has 10, 15, 20% of their data active, and the rest of it is some version of cold. So we're talking about the active data, but we really see a major paradigm shift as that active data moves to memory-based storage. And we expect Dell to be a leader in that space. This is an area where converged infrastructure really does matter, because as servers get faster and faster, storage has to get faster to keep up with the capability of those cores. And the best place to be, if you want to be fast, is close to the CPU. So with things like RNA, we're going to be enabling memory-based storage not only in the shared storage fabric on Compellent and EqualLogic, but also right up there on the same bus as the CPU. Our goal is to tie that all together into a single coherent fabric. And by the way, we would agree: we think most if not all active data will be on flash within the next three to five years. And it seems like, from an application development standpoint, there's a real opportunity to change the way we've written applications for the last 40 years, sort of reaching around the horrible storage stack, if you will. Talk about developers. You mentioned the Dell Storage API.
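The hot-data idea described above, that 10 to 20% of data is active and belongs on fast media near the CPU, is essentially promotion-based tiering. Here is a toy sketch under stated assumptions (the threshold, capacity, and names are invented, and this is not RNA's actual implementation): blocks that are read often enough get promoted into a small fast tier.

```python
from collections import Counter

class TieredStore:
    """Toy two-tier store: cold blocks on 'disk', hot blocks promoted to 'flash'."""
    def __init__(self, flash_capacity=2, hot_threshold=3):
        self.disk = {}                  # slow tier: block id -> data
        self.flash = {}                 # fast tier, limited capacity
        self.hits = Counter()           # access counts per block
        self.flash_capacity = flash_capacity
        self.hot_threshold = hot_threshold

    def write(self, block, data):
        self.disk[block] = data

    def read(self, block):
        self.hits[block] += 1
        if block in self.flash:
            return self.flash[block]    # fast path: served near the CPU
        data = self.disk[block]
        # Promote once a block proves hot and there is room in flash.
        if (self.hits[block] >= self.hot_threshold
                and len(self.flash) < self.flash_capacity):
            self.flash[block] = data
        return data
```

A real system would also evict cooled-off blocks and keep the flash copy coherent with array-side snapshots and replication, which is exactly the consistency point the speaker raises about RNA.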
What is Dell's outreach to developers? There's a big movement around DevOps, the intersection of application development and infrastructure operations. What are your thoughts on that, particularly as it relates to new applications that might come into the marketplace? Well, I guess we view that as two different audiences. For the application developer, the group building business applications in support of a company's mission, we're not really asking them to do anything different or unique to Dell. We want to deliver storage that's compatible with the way they do things today, with the tools they use, and to let this cool new infrastructure just fit in there transparently. For the storage developer, the channel partner or the group developing management tools, that's where we're going to offer some things to that development community, the people developing new storage widgets and apps and tools. That's where the Dell Storage API comes in. We really view that as somewhat different from the business application audience. Okay, I wanted to switch gears a little bit and ask you about capacity. As capacities have grown to mammoth sizes and the data behind an actuator gets bigger, performance is an issue, but there's also the issue of rebuild times, RAID rebuild times. There was RAID 5, then we got RAID 6, and even during a long RAID 6 rebuild there's exposure. A number of companies are talking about erasure coding as a way around that. Do you have any thoughts on that, and what's the strategy there? So, maximum efficiency is one of the tenets of fluid data. Dell is really investing in being the best in storage at translating a certain amount of actual logical data into how much physical space that takes.
And there is no one right answer for that, because you have to define at the customer level how much protection you want. What are you protecting against? So things like compression and de-dupe and thin provisioning all play into this, and we view erasure coding as one of the most important aspects of it going forward. In a sense, RAID 5 and RAID 6 are examples of erasure coding, but just for a specific failure case, where you're encoding data to protect against the loss of a disk drive. Erasure coding in general allows you much more flexibility in what you're protecting against. You can turn the knob to say, well, I'm protecting against disk failure, or enclosure failure, or rack failure, or site failure, or citywide disaster. And mathematically, a good erasure coding system will then figure out how much parity, how many copies, and where they need to be distributed geographically to protect against the scenario you want protected. We really think that's where it's going. We think there's a lot of opportunity for real differentiation in the math, in the actual mathematics used to figure this sort of stuff out. There's always a trade-off of space, cost, and protection, but what we want to give people is basically a big knob that goes to 11. If you want to protect against everything, you'll be able to turn the knob there and get the most efficient possible space utilization for that level of protection. And in that instance, you throw cores at the problem, right? That's right, and it's very math heavy. So is that something that's part of the portfolio today? Is that a potential future acquisition? Is that invention? It's a little of all those things. We do have erasure coding shipping in our scale-out object store, the Dell DX Object platform, but you can expect to see erasure coding propagate out through all the platforms over time.
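The RAID 5 special case the speaker mentions makes the general idea concrete. Here is a minimal sketch, not any vendor's implementation: k data blocks plus one XOR parity block, so any single lost block, whether data or parity, can be rebuilt from the survivors. General erasure codes such as Reed-Solomon extend the same encode-and-place principle to tolerate multiple simultaneous failures across enclosures, racks, or sites.

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    # Bytewise XOR of two equal-length blocks.
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data_blocks):
    """Return the data blocks plus a parity block (XOR of all data blocks)."""
    parity = data_blocks[0]
    for blk in data_blocks[1:]:
        parity = xor_blocks(parity, blk)
    return list(data_blocks) + [parity]

def recover(blocks, lost_index):
    """Rebuild the block at lost_index by XOR-ing every surviving block.
    Works because XOR-ing all k+1 blocks of a stripe yields zero."""
    survivors = [b for i, b in enumerate(blocks) if i != lost_index]
    rebuilt = survivors[0]
    for blk in survivors[1:]:
        rebuilt = xor_blocks(rebuilt, blk)
    return rebuilt
```

This single-parity code spends one extra block per stripe to survive one failure; the "knob" the speaker describes is choosing how many parity blocks to add and where to place them so the stripe survives the chosen failure domain.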
This is an area where, again, we want to see a Dell personality emerge, and you'll see that knob that you get to set be a consistent thing across the portfolio. So Carter, my last question for you, since we're running out of time: how do you define leadership, and what makes Dell a leader in storage? Well, in fluid data we have a core set of tenets, the things that we think are really important in storage going forward, and those tenets align with where we're investing. You could measure revenue or market share, those sorts of things, but we think those are after-the-fact measures. We want to be clear thought leaders in the technology, in those six or seven areas that we've identified. Excellent. Carter George, thanks very much for taking some time out and coming on The Cube. Great vision. Good luck with the strategy and the rest of the show. Thank you very much. All right, keep it right there. We'll be right back with more coverage live from the Dell Storage Forum 2012 in Boston. This is SiliconANGLE TV's The Cube. Keep it right there.