and critical, so that brings us to Rick Wallsworth. Rick, welcome. Good to see you. Rick, my colleague John Furrier. Rick is the Director of Product Marketing for the EMC Infrastructure Management Group, and they've got some cool products in there, in particular RecoverPoint, which is a relatively new system. I mean, it's been around, it came in through acquisition, but you guys have been hardening this capability. But before we get into that, I want to talk a little bit about enterprise apps coming to the cloud. We're seeing this amazing growth of cloud. CIOs are concerned about security. CEOs want them to get to the cloud as fast as possible, so their response has been to virtualize. And then, of course, they're faced with: all right, how do I make these enterprise applications, SAP and Oracle and Microsoft, enterprise ready? You guys have some experience there, so maybe talk about that a little bit.

When you move applications into the cloud, the expectation is that you're going to get better service-level performance. The reality is that when you start to virtualize and consolidate, you lose some control. So the question is: how do I take advantage of the capabilities in the infrastructure to deliver service levels that meet the needs of the enterprise?

So that brings me to the notion of IT as a service, and specifically data protection as a service. A lot of people have said data protection's broken. We tend to think about data protection as a bolt-on, or as an afterthought: I get this application, I understand its requirements and its performance requirements, and then, oh, I've got to protect the data. Is that changing?

Absolutely, it's changing. And what happens is the economics start to become very compelling.
When you look at the economics of consolidating the infrastructure and bringing that in, I now have the ability to outsource my backups and outsource my replication to a service provider that pays for the WAN, the connectivity, and the recovery capability. So the economics, from a CIO standpoint, start to make a lot of sense, but only if they can guarantee the service levels.

So talk specifically about how that manifests itself in a solution with the stuff you're working on, whether it's RecoverPoint or partner products. Can you give us an example?

Yeah, absolutely. From the standpoint of where RecoverPoint fits in, RecoverPoint delivers quality of service so that for SAP, Oracle, and my mission-critical applications, I can deliver a quality of service equal to what I can do in a physical environment. I'm not giving anything up. Extending that out across my tier-two applications also starts to make sense, because now my tier-two applications can leverage that same infrastructure. So I have the ability to take advantage of the services running in the cloud, but at the same time the CIO has the comfort level that the application service levels are being met.

So one of the things you hear in the Wikibon community, and we're talking to our end-user clients all the time, is that a lot of them don't do chargebacks, right? And that's one of the fundamental premises of the cloud: we're going to pay as you go, pay by the drink.

Right, exactly.

Are you getting more people interested in doing that, are you seeing more interest in chargebacks, or is it more of a showback model? How is that whole thing rationalizing itself?

So it really depends.
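Rick's tiering point can be sketched in code. Below is a minimal, hypothetical model of tiered protection policies: tier-1, mission-critical applications get synchronous, near-zero-RPO replication, while tier-2 applications share the same infrastructure under relaxed asynchronous policies. All names and values here are invented for illustration; none of them come from RecoverPoint's actual interfaces.

```python
# Hypothetical sketch of tiered quality of service for data protection.
# Tier 1 (mission critical): synchronous replication, near-zero RPO.
# Tier 2: asynchronous replication on the same shared infrastructure.
TIER_POLICIES = {
    1: {"replication": "synchronous",  "rpo_seconds": 0,   "journal_days": 7},
    2: {"replication": "asynchronous", "rpo_seconds": 300, "journal_days": 3},
}

def assign_policy(app_name, tier):
    """Attach the protection policy for a tier to a named application."""
    policy = dict(TIER_POLICIES[tier])  # copy so each app gets its own record
    policy["app"] = app_name
    return policy
```

The point of the model is that tier-1 and tier-2 workloads consume the same protection infrastructure; only the policy attached to each application differs.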
It depends on where you're at with virtualization, the amount of storage and the number of servers you have virtualized in the environment itself. The deeper into virtualization you get, the more the expectation is: absolutely, I want chargebacks. I want the accountability back to the business unit for the service levels I'm delivering, so that for my ERP and my CRM systems, which get the highest levels of quality, I'm putting more infrastructure into those environments and I want to be able to charge that back. So chargeback is absolutely a requirement. We're seeing it more and more, and I think as more companies virtualize more of their infrastructure, chargeback is going to be a fundamental requirement.

That kind of makes sense. Go ahead, John.

Dave, we have 3,000 people watching right now, just to give you the heads up, so there's a big audience out there. And one of the big things people want to hear about is the infrastructure: what is the big disruption happening out there? We have a lot of people coming in and out of the live feed who are experiencing EMC World, and they hear big data, they hear big cloud. Can you just give us a summary of what the hell is going on out there? What's the big aha disruption?

Right, so there are a couple of disruptions, and one of the first ones you run across is dealing with a heterogeneous infrastructure. I may have EMC storage, I may have IBM storage in there. How do I consolidate that? How do I provide one way to protect all the data within that infrastructure? RecoverPoint, one of the fundamental tools we're using as part of this cloud-based DR, allows you to interconnect VMAX and VNX and non-EMC storage across that same infrastructure.
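The chargeback model Rick describes, billing each business unit for the protection tier it actually consumes, can be sketched as a simple calculation. The rates and unit names below are invented for illustration only:

```python
# Hypothetical chargeback sketch: bill business units by protection tier.
# Rates are made-up, illustrative USD per GB protected per month.
RATE_PER_GB_MONTH = {"tier1": 0.50, "tier2": 0.25}

def monthly_chargeback(usage):
    """usage: list of (business_unit, tier, gb_protected) tuples.
    Returns a dict mapping each business unit to its monthly bill."""
    bill = {}
    for unit, tier, gb in usage:
        bill[unit] = bill.get(unit, 0.0) + gb * RATE_PER_GB_MONTH[tier]
    return bill
```

A showback model would run the same calculation but report the numbers to the business unit instead of actually billing them, which is why the two approaches tend to converge as virtualization deepens.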
And then at the same time, one of the big pain points in any kind of data replication service is logical corruption: if my primary copy of the data is corrupted, I'm potentially going to corrupt my replica as well. So how do I protect the business against data corruption in an environment and be able to recover? The other disruption we're seeing is the ability to roll back in time, to give you that TiVo capability for your data right within the data center itself.

What do you think about the PlayStation hack? On the big data side or the big cloud side, you've got the Amazon outage, which took down fast-growing startup services like Quora, and maybe some credit-card-like services for enterprises. RSA got hacked, PlayStation got hacked, and then you've got Hadoop innovation. How do people think about that? How do you change the infrastructure? Is there a change? How do you make sense of that?

Yeah, I mean, it goes back to the old adage: every time you create new innovation to protect the data, the hackers seem to be one step ahead of you and figure out a way to break it. So it's about staying ahead of the hackers, staying ahead of the smart people out there creating these intrusions, and making sure you have an effective way to protect against that. And obviously there are a lot of R&D dollars going into making that much more robust than it is today.

And you know, I think it comes back to that notion we were talking about earlier, data protection as a service. Data protection is not one-size-fits-all. Talk about the discussions going on in the customer base, and maybe how they should occur. We talked about data protection as a service, but what does that mean? I mean, you sit down with the line of business and say, okay, what's the requirement? How much are you willing to spend?
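The "TiVo for your data" idea, rolling a volume back to a point in time before corruption hit, rests on journaling every write so it can be undone. The toy model below illustrates the concept only; it is not how RecoverPoint is actually implemented, and all names are hypothetical.

```python
# Toy sketch of journal-based point-in-time recovery: every write records
# the block's previous value, so a corrupted volume can be rolled back.
class JournaledVolume:
    def __init__(self):
        self.blocks = {}       # block_id -> current value
        self.journal = []      # (timestamp, block_id, previous value)

    def write(self, ts, block_id, value):
        """Apply a write, journaling the old value first."""
        self.journal.append((ts, block_id, self.blocks.get(block_id)))
        self.blocks[block_id] = value

    def rollback_to(self, ts):
        """Undo all writes made at or after ts, restoring prior values."""
        while self.journal and self.journal[-1][0] >= ts:
            _, block_id, previous = self.journal.pop()
            if previous is None:
                self.blocks.pop(block_id, None)  # block didn't exist before
            else:
                self.blocks[block_id] = previous
```

For example, if good data lands at time 1 and corruption at time 2, `rollback_to(2)` rewinds the volume to its pre-corruption state, which is exactly the protection against logical corruption of both primary and replica that Rick is describing.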
Okay, and it's an iterative process. Talk a little bit about that.

Yeah, it's a great question, because typically the discussion starts around my ERP or my CRM services, the ones that are mission critical. That's where you start to have those discussions about how to protect the data. But at the same time, I have the rest of my infrastructure that I want to include as part of that service. So you need the ability to assign and enforce service levels dynamically across the system, to assign priority and quality of service to the data that's being protected in the environment. So again, within RecoverPoint, I can take an application set and say that this data set is my mission-critical service. It's going to have a certain RPO, so I can guarantee a recovery point objective in that environment, and I can also guarantee how long it's going to take me to recover the applications once the data's back in line.

And the goal, of course, is to automate that, right?

Absolutely.

Make it policy based. Is that happening today, or are we getting there? Talk a little bit about automation and where that fits.

So it's definitely happening. Tools such as Site Recovery Manager from VMware provide the ability to automate failover and testing within a virtual environment. Tools like this really help build an infrastructure that automates failover of these virtual machines, so that rather than having to build it out separately, I can press essentially a single command and orchestrate failover of those virtual entities. That's definitely helping to automate a lot of failover, which is a necessity as you start to grow out the infrastructure.
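A guaranteed RPO is testable: an RPO of N seconds means the replica may lag the primary by at most N seconds. A minimal sketch of a compliance check against assigned policies (field names here are assumptions, not any product's schema):

```python
# Hedged sketch: flag applications whose replica lag exceeds the
# recovery point objective (RPO) their policy guarantees.
def rpo_violations(apps, now_ts):
    """apps: dicts with 'name', 'rpo_seconds', 'last_replicated_ts'.
    Returns the names of apps currently out of RPO compliance."""
    return [a["name"] for a in apps
            if now_ts - a["last_replicated_ts"] > a["rpo_seconds"]]
```

A policy engine would run a check like this continuously and trigger remediation, which is the enforcement half of "assign and enforce service levels dynamically."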
When I'm talking about 10 VMs, it's very easy to fail those over, but when I want to fail over thousands or tens of thousands of virtual machines, you need an automation and orchestration framework to do that, and VMware has done a very good job with Site Recovery Manager and with working with the storage vendors to provide that.

Suja Patel might be a little bit early. I just got a note that Suja might be early, from the salon.

Sorry, couldn't hear you, I'm sorry.

We have our next guest, could be a little bit early, from the salon.

Oh, okay, oh great, okay.

Just to give you a heads up.

Okay, good, sorry. We're having a good time here at EMC World. We're broadcasting live wall-to-wall, siliconangle.com, wikibon.org. Just a reminder, breaking news: a Hadoop distribution by EMC. It's big news, it changes the culture for EMC. It's a new gambit for them, and siliconangle.com has all the coverage, wikibon.org's got the full analysis, and I have to say, I think we're pretty right on on this one.

So we're talking to Rick Wallsworth about data protection, data protection as a service, automation, and really the spectrum of services that IT is delivering to its customers around data protection. Do you see a requirement for near-zero data loss or zero data loss as we get to the cloud? Is that becoming more important, or is it still, oh, that's too expensive, I don't want to do it?

It's absolutely becoming a requirement, especially as I move my mission-critical applications into the cloud. The ability to guarantee near-zero data loss is very, very important, and also to protect not only against a site outage or a power outage, but against the data corruption that may have impacted data at both my primary and secondary sites.
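The reason orchestration becomes mandatory at scale is ordering: in a real failover, databases must come up before the app servers that depend on them. The sketch below models an SRM-style recovery plan in the simplest possible terms; the plan structure and names are invented for illustration, not Site Recovery Manager's actual API.

```python
# Sketch of priority-ordered failover, the core idea behind SRM-style
# orchestration: lower priority numbers boot first (e.g. databases
# before application servers, app servers before web front ends).
def run_failover_plan(vms):
    """vms: list of (name, priority) pairs.
    Returns the order in which the VMs would be recovered."""
    sequence = []
    for name, _priority in sorted(vms, key=lambda vm: vm[1]):
        # A real tool would promote the storage replica and power on
        # the VM here; this sketch only records the boot order.
        sequence.append(name)
    return sequence
```

With ten VMs you could do this by hand; with ten thousand, the single-command "run the plan" model is the only practical option, which is the point Rick is making.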
So as you move services, especially these mission-critical applications, into the cloud, it's very important to deliver that near-zero data loss and to be able to tune the application recovery capabilities to the data that's being protected.

Yeah, so we were talking before about recovery, failover, and testing, and a lot of the stuff we're talking about falls, doesn't it, Rick, into that boring but really, really important category? If you don't figure this stuff out, your cloud is not going to work. So my last question for you: what advice would you give to customers out there who are thinking about architecting the cloud, specifically in the context of data protection?

Yeah, certainly that data protection needs to be part of the cloud design from the beginning. You can't bolt it on afterwards, because a lot of times when people do that, it becomes an afterthought; you try to fit it into an existing infrastructure. So it needs to be designed in from the beginning. One of the things we've seen, certainly working with the VCE team, is that they get that, right? They've taken the Vblock infrastructure and integrated RecoverPoint and a lot of the capabilities into the Vblock, so it's now a standard offer. They don't have to go in and try to architect it afterwards; it gets designed in as part of the initial deployment, so it's part of my initial rollout. And you want to make sure that as you're doing this, you're protecting at the local site, so I have protection from an operational-recovery standpoint, and then also that DR, disaster-recovery, failover at the same time.

So the Vblock has that capability inherent to it, and essentially, if you think about data protection as a service, I can dial it up or down depending on my application requirements, how much money I want to spend, and the like. Is that right?

Exactly, and then I can also automate it.
So if you want to automate it, you add SRM into it, and now it provides a complete solution around the infrastructure: the storage infrastructure, the networking infrastructure, and the server rollout as well.

Excellent. All right, Rick.