Okay, let's get started. Hello, everyone. This is John MacArthur. I'm the Peer Incite moderator at Wikibon, and we're joined today by members of the Wikibon community, including Dave Vellante, founder; David Floyer, also a founder of Wikibon; Stu Miniman; Jeff Kelly; and Bert Latamore. Thanks for joining us today, and thanks to everyone who's online. Today is Tuesday, April 3rd, and we're here to discuss data protection solutions for cloud storage offerings. Without question, public, private, and hybrid cloud storage offerings are here with us. They're on the rise. They're being used as a repository for backup and for archive, and they're being used for primary storage offerings too. Each workload has its own unique requirements and characteristics. Today we're going to focus specifically on cloud backup and archive, and we are pleased to have Mike Adams, a storage specialist with Lighthouse Services, who is going to walk us through the selection process that Lighthouse went through when they were selecting the infrastructure for their cloud storage offering. Our hope for today's call is that whether you're creating a cloud-like storage architecture for your own environment, for internal private use, or you're thinking about building a cloud storage offering as a service to serve your own external customers, you'll find today's information helpful. So welcome, Mike. Thanks for joining us. Thank you very much, Stu. I appreciate being here. That was John here. Before we get started, just a few logistics. If you are not speaking, please press star six to mute your line. And if you want to ask a question or contribute to the discussion, just press star six to unmute your line. So it's the same star six to mute, star six to unmute. But please feel free to chime in and join the discussion. So, Mike, again, thanks for... Let people know we're live on SiliconAngle.tv, where they can watch. Good point. You can tweet us. I'm @dvellante. That's probably the best handle, or @stu. And we'll get the questions to John. Great. Thank you. Again, siliconangle.tv, live streaming there. So, Mike, why don't you kick it off for us and tell us a little bit about your role at Lighthouse Services. Yeah, so thank you very much. So I work for Lighthouse Computer Services. I've been with the company 13 years, and predominantly I work in the storage practice area. There are a bunch of different practices; I focus on the storage practice. My overall role is twofold. I work with a lot of our customers who are typically, you know, some of the decision makers in IT. So I work with them and with their peers, both the technical people as well as the C-level executives, in helping architect solutions, both on-premises and now in the cloud, for their backup and business continuity offerings. That's kind of been my role, and, you know, it started traditionally with SAN infrastructure and now we see it extending out to cloud technology. So what led you to make a decision to add a cloud storage offering? Well, I think it's a couple of things. Are you guys hearing a lot of feedback? No, we're not. You might want to mute your... I certainly am. You might want to mute your... All right, hold on one second. Let me try to... All callers are muted. Callers are unmuted. Mike, you're still there? Mike, is your line still live? Yeah. I can see him, but... I think Mike may need to dial in again. Mike Adams, you there? Tango? Have you called back into your studio? Just on Skype. So let's talk about this a little bit, John.
I mean, you know, you've got cloud storage. You've got the cloud generally. People are looking at it for their own internal uses. It sounds like these guys at Lighthouse are actually doing something that we talk about a lot here: turning IT costs into profits. In other words, flipping their IT infrastructure to actually monetize it. Yeah, I mean, they started off as... I'm back. Hey, welcome back. Is that any better? You sound fine to us. Yes, so hold on one second. All callers are muted. Let me do this, and then let me unmute Mike. Okay. So... Callers are unmuted. Sorry, Mike. I thought I could unmute you and mute everybody else, but it doesn't look like we can do that. So go ahead, John. Okay. So again, we were just talking a little bit about your decision to add a cloud storage services offering at Lighthouse. Can you just walk us through that decision? Yeah, I think it's a couple of things. I mean, the marketplace clearly was calling for that. You know, you see a lot on the internet and in the trade rags about cloud-based solutions and doing things in the cloud. And, of course, that gets the curiosity of our customers, you know, as to, well, I need to do something in the cloud, because sometimes their executives are saying that they should do it. And then it becomes, well, what does the cloud really mean? So from the Lighthouse perspective, it was twofold. We could wait and, you know, see where things settled, and then sometimes our customers might be enamored by other solutions because people are talking to them about, you know, cloud-based offerings. Or we could take a more proactive approach and, you know, work to solve business problems that our customers are having in the cloud, and keep it in our sweet spot, which is storage and disaster recovery. So I think it's a combination of both of those factors. A, it's kind of a natural extension of what our business needs to do, combined with the intersection of customers asking about cloud and what they can do with it. So we needed to have an answer for that. And so we're taking a proactive approach and we're excited about it. Is the impetus, Mike? This is Dave Vellante. Is the impetus that people just want to, you know, try it and learn? Stick their toe in? Or do you see it as more substantive than that? I see it as more substantive. I think that, you know, the success of people backing up their systems with Carbonite kind of legitimized the concept. And what we're finding, and, you know, one of my technical people who works for Actifio, Bill Thorpe, is on the call, so we work together. What we're finding is that when we're talking to customers, not only are they wanting to, like, test the waters and understand if it will work, but then they're going to the other extreme, which is something we anticipated, which is, okay, if we have your data, how far can we take it? In other words, if we're going to start doing something in the cloud, how much can we do in the cloud? So the conversations are much more involved and encompassing than one might think. Did you build out new data centers for your cloud storage offering, or did you take your existing infrastructure and try to leverage that for the cloud service? So what we do is we extend, you know, it's kind of a two-phase process. So one is, our cloud infrastructure is off-site at a SAS 70 secure location, and we have technology there, which is based on Actifio, and I can certainly talk at length about that.
That is a receiver; it's a multi-tenancy solution, and it's, I want to say, a DR, a business continuity target for our customers. Then we put a sister box or another appliance at the customer location to solve a problem, and then we marry the two for the cloud offering. So we took some proven technology in Actifio, and that's the cornerstone and the enabler of our cloud offering. So your cloud offering starts with, does your typical user start with a solution on their premises and then extend it to the cloud, or how does that work? Exactly, exactly. You know, there's obviously communication and replication between a customer's production data center and our cloud, or any cloud offering, and the Actifio appliance that we integrate into the customer's on-premises location solves a lot of issues, and that's the real initial conversation we have with the decision makers. And then, you know, one of the things that people are looking to do is, you know, the reduction or elimination of tape, which is what made VTLs successful years ago, but now customers are looking to do something more real-time and not have to leverage last night's backup, you know, for that type of a disk-based solution. So we may solve a backup problem with a customer, and then if they're looking to do disaster recovery or business continuity but they don't have a DR site, or they don't want to budget for one, or they don't have a viable second location, well, we can point the solution that solves the local problem at our cloud, and that just enables, you know, business continuity and disaster recovery in the cloud, so to speak. I want to come back to that in a second, because having the data available is one thing and having the applications available is another, so let's make a point to come back to that in a second. Sure. But I want to stay focused on the backup issue first. Are you finding that most of the folks you're putting the solution in for are replacing tape altogether, or are they augmenting their tape backup today? The vision and the desire is always to replace tape. You know, as with a customer yesterday: we want to get rid of tape. The reality, however, is there are a lot of reasons that they can't do that, because they may have things in archives, you know, at an Iron Mountain, SunGard or wherever, and, you know, those things are in the vault and they may need to recall them for compliance or litigation. So we drastically reduce the amount of tape. Customers tend to retire their existing backup environment and use, you know, an integrated, disk-based solution, and integrated is a key word, use that, and then they'll use tape for, you know, if they have to get something off-site, until they leverage, you know, our cloud offering. And is that about the cost of migrating data from, you know, from tape over to some other media, or is it about the immutability issue and proving that this is the authoritative copy? Absolutely. Can you give me the context again? Yeah, for the people who are holding on to tape for long-term archive, you know, is it because it's too expensive to migrate the data, or is it because they need to be able to authenticate the data with a particular timestamp? I'm just trying to understand why you can't get rid of all of it.
It's really because the data, in the situations where they're keeping tape, you know, in an archive scenario, it's really because that data no longer readily exists in their production environment. Maybe it's hospital images that are 10 years old, but the hospital has to keep them for 21 years, so they only exist on tape. So therefore, you know, you need to have a backup and restore solution that can read that tape to bring it into your environment. So it's really for the long, long-term stuff. The data that is in their environment, you know, there they're using the solutions we're providing, and they're doing, you know, application-aware snapshots and long-term dedupe backups of their data. The off-site stuff, the tape, is really if something is no longer in their production cycle and it really only exists on tape; therefore you need a tape drive and backup software that can read it to bring it back in. So that's a scenario where the use case is the last resort or, let's call it, deep archive. Hopefully you never have to get to it; if you do, you know it's there and you're in compliance. But what about, I wonder if I could push on this a little bit, Mike, because there's this concept that we can get rid of tape. We've talked to a lot of practitioners who say, yeah, but as I say, we don't ever want to go to it, but if we have to, we know it's there. And the fastest way to recover from a real disaster is to load a bunch of tapes into a truck and drive it somewhere; that's actually faster than restoring over the web from the cloud. So can you talk about that a little bit? What's your experience there? Well, I guess it really kind of depends on what type of an environment you have set up. In this solution that we're architecting for customers, the Actifio solution, it has some unique characteristics, which is really why we gravitated to it. I mean, if you had a storage repository that could only hold a week or two's worth of data, and the premise in that case was that it just doesn't make sense to store it locally because it takes up too much disk, therefore you replicate it to your DR site or vault it to the cloud, you know, you may have some challenges there. But there are some key enablers with Actifio that mitigate those challenges. One is really the fact that locally they have data deduplication. So what does that mean? That means that I can set up SLAs for my customers and I can say, you know what? We can keep your VMware or your Exchange or your Oracle environment locally in a disk-based dedupe pool for a year, let's say, because all of the data is deduplicated across all the backup sets, so every time we do backups we're not creating, you know, a whole nightly cycle's worth of data in storage. And then the other key piece, you know, when you're talking about replication between sites: typically you have synchronous, which is if you want to keep something up to date in a DR site or in a cloud, and asynchronous. But one of the unique characteristics of Actifio is that they have dedupe async. So that allows us to replicate data back and forth from the cloud or the DR site to local, if the data's not there locally. Bill, is there anything, I know you're on the line? Can I jump in here just for a second?
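To make the local dedupe pool idea concrete, here is a minimal Python sketch of a content-addressed block store, assuming fixed-size blocks and SHA-256 hashing. It illustrates the concept Mike describes, not Actifio's actual implementation: because blocks are deduplicated across all backup sets, retaining a year of backups costs roughly one full copy plus the daily churn.

```python
import hashlib

BLOCK_SIZE = 4096

class DedupePool:
    """Toy content-addressed store: one copy of each unique block."""

    def __init__(self):
        self.blocks = {}    # content hash -> block bytes
        self.backups = {}   # backup name -> ordered list of block hashes

    def ingest(self, name, data):
        """Record a backup as block references, storing only new blocks."""
        hashes = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # dedupe across all backup sets
            hashes.append(digest)
        self.backups[name] = hashes

    def restore(self, name):
        """Rehydrate a backup from its block references."""
        return b"".join(self.blocks[h] for h in self.backups[name])

pool = DedupePool()
monday = b"A" * 8192 + b"B" * 4096
tuesday = b"A" * 8192 + b"C" * 4096        # one block changed overnight
pool.ingest("mon", monday)
pool.ingest("tue", tuesday)
print(len(pool.blocks), "unique blocks held for 2 backups")  # 3, not 6
assert pool.restore("tue") == tuesday
```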
When you're trying to do a disaster recovery, and this goes back to this tape issue, trying to recover a large data set or, you know, a site over a line could take weeks or months for any reasonable size, unless you've got an enormous, enormous pipe which you've, you know, pre-provisioned. Not even if it's deduped; you still have to un-dedupe it and then send it. So what's your solution for getting the data to another site, or getting a data site that has been lost and has to be recovered from the cloud? Yeah, sure. So I think, you know, seeing as I have one of my engineers who works for Actifio, Bill, on the line, I'm sure he'd be happy to address that; that's kind of his sweet spot as well. Yeah, so what about that, Bill? We're trying to help our community understand the sort of merits and drawbacks of cloud-based recovery, particularly in the disaster scenario. So I think we get the local recovery. The RTO is going to be great. If the building blows up, what do I do? How do I get that back? And that was the point that I wanted to come back to from earlier, which is, if it's a restore operation, does the end customer need to think about also having available all of the applications and all of the servers and the network to run the application from the remote location, as opposed to thinking about recovering the data over the wire back to some other data source? So help us squint through that, Bill. I mean, we're talking about the time it takes to move data around the cloud and, as John's pointing out, all the processes, the people processes, et cetera, and technology you need to recover. So two questions there. Sure. So as Mike said, one of the things that Actifio uses is this process that we call dedupe async. And what that is is, you have an Actifio device at a customer firm that replicates into the cloud. Now, when we do our replication, we do so in a deduplicated format. So, you know, when it gets to the cloud site, it sits there in its deduped state. So what we do with dedupe async is we can select, on a VM or a volume level, say, I want to blow these guys back up in their native format. So the first time this operation runs, naturally, you have to do a full read of the data set, say a 50-gig VM. From there on out, what we'll do is we'll take our daily backups, we'll replicate them across the wire, and we'll rehydrate them. But what we're doing now is, because we have intelligence at the block level between all of the pools, we're really only restoring changed blocks. And those blocks will be your daily churn rate. So to answer the first question, you know, if site A blows up, what do we do? So we go to the cloud, and we can run in the cloud environment whilst you rebuild your primary site. But once your primary site gets up, you have all your servers, all your storage, everything kind of sitting there, ready to go. What you can do is you can sync back. One of the ways we can do that is in a deduplicated format. And mind you, while this information is moving from site B back to site A, you can still be running off of site B, okay? Once we get all that information across the wire, we have it rehydrated. We simply say, hey, I want to fail back. At which point, there's going to be a communication phase where it asks, between the two pools or the two systems, what blocks do you have that I don't, okay? So again, it's that cutover point where you say, now I'm ready to do my cutover. You press the button, it has that communication.
And if you timed it correctly, hey, look, I've only got, you know, this many blocks that I need to ship over, and, you know, then you do a full fail back, if you would, to the primary site. Now, if you don't really have the time to ship all that deduplicated information across the wire, another option is you could use some sort of USB device or a Drobo or something like that. You can take our dedupe pool and copy it onto transportable media, at which point you can send it to the other site, and then you pick up pretty much where I said: you do the restore, and then at the very end you do that sync back and you're done. Can I jump in here? Sorry, this seems like smoke and mirrors to me. If you've got a small system, I can understand how this would work well. But any reasonable-sized system, unless you've got a way of taking the data, a pre-tested, designed way of taking the data in some sort of physical way, as you say, by putting it onto a disk or putting it onto a tape and taking it to a far site... The laws of physics, the laws of the internet: you know, the volume of data is increasing at the same rate as the bandwidth is increasing. So there's nothing that's changed over the last 30 years here. The laws of physics are that you just can't get that data up to a site. And it's great for a small system or a subsystem that you want to recover somewhere, but for a full-scale disaster, this does not seem to be adequate. It doesn't meet any RTO or RPO for any customer that I've worked with. So can you be honest about what you have to do to really recover another site? You can't dedupe, because that doesn't help you at all. You've got to fully restore the data. How do you do this? So when we're talking about recovery, are we talking about standing up the primary site? A disaster recovery. You know, a disaster. You've got to bring up the system on another, completely different site and move the data to that site and move the applications to that site. So what do we do at a basic level? On a daily basis, your customers do a backup. We've got all that. We've got all that with any solution. I'm a little confused. Are you talking about the building goes down and I need to stand up my business right now? Or are you talking about I've already failed over to the failover site, I've been running in that site, and my building's now back, and I need to fail back? Yeah, this is John. I think what's missing here, David, what you're not hearing, is that the presumption here, correct me if I'm wrong, is that the servers to restore the application are at the cloud storage site, right? So it's not just a cloud backup. It's a cloud backup with available servers to restore the applications at the remote site. So is it active-active? Active-passive? It's active-active on storage and active-passive on servers. Am I getting that right? So you're providing the full backup capability at that site, and you can run the whole of the data center from that site. Is that the business model, and is that what you test? Yeah, that's the piece that we haven't really discussed on this call. We've focused more on the backup, which is the premise of getting the data. But the other piece of this is that in the cloud or co-location facility, we have a hybrid cloud design where we have the data that we've captured for the customer or customers, and now that data is mountable to application processes that live in the cloud. Right.
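The failback exchange Bill describes, where the two pools negotiate which blocks each side is missing so that only the churn crosses the wire at cutover, can be sketched in a few lines. This is a minimal illustration under the assumption that each site indexes its blocks by content hash; the names are hypothetical, not Actifio's API.

```python
def blocks_to_ship(dr_pool, primary_pool):
    """Return only the blocks the DR site has that the primary is missing."""
    missing = set(dr_pool) - set(primary_pool)
    return {h: dr_pool[h] for h in missing}

# State after running at the DR site for a while (content hash -> block data):
dr_pool = {"h1": b"...", "h2": b"...", "h3": b"..."}
# Primary was bulk-restored earlier (e.g. from portable media), then fell behind:
primary_pool = {"h1": b"...", "h2": b"..."}

delta = blocks_to_ship(dr_pool, primary_pool)
print(f"cutover ships {len(delta)} of {len(dr_pool)} blocks")  # 1 of 3
primary_pool.update(delta)  # primary is current again; safe to fail back
```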
Okay, so is that your standard offering, or is this an enhanced offering, that the whole thing can be recovered from the cloud? I guess it's an enhanced offering. The standard offering, and a lot of people have different, you know, go-to-market models, but our standard offering is, you know, there's one cost for protecting your data off-site, and then there's another cost for actually adding the application processes on top of that. My premise is very simple: your standard offering does not offer them anything unless you can get that data back to another site. And the way you're suggesting that that data goes back to that site is over the line. So you shouldn't be offering that standard offering. Well, hold on, no, let's see. So this whole thing started when we talked about eliminating tape. But to eliminate tape and really sleep at night, you've got to recognize that you might need to get an enhanced offering, and that might offset the savings that you made on the tape side. But David, the standard offering gives you fast recovery locally. It's not necessarily... That's goodness. That's goodness, right. And faster than you're going to get from restoring from tape at a remote site. Yeah, so that's all good. But in a disaster recovery context, so it seems to make a lot of sense for backup. I'm still trying to get over the hurdle of the DR scenario. Now, a lot of... That's the sentiment that I'm getting at, yeah. Yeah, so now here's the question: is this targeted really toward smaller and mid-sized companies? Thank you. And that was a question that came from Twitter. Yeah, I think that that's... Yeah, I think the premise is mid-sized companies. And I think what we're seeing, too, is some other customers look for a hybrid model. In other words, they like the solution for certain environments, and maybe they have, like, a mainframe, let's say, or some other kind of one-off box that they want to put at a co-location facility to do another type of recovery for that environment, and use this type of recovery for their open systems and their Intel. Right. But I wouldn't, and Bill works with these models all the time, you know, I wouldn't sell short, I guess, the dedupe async capability, because I have seen customers that, you know, even if they have two or three terabytes of data, which really isn't a lot nowadays, they could actually leverage, you know, something like a one-, two-, or three-megabit-per-second pipe into the cloud to meet their backup needs. If we have the data in the cloud, we can rehydrate it for business continuity needs. I guess there are two points I have here. One is that a lot of small, mid-sized companies don't have a disaster recovery strategy. That's right. So this is clearly better than nothing. The second point is that I think the dedupe piece really changes the role of tape: it maybe dramatically lowers your cost of tape and lessens your reliance on tape as a recovery mechanism. That's all goodness. I think we're still skeptical about the ability to eliminate tape as a source of last resort. Now, maybe if you've got, like, a removable hard drive, as you were talking about, that can be a tape replacement, but we just want to make sure our community understands that there are exposures here that you need to figure out, and/or purchase the enhanced service. Absolutely. Absolutely. I think customers are looking to do more business continuity because those application resources are there in the cloud, and it's kind of a pay-as-you-go type model.
You can either stand it up and have it sitting there, already pre-configured, waiting for a disaster, or you can light it up at the time of a disaster. Most customers typically tend to have some hybrid offering. The other thing, too, is that with the Actifio piece, the SLAs can be granular, such that for your critical application, maybe your replication scheme is such that the data is there every hour or every four hours, but then you have other business units that have SLAs that need that data to get there every day or every two days. So you have the granularity to be able to change your replication and change your SLAs on an application-by-application level, as the sketch below illustrates. That's pretty dangerous, isn't it? That's pretty dangerous. These days there's so much interaction between the systems. What they find, if they try to do that in a practical test, is that it doesn't work, because the prerequisites for other critical data are all interlinked. So most advice that's given is that you don't do that; you try to recover the whole of your active data. I guess we can agree to disagree on that. A lot of customers nowadays tend to know what their application interdependencies are, and I think the premise, and we're finding it gets a little bit simpler because a lot of people are virtualizing nowadays, so if we're capturing the VMs that contain the application and we're putting proper SLAs on those vSphere ESX-type servers, customers are satisfied with that. Another piece of good advice, though, is to really understand your application dependencies, because if you can... David, don't you subscribe to the notion that backup shouldn't be one-size-fits-all, that having an application-by-application SLA, again, to the extent that you can understand the interdependencies, is advantageous? Absolutely. But you've got to be incredibly careful and do a lot of testing on that, because you may have some critical report that's required, and that requires a whole lot of other sub-reports and sub-systems to have run before it. So you have to test that and make sure that it's working. A glib statement like "just focus on the key applications" hides the complexity of recovery in a real situation, and now your data is a long way away, it's going to take a long time, and you could be a week without it. The cloud introduces a benefit, but it also introduces a constraint, and that constraint is the time to get that data back from the cloud. When you've got infrastructure-as-a-service offerings out there, available from co-location facilities, and some of these disaster-tolerant cloud server and storage offerings, the ability to have an environment where you can test your recovery procedures on a much more frequent basis than when I was on the customer side, I think, exists there. You've got the data sets there. You've got available virtual servers to spin things up and test applications much more frequently, as sort of an ongoing process, than what you would have had 10, 15 years ago. And that's the full-blown service, and that's great. That's right. If you can have the full service and test it, that's great. And I think, to Mike's point, if you've got a small enough environment, and again, I'm sort of interested in Mike's perspective on when it is small enough to think about this notion of I'll make a copy, put it on a truck or put it on a plane, and fly it to the new facility. Yeah, it is all about the business cases. There are going to be some cases where the advanced service is justified.
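Here is a minimal sketch of what per-application SLAs like the ones Mike describes could look like, assuming a simple policy table. The application names and intervals are hypothetical, and this illustrates the idea of per-application granularity, not Actifio's SLA engine.

```python
from dataclasses import dataclass

@dataclass
class SLA:
    app: str
    replication_hours: int      # target gap between off-site copies (RPO-like)
    local_retention_days: int   # how long the local dedupe pool keeps backups

# Hypothetical policies: tighter SLA for the critical app, looser elsewhere.
policies = [
    SLA("oracle-erp", replication_hours=1,  local_retention_days=365),
    SLA("exchange",   replication_hours=4,  local_retention_days=90),
    SLA("file-share", replication_hours=48, local_retention_days=30),
]

def due_for_replication(sla, hours_since_last):
    return hours_since_last >= sla.replication_hours

for sla in policies:
    print(sla.app, "replicate now?", due_for_replication(sla, hours_since_last=3.0))
# oracle-erp True, exchange False, file-share False
```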
There are going to be other cases where it may not be, where keeping tape around is a good idea. What are you seeing, Mike, in the field in terms of the business case? I mean, I think the thing that you have to keep in mind is that when we use the term "the cloud," in some cases it's no different than what a customer would do if they had a second location, right? As long as we can secure the appropriate bandwidth, which would be the same that a customer would have to secure from location A to location B, we're providing a multi-tenancy target for that customer, and we're providing integration with an application and OS stack that they don't necessarily have to build out. But the same types of challenges that we would be faced with, they're faced with in a traditional environment today. They need replication technology. Is it going to be synchronous or asynchronous? And we can provide that with this solution as well. And then, with some customers, and it's very interesting, because a lot of customers, even though they've done site-to-site replication, still struggle with recovery, because, depending upon the layer of application integration, maybe the recovery points are only crash-consistent, and therefore they have to do a lot of rebuilds, even with the online replication. I don't think we're introducing any new problems into the equation, because we're still solving them the same way, making sure we have understood the RTOs and RPOs. I think that one of the unique things about this Actifio solution is really the fact that it's so application-aware, so that when customers are doing replication, it's application-consistent and not just crash-consistent. What does that mean, application-aware? Can you talk about that a little bit? Yeah. So take the traditional case, we talked about doing backups or doing snapshots or doing replication: if you replicate data or take snapshots of data and it's done just at the storage level, with the storage controller doing it, there's data in flight, right, and it's just taking a snap of it. So it's better than not having anything, but it may not be completely consistent with the state of the application at the time. So what we find is that to properly make sure that your data is available and mountable and usable in the least amount of time, you'd like to have some application integration, whether it's VMware or Oracle or SQL or all the predominant application environments, where there'll be some handshaking going on, where there'll be some quiescing of the application to kind of de-stage the data, and then the storage infrastructure takes the appropriate snapshots or does the appropriate replication so that the data is consistent. And that's where, you know, when we were evaluating products and we worked with customers, they had a lot of point solutions, and a lot of them aren't application-aware. That was, I think, one of the unique things about Actifio: not only did it provide a lot of the replication and deduplication, but it really spread its arms out into the application stack, such that the data is meaningful. So is this using VSS? Is this at the Microsoft application level? It's like a good... Excuse me. I just have to interject for a second, if I may. Your name, please? You're not unique in that. There are other storage array vendors who do exactly the same thing as Actifio, for example, you know. Who is that speaking? If I said that it was the only one that did that, then I apologize. Yeah.
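The difference between a crash-consistent and an application-consistent snapshot comes down to the quiesce step Mike describes. Below is a minimal Python sketch of that flow, assuming hypothetical quiesce/resume hooks standing in for VSS, VMware Tools, or database freeze scripts; it illustrates the ordering, not any vendor's implementation.

```python
from contextlib import contextmanager

class App:
    """Stand-in for an application with quiesce hooks (VSS, freeze scripts, etc.)."""
    def flush_writes(self): print("flush pending writes")
    def pause_io(self):     print("pause I/O")
    def resume_io(self):    print("resume I/O")

@contextmanager
def quiesced(app):
    app.flush_writes()   # de-stage in-flight data so the copy is consistent
    app.pause_io()
    try:
        yield
    finally:
        app.resume_io()  # the application runs on, barely noticing the pause

def take_snapshot(volume):
    print(f"storage-level snapshot of {volume}")
    return f"{volume}@snap-1"

app = App()
with quiesced(app):      # application-consistent, not merely crash-consistent
    snap = take_snapshot("vmfs-datastore-01")
print("usable recovery point:", snap)
```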
Yeah, I'm very well aware of, you know... The other question I had was around... Go ahead. The other question I had was around using tape on the back end of the cloud DR site, if you need to have an SLA where you have to, you know, recover at site A rapidly. I mean, isn't that a great use case for tape? Yes. I mean, there are two things there. In this cloud infrastructure, we have two, soon to be three, options. So one is, you know, we can back up the... Because in the cloud, there are backup components from an application perspective. So we could back up the applications leveraging the hybrid cloud services. You know, the other thing is, we could replicate to a secondary site if we wanted to, you know, protect our cloud and replicate it to a secondary site. The other is, and Bill, I assume I'm okay with talking about this, is that there is a feature coming in the Actifio component that's going to be kind of like a tape-out feature, to dump right out of the Actifio appliance to tape. Bill, anything you want to add to that? That's perfectly legit. So Actifio will be adding a tape piece to the Actifio stack. So, you know, today we do snapshots, we do backups, we do deduplication, we do replication, we do incremental rehydration. And that's a very valid point. Let's face it, you know, disk subsystems have been out there as backup targets for a while now, but people are still very much holding onto their tape. And it is very valid to say, hey, look, you know, if my data set's big enough and I want to keep a certain frequency on disk, no matter what kind of deduplication subsystem I'm going to use, if you want to keep that stuff around for seven-plus years, you're going to start brushing up against, you know, the cost of this thing very quickly, right, with a Data Domain, with, you know, an ExaGrid or an Actifio appliance. So tape definitely has its place. And, you know, Actifio sees that and says, hey, look, we can create an SLA that says keep a snapshot around for a week, push it down into our dedupe tier, and replicate it. Maybe we want to rehydrate it, and then maybe we want to take a copy and punch it off to tape for a remote site, for seven, ten years, to infinity. Hey, who asked that great question about the tape use case? Even if it's just your first name, if you're not comfortable giving your whole name. Okay. Why don't we open it up to the rest of the community here? Other people who might have questions for Dave, David, for Mike. This is Scott Lowe. This is sort of a higher-level question that may have been addressed somewhat earlier, but I'd be really interested in understanding sort of the decision-making process your customers go through, who's involved on their side in these sorts of decisions, and what kind of a process you're seeing in place for your customers to move forward like this. Okay. So, good question, Scott. Typically, you know, it's the technical people, the directors, and, probably more so than usual, the CIO type, because of the, you know, cloud aspects of it. But it starts out, you know, solving a local problem, so it gets to figuring out what type of use case we're solving for a customer. You know, we're not replacing their existing production storage. We're complementing it, and we're typically solving, you know, local backup and snapshot and data protection issues, or copy management and cloning. So that's typically the use case where customers will start, and then it morphs into, well, what are we doing for off-site, for disaster recovery?
We're leveraging enhanced technology to minimize the copies of data that we have locally. How can we kind of get rid of tape and do that type of philosophy better than we're doing now? And then, you know, that's where the executives come in from the top down, saying, you know, if we don't have a DR site, do we need to build one, or what can we do with an infrastructure like you're proposing? So it's still typically all of the IT folks; in my opinion, it just goes a little bit higher up the chain because of the cloud initiative. So this is, sort of from an executive level, a big build-versus-buy decision. Are we going to build a DR site, or are we going to buy the service? Right. And then, you know, when we look at DR, do we want to have it capex or opex? You know, would we rather pay by the terabyte, pay by the month, you know, and then be able to dynamically leverage application processes kind of just in time, or do we want to secure a facility and, you know, put like infrastructure and like application servers in that environment? And this just provides a logical extension for customers that may choose to not build their own DR site and look for DR capabilities. Whereas today, if they're not going to build a DR site, then they're kind of left with the traditional tape backup, which they typically tend to want to get off of. So it provides a nice middle ground. Other questions in the community? Yes, I have a question. Hold on, David. Go ahead. Yep. Yes, have you considered using object storage in your sites for managing the archives, long-term archives? Can you give us your first name, or whole name if you choose? Yeah, my name is K. Ben Eric. I'm calling from Dell. Thank you, K. Hey, K. No, to date, we haven't considered that. We have not. Okay, thank you. Other questions? David, you were about to ask a question. I have a question on the types of, you said, the application integration. Could you expand? Obviously, Microsoft with VSS is part of that. Do you have a set of applications? For example, NetApp has a whole lot of applications, SnapManager for Exchange, for example, so they have one for Exchange, one for SQL. Do you have a set of those applications? Which ones do you cover? How do you decide what can be application-specific and what can't? I think, you know, I'll let Bill address that; they're rolling out support for more applications as we speak, you know, but it's similar functionality to the SnapManager products, which I'm aware of. But, you know, Bill, maybe you can rattle off the list other than, you know, Oracle and SQL and VMware. Yeah, sure. So for virtual hosts, the way Actifio really protects the environment is to leverage VSS along with VMware Tools to get application-consistent snaps of all the underlying applications, so it's basically VSS snapping Microsoft products today. We have, you know, tight integration with SQL and Exchange. We also have, coming out in the 5.0 release, Oracle integration with Oracle RMAN, so we can do full, tight integration with the RMAN scripting. So, in short, I mean, we can do that today, but what we're going to do in 5.0 of our release of the product is leverage Oracle's incremental changed block tracking to protect Oracle.
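Changed block tracking, which Bill mentions for the Oracle integration, is the same idea hypervisors expose: the source keeps a record of blocks written since the last backup, so each incremental reads only those blocks instead of scanning the whole data set. Here is a minimal, self-contained sketch of the mechanism; the structures are hypothetical, not Oracle's or Actifio's.

```python
class TrackedVolume:
    """Toy volume that keeps a changed-block map between backups."""

    def __init__(self, nblocks):
        self.data = [b"\x00"] * nblocks
        self.changed = set()          # block indices written since last backup

    def write(self, idx, block):
        self.data[idx] = block
        self.changed.add(idx)         # the CBT bookkeeping

    def incremental_backup(self):
        """Read only blocks written since the last backup, then reset the map."""
        delta = {i: self.data[i] for i in self.changed}
        self.changed.clear()
        return delta

vol = TrackedVolume(nblocks=1000)
vol.write(7, b"new")
vol.write(42, b"data")
print(len(vol.incremental_backup()), "blocks read instead of 1000")  # 2
```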
And then the next step for us is to protect Isilon systems via the Isilon API. So with that release, we'll be able to select a volume or volumes to protect, and then we'll use the Isilon API to keep track of what's going on within the Isilon system, and the next time we look to back up that system, we'll take only the changed blocks. And there's a growing amount of Linux in operations today. What do you do for a Linux environment? It really depends on whether the Linux host is in-band or out-of-band. All of our features within Actifio are very versatile, meaning they apply to Linux hosts as well, so we have the ability to execute pre- and post-scripting. So if you have a way to get the application into a quiesced state, then we can get application-consistent backups, in short. Okay. We do have a connector coming out for Windows and Linux, and it's basically going to be just like a backup agent, where it's going to reside on the host. It's going to do a full backup the first time, and it's going to do incrementals from there on out at a file level and push them over to Actifio. And the other piece about Actifio, which Mike really didn't mention, is around recoverability. Actifio is basically a block-level device, and because of that, we can facilitate mounting directly off of us. So think for a minute about, like, a VM recovery: if you have a VM that goes down hard, we have a full copy of that, not only in its native but in its deduplicated format, which means we can present it back to VMware as RDMs. And because we use the vStorage API, we can call VMware and say, mount this backup as a new VM. You can power it on. It'll start running off of the Actifio subsystem, and in the background we Storage vMotion the blocks back to where they need to be. We have customers using this where they say, hey, if my VM goes down hard, I just run off of Actifio. So my recovery time objective is, you know, however long it takes to do a mount, which in most cases is less than a minute. Is it reasonable to assume that we wouldn't want to restore the same level of density of VMs on a server as what we might be running with a traditional high-performance storage system? Because I know performance is sometimes an issue in VMware environments when you have a lot of density. Actually, I think we've got a Peer Incite coming up on that soon, about how you maximize VM density in a cloud offering. So I'm just curious what you'd say. Good question. It really depends on what the performance need of the underlying VM is. The cool part about Actifio is that, because of our storage virtualization layer, you can put high-performance disks behind us. But when we go out and sell the entire stack as an appliance, we sell 2TB, 7,200 RPM SATA disk. So, point taken: if you've got VMs running on SSD or Fibre Channel disks and you need that same level of performance, if you power on off of the Actifio device, you potentially could see, or will see, a performance hit. You have to really consider: if this VM is down, how do I recover it? How do I recover it quickly? Is it better to restore the whole thing, in the amount of time that it takes to do the restore? Or is it better to fire the thing up, give the users access, and let them suffer a little bit with things operating off our storage while you're vMotioning it back? Well, I don't have the answer to that for every environment, but it is definitely a deployment consideration.
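The instant-mount recovery Bill walks through, boot the VM directly from the backup copy and then migrate its blocks back to primary storage in the background, can be sketched as follows. This is an illustrative Python simulation with assumed names; it is not the vStorage or Actifio API.

```python
class BackupCopy:
    """Stand-in for a point-in-time copy held by the backup appliance."""
    def __init__(self, vm, blocks):
        self.vm, self.blocks = vm, blocks

    def present_as_volume(self):
        print(f"{self.vm}: mounted straight from the backup copy")
        return self

def instant_recover(copy, primary_storage):
    vol = copy.present_as_volume()
    print(f"{copy.vm}: powered on; RTO is roughly the mount time")
    for block in vol.blocks:          # background migration; users keep working
        primary_storage.append(block)
    print(f"{copy.vm}: blocks migrated back; cut over to primary storage")

instant_recover(BackupCopy("exchange-vm", [b"b0", b"b1", b"b2"]), primary_storage=[])
```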
What about the services angle here? Is Lighthouse providing any kind of business impact analysis service up front for clients? Can you talk about that a little bit? Yeah, so the answer is yes. We have a subsidiary company called Compass that used to be part of our group, and they split off, and that's what they do for a living. The reason we ended up splitting up is that sometimes customers wanted to have the business impact analysis done and then they wanted to have the remediation, and some saw it as a conflict of interest. So we have a group, separate and distinct but with a good relationship, that goes in and helps customers with the RPOs and the RTOs in doing the business impact analysis, and then that will ultimately turn into a discussion of what the appropriate technology solution is to meet those requirements, and it will flow from there. Good. A few more minutes left on our call today, so I just want to poll again and see if there are any other questions in the community. Anyone? Can you hear me? Go ahead. Hi. I'm just curious about the whole issue of the data that is coming in such huge amounts from the internet and from all different sources, and that's only going to increase over time, and I'm curious about your feelings on how this affects backup and how this affects your own technology direction going forward. I'm not sure, I don't know whether that question was directed at me. I'm not sure I completely understand the question. Well, there's just so much data that any enterprise is going to have to manage, and there's the constant struggle to keep up with it. And, you know, I'm curious about your additional perspective on that; David brought it up a little bit when he was asking some questions. The greater question is about backup overall and how you manage it. I'm just looking for more insight. Yeah, no, I think it's a good question, and ultimately it covers an area that we didn't really discuss, because we jumped into the cloud. But one of the premises of the Actifio solution is that it eliminates the redundancy of data. So, yeah, data does grow, certainly production data, but what happens is your copy data grows 5, 10, 15, 20 times that growth, because you've got copies for clones, you've got copies for backups, you've got copies for snapshots, you've got copies for replication, and you're treating VMware as a different beast, so you have copies for that. So in a customer environment, the copy data can tend to be out of control, and it's managed by a bunch of different tools acting upon that data and creating that data. And the Actifio solution at a customer's location kind of gets a handle on that beast, because you're really only using one dedupe storage pool to address all the needs of the stuff we've just spent the last hour talking about: the snapshots, the VMware copies, the replication. So while the production data is growing, this solution, independent of the cloud, this solution at a customer's location keeps the copy data in line with the production data growth. I think the question that was asked by the person from Dell gets at the key issue here, which is, if you want to use that data for more than one thing, for example for archive, and integrate the archive with the applications, you're going to have to go to a different way of doing it, which is an object file system way of doing things. And that's somewhere in the future, but that's where the end game here is, if you're going to actually take advantage of that.
Your solution is really pretty well entirely a backup-type solution, and it gets used for archive because the data is there, but going forward there's got to be a lot greater smartness in being able to use that data and then being able to extract it much more easily, if you're really going to get a handle on the whole problem of data management. Do you agree? I know that customers are successful using one common repository to address backups, snapshots, and clones, as well as VMware. The area of archiving can be a little bit of a different beast, depending upon some of your litigation holds and things like that, but as far as the traditional multiple copies of data, for the reasons I just mentioned, Actifio does really as good a job as any technology that I've seen in use. Well, listen, we've reached the end of our hour, so I want to thank everyone who participated today, and our speakers, thank you for joining us. Also to Alex Williams, to K. Ben Eric, Scott Lowe, David Floyer: thank you for your questions. We will have six research notes posted on Wikibon within the next 48 hours, including a summary of today's call, CIO action items, technology, integration, and organizational action items, some recommendations for suppliers, and some discussion about what we can get rid of with this sort of an approach. Wikibon is a wiki, so we invite everyone to not only read the articles that are posted but also jump in, edit, and enhance them. Just a reminder also that we have another Peer Incite next Tuesday, April 10th. It's actually very timely, because we had some discussion around classification, or whether or not you should classify applications. Next week we're going to have a VP of IT at Animal Health International talking about how he implemented a zero-data-loss infrastructure for all of his applications, including test and dev, at async distances. So I look forward to having the community join for that discussion. Again, thanks, everyone, for attending today. Have a great rest of the day. Thank you.