and they can do on their own. They don't have to guess at capacity. They can move quicker, which enables a lot more innovation. They get to spend their scarce SDE resource on projects that move the business forward, and they can go global with their application presence in minutes. And this is SiliconANGLE's theCUBE, our continuous live wall-to-wall coverage of AWS Summit. We're here at Moscone Center in San Francisco. AWS, Amazon, has a number of these events, probably a dozen around the world, really keying off its big conference, which is the re:Invent event in November. It's in Vegas and it's a big event. They go to these regional events to really help customers. It's much more intimate. They help customers understand what they're doing, maybe make some new announcements, bring in the local ecosystem. And I'm here with my co-host, Jeff Frick. Jeff, we've been here all day going at it. We were at the OpenStack Summit, I guess it was last week or the week before. Tom Shields is here. He's the senior director of cloud marketing at NetApp. Tom, welcome to theCUBE. Thanks. Thanks for having me. You're welcome. So let's start with NetApp's cloud play. I mean, how would you characterize NetApp's play in the cloud generally? Yeah, I think we're pretty unique. Unlike some of our competitors, we're not trying to be a cloud service provider. We work exclusively through service providers. We treat them as partners, not as direct customers, and so they are our channel. So for our enterprise users who want to use cloud services, we will promote our cloud service providers to our enterprises as solutions. Okay, so let's now talk specifically about your AWS play. You guys made an announcement in November. Talk about that and talk about how that's going, how it's evolved, and where we're at today. Yeah, we got together with AWS. What we saw happening in the enterprise is that many of our customers were starting to use AWS. They started talking to us about that.
They want to get in and use the benefits of public cloud. The flexibility and the cost efficiencies are very attractive. At the same time, they appreciate the performance characteristics, the availability, and the security of their NetApp storage. And so for NetApp enterprise customers, we were looking to create a way where they could start to get into the public cloud yet keep the benefits of their NetApp storage. And so that's why we developed the solution with Amazon Web Services. Okay, so talk about what that solution is, what it does, maybe traction in the marketplace, who's using it, how they're using it. We'll get into that whole thing. You can take any one of those questions that you like. We like the portfolio question. Talk about the solution itself. What does it do? Yeah, well, the solution is NetApp Private Storage for Amazon Web Services. What it is, is a combination of Amazon EC2, AWS Direct Connect, and NetApp storage and data management. And what enables it is the Direct Connect technology that AWS brought out in 2011. It's a high-bandwidth dark fiber connection that allows you to connect the core Amazon cloud to a third-party data center. So now what you're able to do with that Direct Connect data center is put a NetApp device in there and connect up to Amazon EC2. The data centers are very close to the AWS fleet, literally across the street, so you have a very low-latency, high-bandwidth connection. The whole stack now operates as if it's in the same rack. So you can consider it a very high-performance, low-latency stack. Parts of it are in the cloud; you rent those. The storage is NetApp, and you control that and own that. It's effectively a synchronous setup; it functions as if it were all in the same rack together. So you can tackle a host of jobs that, you know, you would use a FlexPod for normally on-prem. Okay, and that third-party data center is proximate to the Amazon data center, is that right? Correct. In most cases, it's right across the street.
So we use Equinix. Equinix is one of the partners that we've chosen to go to market with. For starters, they have more Direct Connect facilities than anybody else around the world. So we're with them here in the United States, down in the Bay Area, out in Virginia, out in Tokyo, Singapore, and now Australia. Soon we'll be in Europe with Equinix. And so these are all Direct Connect facilities that are close to the AWS fleet. Okay, so AWS, obviously. I mean, Equinix has more data centers around the world than AWS, obviously, right? They've been around for a long, long time. Not all of them are Direct Connect facilities. The ones that are close to the AWS fleet are. So how did you guys come to this? Did somebody just one day say, hey, we've got this relationship with Equinix, we've got AWS growing like crazy globally, maybe we could start talking together and put together this solution? Yeah, you know, I think we found each other through our enterprise customers. As Amazon started to become more focused on the enterprise space, moving from Web 2.0 up into the enterprise, they encountered a large and loyal enterprise installed base with NetApp. And those customers love NetApp. At the same time, we have field people all over the place, and we understand that our enterprise customers are using Amazon today. They're doing dev/test with Amazon. IT departments want to do more with it. And they want to find a way to do it and still control the data and have some visibility into what's happening in the organization. So our customers were asking us to work together. That's the bottom line. So is more of the data then going to the Direct Connect data center across the street? Because it seems like you're still going to have that long leg. Yeah, remember, we're talking mainly to NetApp customers that are moving from doing things on-prem and starting to want to get into the public cloud.
So they're moving from an on-prem world, and they're looking to start to use public cloud. For them, it's a nice first step to be able to keep their familiar storage and data management on NetApp, and reach up and grab all the efficiencies and flexibility of the Amazon EC2 cloud. So that's primarily where we see the use cases as people get started. Can S3 play in this equation? S3 can play in this equation. We do have a use case there. Many customers want to replace tape. They want a more reliable, cost-effective alternative to tape, and they can get that in S3 and Glacier. We have one large oil company that wants to consolidate backup and recovery and replace tape. What they'll do is use NetApp snapshots locally in their multiple data centers, then consolidate two weeks of backup in the Direct Connect colo facility. So they get two weeks of restores. And that's just happening continuously over time? Yeah, they'll use NetApp replication to move data over to that colo. And then the data that's older than two weeks, they want to park up in Glacier. Glacier is a penny per gigabyte per month. Very cost-effective. It's spinning disk, so it's more reliable than tape, and people like that story. It's also easier to end-of-life data on that solution. So it provides some nice value there, and there's a lot of interest in that. Is spinning disk really more reliable than tape? I didn't know that. So our customers believe that. Do you have data on that? That sounds like a good sound bite. I didn't know that. But I would think the point is that most customers would want data on spinning disk, because it's so much more convenient. Although my guess is if you put it into Glacier, you hope you never have to go get it. But it's cheap, and you know it's there if you do have to go get it. That's right. It seems like a key to your solution is, again, this notion of being proximate to the Amazon data centers, especially from a performance standpoint.
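The pricing Tom quotes, a penny per gigabyte per month, makes the tiering math easy to rough out. A minimal back-of-the-envelope sketch of the archive cost, using hypothetical data volumes (the 500 TB figure is illustrative, not from the interview):

```python
# Rough cost model for the tiering scheme described above: two weeks of
# recent backups stay on NetApp in the colo, everything older is parked
# in Glacier at the quoted $0.01 per GB per month. Volumes are hypothetical.

GLACIER_PER_GB_MONTH = 0.01  # price quoted in the interview

def monthly_archive_cost(archive_tb: float) -> float:
    """Monthly Glacier cost for a given archive size in terabytes."""
    return archive_tb * 1024 * GLACIER_PER_GB_MONTH

# e.g. 500 TB of backup data aged past the two-week restore window
cost = monthly_archive_cost(500)
print(f"500 TB in Glacier: ${cost:,.2f}/month")  # $5,120.00/month
```

The same arithmetic is what makes "cheap enough that you hope you never retrieve it" a workable tape-replacement story.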
If I'm moving data around, I don't want to move it long distances. It takes a long time to move a terabyte. Is that right? And can you talk about the importance of that? Yeah, that's right. Data has gravity, as folks here at Amazon have said. It's not like you move it around willy-nilly, and people like to know where it is. With NetApp, a lot of times what we'll do to get started is make a big data transfer right to that colo facility. But then once we've got that in place, we can make incremental data changes over the wide area network using native NetApp replication, which just sends changed-block data over the wire. So you can get a relatively efficient transfer process going; it doesn't cost too much. So you can literally think about the Direct Connect facility and your NetApp on-prem creating a hybrid cloud with the data moving back and forth. You might develop up in the NetApp Private Storage for AWS environment, and then want to move that data back on-prem and run the whole thing on your own stack. You can do that, because you control the data flow between on-prem and NetApp Private Storage for AWS. So you guys have nice snapshot technology to allow you to do that CDP. Absolutely. That's a core value for NetApp. It's a very efficient data transfer. So the key there for a customer, though, is seeding that installation initially. It's big data; you've got to seed it. What's best practice? How do you do that? Do you have tools to help me throttle? Can you talk about that a little bit? Yeah, there are tools. We'd have to have a longer conversation. Everybody hates this conversation. I know, but for customers out there, it's a big consideration. A lot of times people will just move a filer over from their premises. Put it on a truck. Literally put it on a truck. Yeah, yeah. That's probably the best move, right? Yeah, sometimes. When you move a whole lot of bits, they become like atoms. You know what that's called?
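The changed-block replication Tom describes can be sketched in miniature: hash each fixed-size block of two snapshots and ship only the blocks whose hashes differ. This is a toy illustration of the idea, not NetApp's actual replication protocol or wire format:

```python
import hashlib

BLOCK = 4096  # toy block size for illustration

def block_hashes(data: bytes):
    """Hash each fixed-size block of a snapshot."""
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def changed_blocks(old: bytes, new: bytes):
    """Return (index, bytes) only for blocks that differ between snapshots."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [(i, new[i * BLOCK:(i + 1) * BLOCK])
            for i, h in enumerate(new_h)
            if i >= len(old_h) or h != old_h[i]]

# After the initial seed transfer, only deltas cross the wide area network:
base = bytes(BLOCK * 4)               # 4-block baseline snapshot
update = bytearray(base)
update[BLOCK:BLOCK + 4] = b"ABCD"     # touch one block
delta = changed_blocks(base, bytes(update))
print(f"{len(delta)} of 4 blocks transferred")  # 1 of 4
```

This is why the big one-time seed (or the truck) matters: once the baseline is in the colo, the ongoing traffic is proportional to the change rate, not the data set size.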
That's the CTAM. It's the sneakernet. CTAM: the Chevy Truck Access Method. So for all you old mainframers out there. Okay, so that's interesting. All right, so that's something that they have to think about. That's probably the best way to do it, and if they don't, then they can trickle it over and it takes however long it takes. Once you've got your data, you know, the bulk of the data over there, then you can really take advantage of the efficient replication data movement that NetApp has. Yeah, it's very inexpensive. Yeah, yeah. Once it's there, it's really just everyday operations. Talk about some of the customers that you have. Name names if you can; if not, I understand. But maybe the types of customers and what they're doing. Right. Yeah, the fun thing about this is that we're discovering new use cases every day. But if I look across our pipeline and the customers that are using it today, several use cases really stand out. DR is really the first one that I'd like to talk about, and the best way to talk about it is to use a customer example. A lot of customers have a hard time figuring out how to afford a fully redundant stack in a remote location to use as a DR site. So for a mid-sized business, this solution is really nice, because they can put a NetApp device in the colocation facility and then use reserved instances up in the EC2 cloud, right? Those are very cheap, discounted instances, but you're guaranteed to get them if you need them for failover purposes or for testing purposes. And so only when you test DR or fail over are you really paying for using the Amazon cloud, you know, paying big. And so it's a much more cost-effective way to go for DR. So that's a primary use case that we see. Another, and it's very common, is just wanting to rationalize compute infrastructure. We have many customers coming from using VMware on-prem today, you know, on their servers, getting a lot of efficiency there. Now they can move stuff over to Amazon.
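The DR economics outlined above, a small reserved-instance commitment for guaranteed capacity, with compute hours paid only during tests or an actual failover, can be roughed out numerically. All rates and counts below are hypothetical placeholders, not actual AWS pricing:

```python
# Rough model of the standby-DR cost structure described above: a fixed
# reservation fee guarantees the instances exist when you need them, and
# hourly charges accrue only while DR is tested or invoked.
# All prices are hypothetical, not real AWS rates.

def annual_dr_compute_cost(instances: int,
                           reservation_fee: float,   # upfront per instance/yr
                           hourly_rate: float,       # discounted hourly rate
                           hours_used: float) -> float:
    """Yearly cost: fixed reservation plus hours actually run."""
    return instances * (reservation_fee + hourly_rate * hours_used)

# e.g. a 20-instance DR fleet run only for two 48-hour tests a year,
# compared with keeping the same fleet running year-round at a full rate
standby = annual_dr_compute_cost(20, reservation_fee=100.0,
                                 hourly_rate=0.05, hours_used=96)
always_on = 20 * 0.20 * 24 * 365
print(f"standby DR: ${standby:,.0f}/yr vs always-on: ${always_on:,.0f}/yr")
```

The point of the sketch is the shape of the curve, not the numbers: the standby cost is dominated by a small fixed commitment, so testing DR more often adds only marginal hourly spend.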
We have customers that estimate they can save 70% of their compute infrastructure costs by moving over and using EC2. Now, why are they using NetApp? Why do they still want to use NetApp? It's for performance reasons. They've come to rely on the performance profile of NetApp for their applications. And then some of them have concerns around security and availability, and they've come to appreciate the NetApp feature set there. And so they're very comfortable staying with the NetApp storage model and then tapping into EC2 to rationalize compute and save money there. They like NetApp, okay. Yup, so there's two use cases. I've got more for you. I want to double-click into the DR one. A lot of customers in the Wikibon community tell us that they don't adequately test DR because it's too risky and too expensive. They don't tell us this in public; they tell us in private, but that's my inference. So what you're saying, and I'm inferring from your comments, is that you enable more facile, more cost-effective testing, so that I can actually go to my board and say, yes, we have a disaster recovery plan, we have tested it, we are in compliance, and they can sleep at night. Absolutely. You're familiar with NetApp. We have some technology that enables you to do very low-overhead clones of data. FlexClone is what we call it. So you can literally use that technology off that replicated data set: without breaking the mirror, you can clone the production data and then use it with EC2 to test your DR readiness. You don't need to break the mirror. That's a huge benefit for DR for our customers, and it's only available in NetApp technology. NetApp has a lot of inherent advantages because of its architecture, and there are probably dozens of examples like that. Other use cases you want to share with us? Sure, sure, I've got a bunch. Big data analytics. These are giant data sets, and some of the data is sensitive.
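The low-overhead cloning Tom credits for risk-free DR testing rests on a copy-on-write principle: a clone shares its parent's blocks and stores only the blocks written after the clone is taken, so creating one is effectively free and the mirror keeps running. A toy Python sketch of that principle, not NetApp's FlexClone implementation:

```python
class Volume:
    """Toy copy-on-write volume: a clone shares its parent's blocks
    and stores only the blocks written after cloning."""

    def __init__(self, blocks=None, parent=None):
        self.blocks = blocks if blocks is not None else {}  # index -> data
        self.parent = parent

    def read(self, i):
        if i in self.blocks:
            return self.blocks[i]
        return self.parent.read(i) if self.parent else b""

    def write(self, i, data):
        self.blocks[i] = data  # copy-on-write: only the clone's copy changes

    def clone(self):
        """O(1) clone: no data is copied, the parent keeps serving."""
        return Volume(parent=self)

# The replicated production volume keeps serving while a DR test
# scribbles on a clone of it:
prod = Volume({0: b"prod-data"})
dr_test = prod.clone()
dr_test.write(0, b"dr-test")
print(prod.read(0), dr_test.read(0))  # b'prod-data' b'dr-test'
```

Because the clone diverges only where it is written, DR testing against it never disturbs the replicated data set, which is the "without breaking the mirror" property described above.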
Our customers want to keep that on NetApp storage in some cases, and then buy thousands of cores and use them when they do their analytics runs, paying as they go. It's a nice model. We've got an insurance company that's doing actuarial analytics using that model. So that's one. We've got another interesting one: data center consolidation and migration. We've got a Fortune 50 company with a very active M&A portfolio. So they're buying new companies, and they need to integrate those assets, retire some assets, and move the data onto new data center resources. So what they're doing is using NetApp Private Storage for AWS as a migration hub. They can literally move this data over very quickly, and then some of the data will stay there because it's well suited to taking advantage of the benefits of EC2, maybe a spiky workload that has seasonality in it. Other data they want to move back on-prem into the new data center. So you've got this notion of transient workloads that move back on-prem into a new data center, and then you've got permanent workloads that stay within NetApp Private Storage for AWS. So this whole thing is like a transition hub for M&A activity, for retiring old data centers that are no longer efficient. So that's a pretty cool one. And then you've just got, you know, the backup use case we talked about. There's a lot of people that are interested in that one. Excellent. Hey, Tom, listen, we really appreciate you coming on. NetApp is making moves in the cloud. NetApp is a company that has reinvented itself many, many times, and you're going through that process again, actually, with, you know, clustered ONTAP. And we wish you luck with the cloud initiative. Tom, thanks very much for coming on theCUBE. Okay, thanks for having me. All right, you're welcome. All right, keep it right there, everybody. We'll be right back with more coverage from AWS. Jeff Frick and myself. You can tweet us questions if you want.
I'm at Dave Vellante. He's at Jeff Frick. This is theCUBE. Keep it right there. We're right back. So if you're like most businesses, you need lots of data storage to allow your applications to run smoothly and ensure your data is backed up and secure. That means that every time you add an application or your business grows, you need to add more storage. You keep buying storage and hiring more people to manage it.