from Miami Beach, Florida. It's theCUBE. Covering VeeamON 2019, brought to you by Veeam. Welcome back to Miami, everybody. This is theCUBE, the leader in live tech coverage. My name is Dave Vellante. I'm here with my co-host, Peter Burris, for two days of wall-to-wall coverage of VeeamON 2019. They selected the Fontainebleau Hotel in hip, swanky Miami. Tad Brockway is here. He's the corporate VP of Azure Storage. Good to see you. Yeah, great to see you. Thanks for having me. You know, you work for a pretty hip company, Microsoft. Azure is where all the growth is, 70-plus percent growth, and you're doing some cool stuff with storage. So let's get into it. Sure. Let's start with your role and kind of your swim lane, if you will. Sure. So our team is responsible for our storage platform. That includes our disk service for IaaS virtual machines and our scale-out object storage, which we call Azure Blob Storage. We have support for files as well with a product called Azure Files; we support SMB-based files and NFS-based files. We have a partnership with NetApp, where we're bringing NetApp ONTAP into our data centers and delivering it as a first-party service we call Azure NetApp Files. We're pretty excited about that. And then a number of other services around those core capabilities. And that's really grown over the last several years. Optionality is really the watchword there, right? Giving customers as many options: file, block, object, et cetera. How would you summarize the Azure Storage strategy? Yeah, and I like that point, optionality and really flexibility for customers to approach storage in whatever way makes sense. There are customers who are developing brand-new cloud-based apps; maybe they'll go straight to object storage, or Blobs. There are many customers who have data sets and workloads on-prem that are NFS-based and SMB-based. They can bring those assets to our cloud as well. 
We also have, we're the only vendor in the industry that has a server-side implementation of HDFS. So for analytics workloads, we bring file system semantics to those large-scale HDFS workloads. We bring them into our storage environment so that the customer can do all of the things that are possible with a file system: create hierarchies for organizing their data, use ACLs to protect their data assets. And that's a pretty revolutionary thing that we've done. But to your point, though, optionality is the key, being able to do all of those things for all of those different access types, and then being able to do that for multiple economic tiers as well, from hot storage all the way down to our archive storage tier. And, Tad, I short-changed you on your title, because you're also responsible for media and edge, right? So that includes Azure Stack, is that right? Right, so we have Azure Stack as well within our area, and Data Box and Data Box Edge. Data Box Edge and Azure Stack are our edge portfolio platforms, so customers can bring cloud-based applications right into their on-prem environments. Peter, you were making a point this morning about the cloud and its distributed nature. Can you make that point? I'd love to hear Tad's reaction and response. Yes. So, Tad, we've been arguing in our research here at Wikibon SiliconANGLE for quite some time that the common concept of cloud, that you move everything to the center, was wrong. And we've been saying this for probably four or five years. And we believe very strongly that the cloud really is a technology for further distributing data and further distributing computing, so that you can locate data proximate to the activity that it's going to support, but do so in a way that's coherent, comprehensive, and quite frankly, competent. And that's what's been missing in the industry for a long time. 
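The hierarchical-namespace capability described above, directories with ACLs layered over scale-out storage, can be sketched in a few lines of illustrative Python. This is an editor's simulation of the concept, not the Azure Data Lake API; all class and principal names are hypothetical:

```python
# Illustrative sketch (not the Azure SDK): directory-level ACLs in a
# hierarchical namespace over object storage. With a real file system
# hierarchy, ACLs can be attached at the directory level and inherited
# by children, instead of being stamped onto every individual object.
from dataclasses import dataclass, field

@dataclass
class Directory:
    name: str
    acl: set = field(default_factory=set)          # principals allowed to read
    children: dict = field(default_factory=dict)   # name -> Directory

    def mkdir(self, name, acl=None):
        # Child inherits the parent ACL unless one is given explicitly.
        child = Directory(name, acl=set(acl) if acl else set(self.acl))
        self.children[name] = child
        return child

    def can_read(self, principal, path):
        node = self
        for part in path.strip("/").split("/"):
            node = node.children[part]
        return principal in node.acl

root = Directory("/", acl={"analytics-team"})
root.mkdir("raw")                          # inherits analytics-team
root.mkdir("curated", acl={"bi-team"})     # overrides the inherited ACL

print(root.can_read("analytics-team", "/raw"))      # True: inherited
print(root.can_read("analytics-team", "/curated"))  # False: overridden
```

The point of the sketch is the inheritance: protecting a whole data set is one ACL on one directory, not a rewrite of metadata on millions of objects.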
So if you look at it that way, tell us a little bit about how that approach or that thinking informs what you're doing with Azure, and specifically, how do data services impact that? So we'll come to that in a second. Great insight, by the way. I agree that the assumption had been that everything is going to move to these large data centers in the cloud, and I think that is happening, for sure. But what we're seeing now is that there's a greater understanding of the longer-term requirements for compute, and that there are a bunch of workloads that need to be in proximity to where the data is being generated and to where it's being acted upon. And there are tons of scenarios here. Manufacturing is an example, where we have one of our customers who's using our Data Box Edge product to monitor an assembly line. As parts come out of the assembly line, our Data Box Edge device, with a camera system attached to it, uses AI inferencing to detect defects and then stop the assembly line with very low latency. A round trip to the cloud and back to do all the AI inferencing and then do the command and control to stop the assembly line, that would just be too much round-trip time. So in many different verticals, we're seeing this awareness that there are very good reasons to have compute and storage on-prem. And so that's why we're investing in Azure Stack and Data Box Edge in particular. Now you asked, well, how does data factor into that? Because it turns out, in a world of IoT and basically an infinite number of devices, over time more and more data is going to be generated. That data needs to be archived somewhere. So that's where public cloud comes in, with all the elasticity and the scale economies of cloud. But in terms of processing that data, you need to be able to have a strong connection between what's going on in the public cloud and what's going on on-prem. 
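The defect-detection example above comes down to a latency budget: inference plus actuation must complete inside the line's stop window, and a WAN round trip does not fit. A minimal sketch with assumed, illustrative numbers (the deadline and latencies are not from the interview):

```python
# Hedged sketch with assumed numbers: why defect detection runs at the edge.
# The assembly line must be stopped within a tight window after a defect
# frame; local inference fits that budget, a cloud round trip does not.

STOP_WINDOW_MS = 50  # assumed actuation deadline after a defect frame

def total_latency_ms(inference_ms, network_rtt_ms):
    """Time from frame capture to the 'stop the line' command."""
    return inference_ms + network_rtt_ms

edge_latency = total_latency_ms(inference_ms=15, network_rtt_ms=1)     # on-prem accelerator
cloud_latency = total_latency_ms(inference_ms=15, network_rtt_ms=120)  # WAN round trip

print(edge_latency <= STOP_WINDOW_MS)   # True: edge meets the deadline
print(cloud_latency <= STOP_WINDOW_MS)  # False: round trip blows the budget
```

With realistic WAN round-trip times, the network term alone exceeds the deadline, which is the whole argument for on-prem compute here.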
So the killer scenario here is AI, right? Being able to grab data as it's being generated on-prem and route it into a product like Data Box Edge. Data Box Edge is a storage gateway device, so you can map your cameras in the use case I mentioned, or for other scenarios you can route the data directly into a file share, an NFS, Blob, or SMB file share, dropping into Data Box Edge. Then Data Box Edge will automatically copy it over to the cloud, but allow for local processing by local applications as if it were local, and in fact it is local, running in a hot SSD NVMe tier. And the beautiful thing about Data Box Edge is it includes an FPGA device to do AI inference offloading. So this is a very modern device that intersects a whole bunch of things all in one very simple, self-contained unit. Then the data flows into the cloud, where it can be archived, you know, permanently, and then AI models can be updated using the elastic scale of cloud compute. Then those models can be brought back on-prem for enhanced processing over time. So you can sort of see this virtuous cycle happening over time, where the edge is getting smarter and smarter and smarter. It's pretty cool stuff. Okay, so that's what you mean, kind of, when you talked about the intelligent cloud and the intelligent edge. I was going to ask you, and you just kind of explained it. That's right. And you can automate this, use machine intelligence to actually determine where the data should land. That's right. And minimize human involvement. You talked about driving the marginal cost of storing your data to zero, which we've always talked about doing from the standpoint of reducing or even eliminating labor costs through automation. But you've also got some cool projects to reduce the cost of storing a bit. Maybe you could talk about some of those projects a little bit. That's right. And that was mentioned in the keynote this morning. 
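The storage-gateway flow described above, writes land on a local hot tier immediately while a background process copies them to cloud object storage, can be sketched in illustrative Python. This is a simulation of the pattern, not the Data Box Edge software; the dictionaries stand in for the local cache and the cloud store:

```python
# Illustrative sketch of the storage-gateway pattern: local apps see data
# on the hot tier right away, and the cloud copy happens asynchronously.
import queue
import threading

local_tier = {}    # stands in for the local SSD/NVMe cache
cloud_store = {}   # stands in for cloud object storage (e.g., Blob)
upload_queue = queue.Queue()

def write(path, data):
    local_tier[path] = data   # visible to local applications immediately
    upload_queue.put(path)    # cloud copy is deferred to the background

def uploader():
    while True:
        path = upload_queue.get()
        if path is None:      # sentinel: shut the worker down
            break
        cloud_store[path] = local_tier[path]  # copy to cloud
        upload_queue.task_done()

worker = threading.Thread(target=uploader)
worker.start()
write("camera1/frame-0001.jpg", b"...")
upload_queue.join()           # wait until the background copy completes
upload_queue.put(None)
worker.join()
print(sorted(cloud_store))    # ['camera1/frame-0001.jpg']
```

The key property is that `write` returns as soon as the local tier is updated; durability in the cloud arrives later, which is exactly the trade a gateway makes to keep edge latency low.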
And so our vision is that we want our customers to be able to keep the artifacts they store in our cloud platform for thousands of years. And if you think about, you know, sort of the history of humanity, that's not out of the question at all. In fact, wouldn't it be great to have everything that was ever generated by humankind for the thousands of years of human history? We'll be able to do that with technology that we're developing. So we're investing in technology to store data virtually indefinitely on glass, as well as even in DNA. And by investing in those advanced types of storage, that is going to allow us to drive that marginal cost down to zero over time. Epigenetic storage systems. I want to come back to this notion of services, though, and where data is located. So again, from our research, what we see is, as you said, data being housed proximate to where it's created and acted upon. That's right. But increasingly, businesses want the option to be able to replicate that, and replicate is the wrong word, that's a loaded word, to be able to do something similar in some other location if the action is taking place in that location too. That's what Kubernetes is kind of about, and serverless computing and some of these other things are about. But it's more than just the data. It's also the data services. It's the metadata associated with that. How do you foresee, at Microsoft, what role you might play in this notion of a greater federation of data services that makes possible, say, a policy-driven backup, restore, and data protection architecture that's really driven by what the business needs and where the action is taking place? Is that something you're seeing, a direction that you see it going? Yeah, absolutely. 
And so I'll talk conceptually about our strategy in that regard and where we see that going for customers, and then maybe we can come back to the Veeam partnership as well, because I think this is all connected up. Our approach to storage, our view, is that you should be able to drop all of your data assets into a single storage system, like we talked about, that supports all the different protocols that are required and can automatically tier from very hot storage all the way down to, over time, glass and DNA. And we do all of that within one storage system, and then the movement across those different vertical and horizontal slices can all be done programmatically or via policy. And so customers can make a choice in the near term about how they drop their data into the cloud, but then they have a lot of flexibility to do all kinds of things with it over time. And then with that, we layer on Microsoft's whole set of analytics services. So all of our data and analytics products layer on top of this disaggregated storage system, so that there can be late binding of the type of processing that's used, including AI, to reason over that data, relative to where and how and when the data entered the platform. So that sort of modularity really future-proofs the use of data over the long haul. We're really excited about that. And then those data assets can be replicated, to use your term, to other regions around the globe as well, using our backbone, right? So customers can use our network; our network is a customer's network. And then the way that docks into the partnership with Veeam is that, just as I mentioned in the keynote this morning, data protection is a use case that is just fundamental to enterprise IT. 
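The policy-driven tiering described above, hot storage automatically aging down to cooler, cheaper tiers, is exposed in Azure Blob Storage as a lifecycle management policy. A fragment along these lines shows the shape of such a rule; the rule name, prefix, and day thresholds here are illustrative, not from the interview:

```json
{
  "rules": [
    {
      "name": "age-out-telemetry",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": ["blockBlob"],
          "prefixMatch": ["telemetry/"]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 180 }
          }
        }
      }
    }
  ]
}
```

Once a policy like this is attached to the storage account, the movement between economic tiers happens without any application changes, which is the "programmatically or via policy" point made above.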
Together with customers and with Veeam, we can make data protection better today using the cloud, with the work that Veeam has done integrating with O365 and the integration from there into Azure Storage. And then over time, customers can start down this path with something that feels sort of mundane, that has been a part of daily life in enterprise IT, and that becomes an entry point into our broader long-term data strategy in the cloud. So, pretty cool. But following up on this: so if we agree that data is not going to be entirely centralized, it's going to be more broadly distributed, and that there is a need for a common set of capabilities around data protection, which is a very narrowly defined term today and is probably going to evolve over the next few years. I agree with that. We think, and this is what I want to test, we think you're going to have a federated model for data protection that provides for local, autonomous data protection activities that are consistent with the needs of those local data assets, but under a common policy-based framework that a company like Veeam is going to be able to provide. What do you think? So first of all, a core principle of ours is that while we're creating these platforms for large data sets to move into Azure, the most important thing is that customers own their own data. So there's this balance that has to be reached between cloud scale, the federated nature of cloud, and these common platforms and ways of approaching data, while simultaneously making sure that customers and users are in charge of their own data assets. So those are the principles that we'll use to guide our innovation moving forward. And then I agree, I think we're going to see a lot of innovation when it comes to taking advantage of cloud scale, cloud flexibility, and economics, but then also empowering customers to take advantage of these things on their terms. 
I think the future's pretty bright in that regard. And the operative term there is their terms. I mean, Microsoft has obviously always had a large on-prem install base and software estate. And so you've embraced, you know, the hybrid, to use that term, with your strategies. You've never sort of run away from it. You never said everything's going to go into the cloud. Right. And that's now evolving to the edge. And so my question is, what are the big gaps, not necessarily organizationally or process-wise, but from a technology standpoint, that the industry generally, and Microsoft specifically, have to fill to make that sort of federated vision a reality? Well, I mean, we're just at the early stages of all this, for sure. In fact, as we talked about this morning, the notion of hybrid, which started out with use cases like backup, is rapidly evolving toward a more modern, enduring view. I think in a lot of ways, hybrid was used as this kind of temporary stop along a path to cloud. And back to our earlier discussion, by some, I guess; maybe that's a debate you all are having there. But what we're seeing is the emergence of edge as an enduring location for compute and for data. And that's where the concept of intelligent edge comes in. So the model that I talked about earlier today is: hybrid is about extending on-prem data center assets into the cloud, whereas intelligent edge is about taking cloud concepts and bringing them back to the edge in an enduring way. So it's pretty neat stuff. And a big part of that is that much of the data, if not most of the data, the vast majority even, might stay at the edge permanently. And of course you want to run your models up in the cloud. But at least for real-time processing, yes. Right, right. You just don't have the time to do the round trip. So, cool. All right, Tad, I'll give you the last word on Azure, direction, your relationship with Veeam, the conference, take your pick. Yeah, well, thank you. 
Thanks, thanks. Great to be here. As I mentioned earlier today, the partnership with Veeam, and this conference in particular, is great because I really love the idea of solving a very real and urgent problem for customers today and then helping them along that journey to the cloud. So that's one of the things that makes my job a great one. Well, we talk about digital transformation all the time on theCUBE. It's real. It's not just a buzzword. It can't happen without the cloud. But it's not all in the central location. It's extending now to other locations. Your data assets. And where your data wants to live. So, Tad, thanks very much for coming on theCUBE. It's great to have you. Okay, thanks, guys. All right, keep it right there, everybody. We'll be back with our next guest. This is VeeamON 2019, and you're watching theCUBE.