I'm Peter Burris, welcome to another CUBE conversation from our beautiful studios here in Palo Alto, California. We've got another great guest today: Jagane Sundar, the CTO of WANdisco. Jagane, welcome back to theCUBE.

Good morning, Peter.

So Jagane, I want to talk about something I want you to help explicate for our clients, what this actually means. There are two topics I want to discuss, and we've done some research on both of them. One is this notion we call plastic infrastructure, and the other, related to it, is something we call networks of data. Let's start with networks of data, because I think that's perhaps foundational for plastic infrastructure. If we look back at the history of computing, we've seen increasing decentralization of data. Yet today, many people talk about data gravity and how the cloud is going to bring all data into the cloud. Our belief, however, is that there's a relationship between where data is located and the actions that have to be taken, and that data locality has a technical reality to it. We think we're going to see more distribution of data, but in a way that nonetheless allows us to federate, to bring that data into structures that can ensure the data is valuable wherever it needs to be. When you think of the notion of networks of data, what does that make you think about?

That's a very interesting concept, Peter. When you consider the cloud and you talk about S3, for example, and buckets of objects, people automatically assume that it's a global storage system for objects. But if you scratch a little deeper under the surface, you'll find that each bucket is located in one region. If you want it available in other regions, you've got to set up something called cross-region replication, which replicates in an eventually consistent fashion. It may or may not get there in time. So even in cloud storage systems, there is a notion of locality of data.
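The eventual-consistency caveat Jagane raises can be made concrete with a toy simulation. The `Bucket` class below is purely illustrative, not the S3 API: a write to the home region is visible immediately, but a reader in the replicated region sees nothing until the asynchronous copy runs.

```python
class Bucket:
    """Toy model of a single-region object bucket with asynchronous
    cross-region replication. Illustrative only, not the real S3 API."""

    def __init__(self):
        self.home_region = {}
        self.remote_region = {}
        self.pending = []          # writes not yet shipped to the remote copy

    def put(self, key, value):
        self.home_region[key] = value
        self.pending.append((key, value))   # replication happens later, if at all

    def run_replication(self):
        # In a real system this runs asynchronously, after an unbounded delay.
        for key, value in self.pending:
            self.remote_region[key] = value
        self.pending.clear()

bucket = Bucket()
bucket.put("report.csv", "v2")

print(bucket.home_region.get("report.csv"))    # v2
print(bucket.remote_region.get("report.csv"))  # None, it hasn't arrived yet

bucket.run_replication()
print(bucket.remote_region.get("report.csv"))  # v2, eventually consistent
```

The window between `put` and `run_replication` is exactly the "may or may not get there in time" gap Jagane describes: a reader in the remote region can see stale or missing data until replication catches up.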
It's something that you have to pay attention to. Now, you hit the nail on the head when you said networks of data. What does that mean? Where does the data go? How is it used? Our own platform, the Fusion platform for the replication of data, is a strongly consistent platform which helps you conform to legal requirements around the locality of data and many such things. It was built with exactly that in mind. Of course, we didn't quite put it that way, but I like your way of describing it.

So as we think about where this is going: 40 years ago, ARPANET allowed us to create networks of devices in a relatively open, application-oriented way. Then the web allowed us to create networks of pages of content, but again, that content was highly stylized. More recently, social media has allowed us to create networks of identities. All very important stuff. Now, as we start talking about digital business and the fact that we want to be able to rearrange our data assets very quickly in response to new business opportunities, whether customer-experience or operations-oriented, this notion of networks of data gives us an approach to doing that, so the data can be in service to existing business opportunities, to new business opportunities, and even available for future activities. So we're talking about creating networks out of these data sources, but as you said, to do that properly we need to worry about consistency and we need to worry about cost. The Fusion platform is a good foundation for doing this. Over time, however, we think it's going to require some additional capabilities: the ability to understand patterns of data usage, the ability to stage data in advance and predictably, et cetera. Where do you think this goes as we start conceiving of networks of data as a fundamental value proposition for technology and business?

Sure.
One of the first things that occurs to me when you talk about a network of data, if you consider it as parallel to a network of computers: you don't have a notion of read-only computers versus read-write computers. That would be silly. You want all computers to be roughly equal. If you have a network of servers, any of them can read, any of them can write, and any of them can store. Now, our Fusion platform brings that capability to your definition of a network of data. What we call live data is the ability to store replicas of the data in different data centers around the world, with the ability to write to any of those locations. If one of the locations happens to go down, it's a non-event: you can continue writing and reading from the other locations. That truly makes the first step toward building this network of data you're talking about feasible.

But I want to build on that notion a little bit, because we are seeing increased specialization: AI, GPUs, AI-specific processors. So even though we still look to general-purpose computing, we see some degree of specialization. Let me take that notion of live data and say I expect we're going to see something similar. The same data set can be applied to multiple different classes of applications, where each application may take advantage of underlying hardware advantages, but you don't have a restriction on how you deploy it built into the data. Have I got that right?

Absolutely. Our Fusion platform includes the capability to replicate across cloud vendors. You can replicate your storage between Amazon S3 and Azure Blob Storage. Now, this is interesting, because suddenly you may discover that Redshift is great for certain applications, while Azure SQL Data Warehouse is better for others. We give you the freedom to invent new applications based on whichever location is best suited for that purpose.
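The live-data behavior Jagane describes, where any replica is writable, a downed site is a non-event, and every replica gives the same answer, is typically achieved with majority-quorum agreement in the Paxos family of protocols. Here is a toy sketch of the quorum idea, not WANdisco Fusion's actual implementation:

```python
class QuorumStore:
    """Toy majority-quorum replication: a write commits once a majority of
    replicas accept it; a read asks a majority and keeps the newest answer.
    Any two majorities overlap in at least one replica, so every read sees
    the latest committed write, and losing a minority of sites is a non-event."""

    def __init__(self, n_replicas=5):
        self.replicas = [{} for _ in range(n_replicas)]  # key -> (version, value)
        self.majority = n_replicas // 2 + 1
        self.version = 0

    def write(self, key, value, reachable=None):
        targets = self.replicas if reachable is None else reachable
        if len(targets) < self.majority:
            # Refuse rather than half-apply: consistency over availability.
            raise RuntimeError("no majority reachable: write refused")
        self.version += 1
        for rep in targets[: self.majority]:
            rep[key] = (self.version, value)

    def read(self, key):
        # Deliberately ask a *different* majority than the writer used:
        # the guaranteed overlap means we still see the newest version.
        answers = [rep.get(key, (0, None)) for rep in self.replicas[-self.majority:]]
        return max(answers)[1]

store = QuorumStore()
store.write("account", "opened")
store.write("account", "funded")
print(store.read("account"))  # funded
```

Note the trade-off in `write`: if too many sites are unreachable, the write is rejected outright instead of landing on a minority of replicas, which is what keeps every successful read consistent no matter which replicas answer it.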
You've taken this concept of a network of data and applied a consistent replication platform. Now you have the ability to build applications in completely different worlds. And that's very interesting to us, because if we look at data as the primary asset of any company, consider a company like Netflix: their data, and the way they manage that data, is the most important thing to that company. We bring the capability to distribute that data across different cloud vendors and different storage systems and run different applications against it. Perhaps there's a GPU-heavy cloud that a GPU vendor offers: replicate your data into that cloud and run your AI applications against that particular replica. We give you truly the freedom to invent new applications for your purpose.

But very importantly, you are also providing, and I think this is essential, a certainty that there's consistency no matter how you do it. I think that's the basis of the whole thing, the Paxos algorithm you guys are using.

Exactly. The fundamental fact is that data scientists hate dealing with outdated data, because all the work they're doing may be of no use if the data they're applying it to is outdated, invalid, or only partially consistent. We give you guarantees that the data is constantly updated, live data. It's completely consistent: if you ask the same question of two replicas of your data, you will get exactly the same answer. There is no other product in the industry today that can offer that guarantee. That's important for our customers.

Now, building on that foundation, you're going to have to add some additional things to it. Pattern recognition, ML inside the tool: is that on the drawing board? I don't want you to go too far into futures, but is that the kind of future you see too?

We are a platform company with an excellent plug-in API, and I'll give you a simple example of one of its uses.
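One hypothetical shape for such a plug-in is a filter hook that redacts anything resembling a credit card number before a record goes over the wire. The `ReplicationPlugin` interface below is invented for illustration; it is not WANdisco's actual plug-in API.

```python
import re

class ReplicationPlugin:
    """Hypothetical plug-in interface, invented for illustration only."""
    def before_send(self, record: str) -> str:
        return record

# Matches 13 to 16 digits, optionally separated by spaces or dashes.
CARD_RE = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

class CardRedactor(ReplicationPlugin):
    """Redact credit-card-like numbers before a record leaves the site."""
    def before_send(self, record: str) -> str:
        return CARD_RE.sub("[REDACTED]", record)

plugin = CardRedactor()
print(plugin.before_send("charge 4111 1111 1111 1111 approved"))
# charge [REDACTED] approved
```

An ML-based classifier, as discussed in the conversation, would slot into the same kind of hook: replace the regular expression with a model call, and the replication path itself stays unchanged.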
We have banking customers, and they need to prevent credit card numbers from flying over the wire under certain circumstances. Our plug-in API enables them to do that. Plugging an ML intelligence program into the plug-in API is, again, a very simple development effort. We are facilitating such capabilities, and we expect third-party developers to be the vehicle for this; we already have a host of third-party developers and companies building to our plug-in API. We won't claim expertise in ML, but there are plenty of companies that will do that on our platform.

All right, that leads to the second set of questions I want to ask you about. We've defined what we call plastic infrastructure as a future for the industry. To make sense of that, we've looked at three phases of infrastructure, based not on the nature of the hardware but on the fundamental capabilities of the infrastructure. Static infrastructure is when we took an application and wired it to a particular class of infrastructure; when new load hit it, you often broke the infrastructure. Elastic infrastructure is the ability to take a set of workloads and have it vary up and down, so you can consume more infrastructure and then release it. It has a kind of rubber orientation: hit it with new load and it deforms for as long as the load lasts, then it snaps back into shape, so you have predictability about where your costs are. We think that increasingly digital business is going to have to think about plastic infrastructure: the ability to very rapidly have the infrastructure deform in response to new loads, but persist that new shape, that new structure, in response to how the load has impacted the business, if in fact that is a source of value for the business. What do you think about that notion of plastic infrastructure?

I love the way you describe it.
In our own internal terminology, we have this notion of live data and the freedom to invent, and what you've described is exactly that. Plastic infrastructure matches exactly our notion of freedom to invent. Once you've solved the problem of making your data consistently available in different clouds, different regions, different data centers, the next step, of course, is the freedom to invent new applications. You're going to throw experimental things at it. You're going to find there is specific business intelligence you can draw from the data by virtue of a new application, use it to make some critical decisions, perhaps improve profitability. That results in what you describe as plastic infrastructure. I really love that description, by the way. The cloud brought us elastic infrastructure; we've built a system on top of it that enables innovation and the invention of new ideas. That's plastic infrastructure. I really like the idea you're proposing.

So as you think about this concept of plastic infrastructure, obviously a lot of changes are going to take place in the industry. But Fusion in particular, by providing consistency, by increasing the availability and, more importantly, the delivery of data where it's required, facilitates that notion of plasticity at the data level.

Absolutely. The notion that you can throw brand new applications at it in a cloud vendor of your choice, the fact that we can replicate across different clouds, is important for plastic infrastructure. Perhaps certain applications work better in one cloud than another; you definitely want to try them out there, and if that results in some genuinely valuable applications, you continue running them. So your idea that elastic becomes plastic infrastructure matches perfectly with that.
We love this notion that we take the CIO's problems of mundane data management away and introduce the capability to invent and innovate in their space.

So let me ask you a very practical, simple question. Historically, the backup-and-restore people and the application development people didn't spend a lot of time with each other, and that has created some tension. Because of this ability to do live data, are we now able to bring those two worlds closer together, so that developers can think about building increasingly complex, increasingly rich applications, and at the same time ensure that the data they're building and testing with is in fact very close to the live data they're actually going to use?

Absolutely, we do bridge that gap. We enable the application developers to think of more complex, more sophisticated applications without worrying about the availability or the consistency of data. And the IT administrators and the CIO, who run the operations that need to deliver that, have the confidence that they can in fact deliver it with the levels of consistency and availability they need.

So I'm going to give you the last word on this. We've talked a fair amount now about this notion of networks of data and infrastructure plasticity. Where do you think this matures over the course of the next four or five years? And what should your peers, the CTOs of large businesses who are thinking about these data management challenges, be focusing on?

The first thing you have to acknowledge is that people need to stop thinking about machines and servers and start considering this as infrastructure they acquire from different cloud vendors. Different cloud vendors, because in fact there are going to be a handful of good cloud vendors that give you different capabilities.
Once you get to that conclusion, you need your data available across all of these different cloud vendors, and perhaps in your on-prem location as well, with strong consistency. Our platform enables you to do that. Once you get to that point, you have the freedom to build new applications, business-critical systems that can depend on the consistency and the availability of the data. That is your definition of plasticity and networks of data. I truly like it.

Yeah, great summary. We would agree with you that increasingly the CIO, or the CDO, whoever it's going to be, has to focus on how to increase the returns on the business's data. And to do that, they need to start thinking differently about their data assets, both now and in the future. Very, very important stuff. Jagane, thank you very much for being on theCUBE.

Thank you, Peter.

And once again, I'm Peter Burris, and this has been a CUBE conversation with Jagane Sundar, CTO of WANdisco. Thanks again.