Peter Burris: Hi, I'm Peter Burris. Welcome to another CUBE Conversation, brought to you by theCUBE from our beautiful studios in Palo Alto, California. Great conversation today. We're going to be speaking with Datera about some of the new trends in how we're going to utilize data within the business with greater success, generating more value in service of customer objectives. To do that, we've got Mark Fleischman, who's the CEO and founder of Datera. Mark, welcome to theCUBE.

Mark Fleischman: Thank you.

Peter: And Guy Churchward, who's the executive chairman at Datera.

Guy Churchward: Thank you, Peter.

Peter: So guys, this is a great topic and a great conversation, and it's very timely for the industry. One of the reasons is that we've heard a lot about the cloud-native stack. Now, the cloud-native stack is increasingly going to reach into the enterprise, and not just demand that everything come back to the cloud, but bring the cloud more to the enterprise. One of the things that's still something of a challenge is: how do we bring data, given its native attributes, into that model more successfully? Mark, what are the issues?

Mark: So look, ultimately we believe it's all about data freedom: the capability to extract the value of data across the enterprise. As long as we continue to think in terms of proprietary system silos, where data is trapped and can't move freely across the enterprise, we're not going to get there. Ultimately, it requires changing our thinking about infrastructure from a hardware-centric perspective to a service-centric perspective, where the applications drive the needs from the data, where an application-centric perspective automatically drives how data is actually consumed across the enterprise. We've been working at that through software-defined storage, HCI, and other approaches, but at the end of the day we have to make sure we're doing so in a way that marries to the realities of data.
Peter: Talk to us a little bit about how Datera is providing that substrate, one that is native to data but also native to the cloud.

Mark: Absolutely. I would describe it this way: Datera is to data what Kubernetes is to compute. What do I mean by that? First of all, it's all about data orchestration. We orchestrate the data, just like Kubernetes orchestrates compute. That's the foundation of our platform. Now, if we didn't deliver enterprise performance, so that we can actually replace existing storage, we wouldn't be able to deploy broadly. So we have enterprise performance as well. And lastly, to get away from a hardware-centric model, we offer a wide, future-ready choice of hardware. Those are the three key tenets that we see as getting us to that vision.

Peter: So Guy, you've been in this business a long time. You've looked at a lot of changes in technology, from arrays, where we were mainly focused on persisting data, to newer technologies focused more on delivering data to new classes of applications. From your perspective, how do this message and Mark's vision line up with customer needs?

Guy: Yeah, no, I appreciate it. I mean, that was one of the reasons that, when I had the opportunity to work closely with Datera, I jumped into it. Because part of this is, as Mark said, data freedom: unlocking data, in other words, from the boundaries of a physical location. I think we always espouse and believe that we want to move towards a cloud, a pure cloud model. But we're going to be in this transition for five, six, seven years, where we have on-premises, a bit of hybrid, a bit of distributed, and things like intelligent edge. So the whole concept is to say: how do I utilize data, no matter where it is, in a fabric or a mesh? And I think the industry we all live in, sort of by accident, tries to own the data.
You know, it doesn't matter if we own it in the physical construct of a data center, the physical construct of a piece of hardware, or a proprietary format. In essence, you have these data silos absolutely everywhere. And so for me, to move to a cloud, you've got the simplicity you need, you've got the orchestration you need, but you also need this freedom outside the bounds of a physical location or a piece of tin.

Peter: I want to return to the issue of performance and the need for performance, because the world you guys just laid out makes an enormous amount of sense to me and the Wikibon community. But it does mean that data generated by this application in this location may have value to some other application somewhere else.

Mark: Absolutely.

Peter: It may have completely different performance attributes. So let's talk about that need, again, this notion of a native-data approach to incorporating data into the cloud. How does the performance angle really work?

Mark: I would argue that where traditional software-defined storage, SDS, fell short was exactly on the promise of performance. We solved that: we contributed a significant part of the Linux data path itself. The way we architected the system, we deliver true primary-application performance. That, in combination with the ability to orchestrate data across the data center, across multiple data centers, and ultimately across the data center and the cloud, gives you the best of both worlds. It gives you the ability to serve primary workloads across multiple protocols, but to serve them location-independently, wherever you like, because we orchestrate the data to those places.

Guy: So that's quite good. I'm sorry, it's the coffee; it's going to kick in. I mean, part of it is not just that, but it's also the life cycle.

Mark: Very true.
Guy: I mean, this is the thing that kind of attracts me. And you mentioned it, what you learn, with the amount of hair I don't have now and the gray beard I've got: there's one thing about these data boundaries and things getting locked in. The other one is the speed at which people want to build an application. They need it to have the enterprise "-ilities," and then they'll take the application down. If you think back to when we started in the industry, an app would last 20 years, then 10 years, then five years. And now somebody wants an enterprise-grade application up and running within two or three months, which is preposterous but needs to be done. And then it might be down within a month.

Peter: Oh, 15 years ago it took us two or three months just to create the test data required for the application.

Guy: Well, and how many people used to tell you to never use an application if it's a 1.0? But we're talking about an application that, in its 1.0 period, is actually going to serve its community. It's the most critical thing. Data is it for a company. If your analytics don't run as fast as your company's competitive space, you're behind. So if you're going to analyze something, the application you bring up to analyze it has to be critical to your business. And it's going to go up, and it's going to come down. In other words, it's going to go from test and dev up into production, tier zero, then tier one, tier two, tier three, and then out into an archive, in a period of time in which a 1.0 would normally still be gestating. So you need a platform that has that ultimate agility, and again, it can't be bound by anything. This is something that Datera has as unique. This is why I like software-defined, and why I believe the time is now for this space. Everything prior to SDS is basically what I call new legacy.
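To make the lifecycle Guy describes concrete, here is a minimal sketch of an application's data being demoted from tier zero down to archive as it ages. The tier names follow his description; the `Volume` type and `demote` function are invented for illustration and do not reflect Datera's actual API.

```python
from dataclasses import dataclass
from enum import IntEnum

class Tier(IntEnum):
    """Tiers in the order Guy describes, from primary production
    storage down to archive."""
    TIER_0 = 0   # mission-critical primary workloads
    TIER_1 = 1
    TIER_2 = 2
    TIER_3 = 3
    ARCHIVE = 4

@dataclass
class Volume:
    name: str
    tier: Tier

def demote(vol: Volume) -> Volume:
    """Move a volume one step down the lifecycle, stopping at archive."""
    if vol.tier < Tier.ARCHIVE:
        vol.tier = Tier(vol.tier + 1)
    return vol
```

A platform with the "ultimate agility" he describes would drive these transitions automatically, by policy, rather than having an administrator re-home the data by hand at each step.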
You know, it doesn't matter whether it's an array or it's hyperconverged; they're great and they've got their place, but each one of them has this fixed boundary that lets you flex only inside its own control. Businesses aren't like that. They can't be run like that, and applications can't be built like that now. It's all multi-cloud; it's all going to be burst.

Peter: Well, let's build on that. So Kubernetes describes, as you said, a cluster of compute. When you pull away the covers, it's really a network of compute, a network of compute resources that Kubernetes has visibility into, so it can move elements where they need to be to be optimally utilized. So where is Datera in this relationship between resources as it builds an orchestrator, a manager, a network of data elements, and pulls that into something that makes it easy for developers to do what they need to do, operators to do what they need to do, and the business to do what it needs to do?

Mark: Yeah, so you could call Kubernetes a network of compute or a swarm of compute, right? The power of Kubernetes is that it abstracts the infrastructure to a level where it gets delivered continuously to the application, on demand. We do exactly the same thing for data: for the ability to store, manage, and ultimately life-cycle data. Simply label-based, like Kubernetes is, you specify the service-level objectives for every individual application, and Kubernetes pretty much does all the rest of the job, completely independent of the hardware underneath. Again, we do that for data. You have certain access requirements, protocols, authentication, security; you have certain performance requirements; you have certain reliability requirements. You articulate them simply in similar SLOs, service-level objectives, and Datera does all the actual implementation automatically across the data center.
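Mark's label-based, SLO-driven model can be sketched roughly as follows: the application declares objectives, and the platform, not the administrator, resolves them to a concrete placement. The SLO fields and pool properties here are invented for illustration; they are not Datera's actual schema or API.

```python
from dataclasses import dataclass

@dataclass
class SLO:
    """What an application declares, by analogy to Kubernetes labels:
    access, performance, and reliability objectives."""
    protocol: str      # access requirement, e.g. "iscsi"
    min_iops: int      # performance requirement
    replicas: int      # reliability requirement

@dataclass
class Pool:
    """A candidate backend; the application never names one directly."""
    name: str
    protocols: set
    iops_available: int
    max_replicas: int

def place(slo: SLO, pools: list) -> str:
    """Return the first pool satisfying every clause of the SLO."""
    for pool in pools:
        if (slo.protocol in pool.protocols
                and pool.iops_available >= slo.min_iops
                and pool.max_replicas >= slo.replicas):
            return pool.name
    raise LookupError("no pool satisfies the SLO")

pools = [
    Pool("nearline", {"nfs"}, 5_000, 2),
    Pool("all-flash", {"iscsi", "nfs"}, 200_000, 3),
]
print(place(SLO(protocol="iscsi", min_iops=50_000, replicas=3), pools))
# prints "all-flash"
```

The point of the analogy: just as a Kubernetes scheduler matches pods to nodes via labels and resource requests, the data layer matches workloads to placements via declared objectives, independent of the hardware underneath.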
So now you get to a point where, in a modern, software-defined data center, I would argue we are the data foundation. In those kinds of scenarios, we can co-orchestrate data, since you mentioned Kubernetes, specifically with Kubernetes and its compute. Obviously we work in other environments as well. We work equally well with VMware, and with a number of other cloud orchestration frameworks, but Kubernetes is a really good example here.

Peter: So who's going to buy it? Because, going back to this issue of the orchestrator, developers clearly need this because they want access to real data, but they typically don't think in terms of underlying data structures; if the data is available, that's all they care about. Data administrators? Business people? Who do you find among your customers today is really, not making the initial contact, but actually driving the adoption of this new data fabric?

Guy: So Mark, I know you'll answer it more accurately than I will, but just to come down from a higher level: there seem to be two types of people inside of large companies. One is a project owner. So, for instance, "I've been blessed with the job inside of BMW of doing autonomous cars, and I'm tying together a very complicated pipeline that has to be extremely agile." That's one type of person we look to, to buy in and move this forward. And the other one is an internal service provider to the enterprise. In other words, instead of being a group that has a physical job, what I'm actually doing is saying: I'm now going to be a service provider, or a cloud provider, or a resource provider, to an organization that is embracing a digital economy or a digital transformation. So those are the two types of person inside an organization.
I think the places we struggle, it would be fair to say, are the tire kickers; there's always going to be a geek somewhere who wants to kick the latest cool technology. So we get involved with that, and by the time you go all the way through it, there's no project there; they just really enjoyed themselves, and so have we. But in essence, there are enough people now who recognize: my business is going through this transformation, I need to get out of my technical debt, I'm moving my business into this digital economy. It's normally around machine-learning applications, Kubernetes, things that are fast-moving. And they need that level of "-ility" that they're used to getting through fixed, bounded technology. So we're actually seeing that as a service-provider play, both external and internal; and internal service providers inside the enterprises are something we're very keen on.

Mark: And let me give you perhaps a few examples. We're looking at Fortune 2000 companies. A good example would be one of the top airlines in the world that is replatforming from a more rigid, siloed IT to deliver all their applications to internal and external customers as a service. It would also be digital businesses, whose currency really is speed and agility, and obviously data is their currency. So we're looking here at one of the top travel-fare aggregators; that's one of our customers. Interestingly, we are their tier-zero storage. That's quite an endorsement of the performance aspect. We are also in one of, I would say, the leading service providers outside of the typical crowd; you'd think of them as one of the up-and-coming guys. So those are typical markets and customers: Fortune 2000 companies that are replatforming to cloud and hybrid cloud, and digital service businesses.

Guy: But it is mostly people who are basically transforming their data center into a modern data center.
They're embracing distribution, and then cloud. But they're not going wholesale and just saying, "we're done here." They have the practicality of: the first thing I need to do is free up my data and make my data center agile, and then decide how I want to distribute it.

Peter: Mark Fleischman, Guy Churchward of Datera, thank you very much for being on theCUBE.

Mark: Thank you very much, Peter.

Guy: Pleasure, thank you.

Peter: And once again, this is Peter Burris from our CUBE studios in Palo Alto, California. Thanks very much for participating in this CUBE Conversation with Datera.