Hi, I'm Peter Burris of Wikibon. Welcome to another CUBE Conversation. We're going to have a great time over the next few minutes talking about the role that performance in the data plane is going to play in making possible the options provided by the cloud, while at the same time allowing us to actually run applications at the speed and the scale the business requires. To do that, we've got Marc Fleischmann, who's the CEO and co-founder of Datera, and Guy Churchward, who's the executive chairman of Datera. Welcome back to theCUBE, guys.

Thank you for having us, Peter.

So Marc, I want to start with what I started with: at the end of the day we've got this enormous agility that we're provided within the cloud stack, but you still have to run on real computers that have real constraints, and everybody knows there is no greater constraint than maintaining the state of data and moving data. So how does Datera address those issues?

So data freedom obviously is not only about automation. The promise of software-defined storage is seamless automation, but unfortunately, in many cases, with unimpressive performance. In our case, we've engineered the whole data path down to the physical devices ourselves, so the level of performance we can deliver is millions of IOPS across the data center at less than 200 microseconds of latency. And most importantly, on standard servers over standard protocols, so nothing fancy in terms of hardware is required. That's the true promise of software-defined storage.

Now, you mentioned automation. That kind of performance has got to open up new classes of automation potential, so that the storage or the data resources are that much easier to envision, that much easier to apply, that much easier to exploit by the development community. Tell us a little bit about how automation plays into this.

Absolutely. Once you've made data delivery frictionless, and you've made data orchestration and data automation frictionless, you do unlock new classes of applications. What we're specifically seeing is that folks who traditionally run an array of databases on very dedicated, proprietary hardware get the data trapped in those silos and have a real hard time extracting the value of that data. We see a lot of database farms coming onto our unified platform across the data center, basically being able to really extract the value of the data across a range of applications.

Now, over the last few years we've been investing pretty heavily in storage area networks and arrays and those types of resources. Flash is changing that, but it sounds as though you guys are actually making it easier to bring servers into the mix. What's the real direction you see? Where is this resource going to be managed, and what's the opportunity?

So ultimately the resource should be managed by the applications. It should be driven by the applications and managed by machine learning. As we understand the requirements of every individual application, machine learning should manage the physical resources on the servers, the server capabilities you put underneath it, and then obviously roll in new server hardware as the technology improves over time.

So it's really being driven by the server. That's where the market opportunity is coming from.

That's right, yes.
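(To make the "driven by the applications, managed by machine learning" idea concrete, here is a minimal, hypothetical Python sketch of intent-based volume placement. The class names, capability numbers, and the place_volume helper are illustrative assumptions, not Datera's actual API; a real system would also feed live telemetry back into the decision.)

```python
# Hypothetical sketch: an application declares its storage intent, and a simple
# policy engine picks the cheapest media/placement class that can satisfy it.
from dataclasses import dataclass

@dataclass
class StorageIntent:
    app_name: str
    min_iops: int          # sustained IOPS the application needs
    max_latency_us: int    # latency ceiling in microseconds
    replicas: int = 3      # copies kept for resilience

# Illustrative capability table for server/media classes in the cluster,
# ordered from cheapest to fastest.
MEDIA_CLASSES = [
    {"name": "hybrid",     "iops": 50_000,    "latency_us": 2_000},
    {"name": "sata-flash", "iops": 200_000,   "latency_us": 500},
    {"name": "nvme-flash", "iops": 1_000_000, "latency_us": 150},
]

def place_volume(intent: StorageIntent) -> str:
    """Return the cheapest media class that can satisfy the application's intent."""
    for media in MEDIA_CLASSES:  # cheapest first
        if media["iops"] >= intent.min_iops and media["latency_us"] <= intent.max_latency_us:
            return media["name"]
    raise RuntimeError(f"no media class satisfies the intent for {intent.app_name}")

if __name__ == "__main__":
    oltp = StorageIntent(app_name="orders-db", min_iops=300_000, max_latency_us=200)
    print(place_volume(oltp))  # -> nvme-flash
```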
So the last question I have here is, when we think about new technology, new classes of automation, new trends in the industry, people always immediately go, "Yeah, but new companies." Where does Datera fit in its life cycle as it works with customers and as it delivers value?

Yeah, if you look at the market today, server-based storage is already larger than traditional array-based storage, and it's growing at 5x year over year. Since we were last on theCUBE, about two years ago, we are now looking at a 240% CAGR every year. So the market has clearly come our way. This is the time for this kind of product.

So the market's good, the company's good, the trends are good.

Yes.

As we think ultimately about where this ends up in a few years, what role will Datera play within the evolved computing industry? What do you see from it?

Yeah, given that we have broad data orchestration, enterprise performance, and choice of hardware, we really do see ourselves as the data foundation for the software-defined data center. What I mean by that, again as an operational model, is that we are to data what Kubernetes is to compute, across a number of operating environments. So it's a really broad data foundation for everyone who wants to deliver IT as a service.

So Guy, I have a very simple question for you with a very complex answer. One of the places where this seems to be especially important, or the need is especially great, is the world of analytics, especially as we try to close the loop between the analytical systems and the operational systems. How do Datera and analytics come together? Not just in the use of analytics to make Datera better, but Datera making analytics applications run better?

Yeah, and as you said, an easy question, complicated answer. In reality, what companies are trying to do is run their analytics at the speed at which they're competing in their market space, which means it has to get a lot faster. Today's classic environment is ETL with a data lake: parking stale data and analyzing it post-event. Tomorrow's environment, where people are using AI and ML, is in-stream and in real time. And so part of that is you have very, very fast applications, both from a performance perspective and in how long their life cycle is, because people are doing A/B testing on the web, they're doing analytics on the fly, and it really is a different world, a different pace.

I mean, when I was in business early and I had hair, we used to look at organizations that had applications lasting 10 or 20 years. Now we're looking at enterprise applications that are up and down within a period of months, if not weeks. So you have to manage that life cycle without having to invest in infrastructure to support it; that age-old adage of "you don't buy an application if it's in 1.0" has gone, because by the time you're into 1.1, that opportunity has disappeared as well.

And so part of what I saw in the attraction with Datera is that because it's absolutely software-defined and all of the resilience is handled in the software, not the hardware, there isn't the infrastructure burden, and it has much more agility. It can provide tier zero and tier one, so again, you land and expand: in test and dev you have the same environment, and by flipping a few switches you can now have the tier-one "-ilities," and then you can drop back down in that life cycle.
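(To make the in-stream versus post-event-ETL distinction Guy draws concrete, a minimal Python sketch follows, assuming a simple click/purchase event feed; the event shape, window size, and helper name are illustrative and hypothetical, not anything Datera ships, and a production pipeline would sit on a streaming framework rather than plain Python.)

```python
# Minimal illustration of in-stream analysis: compute a running metric as
# events arrive, instead of parking data in a lake and analyzing it post-event.
from collections import deque
from statistics import mean

WINDOW = 1_000  # number of most recent events kept in the sliding window

def stream_conversion_rate(events):
    """Consume click/purchase events one at a time and yield the conversion
    rate over the last WINDOW events, so a decision (for example, which A/B
    variant is winning) can be made while the experiment is still running."""
    window = deque(maxlen=WINDOW)
    for event in events:                # events arrive continuously
        window.append(1 if event["type"] == "purchase" else 0)
        yield mean(window)              # running rate, available in real time

# Usage: feed events as they arrive and act on the running rate immediately.
sample = [{"type": "click"}, {"type": "purchase"}, {"type": "click"}]
for rate in stream_conversion_rate(sample):
    print(f"conversion rate so far: {rate:.2f}")
```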
And it doesn't matter whether it's on premises, a distributed environment, or in the cloud: it's the same infrastructure, the same architecture. So, back to what Marc said, you have data freedom.

So we're trying to tie the physical realities of data to the virtual realities of machine resources in IT, to the cloud realities of the new wave of applications.

That's exactly right.

Marc Fleischmann, CEO and co-founder of Datera; Guy Churchward, executive chairman of Datera. Thanks very much for being on theCUBE.

Thanks for having us, Peter.

And once again, this is Peter Burris, Wikibon. Thanks for watching theCUBE.