Hi, I'm Peter Burris and welcome to another CUBE Conversation from our studios here in beautiful Palo Alto, California. Today, we're going to get a chance to talk about database migration and database technology evolution as it pertains to cloud computing. And to have that conversation, we've got Datometry here: Mike Waas, the CEO and founder of Datometry. Mike, welcome back to theCUBE.

Thanks for having us, excited to be here.

So, let's get the company update out of the way. What's going on with Datometry first?

All right, well, quickly for everybody. We are a venture-backed startup in San Francisco, and we're taking on, if you will, the database market, this $40 billion behemoth, from a very unique angle. And that is not yet another database, but we give IT leaders, for the first time, the opportunity to actually liberate themselves from database vendor lock-in: take their applications the way they are today, written for a particular database, usually one of these legacy data warehouses, and take them to the cloud just the way they are, without the hassle of rewriting and reinventing their business.

So, keep the applications that are creating value in place, move the data and the structure associated with the data so that it can be re-platformed to a new database manager, but still serve those applications and generate business value through those applications.

Exactly, because the value really is in the application, not in the database. This is what enterprises have been curating and investing in for the last 10, 20, sometimes even more years. And so for them, going to the cloud suddenly poses this huge problem: do I really want to rewrite these applications, just to end up with something that looks exactly like that in the cloud? So we allow them to do this at a fraction of the cost, the time, and the risk.

So you're out on the road talking to a lot of customers these days about this challenge. What do you encounter?
So it's very interesting to see how the Global 2000, and we're speaking primarily to the Global 2000 and Fortune 500, are coming around now to really address this problem. And people are on different trajectories and differently far along on this. There are a number of enterprises that are ready to pull the trigger, that have maybe another nine, 12 months of runway on their data center right now. And they are really looking for a tactical solution to get there. It's all or nothing, they're very excited, great customers to work with. Then there are customers who are still in the process of figuring it out. They know they have a little more runway, and they get to the point of, well, I understand which workloads would probably work in the cloud, let's start with those. And so it's great for us to work with these customers to help them understand: which are these workloads? How can you actually come up with a prioritized order of this goes first, this goes next, and so on and so forth? And then, to some extent, my favorites are the ones that are really in the early stages of the journey, because for them, it's really about getting a familiarity, getting acquainted with all this. And so for them, it's an exploratory phase. It's a lot of research and trying to understand what they have today and how it would map to the cloud. And obviously, as you can imagine, they have a lot of questions in their minds. So as a company, we work with all three of these, and our product naturally allows all of them to get value out of this at that very moment, right away, without having to wait for two, three years of planning and rewriting and remodeling applications.

Well, let's leave that third group aside, because one can spend about 20 years just trying to discover what they have. And instead, look at the workloads. What types of workloads are candidates today for this type of an approach?
Again, keeping the application relatively intact, relatively unchanged, moving the data and replatforming it to a more modern technology. What kinds of workloads are especially open to this approach?

So first off, I have good news, and that is it is way more than you actually would think. I come from a database background myself, the last 20, 25 years implementing databases, and we database people have always very much looked at operations and optimizing the last iota out of the machine, et cetera. What we're seeing in the cloud is really a fundamental change. It's no longer about millisecond parity, but about getting your data into the cloud and unlocking scalability, performance, price-performance ratio, the benefit that you get brand-new hardware every couple of months as a refresh, continuous software updates and improvements. And so all of this suddenly changes the migration from a one-to-one, millisecond-parity exercise to a real quantum leap from on-premises into the cloud. It's a much different story.

Well, let me unpack that a little bit, because if I have it right, what you're saying is that in the old way of thinking about database management, with enormous amounts of tuning, the point was to try to get as much performance out of whatever hardware platform you were running on as you possibly could. Now, when we go to the cloud, that constraint starts to go away. So we're not focused on getting the last iota of performance out of the hardware; we're focused on getting the last stretch or last stream of value out of the data and the application, because the hardware constraint no longer obtains in the same way. Have I got that right?

Absolutely, yes. So it's really about IT running faster, figuratively, rather than running a particular workload faster on a piece of metal.
And that really changes the equation fundamentally, because then, coming back to your previous question, a lot of workloads suddenly benefit from going to the cloud. And initially, when we started tackling the problem of data warehousing, I personally thought it was probably going to be mostly about the analytics, about the downstream consumers of the data. But then very quickly it turned out that ETL is very often such a, let's call it a complex, very involved process, that moving it wholesale to the cloud, without having to undo it and then reinvent it and rewrite it, is just such a godsend for the enterprise. And so, back to your original question, it's really workloads across the entire board.

So if I were to then think through the process, let's say that I'm the CIO and I'm thinking this process through. Number one, by not having to focus on like-to-like, I'm changing my thought process, because historically it's been, oh, you can't move that database manager, because you're focused on that like-to-like kind of a move and, as you said, you never get that millisecond parity. So if you relax that constraint, now I can focus on, look, I'm just going to get it up there and have it run, utilizing the more modern technology. I can go back over time and improve and tune the performance if I want to. But day one, I'm not focused on the underlying hardware being the same and the stack being the same. I'm just focused on the output and the outcome of using the application being the same.

Exactly. And there is something very critical you just pointed at: in a classic migration, what happens is, the moment somebody opens the hood on the current system and says, hey, we're replatforming this to the cloud, there is a myriad of people coming out of the woodwork saying, hey, I have all these janitorial tasks I've been sitting on for the last five years.
And so the migration then very quickly has that scope creep and turns into a huge furball of all sorts of unmanageable stuff. And that is why, I believe, Gartner put it at 60 to 70% of migrations failing, because it's just spiraling out of control. What we give IT leaders is the ability to take these two things apart: move first, move everything, move right now, move on an incredibly short timeline. And then afterwards, look at your applications. There are usually three categories. One of them is, run these forever. That's your mission-critical but fairly established applications. Then there's a second category of, well, we always wanted to rewrite this thing, we just never found the time, and for all sorts of business reasons we want to rewrite it. So modernize this at your own pace, on your own dime.

But on that one, just to interrupt for a second: you're focused on modernizing maybe the user interface, the integration, how it gathers data from other places, making it faster, cheaper, simpler. You're not focused on modernizing the underlying hardware.

Right, yeah. You're really looking at the business at this point. That's a critical piece. And then the last category is stuff that, well, we should probably deprecate these applications anyway. And it's usually a much smaller group. But that separation of migration and modernization, that is what really resonates with IT leaders.

Yeah, because at the end of the day, as you said, a lot of the people that come out of the woodwork are the people who have built their careers on tuning the database manager to a particular set of targets. And when you say, well, the target no longer is the operating constraint, the constraint now is, can we achieve the same outcome? And we'll focus on improving it or adjusting it or changing it later, based on the business needs.

Exactly.

Okay, so let's get back now to the last one. This leads to a different way of thinking about your database manager.
Whereas historically, database professionals have thought in terms of, to get this outcome, how much do I have to pay? When we think about the notion of digital business, data being an asset that can be combined and recombined and applied and copied in a lot of different ways to create potentially derivative value, it sounds as though you're proposing that we can unlock potentially unlimited streams of future value out of data once we get it to a place where we're not worried about the impact on the underlying hardware. Is that right?

Yeah, think of it as doing away with the data silos. It's not about getting your database into the cloud and then having it sit in a database, but really getting the data into the cloud and making it available to the whole myriad of processing techniques and applications that cloud service providers and third parties put out there. And having the ability to process your data with AI, ML, you name it, advanced analytics, et cetera, without having to shovel it out of the database and back in every time you do that. So getting the data there is really the holy grail for enterprises in the next five to 10 years. That's what they need to solve.

Now, we've seen a lot of our clients are talking about 2019 being the year that they actually put their first strategic stake in the ground about how they're going to use cloud. And by that I mean not the emergent, opportunistic green field stuff, not moving personal productivity, but actually starting to think about those high-value traditional applications, what we call HVTAs, and starting to think about what role they're going to play. So as you look forward, where do you think this technology is in a couple of years, in terms of simplifying this whole process for enterprises?

So first off, 2019, I believe, is going to be the year of the cloud for the enterprise.
It's been a long time coming, but finally, I think, we've reached a critical threshold, and it's wildfire out there right now. It's fantastic to watch. Now, taking this kind of technology that we're building and spinning it forward, think of it a little bit like VMware for databases. That is what one of our first prospects called it once I explained what it is. At first we scratched our heads, like, I'm not quite sure how that fits the description. But then we did some archeology and realized the parallels between these approaches. First off, what we do is virtualization, so naturally there's a technical parallel there. But then, when you look at today, VMware doesn't make the money with the hypervisor. What they make the money with is all the functionality that they were able to layer on top of it, dozens and dozens of v-products. That's where the real value comes from. And similarly, in our environment, building that hypervisor today is great for that immediate shift to the cloud, et cetera. But then, long term, there's a much larger value proposition, and that's really about the functionality that can be layered on top of this. And think about it that way: we're creating a new geography that didn't exist before, where you previously had functionality either sitting in the application, and then copied across thousands of applications, or you tried to shoehorn it into the database, and that usually didn't go so well either. So we give people, long term, that whole vision of functionality in the space in between that is much richer than the database or the application itself. So that's where we go.

All right, so I have one last question, Mike, and then I'll let you go. You mentioned earlier that there are certain workloads that people may be more willing to move, but it's not necessarily limited by technology. But say I'm a CIO, I get the Datometry divining rod, and I walk into my shop and I start moving it around.
What class of applications is that stick pointing at? What in our particular environment, a particular machine or a particular database manager, does the stick keep pointing towards?

We give you something way better than that stick. Since we sit so low in the stack, we're agnostic to the application, and it really depends on what the parameters of an application are, et cetera, which we have no visibility into. So what we give you is a system that we call Qinsight, which allows you to take the workload logs from your existing data warehouse, for example, and simulate them through our system. And we actually tell you what would happen if you ran on one or the other database. At the end of this process, we give you a scorecard that lets you tease apart: which applications do one or the other thing? What are the features? What's the complexity that's in there? And we give people that at a resolution they've never seen before. That is the stepping stone at the beginning. That's the rod, really, that allows people to select: okay, this application first, this one next, and so on and so forth. And that is why I said at the beginning, I love talking to enterprises that are at the beginning of their journey, because this is where we already bring a huge benefit to the table that they were really struggling to get. And so this is where the circle closes.

Wow, interesting stuff, Mike. So once again, Mike Waas, the CEO of Datometry. We've been talking about database migration and new tooling and technologies for facilitating and simplifying that in large organizations. Mike, thank you very much for being on theCUBE.

It's been a great pleasure. Thanks for having us.

And once again, I'm Peter Burris, and this has been another CUBE Conversation. Until next time.
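Editor's note: the log-driven assessment Mike describes, replaying a data warehouse's query log and scoring each application's feature usage to prioritize migration, can be sketched in miniature. This is a purely hypothetical illustration of the general idea, not Datometry's Qinsight product: the feature patterns, log format, and application names below are all invented for the example.

```python
import re
from collections import Counter, defaultdict

# Toy "workload scorecard": tally which legacy SQL features each application
# in a query log uses. The feature list and regexes are invented examples,
# not an actual compatibility rule set.
LEGACY_FEATURES = {
    "qualify_clause": re.compile(r"\bQUALIFY\b", re.I),           # Teradata-style QUALIFY
    "set_tables":     re.compile(r"\bCREATE\s+SET\s+TABLE\b", re.I),
    "stored_proc":    re.compile(r"\bCALL\s+\w+", re.I),
    "recursive_cte":  re.compile(r"\bWITH\s+RECURSIVE\b", re.I),
}

def score_workload(log):
    """log: iterable of (app_name, sql_text) pairs taken from a query log.

    Returns a per-application Counter of query count and feature hits,
    which a team could use to order which workload migrates first."""
    scorecard = defaultdict(Counter)
    for app, sql in log:
        scorecard[app]["queries"] += 1
        for feature, pattern in LEGACY_FEATURES.items():
            if pattern.search(sql):
                scorecard[app][feature] += 1
    return scorecard

# Two hypothetical applications drawn from an invented log.
log = [
    ("finance_etl", "CALL nightly_load()"),
    ("finance_etl", "CREATE SET TABLE staging.accts (id INT)"),
    ("bi_reports",  "SELECT * FROM sales QUALIFY ROW_NUMBER() OVER (ORDER BY amt) <= 10"),
]
card = score_workload(log)
# finance_etl leans on stored procedures and SET tables; bi_reports uses
# QUALIFY -- exactly the kind of per-application signal used to decide
# "this application first, this one next."
```

A real assessment would of course simulate execution against the target database's dialect rather than pattern-match, but the output shape, a per-application feature and complexity scorecard, is the point of the sketch.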