From Las Vegas, it's theCUBE, covering AWS re:Invent 2018, brought to you by Amazon Web Services, Intel, and their ecosystem partners.

Well, welcome back here to The Sands. We continue our coverage here on theCUBE of AWS re:Invent as the day starts to wind down. There's still a lot of energy out on that show floor, which is packed with all kinds of great exhibits and a lot of interested folks still making themselves at home. Tom Murphy is with us now, along with Justin Warren and John Walls. He's the Chief Marketing Officer at Turbonomic. Tom, glad to have you here on theCUBE. Thank you for joining us.

Yeah, great to be here.

All right, so just tell us a little bit about Turbonomic first, and then we'll drill down a little bit there. That's why you're here at AWS, but what do you do, for the folks at home?

Yeah, sure. Ultimately, what we're doing is workload automation for hybrid cloud, and workload automation for us is where we go out, we discover the workloads, and we optimize performance, cost, and compliance simultaneously in real time. What that gives customers is what we call smart workloads: self-managing, anywhere, in real time. The outcome for customers is guaranteed performance, assured compliance, and the elimination of a lot of the complexity they're experiencing today.

So you're trying to grease the skids, more or less, right?

Grease the skids, make sure that their lives are easier and they can actually accomplish the outcomes they want.

Complexity's been a theme that we've been covering in the last couple of days. It's come up quite a bit. Customers are struggling with the amount of choices. We had Andy Jassy on stage today, again announcing another zillion products that AWS has created, and that gives you a lot of flexibility.
It means that you can optimize for particular choices that suit you very well, but being able to choose between them can be a pretty daunting task. How does Turbonomic help customers decide which of these choices is right for them?

Yeah, what we see from our customers is that they're typically looking at three platforms. They're still running on-prem with VMware, they're looking at AWS, of course, that's why we're all here, and they're looking at Azure as well. So when it comes to choices, they want the flexibility to decide where the workloads should run. By looking at the workloads rather than the infrastructure, we abstract the work that's running, and we can model, for example, a VM workload that's running on-prem: what will that look like when it runs on AWS? What will it look like when it runs on Azure? By abstracting the work from the underlying infrastructure, we give customers the flexibility and the simplicity to understand and de-risk any migration projects they have.

Yeah, so you identify where something could go based on what the workload is. Do you just tell customers what that is, or are you able to automate that decision-making process for them?

Yeah, so when a customer decides to deploy workloads, they don't necessarily know how big the resources should be or where they should be placed. Our analytics engine goes out and understands the complete infrastructure. We're agentless: we plug into vCenter, we plug into Azure, we plug into AWS, and we pull the configurations back. So when we decide where to place a workload and how big it should be, we know best at that time.
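The placement-and-sizing decision described above can be sketched roughly as follows. This is an illustrative example only, not Turbonomic's actual engine; the instance names, sizes, prices, and headroom factor are hypothetical:

```python
# Hypothetical catalog of instance types (names, sizes, and prices invented
# for illustration; real clouds have far more combinations).
INSTANCE_TYPES = [
    {"name": "small",  "vcpu": 2,  "mem_gb": 4,  "hourly_usd": 0.05},
    {"name": "medium", "vcpu": 4,  "mem_gb": 16, "hourly_usd": 0.15},
    {"name": "large",  "vcpu": 8,  "mem_gb": 32, "hourly_usd": 0.30},
    {"name": "xlarge", "vcpu": 16, "mem_gb": 64, "hourly_usd": 0.60},
]

def rightsize(peak_vcpu_used, peak_mem_gb_used, headroom=1.2):
    """Return the cheapest instance whose resources cover peak consumption
    plus a safety headroom, or None if nothing fits."""
    need_vcpu = peak_vcpu_used * headroom
    need_mem = peak_mem_gb_used * headroom
    candidates = [t for t in INSTANCE_TYPES
                  if t["vcpu"] >= need_vcpu and t["mem_gb"] >= need_mem]
    return min(candidates, key=lambda t: t["hourly_usd"]) if candidates else None

# A VM allocated 8 vCPU / 32 GB on-prem but only consuming 3.2 vCPU / 10 GB
# at peak can land on a smaller, cheaper instance in the cloud.
choice = rightsize(peak_vcpu_used=3.2, peak_mem_gb_used=10)
```

The point of the sketch is the decision rule: size from observed consumption (plus headroom), not from the on-prem allocation.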
Should the demand change, we can dynamically adjust the size of the workload and where it lives in real time, automatically. We can scale out on-prem, multiplying the number of instances, or we can scale up, which means adding more resources to the actual workloads.

And what about that decision? Because you talk about on-prem, and a lot of people, as we've heard here, are making the move over to public cloud, and there's a little bit of pain there for some people, right? There are some barriers. What are you seeing, and how are you trying to lead people through that consideration so they get it? All right, it's going to hurt a little bit, maybe, but we're going to be a lot better off at the end of the day.

Yeah, for sure. Ultimately, what we see, especially on a lift and shift, where customers take existing workloads and move them to the cloud: when you think about how utilized those workloads are on-prem, they tend to run under 50% utilization. So if you take that box of resources you've defined on-prem and just pick it up and drop it into the cloud, you're 50% over-provisioned. Part of the pain point is knowing exactly the resources they need: understanding not just what you allocated, but what you consume, which is a smaller figure, and provisioning in the cloud based on consumption rather than allocation. That's a quick, efficient use case for making sure that when they get there, they use exactly what they need, without compromising performance: the right performance at the right cost.

Yeah, and this isn't a static decision either, because as we're seeing, there are new announcements every day. We get new instance types that have been announced at this show, but also workloads, and the demands for what customers need to do with those workloads are constantly changing.
So you need to be able to react to that and to change what the right option is from moment to moment.

And then you add reserved instances on top of that, right? So there's the complexity of what instance type to use, let's say there are millions of choices when you look at the combinations, and then new ones are introduced regularly, so how do I take advantage of that? There's discounting that's applied, there are discounts and bundles that come out, and there are also RIs. Think of two metrics for RIs. One is utilization of the RI: if I actually buy RIs and invest in RIs, I want to make sure I use them. And the second is, out of all the instances that I have, what percentage of my environment is actually covered by RIs? Think of that as coverage. So the two metrics that we look at closely with RIs are coverage, which means I'm taking advantage of RIs, and I'm not just covering 1%, I'm doing more than that, and utilization, which means that of the ones I've purchased, I'm making sure I'm using them, so they're not going to waste.

Yeah, so clearly you've got plenty of customers who've done this successfully. So what does doing this well look like?

Sure, well, they start with an assessment, where in many cases they look at their on-prem environment, right-size it, and run models and plans: when I pick up workloads and move them to the cloud, what do I need when I get there? Once they're in the cloud, that's really just the beginning of the journey, and they continually optimize.
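The two RI metrics described above, coverage and utilization, boil down to simple ratios. A minimal sketch with illustrative numbers, not tied to any real billing data:

```python
# Illustrative sketch of the two reserved-instance (RI) metrics:
# coverage = what share of running instances are backed by an RI,
# utilization = what share of purchased RI hours were actually consumed.

def ri_coverage(ri_covered_instances, total_instances):
    """Fraction of running instances covered by reserved instances."""
    return ri_covered_instances / total_instances if total_instances else 0.0

def ri_utilization(ri_hours_used, ri_hours_purchased):
    """Fraction of purchased RI hours that were actually consumed."""
    return ri_hours_used / ri_hours_purchased if ri_hours_purchased else 0.0

# Example: 40 of 100 running instances sit on RIs, and 700 of the
# 730 RI hours purchased this month were actually used.
coverage = ri_coverage(40, 100)         # low coverage: room to buy more RIs
utilization = ri_utilization(700, 730)  # high utilization: little waste
```

High utilization with low coverage suggests buying more RIs; low utilization means purchased reservations are going to waste, which is the failure mode the speaker warns about.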
And continual optimization means, I talked about supply and demand earlier, constantly making sure that the demand of the workloads is matched with the underlying supply at all times, for the benefit of performance and cost, and also making sure we're compliant with business policies at all times as well.

So if you have a customer who maybe comes to the show and they catch the bug, right, they've got the fever, and they're on the phone tomorrow: hey, Tom, we've got to go now. Do you ever have to tell people, just slow your roll, we're going to do this in a methodical way, a responsible way, we're not going to go nuts? I know you want to go, but people get excited, right? So how do you handle that?

Well, I think what I hear from customers today, you guys talked about complexity, right? So you have that complexity, and on top of that there's a bit of a skills gap, because the mature expertise to manage a lot of the transitions taking place really isn't there yet. And then lastly, once they're in the cloud, again, as I said, it's not done. So to address all of that, how do they get through it? What do they use? We don't necessarily say slow down, because we can actually get people to the cloud very quickly in a responsible way. What we like to say is that we take the guesswork out: we're taking the analytics and giving them intelligence so they can make very rapid decisions. Our solution can probably make decisions about what to do faster than people are ready to make the progress. So ultimately, we want to go at the pace they want to go. We've had customers call us on a weekend, deploy the software, and actually go live in a couple of days. So it's up to the customer. We feel confident in our decision-making.
You can't automate decisions and actions if you don't feel confident about them, and we've got the customer proof points that give us that confidence.

Based on what you've seen so far at the show and your experience with customers who've been moving to cloud and figuring out where to put these workloads, what's next? What do you think people are going to be doing next?

Yeah, it's a great question, because as a company, I'm really proud that we started as VMTurbo. Many people still know us as VMTurbo. Virtual machines were where we started: we plugged into vCenter, pulled the information back, and all of a sudden we were making decisions and taking actions on a virtualized environment on-prem. Then we started doing cloud, and all of a sudden it was more than just VMs, it was cloud too. We literally had to change the name of the company to accommodate the capabilities. Having this economic supply-and-demand model allowed us to apply it beyond just VMs on-prem and beyond just cloud.

So to answer your question: containers and microservices are steamrolling. We hear that from all of our customers now. When I came here three years ago, it was, we're thinking about cloud. Last year, it was, we're actually testing. This year, we're live. Next year, it's going to be containers, containers. That's what I think is coming next: containers and cloud-native applications. We're getting past lift and shift and moving into cloud-native. That's what I think is going to happen next year.

Yeah. Well, I think you can stick with Turbonomic. I think you're okay for a while now, all right?

Yeah, sure.

All right, thanks for being with us, Tom.

Absolutely, guys.

We appreciate the time, and you bet, have a great show the rest of the time here in Las Vegas.

Thanks very much.

Tom Murphy joining us from Turbonomic. And that will be it for this day here on theCUBE at AWS re:Invent. We're back with you tomorrow, Thursday.
For Justin Warren, I'm John Walls. Thank you for watching.