From the SiliconANGLE Media Office in Boston, Massachusetts, it's theCUBE. Now, here's your host, Stu Miniman. Hi, I'm Stu Miniman, and welcome to theCUBE's Boston Area Studio; this is theCUBE Conversation. Happy to welcome to the program, first-time guest, Benjamin Nye, CEO of Turbonomic, a Boston-based company. Ben, thanks so much for joining us. Stu, thanks for having me. All right, Ben, so as we say, we are fortunate to live in interesting times in our industry. Distributed architectures are what we're all working on, but at the same time, there's a lot of consolidation going on. Just to put this in context: just in the recent past, IBM spent $34 billion to buy Red Hat, and the reason I bring that up is a lot of people talk about how it's a hybrid, multi-cloud world and what's going on. The thing I've been saying for a couple of years is that as users, there are two things you need to watch. They care about their data an awful lot; that's what drives businesses, and what drives the data really is their applications, and that's where Turbonomic sits. Workload automation is where you are, and that's really the important piece of multi-cloud. Maybe give our audience a little bit of context as to why this, IBM buying Red Hat, fits into the general premise of why Turbonomic exists. Super. So the IBM-Red Hat combination, I think, is really all about managing workloads. Turbonomic has always been about managing workloads, and actually Red Hat was an investor, is an investor in Turbonomic, particularly for OpenStack but more importantly OpenShift now. When you think about the plethora of workloads, we're going to have a 10-to-one ratio of workloads to VMs and so forth when you look at microservices and containers. So when you think about that combination, it's really, it's an important move for IBM and their opportunity to play in hybrid and multi-cloud. They just announced the IBM Multicloud Manager, and then they said, wait a minute, we've got to get this thing to scale.
Obviously OpenShift and Red Hat is scale, 8.9 million developers in their community, and the opportunity to manage those workloads across on-prem and off in a cloud-native format is critical. So relate that to Turbo. Turbo is really about managing any workload in any environment, anywhere, at all times, and so we make workloads smart, which is self-managing anywhere in real time, which allows the workloads themselves to care for their own performance assurance, policy adherence, and cost-effectiveness. And when you can do that, then they can run anywhere. That's what we do. Yeah, Ben, bring us inside of customers. When people hear applications in multi-cloud, there was the original thing: oh, well, I'm going to be able to burst to the cloud, I'm going to be moving things all the time. Applications usually have data behind them. There's gravity. It's not easy to move them, but I want to be able to have that flexibility if I choose a platform, if I move things around. I think back to the storage world: migration was one of the toughest things out there and something that I spent the most time and energy constantly dealing with. What do you see today when it comes to those applications? How do they think about them? Do they build them in one place and they're static? Is it a little bit more modular now when you go to microservices? What do you see here? Great. So we have over 2,100 accounts today, including 20% of the Fortune 500, so a pretty good sample set to be able to describe this. What I find is that CIOs today, and I meet with many of them, want either born in the cloud, migrate to the cloud, or run my infrastructure as cloud. And what they mean is they're seeking greater agility and elasticity than they've ever had, and workloads thrive in that environment.
So as we decompose the applications and decompose the infrastructure and open it up, there are now more places to run those different workloads, and they seek the flexibility to be able to create applications much more quickly, set up environments a lot faster, and then they're more than happy to pay for what they use, but they're tired, candidly, of the waste of the traditional legacy environments. And so there's a constant evolution of how do I take those workloads and distribute them to the proper location for them to run most performantly, most cost-effectively, and obviously with all the compliance requirements of security and data today. Yeah, I wonder if you could help connect the dots for us. In the industry, we've been talking a lot about digital transformation. As we said, two or three years ago there was a lot of buzz around this. When I talk to end users today, it's reality. It's not just, oh, I need to be mobile and online and everything. What do you hear, and how do my workloads fit into that discussion? So it's an awesome subject. When you think about what's going on in the industry today, it's the largest and fastest replatforming of IT ever. Okay, so when you think about, for example, the end of 2017, take away dollars and focus on workloads: there were 220 million workloads, and 80% were still on-prem. For all the growth in the cloud, it was still principally an on-prem market. Now look forward at the differential growth rates: 63% average growth across the cloud vendors in the IaaS market, principally AWS and Azure, and only a 3% growth rate in the on-premise market, down from five years ago and continuing to decline because of the expense, fragility, and poor performance that customers are receiving. So the replatforming is going on, and customers' number one question is, can you help me run my workloads in each of these three environments?
So to your point, we're not yet where people are bursting these workloads between one environment and another. My belief is that will come. But in today's world, you basically replatform those workloads. You put them in a certain environment, but now you've got to make sure that you run them well, performantly and cost-effectively, in those environments, and that's the digital transformation. Okay, so Ben, I think back to my career. If I turn back the clock even two decades, intelligence, automation, the things we were talking about, it's different today. When I talk to people about building software, replatforming, doing these things today, machine learning and AI, whatever favorite buzzword you have in that space, is really driving significant changes into this automation space. I think back to the early days of Turbonomic, I think about the virtualization environments and the like. How is automation and intelligence different today than it was, say, when the company was founded? Well, for one, we've had to expand to this hybrid and multi-cloud world, so we've taken our data model, which is AIOps, and driven it out to include Azure and AWS. But the reason, I would say, why is that important? Ultimately, when people talk about AIOps, what they really mean, whether it's on-prem or off, is resource-aware applications. I can no longer affect performance by manually running around and doing the care and feeding and taking these actions. It's just wasteful. In the days when people got around that by over-provisioning on-prem, sometimes by as much as 70 or 80%, if you looked at the resource actually used, it was far too expensive. Now take that to the public cloud, which is a variable-cost environment, and I pay for that over-provisioning every second for the rest of my life, and it's just prohibitive.
So if I want to leverage the elasticity and agility of the cloud, I have to do it in a smarter measure, and that requires analytics, and that's what Turbonomic provides. Yeah, and actually, I really like the term AIOps. I wonder if you can put a little bit of a point on that, because there are many admins and architects out there who, when they hear automation and AI, say, oh my gosh, am I going to be put out of a job? I'm doing a lot of these things. Most people we know in IT are probably doing way more than they'd like to, and not necessarily being as smart with it. So how does the technology plus the people, how does that dynamic change? So what's fascinating is, if you think about the role of tech, it was to remove some of the labor intensity in business. But when you then looked inside of IT, it's the most labor-intensive business you could find, right? So the whole idea was, let's not have people doing low-value things, let's have them do high-value things. So today, when we virtualize an on-premise estate, we know that we can share it, run two workloads side by side. But when a workload spikes or there's a noisy neighbor, we congest the physical infrastructure. What happens then is that it gets so bad that the application SLA breaks, alerts go off, and we take super-expensive engineers to go troubleshoot, hopefully find root cause, and then do a non-disruptive action to move a workload from one host to another. Imagine if you could do that through pure analytics and software, and that's what our AIOps does. What we're allowing is that the workloads themselves will pick the resources that are least congested on which to run. And when they do that, rather than waiting for it to break and then trying to fix it with people, we just let the workload take that action on its own and trigger a vMotion, and put it into a much happier state. That's how we can assure performance.
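The placement loop Ben describes, scoring hosts by congestion and letting the workload trigger its own move instead of waiting for an SLA breach, can be sketched roughly as follows. This is a minimal illustration only; the host fields, threshold, and decision format are assumptions for the example, not Turbonomic's actual data model or API.

```python
# Toy sketch of analytics-driven placement: score each host by its most
# constrained resource and recommend a live migration (e.g. a vMotion)
# when the current host is congested and a better host exists.

def congestion(host):
    """Most constrained resource utilization on a host (0.0 to 1.0)."""
    return max(host["cpu_util"], host["mem_util"])

def place_workload(workload, hosts, move_threshold=0.8):
    """Pick the least congested host; recommend a move if the workload's
    current host is past the threshold and a less congested host exists."""
    current = next(h for h in hosts if h["name"] == workload["host"])
    best = min(hosts, key=congestion)
    if congestion(current) >= move_threshold and best is not current:
        return {"action": "move", "workload": workload["name"],
                "from": current["name"], "to": best["name"]}
    return {"action": "none", "workload": workload["name"]}

hosts = [
    {"name": "host-a", "cpu_util": 0.92, "mem_util": 0.75},  # noisy neighbor
    {"name": "host-b", "cpu_util": 0.40, "mem_util": 0.35},
]
decision = place_workload({"name": "vm-42", "host": "host-a"}, hosts)
```

The point of the sketch is the inversion Ben is describing: the decision is computed continuously from utilization data and acted on automatically, rather than reconstructed by an engineer after an alert fires.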
We'll also check all the compliance and policies that govern those workloads before we make a move, so you can always know that you're in keeping with your affinity and anti-affinity rules, your HA/DR policies, your data sovereignty, all these myriad regulations. Oh, and by the way, it'll be a lot more cost-effective. All right, Ben, you mentioned vMotion. For people who know virtualization, this was kind of magic when we first saw it, being able to give me mobility with my workloads. Help modernize us with Kubernetes: where does that fit in your environment? In a multi-cloud world, as far as I can see, Kubernetes does not break the laws of physics and allow me to vMotion across clouds. So where does Kubernetes fit in your environment? And maybe you can give us a little compare and contrast of the virtualization world and Kubernetes, where that fits. Sure, so we look at containers, or pods, a grouping of containers, as just another form of liquidity that allows workloads to move, all right? And so again, we're decomposing applications down to the level of microservices. And now the question you have to ask yourself is, when demand increases on an application, or indeed on a container, do I scale up that container, or should I clone it and effectively scale it out? That seems like a simple question, but you're looking at it at huge amounts of scale: hundreds of containers or pods per workload or per VM. Now the question is, okay, whichever way I choose, it can't be right unless I've also factored in the imposition I'm putting on the VM in which that container or pod sits. Because if I'm adding memory to one, I have to add it to the other, because I'm stressing the VM differentially, right? Or should I actually clone the VM as well and run that separately? And then there's another layer, the IaaS layer.
Where should that VM run: in the same host, cluster, and data center if it's on-prem, or in the same availability zone and region if it's off-prem? Those questions all the way down the stack are what need to be answered, and no one else has an answer for that. So what we do is we instrument a Kubernetes or an OpenShift, or even, on the other side, a Cloud Foundry, and we actually make the scheduler live and what we call autonomic, able to interrelate the demand all the way down through the various levels of the stack to assure performance, check the policy, and make sure it's cost-effective. That's what we're doing. So we actually allow the interrelationship between the containers and their schedulers all the way down through the virtual layer and into the physical layer. Yeah, that's impressive. You really just did a good job of explaining all of those pieces. One of the challenges when I talk to users is that they're having a real hard time keeping up. They say, I've started to figure out my cloud environment, I need to do things with containers, and wait, now I hear about this serverless thing. What are some of the big challenges you're hearing from customers? Who do they turn to to help them stay on top of the things that are important for their business? So I think finding the sources of information now, in the information age, when everything's gone to software or virtual or cloud, has become harder. You don't get it all from the same one or two monolithic, strategic vendors. I think they have to come to theCUBE, as an example of where to find this information, and it's why we're here. But in thinking about this, there are some interesting data points. First, on the skills gap: Accenture did a poll of their customer base and found that only 14% of their customers thought they had the requisite skills on staff to warrant their moves to the cloud. Think about that number. So 86% don't. And here's another one.
When you get this wrong, there's some fascinating data that says 80% of customers receive a cloud bill north of three times what they expected to spend. Just think about that. Now I don't know which number is bigger, frankly, Stu: is it the 80% or the three times? But there's the conversation: hey, boss, I just spent the entire annual budget in a little over a quarter. You still want to get that cup of coffee? So the costs of being wrong are enormously expensive. And then imagine if I'm not governing the policies and my workloads wind up in a country they're not meant to be in per data sovereignty, and then we get breached. We have a significant problem there from a compliance standpoint. And the beauty is software can manage all this, and automation can help alleviate the constraint of the skills gap that's going on. Yeah, you're totally right. I think back to five years ago: I was at Amazon re:Invent, and they had a tool that started to monitor, a little bit, are you actually using the stuff that you're paying for? And there were customers walking out and saying, I can save 60 to 70% over what I was doing. Thank you, Amazon, for helping to point that out. When I lived on the data center side, with vendors that sold stuff, I couldn't imagine if your sales rep came and said, hey, we deployed this stuff, and we know you spent millions of dollars; it seems like we over-provisioned you by two to three times what you expected. You'd be fired. So, you know, Wall Street treats Amazon a little bit differently than it does everybody else. So on the one hand, we're making progress. There are lots of software companies like yours, lots of companies helping people optimize their costs. But still, it seems like there's a long way to go to get multi-cloud, and the costs of what's going on there, under control. Remember the early days: they said cloud was supposed to be simple and cheap, and it turned out to be neither of those.
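The rightsizing analysis behind those 60 to 70% savings is, in principle, a matter of comparing what a workload actually consumes against what it was allocated, then picking the cheapest instance shape that covers the observed peak plus some headroom. A toy sketch of that idea, with an entirely made-up instance catalog and prices (real clouds have vastly larger catalogs, as the 1.7 million EC2 combinations mentioned later illustrate):

```python
# Consumption-based rightsizing sketch: size to what the workload actually
# uses (plus headroom), not to what was originally allocated.

CATALOG = [  # (name, vcpus, mem_gb, dollars_per_hour) -- hypothetical figures
    ("small",   2,  4, 0.05),
    ("medium",  4,  8, 0.10),
    ("large",   8, 16, 0.20),
    ("xlarge", 16, 32, 0.40),
]

def rightsize(peak_vcpus, peak_mem_gb, headroom=1.2):
    """Return the cheapest catalog entry that covers observed peak demand
    plus a headroom factor, or None if nothing fits."""
    need_cpu = peak_vcpus * headroom
    need_mem = peak_mem_gb * headroom
    fits = [t for t in CATALOG if t[1] >= need_cpu and t[2] >= need_mem]
    return min(fits, key=lambda t: t[3]) if fits else None

# A workload allocated an "xlarge" ($0.40/hr) that actually peaks at
# 3 vCPUs and 6 GB of memory:
choice = rightsize(3, 6)
```

In this toy case the workload fits a "medium" at a quarter of the "xlarge" price, which is the shape of the two-to-three-times over-provisioning Stu describes.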
So Ben, I want to give you the opportunity: what do you see, both for the industry and for Turbonomic? What do the next six to 12 months bring? Good, can I hit your cloud point first? It's just, when you think of Amazon, just to see how that changes: if I go to provision a workload in Amazon EC2 alone, there are 1.7 million different combinations from which I can choose across all the availability zones, all the regions, and all the services. There are 17 families of compute service alone, as just one example. So Amazon looks at Turbonomic and says, you're almost a customer control plane for us. You're going to understand the demand on the workload, and then you can advise the customer which service, which instance types, all the way down through not just compute and memory but into network and storage, are the ones that we should use. And the reason we can do this so cost-effectively is we're doing it on the basis of a consumption plan, not an allocation plan. And Amazon, as a retailer in its origin, has cut prices 62 times, so they're very interested in using us as a means of making their customers more cost-effective, so that they're indeed paying for what they use but not paying for what they don't use. They've recognized us with the migration tooling competency as well as the third-party cloud management competency, which frankly are very rare in the marketplace, and they recognize those because production apps are now running in Amazon like never before. Microsoft Azure is not to be missed on this one. They've said, we too want to make sure that we have cost-effective operations, and what they've described is, when a customer moves to Azure, that's an Azure customer add, an ACA. But then they need to make sure that the customer is growing inside of Azure, and there's a magic number of $5,000 a month. If customers exceed that, then they're Azure for life.
The problem becomes if they pause and say, wow, this is expensive, or this isn't quite right. Now Azure has just lost a year of growth. And so there's the whole opportunity with Azure, and they actually resell our assessment products for migration planning as well as the optimization thereafter. And the whole idea is to make sure, again, that customers are only paying for what they use. So both of these platforms in the cloud are super aggressive with one another, but also relative to the on-prem legacy environments, to make sure that the workloads are coming into their arena. And if you look at the value of that, it's round numbers about $3,000 to $6,000 a year per workload. We have three million smart workloads that we manage today at Turbonomic. Think what that's worth in the realm of the prize of the public cloud vendors. It's a really interesting thing, and we'll help customers get there as cost-effectively as they can. All right, so back to looking forward: I would love to hear your thoughts on just what customers need broadly, and then some of the areas where we should look for Turbonomic in the future. Okay, so I think you're going to continue to see customers look for outlets for this decomposed application, as we've described it: microservices, containers, and VMs running in multiple different environments. Today in market we have SDDC, the software-defined data center, in virtualization, and we have IaaS and PaaS in the public and hybrid cloud worlds. The next one, we believe, will come as applications at the edge become less pedestrian, more strategic, and more operationally intensive. Then you're talking about Amazon Prime delivery, or your driverless cars, or things along those lines. You're going to see that the edge really is going to require the cell tower to become the next-generation data center.
You're going to see compute, memory, storage, and networking on the cell tower, because I need to process there and I can't take the latency of going back to the core, be it the cloud core or the on-premise core. And so you'll do both, but you'll need that edge processing. Okay, so what we look at is, if that's the modern data center and you have processing needs there that are critical for those applications that are yet to be born, right, then our belief is you're going to need workload automation software, because you can't put people on every single cell tower in America or the rest of the world. So this is sort of a confirming trend to us that we know we're headed in the right direction. Always focus on the workloads, not the infrastructure. If you make the application workloads perform, then the business will run well regardless of where they perform. And in some environments, like a modern-day cell tower, there's just not going to be the opportunity to put people in manual response to a break-fix problem set at the edge. So that's kind of where we see these things headed. All right, Ben, a pleasure to catch up with you. Thanks so much for giving us the update on where the industry is and on Turbonomic specifically. Thank you so much for watching. Be sure to check out thecube.net for all of our coverage. Of course, we're at all the big cloud shows, including AWS re:Invent and KubeCon in Seattle later this year. So thank you so much for watching theCUBE.