We're rolling. Hello, everybody. Welcome to another edition of the OpenShift Commons Briefings Operator Hours. I'm Michael Waite from Red Hat, and today I'm really, really excited to have Eric Wright, the director of technical marketing at Turbonomic, join us. Hello, Eric. How are you?

Very good, Mike, and thanks for letting me on the show here. This is going to be a lot of fun.

Yes, it will. And we can take the front slide down if you want. Perfect. I really enjoyed doing our dry run the other day. Turbonomic is probably a company that I have worked very closely with over the last six and a half plus years or so, and you guys are terrific to work with, specifically you, Eric, and your marketing people and everyone else. It's probably one of the more fun companies out there. Would you agree?

First, thank you, that is incredible. And yes, I wholeheartedly agree. I say I'm living the dream because my job is my hobby and my hobby is my job, so it's kind of a beautiful pairing.

So, Turbonomic. Let's get the table stakes taken care of here. What is Turbonomic?

That's a great question. If you walk away from this with nothing else other than this statement to understand what it is we do: we're the only system that makes sure that applications get the resources they need, when they need them, continuously and automatically. We are an application resource management platform. We understand the demand and requirements of applications, and then we automate and instrument the supply of infrastructure, whether it's on prem, in the cloud, containers, you name it, in order to make sure that those applications get better performance and do so as efficiently as possible.

Very good. That's as good a pitch as I can get.

Beyond that, I'm just a nerd who loves all of this. The how, how we do it, is pretty fantastic, but that's the why, which is important.
And I believe that you have a 185-slide deck that you're going to be sharing.

Let me prepare my pitch deck. But no, honestly.

You know, my team works with a lot of different software companies that test and certify their products on OpenShift and Red Hat Enterprise Linux and Ansible, and I would have to say that Turbonomic is hands down one of the coolest, funnest companies to work with. We have done various scavenger hunts. Remember before these challenging times, Eric, when we actually used to be able to travel around the world and go to really interesting places? Remember what that was like?

It's hard to imagine. Literally, for the last year, when I see a commercial with people in a crowd, you start to get a little skeeved out. But it feels like we're going back. I remember standing on top of a booth, literally with a Millennium Falcon, a Lego Millennium Falcon, over my head with about 500 people around it yelling out, "When I say Turbo, you say NOMIC," and completely destroying any productivity on the show floor at every possible end. It was so fun. I don't miss the travel, I miss the people, and that's really what I hope we can all get back to.

Yeah, I literally remember, every time we've done a KubeCon, you guys are there, and you have probably one of the most professional setups. You've got Asena, well, it used to be Asena Woodward, now it's Asena Hertz, doing a basically terrific job. And the huge crowds that were always gathered around your booth. Maybe it was because of the fidget spinners and socks, I'm not sure. But I hope that when we get to go back to doing these types of in-person events, the whole fidget spinner fad will be gone.

Yeah, and what will be next, right? Whatever.
I hope we get past the logoed masks fast, because I kind of don't want a continuous reminder of what the last year has been for everybody.

Actually, I would highly doubt anyone's going to do logoed masks. When the pandemic was first getting going, there were people inside Red Hat asking corporate, hey, other companies are making logoed masks. And we tended to stay away from that. I don't think we're going to see any of that. What do you think? KubeCon Los Angeles in October, right? LA?

Yeah. And I like the questionnaire for the call for papers, which I submitted one to as a community presenter as well; we've got one Turbo thing submitted. It asked, if given the opportunity, would you be there in person? So I predict that we'll have a bit of a hybrid phase for the next few months. But yeah, KubeCon LA, that'd be kind of cool.

Yeah. So it's Turbonomic. But it hasn't always been Turbonomic. Look, when I first started working with Asena and the other people at your company, it was VMTurbo.

It was.

Were you there when it was VMTurbo?

I was. I guess I'd be considered OG because I was pre-rename and rebrand. So yeah, we were VMTurbo. That was almost 12 years ago now that the company was founded. I've been with the company for seven years, which is in itself probably longer than most people would imagine surviving at a startup. I mean, I guess we can't even call ourselves a startup anymore. We have 700 people and pretty significant revenue. And of course, as anybody that reads the news knows, we just announced the acquisition by IBM. Lots of craziness in the ecosystem over my career, for sure. But yeah, VMTurbo, that was old school.

Let me ask you this. Lots and lots of companies change their logos.
Red Hat changed our logo four years ago; we decided that we needed to come up with something that was more modern, and we went from the Shadowman to just a hat. Other companies change too; you spend millions of dollars on legal reviews, brand reviews, and consulting agencies, and you end up coming up with another logo that's a slightly different shade of red. But you folks changed the entire name of the company, and were able to survive that. What was that like? And why? Was VMTurbo a bad name? Were people like, oh, we don't like the founders of the company for giving us VMTurbo, we need to change it? Why was that?

Yeah, that's actually one of the core questions: why would we choose to rename? The first thing was, the VM in VMTurbo wasn't even "virtual machine," it was virtualization management. Our platform was built to manage virtual resources using economic principles, so virtual management was the VM. But of course, a little company started by someone named Diane Greene, you may have heard of them, called VMware, had kind of co-opted VM as the virtual machine name. And, I mean, technically theirs is virtual management "ware" as well. But people started to say, hey, oh, VMware Turbo. And you started to wince a little when you heard it, because we were synonymous, because we did so much in the VMware ecosystem. But at the same time, we were also doing stuff with Citrix, and with Microsoft, and then early stuff with Docker that ultimately became what we're doing with OpenShift and the Kubernetes ecosystem now, and public cloud and IBM Cloud. We had all these other partners, but people saw VM. So we thought it was going to be difficult for us to go beyond the construct of a virtual machine, and we said, as a company, we need to rebrand to better represent what we do.
And Turbonomic became sort of the combination of Turbo and economic principles. I'd love to say there's a KISS naming story, three guys in the back of a limo who just said, we'll call it KISS, and everybody just said, yeah, let's do it. It took a while. And then, pro tip: it's really hard to rename your company, logistically, let alone to agree as a whole company that we need to change our brand. We renamed our company right before VMworld, which is a fundamentally terrible idea.

I was going to say, I hope you're going to get to the part about the actual implications of doing this. And I think you're going there right now.

Yeah. I won't even account for the legal team having to deal with patents and trademarks and all of that. And you want to launch it like a stealth thing, right? So you try to do it as covertly as you can, but it's hard, because all of the stuff has to be legally registered, and that takes time. So we were deciding, in the middle of that particular year, to rename the company, and we were already booking our presence at VMworld, which could have been the largest one ever; the diamond booth was huge. And you've got to submit all your marketing materials months in advance. And we're like, we're going to change the name of the company, but we can't show our hand just yet, because it's going out into their materials. So we show up at the event, and literally about a week before VMworld that year was when we did the rename of the company. So it was fun, but a lot of people were looking around at the booth going, you guys used to be VMTurbo, right? Aren't you still VMTurbo?
It was news to so many people, and all of a sudden we're there with 24,000 of our closest friends trying to explain the story. It was fun, but it was not a simple task, for sure.

Okay, and you guys are out of Burlington, right? Or no, wait, Boston now, right? You were in Burlington, but then you moved down to, what, Boylston Street? I think that's your headquarters?

Yeah, headquarters is 500 Boylston. And we have a presence in White Plains; that's where the engineering office is. We've got some folks in Tel Aviv with an engineering team out there. I'm the roaming weird Canadian guy that goes back and forth. I live in New Jersey, so I'm the New Jersey office, I guess, as it were.

Okay. Well, that's good. So the name of your product, is it Turbonomic?

Yes. The product was originally called Operations Manager. ARM, application resource management, is what we actually call the category, but the most common thing, if you look at it, of course, is that it just says Turbonomic in the logo. And that's also an interesting problem, right? It seems like a very simple thing: we have one platform, it's easy, it's going to have the same name as the company. Well, then the interesting thing was, a couple of years ago we acquired ParkMyCloud, which was a fantastic partnership. They're a great team, and it's been really, really cool to bring them on board. So for a while we had "ParkMyCloud, a Turbonomic company," because it was really hard to rebrand them quickly. And then, about a year and a half ago now, we bought a company called SevOne, network performance monitoring, a really amazing team and an amazing firm with lots of really cool stuff. So all of a sudden we had a couple hundred extra new staff members, and now, what do we call it? So we actually started down the road of Turbonomic NPM, Turbonomic ARM.
And then ParkMyCloud kind of gets rolled into the cloud portion of ARM. And then, of course, we'll soon be Turbonomic, an IBM company, which will make the rebranding even more wild.

Yeah, that's definitely going to be interesting. So, from my perspective, there are tons of companies out there that, in the world according to Mike Waite, seem to do the same thing. Is application resource management the same as APM, performance monitoring? Like, AppDynamics does something similar, and Instana, IBM actually just bought Instana. So are those overlapping technologies? Does everyone do a little bit of the same thing? How are you folks different from all the others?

Yeah, it's a great question. We say that the M in almost all of those descriptions, whether it's NPM or APM, generally stood for monitoring. Our Turbonomic story is that we are an automation platform, an application resource management platform. We manage the resources; monitoring is a side effect, we have to monitor to do it. The APMs of the world, if you look at Instana, they're solving the observability challenge on Kubernetes specifically, and then expanding even further, and their primary audience is developers. So they're like, hey, Mike and I are building an app, and this app is going to do amazing conferencing, so I'm going to build a tool that just monitors it. But I have 300 other business applications that run Mike's and my business, and I'm not going to instrument those with APM, number one. So we've got this beautiful area where there are all these other non-monitored resources that we can bring better performance to.
But on top of that, the difference between ARM and APM is that APM says, hey, Mike, looks like we bumped into the threshold on this thing, or it looks like your SLOs are going a little weird on your banking app. Well, now it's Mike's fault, right? You can hand it to the team and say, hey, folks, look over here, Instana says we're in trouble. And that's super important; operationally, you need that. What it doesn't do, though, is move towards the problem solving, which is where we come in. We actually allocate, assign, and reallocate resources in order to prevent that problem from occurring. We use monitoring and all that instrumentation to drive that decision using our analytics platform. But then we will go and say: add CPU, expand horizontally, scale a pod, add a new Kubernetes node, change the flavor of this particular EC2 instance, whatever it is. We don't just tell you what to do; we can actually automate most of those actions so that your team can be building new things instead of just trying to keep the lights on, so to speak.

So I have a difficult question for you. Are you just a pretty face, Eric?

I am a nerd at heart. As you can tell by the frown lines and the bags under my eyes, I've done 20-plus years in data center operations and management. I worked for Raymond James Financial and Sun Life before this, one decade each; I did a couple of tours of duty, so to speak. So yeah, I actually came out of the space. I was a blogger, which is how I got sort of discovered by the Turbo team, and they brought me on to help with technical marketing. So I get the fun part, where I get to talk about it, mostly because I have an expensive microphone. But at the same time, I'm actually a nerd at heart, and I spin the stuff up and actually run the platform as well.
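The ARM-versus-APM distinction Eric draws, emit the resource change rather than just the alert, can be sketched in a few lines. This is a toy illustration, not Turbonomic's actual algorithm; the function name, thresholds, and headroom factor are all assumptions for the sake of the example.

```python
# Toy sketch of the ARM idea contrasted with APM alerting above: instead of
# flagging a threshold breach, propose the concrete resize that would relieve
# it. Thresholds and headroom are illustrative assumptions, not product values.

def recommend_action(observed_millicores, limit_millicores,
                     high_water=0.85, low_water=0.30, headroom=1.2):
    """Return a (verb, new_limit) decision for one container.

    An APM-style tool would stop at "alert"; an ARM-style tool proposes
    the resource change itself.
    """
    utilization = observed_millicores / limit_millicores
    if utilization > high_water:
        # Hot container: size up to observed demand plus headroom.
        return ("resize_up", int(observed_millicores * headroom))
    if utilization < low_water:
        # Idle container: reclaim capacity, but keep headroom above demand.
        return ("resize_down", int(observed_millicores * headroom))
    return ("no_action", limit_millicores)

print(recommend_action(900, 1000))  # hot container: size up
print(recommend_action(100, 1000))  # idle container: reclaim
```

The same shape applies to the other actions he lists (scale a pod, add a node, change an instance flavor): observe demand, then emit an executable action rather than a red dashboard tile.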
There was a reason I wanted to ask that. We've done a lot of these TV shows, and generally someone offers up a product marketing manager, and they come on and say, let me tell you about the features of my product, but they're really not super interesting. They don't have a $900 microphone in front of them. They don't spend the time to make sure the lighting and the background look like yours. What is that, a bicycle? What is that design?

Yes, mine is a patent application for a bicycle, and one for a microphone. The two things: I make money at one and I like the other. You can choose which one is which.

All right, well, where I was going was: we are streaming live on Twitch, we are streaming live on YouTube, we are streaming live on Facebook. People can ask questions on any of those platforms, and our bots will magically pick them up and transfer them over here into the chat. So I'm going to offer up a $200 Amazon gift card for the first person who can stump Eric Wright and have him basically look uncomfortable on live TV. So that's an offer. Of course, it can't be anything about calculus, or, I don't know, you'd probably do pretty well with that as well, I would imagine.

I'm a little rusty on my calculus lately. I'm good with spinner theory and some stuff on advanced physics. But my background before getting into technology: I was a shoe repairman, a cobbler, and a landscaper. So, like I said, a natural fit for technology. I'll use that as my out if I can't answer a question: it's because I'm a cobbler. But if I can, I got lucky.

So, cobbler, landscaper, basically pushing a lawnmower and running a weed whacker. That's pretty cool. Twenty years in the data center, and now here you are. So what have you got?
Can you show us something about your technology and how it works? And hopefully you can pull up a terminal window and show us all kinds of verbose output of running commands and editing config files. That'd be really exciting.

Underneath it all, we're all just YAML operators. That's really what it is. The funny thing is, when we show this, I'm used to doing a lot of analyst demos using the platform. I've got people I can see on the chat with us. I stand beside amazing people who build this technology, and it's been great to learn with them how we can do this. Like I said, I ran the stuff that these circles are made of: virtualization platforms, application building. I helped to do DevOps implementations before it was called DevOps; as a shout-out to Chris Short, I like "DevOps'ish." I always think of myself as having been a bit DevOps'ish in the way that I worked. And when we get to what our platform does, we can go to the bits, what's underneath it. To the operator, to the developer, they see OpenShift, they see a cloud platform. Now, what happens when they want to use Turbonomic? At its core, like I said, we make sure that applications get the resources they need, when they need them, continuously and automatically. So this is your environment, right?

Yeah, let me ask you a question. I'm sorry for barging in, but actually I'm not. How would people know when they need to use Turbonomic? You just basically started by saying, all right, so when people know they want to use Turbonomic. Do they need this? Can't people roll their own, or get the same type of functionality with in-house technologies?

If they can, I'm going to hire them as engineers.
Because it's an intractable problem. In effect, we can get fairly good at certain things, but the complexity at any scale is pretty difficult. Especially because, just at the virtual machine layer, just at the Kubernetes layer, there's enough difficulty that it spawned all these other startups to solve specific problems within them. Now imagine that the technology we use is platform- and application-agnostic, so that my resource management works for any application on any infrastructure. So when you ask who's a great consumer for Turbo: in every environment that I deploy into, we can generally see about a 30% improvement in performance, and we can do it on about 20 to 50% less infrastructure, which is kind of weird. That whole thing of doing more with less doesn't mean you can throw away your hosts necessarily, because that's hard to do. But what I can do is defer needing to acquire new infrastructure without impacting performance. And on the cloud side, of course, that's huge, because you can literally realize it right away.

Yeah, that's important. So I was talking with Ilan, the vice president of product marketing over at Datadog, and they do a container survey every year, which is really insightful. We were talking the other day, and he was basically saying that customers putting workloads into the cloud are, for the most part, over-provisioning resources by like 80%, because they don't really know. So when you're talking about running workloads in public cloud, over-allocating resources means you could be artificially inflating your costs by 80% or more. Is this a problem that you folks can solve?

Absolutely. I'll say the savings are a beautiful side effect, because the real problem that ultimately needs to be solved is: why did we over-provision?
It's not because we wanted to pay more, and the benefit, while monetary, is pretty significant. What if I could tell you that I could give you just the resources you need without having to over-provision? Net result: it's a lot cheaper. But the real question is, why do we over-provision? Because we had to guess, right? The top capacity management tool in the industry is Microsoft Excel. People just kind of ballpark. They lick a finger, they check the wind, and they say, okay, I'm building a SharePoint farm. We've already got one; we'll just make it the same size. But I want to make sure I don't run out of resources, so I'm going to add 50%. We generally ballpark. But why would you do that when you can now look at the application layer? We can see deeper into what makes it up, because when I say SharePoint farm, as an IT operations person, I see five virtual machines running on physical hosts. That's my basis for guessing what it is. But it doesn't account for shared resources, and it certainly doesn't translate to the cloud. So Ilan from Datadog is 100% correct that people are guessing. And if you're going to guess, ask yourself this one, Mike: which side are you going to guess on? You're definitely going to go over, right? And the funny thing is, we've generally accepted that as what we call the cost of doing business. Like my friend Randy Bias says, the cloud is cheaper as long as you're willing to pay more. The real benefit of cloud was that you could suddenly put an application up without having to buy servers and wait four months. I can immediately spin it up, I can use adjacent services like platform as a service and SaaS; that's fantastic. That's the real benefit of cloud. But for the dominant number of resources today in cloud, I think AWS even alluded to a number, I won't quote it.
But they say a high percentage of their real resources and revenue still comes from EC2, like old-school virtual machines running in the cloud. And those are going to be run using the traditional IT ops pattern of: I've got a virtual machine here, I'm going to give it four gigs of RAM and eight CPUs, because back in the day, that's how I used to buy my servers. But meanwhile, when I look at my OpenShift environment, when I'm building my app, I'm not thinking about numbers of CPUs. I'm talking about millicores now, megabytes of RAM, kilobytes of RAM for a process. So I can't just think in 2, 4, 8, 12, which is the IT ops sort of methodology: let's count like we live in binary, think in gigabytes and terabytes, and then guess. It's a pretty fantastic formula for error if you really think about it. I'll just quickly say, the real reason why we do all this: when I look at this, what is this? This is an application. What is that virtual machine running on AWS? It's an application that's running on a virtual machine. What's my VMware environment? It's a bunch of applications running on virtual machines on physical servers. So what we want to do is say, number one, how do you understand and connect the needs of that application? And you'll see all these different resources, whether it's the different services that make up the application, whether it's containerized, whether it's running on a virtual machine, literally down to the metal and out to the internet. And we're able to do this, first of all, without agents, which is pretty wild; we can gather it right from all the native platforms. We're gathering all this instrumentation and these analytics.
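The guessing arithmetic Eric describes, copy the last farm's size and add 50% versus sizing to measured demand, can be made concrete with a small sketch. All the numbers here are made up for illustration; the point is only the shape of the comparison.

```python
# Illustrative comparison of "ballpark plus 50%" sizing versus sizing to a
# high percentile of observed demand. Every figure below is invented.

import math

def percentile(samples, p):
    """Nearest-rank percentile, e.g. p=95 for the 95th."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical hourly memory demand (GiB) for a workload over a week.
demand = [3.1, 2.8, 3.4, 2.9, 3.0, 3.6, 3.2, 2.7, 3.3, 3.5] * 17

ballpark = 8 * 1.5                 # "same size as last time, plus 50%"
measured = percentile(demand, 95)  # size to real 95th-percentile demand

print(ballpark, measured)
print(f"over-provisioned by {ballpark / measured:.1f}x")
```

Even a forgiving 95th-percentile target leaves the ballpark guess several times larger than what the workload ever actually uses, which is exactly the 80% over-provisioning pattern from the Datadog survey discussion.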
And then we understand the relationships, and we're building these relationships so that I know my mobile banking application, in this case it's actually instrumented by AppDynamics, we talked about that before. So I'm able to see the application itself, all the components it's made up of, the real, true application response time, because this is being instrumented right from an APM partner. We're doing this with Instana, with Dynatrace, with New Relic, and others. We can pull it from Prometheus, which is amazing; we've got a lot more people diving into Prometheus. And so we're able to see, at each layer of the stack, what the different issues are, potentially, that can be relieved before they occur. That's the real goal: not just waiting until it goes wrong and then telling Mike, hey, Mike, you need to reboot the server, or you need to add heap, because it broke. We want to be able to go into each layer and understand not just its impact on resources, but everything that makes it up. Because this application is sitting on, spanning across, two physical hosts with four virtual machines in a shared virtual machine environment, everything that's affecting those hosts is affecting this application. But APM will only know this host. "Myopic" sounds a bit negative, but APM is very single-focused, in that it's saying what this application needs right now without an understanding of the adjacent effects. Sorry, Mike, go ahead.

You said, two minutes ago, I think what I heard you say was that this doesn't use agents. So what is it, magic? Meaning, I'm pretty familiar with things like Instana, where you install agents on a server, a host, whatever, and it basically phones home and radios in and provides information and stuff. But I think I heard you say that you don't use agents for this.

It is a little bit of magic.
I hope you brought your card deck with you, because I do want to get to that at some point.

I don't have them on my desk. I should have them here by today.

That would actually qualify as me stumping Eric Wright, because when we did the dry run, we spent probably 10 minutes talking while you were fumbling around with your card deck. And it was actually a pretty cool story. Tell me about how you do this without agents, because I don't understand.

Well, you actually answered it in a way yourself, right? Instana has agents, AppD has agents, Dynatrace has agents. If they're already gathering all this data, I only need to talk to AppD and Dynatrace and Instana, because they're already gathering it. So I just need to consume their instrumentation, their analytics, their data, and then use my analytics engine. It's everything in the stack. If I were to look at the different targets that we support: number one, we support every hypervisor. On the private cloud side, we've got traditional private cloud stuff, and at the hypervisor layer, the major hypervisor providers. This is a demo system, so you don't actually get to see Red Hat Enterprise Virtualization or some of the other players here, but all of them are available. We see down underneath to the physical infrastructure, stuff that's abstracted away, like converged and hyperconverged. Each one of those layers has its own management API and its own set of data. So we take all of that in through their native platforms, and then we use our economic scheduling engine to relate the different relationships between resources. And then, at the application layer, I can tap into the APM, or I can even go right to the guest operating system itself and specifically talk to resources.
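The agentless pattern Eric describes, query the management API each layer already exposes and merge the results into one view, can be sketched as below. The adapter classes, entity names, and metrics are hypothetical stand-ins, not Turbonomic's or any vendor's real API; real integrations would call vCenter, the Kubernetes metrics API, an APM partner's API, and so on.

```python
# Toy sketch of agentless collection: no collectors installed on targets,
# just adapters that consume each layer's existing management API.
# All classes and returned data here are invented for illustration.

class HypervisorAdapter:
    def fetch(self):
        # Stand-in for a vCenter / RHV management API call.
        return {"vm-web-01": {"layer": "vm", "mem_used_pct": 82}}

class KubernetesAdapter:
    def fetch(self):
        # Stand-in for a Kubernetes metrics API call.
        return {"pod-cart-7f9": {"layer": "pod", "cpu_millicores": 240}}

class ApmAdapter:
    def fetch(self):
        # Stand-in for an APM partner's API (Instana, Dynatrace, ...).
        return {"svc-checkout": {"layer": "app", "response_time_ms": 140}}

def build_inventory(adapters):
    """Merge per-layer API results into one agentless inventory."""
    inventory = {}
    for adapter in adapters:
        inventory.update(adapter.fetch())
    return inventory

view = build_inventory([HypervisorAdapter(), KubernetesAdapter(), ApmAdapter()])
print(sorted(view))  # one stack-wide view, no agents installed anywhere
```

The design point is the one Eric makes: because each layer already publishes its own data, the platform only has to consume and correlate it, not instrument anything itself.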
Now what I'm able to do is what none of those platforms can do, which is take all of that data, build a continuous, real-time topology of the dependencies, and then not just show you what's going on. When I look at my banking transfers app, as an example, I can see actionable responses, right? I may need to provision additional storage, because it could be bound by performance, or it could just literally, physically be running out of space. I could be moving virtual machines around to get better access to different CPUs, different memory, and different physical hosts. I need to be able to move virtual machine storage between resources, change instance types and sizes or SKUs in the cloud. When you get to Kubernetes, to the OpenShift side of the world, it gets even more exciting, because now I can look at the cluster layer. And here's an example: just for fun, I spun up OpenShift on IBM Cloud this morning. I tied it in, I deployed a few applications, and I'm already able to see that set of resources, right? I'm pulling that data in. This one is pretty unexciting because they're literally empty resources. But if I look at a buildable or an active one, what I see is, again, applications all the way down. But now, instead of thinking in the context of virtual machines, I'm thinking about pods and container sizing. And the container sizing is going to be much more granular: it's going to be adding 12 megabytes to a container, because it's a smaller process. But 12 megabytes could be the difference between hitting your 95th percentile response time or going over. When I look at the controller layer, at the namespace layer, I can give context not just for what I can do to the applications and the platform, but for my audience, right? So if this is my view as a developer, then in Turbo, okay, no problem.
I'll give you your namespace. I can go to my namespace layer and see all the actions that are there, and not just see them: if I click into them, I have the option to actually take them in the platform or, most importantly, automate them. So if you've got midday resizes that you need to do because they're non-disruptive on virtual machines, you can say, go for it: add resources as needed, in real time, to meet demand. And then set a change window, so that a disruptive scale-down waits for it. Or let's just say they're using OpenShift for what it's really designed for, right? They're using Kubernetes, they've got stateless applications. So now I could go to my stateless application and say, yeah, go for it: I can scale the pod, I can scale the container. But most importantly, remember, the life cycle of containers is shorter. So people are going to say, hey, this is great, but it's completely dynamic. All those APMs can say, okay, cool, I've got my namespace, I've got my cluster, they can show you this. But your application patterns change over time, and what's happening in real time matters. Now, when I look at the different actions, I'm not just looking at what can affect the node itself, or what can affect the application; I can talk about the container specification. When I'm setting CPU, memory, and other specifications at the container layer, I can define them as a spec. Because let's just say I scale out the pod: if I scale out the application, I want it to scale based on what it actually needs. So we're tracking that with both historical and real-time information.
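The action policy Eric outlines, non-disruptive resizes run any time while disruptive changes wait for a change window, can be sketched as a tiny gate. The action names, the disruptive/non-disruptive split, and the 02:00-05:00 window are assumptions for illustration only.

```python
# Sketch of change-window gating for automated actions: meet demand in real
# time with safe actions, defer disruptive ones. All names are illustrative.

from datetime import datetime

NON_DISRUPTIVE = {"scale_pod_out", "resize_container_up"}
DISRUPTIVE = {"resize_container_down", "restart_node"}

def may_execute(action, now, window=(2, 5)):
    """Allow an action now, or hold it for the 02:00-05:00 change window."""
    if action in NON_DISRUPTIVE:
        return True  # safe to meet demand in real time
    if action in DISRUPTIVE:
        return window[0] <= now.hour < window[1]
    raise ValueError(f"unknown action: {action}")

noon = datetime(2021, 5, 1, 12, 0)
night = datetime(2021, 5, 1, 3, 0)
print(may_execute("scale_pod_out", noon))          # runs immediately
print(may_execute("resize_container_down", noon))  # held for the window
print(may_execute("resize_container_down", night)) # runs in the window
```

A real implementation would attach a policy like this per namespace or per workload, which is the scoping Eric describes when he hands a developer just their namespace.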
So when I look at that layer, what I'm going to see is: what are the applications and the dependencies, what are the observation periods and the percentiles (95th, 99th) that you want to use to show you why we're making this downsize recommendation, and what's the net result once you do it. But on top of this, here's what I'll call the simplest possible example, which the virtualization kids may know well, right? I've got an application and it's struggling. My application is watched by an APM. The APM says you need to size up memory. Okay, cool, makes sense. I'm going to listen to the APM. Now I'm going to go and look for memory, add it, reboot the machine, or maybe not if it's non-disruptive, whatever I'm going to do, right? So I've gone to two different places, potentially more, to do this one thing. But here's the real story. That application we see has a struggling response time. It's sitting on top of an application server, and that server has heap settings. So if I look at the different application components, it could be heap, it could be other things that are affected: I can't just size CPU or memory down or up, because it could affect the application. So now I'm application-aware. But then on top of that, it's sitting on a virtual machine, which is struggling for virtual resources. It's struggling for memory from the physical host; as physical hosts do, they share resources. Now that physical host is running out of resources. So it's doing what it's supposed to do, and it's swapping out to disk. But what we see here now is red circles, right? The disk is hitting throttling limits at the controller layer because of high latency. So what looks like a memory problem to my application is actually a storage problem. And the only way we could have known that is by understanding the entirety of the stack. And then the difference is, we can actually do something about it.
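That "memory problem that's really a storage problem" story is, at heart, a walk down a supply-chain graph until you hit the last congested layer. Here's a deliberately tiny sketch of that traversal; the graph, entity names, and congestion flags are invented for illustration, not Turbonomic's data model.

```python
# Illustrative sketch of root-cause analysis over a supply chain:
# follow each entity down to what supplies it, stopping at the
# deepest layer that is still congested. Entities and flags are
# hypothetical example data.
supplies = {          # consumer -> the layer it runs on
    "app": "vm",
    "vm": "host",
    "host": "storage",
}
congested = {"app": True, "vm": True, "host": True,
             "storage": True, "network": False}

def root_cause(entity):
    """Walk the supply chain while the next layer down is congested;
    return the deepest congested layer (the likely root cause)."""
    while entity in supplies and congested.get(supplies[entity], False):
        entity = supplies[entity]
    return entity

print(root_cause("app"))  # storage
```

The app looks memory-starved, but walking the chain (app → VM → host → storage) surfaces the throttled disk as the actual bottleneck, which is exactly the point being made above.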
Sorry, go ahead, Mike. So I've got a lot here. I know people are thinking, this is crazy, there's no way you can do this, it's all a sea of lies. I can prove it. I do want to ask you, and I'm sorry if I'm completely interrupting your demo and taking you way off track. I want to ask about predictive analytics and the ability to learn, to be able to make recommendations to the DevOps team. But I'm not going to ask that one right now. What I want to know is this: you just said that the APM, and you kind of referred to it as the infrastructure czar, if you will, at least that's what I have now deemed it. You were like, okay, we're going to listen to the czar, and the czar says go get more memory. Is that like going to the store and being like, hey man, we need like eight more gigawatts of memory? Go to the convenience store and get some? How does the APM go get more memory? Where does it get it from, and what debit card does it use? Do you just go somewhere like, hey man, I need some more memory, okay, great, here you go, thanks? Well, now this is the fun part. We're using a sharing economy. So imagine you've got what most people have. Let's say your data center has four clusters, 12 hosts apiece, whatever, you pick the numbers, right? But what happens when that cluster is highly utilized? All you're going to get is a bunch of red on your monitoring tool saying, hey, guess what, Mike, you've got a problem, you've got to go to the gigawatts store and buy a bunch more memory. And you say, that's great, but it's Saturday night and I can't find any more memory. And it's physical; we can't get the stuff on demand. This is really the core of what we did. So imagine that, knowing the applications, you could say: well, I've got this development cluster sitting inside here where I can actually take resources, because it's over-provisioned.
And so now when I look at my different virtual machines, I can see a bunch that are yellow, and they're yellow because they've got too many resources assigned to them. So now I go scrolling way down, because there are lots of performance problems, right? You're going to see stuff. And ultimately, if you scroll down far enough, what you'd end up seeing (and I'll make it a little easier) is that everything is tagged as either an efficiency problem or a performance problem. So if I narrow it down to efficiency: well, I can move stuff around just to free up resources, I could potentially downsize a host. I can take virtual memory or virtual CPU from underutilized resources and applications and give them to under-provisioned applications. So I can actually manage things better. It's effectively what they call bin packing in the Kube world, right? And I can do this automatically. I already know what needs to be done. I can size up and size down. I just go and say, all right, cool, let's do it, take it away, right? Boom. I just gave those resources back to the cluster without having to go to the gigawatt store and buy more memory, which is pretty cool. Well, then on top of it, you've got the Kubernetes story. People have Prometheus, and they've got Instana, and they've got a bunch of things; maybe they've got Ansible for initial provisioning, they're dabbling with Terraform, they've got a home-built bash script that goes in and checks thresholds and runs a couple of health checks. I call this the Kube Goldberg machine, right? They've got a bunch of different tools that are trying to come together and be automated.
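Since bin packing gets name-checked here, a classic first-fit-decreasing sketch shows the flavor of what consolidation buys you. This is the textbook heuristic, not Turbonomic's solver, and the VM sizes and host capacity are made-up example numbers.

```python
# Illustrative first-fit-decreasing bin packing, in the spirit of the
# "reclaim and consolidate" discussion above; not Turbonomic's engine.

def first_fit_decreasing(vm_mem_gib, host_capacity_gib):
    """Pack VM memory demands onto as few hosts as possible:
    sort demands largest-first, place each on the first host
    with room, open a new host only when nothing fits."""
    hosts = []  # remaining free capacity per host
    for demand in sorted(vm_mem_gib, reverse=True):
        for i, free in enumerate(hosts):
            if free >= demand:
                hosts[i] -= demand
                break
        else:
            hosts.append(host_capacity_gib - demand)
    return len(hosts)

# Eight VMs totaling 128 GiB pack onto two 64 GiB hosts,
# freeing whatever they previously sprawled across.
print(first_fit_decreasing([20, 12, 30, 8, 16, 10, 14, 18], 64))  # 2
```

Real placement has to respect far more constraints (affinity, licensing, failure domains), which is why the talk keeps stressing policies on top of the raw math.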
But for us, being able to see the resources that serve virtual and physical memory to the Kube cluster and the Kube nodes, I know not just the current allocation, but I can reallocate resources to get better performance without busting through the threshold and waiting for the pod to fail, which is kind of cool. So we are literally creating this autonomous, level-five, self-driving infrastructure. Okay, speaking of Elon: when he said Elon before, I was like, yeah, great, you know, SpaceX, Tesla, big fan. No, I'm determined to win my own Amazon $200 gift card here. So this sounds, I mean, this is cool. I had assumed that yellow meant "Warning, Will Robinson, danger, danger," as opposed to over-provisioned. So green is good, yellow is over-provisioned, red is bad. So yellow is not a progression towards red; it's kind of the other way around? Yeah, basically think of it as risk levels, and minor risk levels can include things like, hey, I can reclaim resources. But ultimately the real red, critical risk is going to be an actual performance problem that can affect things. Generally, look, if the whole thing was running at 92% utilization across the board, it would be a lot of red; even on the reclaim side there wouldn't be any yellows, because there's no free space to steal back. But if it's at 92%, we know how to bring it back down, because even there we're going to be able to dodge, parry, and thrust all those different resources because of the way our engine works. But doesn't this, so, I mean, it sounds to me like all the Instanas and Datadogs and Dynatraces of the world do the phone-home thing, they send all the information in, and, in the world according to Mike Waite, what you folks then have is something that makes it easy for the people running the IT infrastructure to effect changes in a distributed multi-cloud world.
But they still have to do the work. They still have to go in here and say, oh, I need to go look for that, I've got to click on the red thing and tell it to go to the gigawatt store and get some more memory. How come there are no predictive analytics capabilities here, which would allow the overall czar system, if you will, to start making recommendations for these poor people that have to manage their IT infrastructure? Why do they have to sit there and go looking for problems? Why can't Turbonomic just predict them in advance and let people work smarter? Well, you're not going to win the gift card on this one, sir, I'll tell you that. So, number one: unfortunately, I have to have red and yellow all over the place, because it's a fairly uncompelling demo if they're all green circles. But like I said, instead of just having a set of actions that need to be done, I show you these actions, and they have a checkbox for a reason, because I can actually take a set of actions right here in the UI. That's generally the first comfort level, right? It's like, okay, cool, I believe what you're doing, you're telling me why you're going to do this, you're showing me the results, I'm going to go for that. I'll check that checkbox and submit it. But then you go over at the same time and create a simple automation policy. We'll call it a virtual machine policy. This is the equivalent of me diving in and doing a kube cuddle. And I'm going to call it kube cuddle because I hate that phrase, but I'm going to say it because I know other people hate it as much as I do. So imagine using kubectl, or kube cuddle, whatever you want to call it; this is my VM version of the demo. So I'm going to call the policy "resize Mike's gift cart." And I can choose any set of virtual machines; it can be a dynamic list. We'll just grab a couple, just for funsies.
I can set up a schedule for when I want to do this; I can give it a change window, whatever I want it to be. By the way, even though you were a cobbler and a landscaper, apparently you didn't do so well in sixth-grade English grammar and spelling. Not so good, no. "Card" would have a D in it as opposed to a T. It does. And how do you know this is a live demo? Because I just froze my browser. There you go. Oops. That's how you know we're in a live demo. But the beauty part is the machine. So let me see if I can do this side. There we go. Man, I need a new keyboard, or shorter fingers. And I'm the last person in the world who should judge; stones, glass houses. No, it's a great catch. I think you get a gift card just for that. So imagine that I want to do a size-up: CPU size up, memory size up. These are non-disruptive changes, so I can literally say, okay, I'm going to take these. I can give it a schedule in which we're allowed to do it, or just let it run anytime. And then I can say, okay, cool, let's make these automated. And since I know this particular change is non-disruptive, I can just say go for it, right? Completely automated. I can use our native platform scheduling. I can also trigger a ServiceNow workflow: I can write a ticket to ServiceNow, and actually use ServiceNow automation and approvals, so I create a ticket, wait for an approval, the approval comes back to Turbo, and Turbo says, all right, Mike said cool, let's make this change. So we make the change. So instead of just showing you things and making them your problem, I can automate almost all of the actions that we have in Turbo. But then you talked about one more thing, right? Sometimes I've got to go to the gigawatts store. I've got no choice, right?
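The manual-to-automated ladder being described (recommend only, then gate on an approval, then fully automate) can be sketched as a simple dispatch. Everything here is hypothetical: the mode names are my labels, and the approval hook stands in for a real ServiceNow integration rather than calling any real API.

```python
# Illustrative sketch of the manual -> approval -> automated action
# ladder described above. The ServiceNow step is mocked as a plain
# callable; no real ticketing API is used.

def execute(action, mode, approve=lambda a: False):
    """MANUAL: only recommend. APPROVAL: run iff the (mocked)
    ticketing hook approves, else leave pending. AUTOMATED: run."""
    if mode == "MANUAL":
        return f"recommended: {action}"
    if mode == "APPROVAL":
        return f"executed: {action}" if approve(action) else f"pending: {action}"
    if mode == "AUTOMATED":
        return f"executed: {action}"
    raise ValueError(f"unknown mode: {mode}")

print(execute("resize memory +2GiB", "MANUAL"))
print(execute("resize memory +2GiB", "APPROVAL", approve=lambda a: True))
print(execute("resize memory +2GiB", "AUTOMATED"))
```

The point of the ladder is trust-building: teams usually start at the checkbox stage and only flip non-disruptive action types to automated once the recommendations have proven themselves.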
So if I've got a physical cluster and it's physically out of resources: well, vCenter, vROps, whatever V-tools you've got, they're just going to show you a little graph going up and to the right, like every startup's revenue should, and say you ran out of resources here, and you're on a path to run out even more. And you're like, well, I'm already out of resources, what do I do? All it does is show you that you've got a problem. The difference for us is that we show you how to fix that problem. We show you when you're going to run out of resources at the host level and what you need to provision. And this could include things like: hey, this is an environment with RHEL servers on it, so they're licensed assets. I can create a policy that says I need to scale my RHEL cluster, but I don't want to buy new hosts; I want to steal a host from another cluster, because it's underutilized over there, and attach it to this existing cluster. So we can literally move all of the bits underneath in this beautiful sort of Jenga data-center-and-cloud consolidation scenario. So that's the big difference. Again, on APM: I hope I never sound like I'm detracting from what they're doing. They do fantastic stuff that solves a very specific problem. But there are also times when you don't have it. I've got Instana, but I don't have Instana on every application, or AppD on every application. So now I can go through, and here's an example: this application is a neat little one, Turbonomic itself. I've actually instrumented my own system, and I've done it with my own platform. We call it Apex, for application performance extensibility; it's really hard to say fast. So I can actually understand this topology and define these resources and this relationship.
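The "up and to the right" graph amounts to a trend projection: fit the growth and see when it crosses capacity. Here's a toy least-squares version of that forecast; it's a deliberately simple linear model for illustration, not Turbonomic's forecasting.

```python
# Illustrative linear projection of "when do I run out of capacity",
# echoing the up-and-to-the-right graph described above. A toy model.

def days_until_exhaustion(usage_by_day, capacity):
    """Fit a straight line through daily usage samples and project
    how many days from today until it crosses capacity.
    Returns None if usage is flat or shrinking."""
    n = len(usage_by_day)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(usage_by_day) / n
    slope = (sum((x - mean_x) * (y - mean_y)
                 for x, y in zip(xs, usage_by_day))
             / sum((x - mean_x) ** 2 for x in xs))
    if slope <= 0:
        return None
    intercept = mean_y - slope * mean_x
    # Day index n-1 is "today"; project forward from there.
    return (capacity - intercept) / slope - (n - 1)

# Memory use growing 2 GiB/day from 50 GiB: a 64 GiB host has 4 days left.
print(days_until_exhaustion([50, 52, 54, 56], 64))  # 4.0
```

The tools being contrasted above stop at this projection; the claim in the talk is that the interesting part is what comes next, namely generating the provisioning or consolidation action that avoids the cliff.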
And then it shows me the risk at the top. So now what I'm getting, well, I'm not necessarily getting response time out of here, because I may not be pulling that data across, but what I am getting is the relationship. Because, look, I ran IT operations for a long time. No one on the help desk says, hey Eric, SQL 429 is running slow and it's affecting these three applications, right? They just say something's not working right. They say the banking app is down. And you're like, okay, now you've got a team of 12 people going hunting around. Okay, cool, let me take a look at the banking app. Well, it's not down, but there's something going on. What's going on? Well, it looks like it's not just the app itself: statement download is running slow, and look at this, in quick pay I see a change in the SLOs, right? I see a change in those thresholds. So literally: the help desk calls, the banking app's down, I go take a look, and it's not down, but here's the problem. Look, I can't physically get storage right now, but what I can do is maybe reallocate resources and ultimately get back to health. Now as an operator, whether it's a Kube operator, an IT operator, or an app operator, I can just say, at the application layer, all of this risk is affecting my application, here's what I can do about it, and I can take these actions. So it's quick context, instead of me having to go: okay, the banking app is four VMs, it runs across two clusters, which cluster is it in? And that's the whole idea, we can do cross-cluster moves. Otherwise you've got to go to vCenter, you've got to go to Microsoft SCVMM, you've got to go to like nine different places. Or you come here. And then you say, hey, guess what? I looked at the app, everything looks clean, but there's still a problem. Well, let's head over to Instana, right?
And maybe there's slow code, there are long-running SQL queries. We call it "is it the code or is it the node?" But the whole goal is that risk is immediately propagated into actionable intelligence, so you can do something about it, or even better, automate it. And then a shortened MTTR, which I'm sure is a buzzword people use a lot, but I call it mean time to "not me": hey, resources are fine, I think there's an app problem, and then you can head on over to AppD or Instana or whatever it's going to be and say, ah, coolio, you're right, there is a problem going on. So, we sent an email to your marketing people the other day like, hey, make sure you send us a whole bunch of softball questions, so if we run out of things to talk about, we can say, you know, please tell us about the blah, blah, blah. I don't think we're even going to get there. I hope they're not going to be mad. I'm just looking at the clock and we've got one minute left. Oh no, I burned our whole time on product. And I didn't even sound pitchy; I'm just excited about the stuff that we can do. And it was completely my fault for interrupting. But you know what? It's my show, so too bad. That's the good thing about being the host. I think we should have you back again, because I find this really super interesting, but we are running out of time. And I'm sure I'm going to get all kinds of hate mail from your marketers and so forth, and next time we'll make sure... but oh, look at this, we have a little call-to-action slide that was prepared. I would be remiss if I didn't point folks somewhere. So you talked about the multi-cloud report, and that's one of the ones that we've been running; we call it the State of Multicloud report. Asena is going to kill me for this, talk about being a bad marketing team member.
I've come here without the exact numbers, but I think it's four years that we've been running this, and we've got 800-plus respondents that contribute. It's a really great report that talks about where multi-cloud is. I'm the funniest person about it, because I say multi-cloud isn't a strategy, it's a thing you're stuck with; there are strategic ways you can leverage it. But this is a cool report that unpacks a lot of stuff. If you want to dig in on the Kubernetes side, we've made it turbonomic.com/kubernetes, super easy to get to. And of course, just reach out; I'm @DiscoPosse everywhere you go, and that's the fastest way to find me. But you can always reach out through anybody on the team at Red Hat too; they know how to get ahold of us. Yeah, same here. People can send me an email like, hey, I really want to learn more about Eric Wright and his patents. You can shoot me an email; it's just waite at redhat dot com. Don't forget the E. Speaking of that, I am Mike Waite, and this has been the OpenShift Commons Briefings Operator Hours with Eric Wright. I really think we should have you guys back; I think this was really fun, and hopefully it was interesting. I have at least 12 things that I wanted to talk about: you know, Kubernetes and OpenShift, are they synonymous, and is your operator written in Golang or is it one of those fake Helm charts? We just didn't have a chance to get there. So hopefully you can come back next time and we can talk some more. That'd be fantastic. And I appreciate Mike and everybody on the back channel who makes this all happen. And thanks to all the folks for interacting. It's been a real pleasure. Alrighty, folks, that's a wrap.