Just waiting on Chris Short to give us the go-ahead... and there we go. Okay. Welcome to this bumpy start for this Wednesday's edition of the OpenShift Commons Briefings, Operator Hours. I'm Michael Waite from Red Hat, and today we are fortunate enough to have with us a whole team from Densify. We have Andrew Hillier, the CTO and co-founder of Densify, and we also have Chuck Tatham, their chief marketing officer. Today they're going to talk about three important areas to think about in container resource management. Andrew and Chuck, how are you today? Well, I'm great. It's very nice to be with you today, Michael. And I'm great as well, except for my camera. But thanks for having us, Michael, and Red Hat. You know, we did a dry run of this a couple of weeks ago. We talked about what you folks wanted to talk about, we tested the audio, we tested the video, we made sure the screen sharing button was familiar, because we use BlueJeans for this and most people are more familiar with Zoom and WebEx and tools like that. But as much as we tried to make sure there'd be no glitches getting started, we have just one little video glitch. Anyway, thanks for being here today. We have our hour, we are live on our bridge, we have some people on the... oh, look at that, Chuck Tatham. There he is. Welcome. I thought it was actually just a bad hair day and maybe you were just kind of faking it. It's more of a no-hair day. Right. Yeah, I'm not going to throw stones in that glass house. Anyway, so you folks work for Densify. I detect a Northern Exposure accent. Where's the company headquartered out of? Sorry, go ahead. Yeah, we're based just north of Toronto, in Markham. I'm sitting in Toronto right now, and Chuck, you're just north of the city, right? Okay. Well, tell us about Densify. You know, you folks have been working with us...
...to build and test your Red Hat certified containers for the Red Hat platform. You folks have an operator that helps improve day-2 supportability of your software running on OpenShift, and you're long-standing members of the OpenShift Commons community. So, tell us about Densify. Well, I'll start, and Chuck, you can chime in if I miss anything. We focus on resource optimization. It's really analytics software — you can think of it as pretty deep analytics that looks at the workloads, the patterns of activity, what the workloads are running on, and does a bunch of different types of optimization. If you step way back and look at the progression of the industry, we've gone from heavy use of virtual machines — and of course people still have those virtual machines — where the style of optimization is more around workload placement and sizing VMs, to the cloud, where people focus a lot on the cloud bill. But in our view, a lot of that cost comes from the resources themselves: if you're using the wrong types of instances in the cloud, there's a big problem there. So we've seen the progression to cloud, and of course containers are a whole new challenge, because again resource optimization is important, and what we're finding is that it's quite overlooked, especially initially, in these container deployments. So: resource optimization, deep analytics, pattern recognition, that type of thing, to basically optimize environments. Okay. How did he do, Chuck? Did he pass? I'll give him an A. He covered it very nicely. Okay. And Chuck, you're the chief marketing officer; Andrew is the CTO and co-founder of the company. How long has the company been around? Andrew, as a founder, you should answer that one too. Yeah, well, we've been around a while. I'd say we really got focused on this problem space around the 2005 time frame.
I think that's quite relevant because, as we know, containers aren't new. We were actually working with very early Solaris Zones, that type of thing. So we've been around a while, and a lot of the core foundational analytics grew out of that. But of course we've seen some major shifts, like I said — the rise of x86 virtualization, then cloud, of course. And I think this transition right now is probably the most exciting, because it applies to everything: containers, cloud, and on-prem. Right. Now, I don't know if everyone knows that containers aren't new. I know Docker the technology became pretty mainstream starting maybe about seven years ago, and Docker the company and DockerCon were all the rage for quite some time. Why are containers all of a sudden so mainstream, or becoming mainstream, when they've been around since the days of Solaris Zones and quite possibly even earlier than that? Well, the way I view containers is that modern containers do two different things: they're an app delivery construct and they're a virtualization construct. In the early days there was no delivery construct. If you go way back to these zones, or WPARs, or all these different structures, they didn't provide a way for you to deploy new versions easily. They're not like what Docker provided. I think that's what really precipitated their rise: developers can now use these — I think one of our customers said it's almost like copy and paste for apps. I can now build apps out of these building blocks and they're very easy to deploy, and I think that's really fueled the growth of containers. They also, of course, let you run multiple workloads on one node, which is the virtualization side of it, and which has been around a while.
And that's obviously another big part of it, but it's that one-two punch — having developers adopt them for their own purposes, and then being able to use them to host workloads and operate them — that's really a winning combination now. Sure. Okay. So, looking at your website, your main page basically says Densify is a predictive analytics engine. How does that work? What does it mean? Why is that important for customers? What we mean by that is, if you look at the solutions in the market, there are ones that react quickly to short-term demands — think of monitoring systems, or something getting hot, or in the VMware world you have DRS, which moves VMs on a short-term basis. And then there are longer-term approaches. We look at the longer patterns and say: that thing gets busy every month-end, or that thing gets busy every morning at 8am. That lets us do a much deeper level of analysis. What it means is that when we give recommendations, they're usually ahead of something going wrong, as opposed to reacting. Both are useful and both are necessary — you want to react to unplanned situations — but we feel you should get ahead of it and actually do the right analytics. A simple case would be: if I have a workload that's busy every morning and another one that's busy every night, it's possible to know that by analyzing the history, and to use it to optimize the resources and say maybe those two should go on the same node, as opposed to reacting when things bump into each other and then trying to respond to that.
And that applies to all types of infrastructure, and containers especially so, because as we see, it's a mystery to a lot of people what their containers are doing — they're so complicated, with so many moving parts, that it's hard to understand. So being predictive is a key part of that. Yeah. And for the people on the bridge, and those watching on Twitch and YouTube and other places: if you have questions for either Andrew or Chuck, just drop them in the chat and we'll make sure to get your questions addressed. So, you said VMs. Are VMs dead? I mean — and this isn't really something you need to respond to if you don't feel comfortable — but why did Dell just announce they're going to spin off VMware? It seems like VMware just keeps bouncing around from one owner to another for the last 10 or 15 years. Are they getting out of that business? Is the market getting away from VMs? Well, it's hard to say. We don't see VMs going away. The way I'd characterize it is, if you go to one of our bigger customers, the majority of everything is still running on VMs, but the majority of the conversations are around containers. So that's where things are heading, but I don't see VMs going away very quickly. It's kind of interesting, though, if you go back to the progression — back to these zones and that type of older technology — and replay history. The way I feel it went down is that there were these container-like structures that were good technology, but then VMware eclipsed that with these heavyweight VMs — x86 virtualization riding on the back of x86 gear — and really took over the market for 10 or 15 years. Huge success, obviously. And now it's coming back around to containers, and I think it's partly because containers are much more efficient.
You know, they don't virtualize the device drivers, and they're a nicer thing to run in. But it's almost like that was a necessary phase of the market, so that people would get used to sharing. VMs caused people to actually start to share resources, and they didn't like it at first, but they got used to it. Now that people are used to sharing resources, we're seeing a switch back to containers, which are probably a more efficient way to share them. But I think VMs are going to stick around. I don't think VMware is about to go under. It's just that the attention is moving away from on-prem and the cost associated with running that type of environment. Yeah. So, being able to do predictive analytics about apps, containers, presumably microservices — which is just lots and lots of tiny little containers — what about knowing where workloads run? As people adopt more and more multi-cloud, you've got clouds all over the place: various public clouds, hybrid models. How do people know where their apps are running, and how do they not get in trouble from a compliance perspective, with their sensitive information running in some system somewhere it's not supposed to be? And how does Densify help with that? I mean, it's definitely become much more complicated. Even before knowing where things run, just doing inventory is really complicated now. It used to be you could say, I have 1,000 servers or 5,000 VMs. Now, if you look at a cloud bill, there are so many things in it — it's become really complicated just to say what the entities are that you actually own or rent. And you mentioned microservices. Now these things are coming and going very rapidly. How do you even report that? How do you even tell someone how much stuff they have?
Even with containers, or even just elasticity in the cloud, if things are coming and going, telling you that you have 1,000 doesn't mean anything. You could have had a million microservices run over the course of a day for 10 seconds each. So even just being able to quantify what you have has become a big challenge. It almost becomes the area under the curve: it doesn't matter how many things you have — you used this many CPU-hours, this much horsepower is what you're consuming. So even before we get to where things are running, just being able to describe to someone what they have, what they're buying in the cloud, what's running — it's not impossible, it just needs a big shift. And then, to your point, once I get a handle on that — okay, I have things in two different clouds and I have different container environments — the placement of those workloads is critical. One thing that we like to talk about, for example, is licensed software. You talked about compliance; if you have to pay for software wherever you use it, well, you can't just start spraying around workloads that use SQL Server instances — you're going to get a huge bill. So there are many practical things that impact exactly where you should run things, from a cost perspective, or, like you said, compliance, data residency, all that type of stuff. It's hugely challenging. And to your point about how we help: we do have a rule engine in our product that dictates that type of thing — what we call fit-for-purpose. Where things land has to be based on what they require, the capabilities they require. Do I need a GPU? Do I need PCI compliance, or SOC 3? All that stuff doesn't go away just because you're running containers, and to your point, it becomes more complicated because you're running stuff everywhere.
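Andrew's "area under the curve" point can be sketched with toy numbers: for ephemeral workloads, counting instances is meaningless, but CPU-hours consumed stays comparable across very different shapes of workload. All figures here are invented for illustration:

```python
# Sketch: quantify ephemeral workloads by CPU-hours consumed,
# not by instance count (illustrative numbers only).
def cpu_hours(workloads):
    """Each workload is (count, cpus, hours_run_each)."""
    return sum(count * cpus * hours for count, cpus, hours in workloads)

# A million 10-second microservices at 0.1 CPU each...
micro = [(1_000_000, 0.1, 10 / 3600)]
# ...versus five always-on 4-CPU VMs for a day.
vms = [(5, 4, 24)]

print(cpu_hours(micro))  # ~277.8 CPU-hours
print(cpu_hours(vms))    # 480 CPU-hours
```

A million instances sounds enormous, but measured as area under the curve it is less than a handful of ordinary VMs — which is why consumption, not inventory, is the meaningful unit.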
So I think people assume that some problems go away when you start to migrate to these newer technologies, but some do and some don't, and keeping tabs on where you're running things is definitely still important. And what about resource utilization in the cloud? I forget if I was talking with someone at Datadog or one of those types of companies the other day, but I was reading some study that basically said something like 75% of all containerized workloads are over-allocated from a memory and CPU and GPU perspective. Is that what you folks see? And how can Densify help narrow that resource allocation, to make sure that the various containers and workloads get what they need without having to pay for excessive amounts of resources from any of the providers? Yeah, it's a huge focus for us, and we absolutely see that exact same trend. There are several reasons for it — I'll start with the technical reason, and I think there's an organizational reason why it happens too. Technically, if we rewind the history of virtualization again, over-commit is a very important concept in all of this. In a virtual environment, I can give you two CPUs and I can give Chuck two CPUs, and they can be the same two CPUs. What I'm playing on is the fact that you don't get busy at the same time. In fact, like I mentioned, if you get busy in the morning and Chuck gets busy at night, I can give you the same CPUs because you're not using them at the same time. That's a very powerful construct for driving efficiency, and you lose it as soon as you go into something like EC2 — an Amazon instance. If you just rent instances, and I run your workload in one and Chuck's in the other, I have to size yours to your peak of activity.
And I size Chuck's to his peak of activity, and I might end up buying twice as much capacity as I would have, because I'm building these little islands of capacity. So when you go from virtual to cloud, you lose over-commit. The organizational way to look at it is that I can't give you two CPUs and Chuck two CPUs and then, under the covers, make them the same two — I actually have to rent two different systems. That created the first wave of resource challenges, and it carries over into containers, because containers do kind of over-commit, but what you're asking for are not virtual resources, they're physical resources. So when you ask for 1000 millicores and Chuck asks for 1000, I can't give you both the same ones — I have to dedicate them to you. That sounds like a major drawback of public cloud. Was that by design? Was that just an oversight? I mean, it almost sounds like people look at public cloud and say, oh, this is great — until you start thinking about, you know, Mike wanting four CPUs and Chuck wanting four CPUs and having to pay for 100% of those resources. Does that make people slow down and say, wow, maybe this really isn't going to be as cost-efficient as we thought, compared to when we were in the data center and could virtualize our environments and share those resources? Yeah, definitely — I think that's a big cause of the bills being big when you first get one. But if you step back, I don't think the cloud providers were pretending otherwise. This is where you get into cloud-native versus legacy apps. If you build an app to run in the cloud, it doesn't work that way: ideally it just uses 100% of a resource until it's done with it, then gets rid of it. Things like serverless, or small microservices — they're architected differently.
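The over-commit loss Andrew describes can be made concrete with a toy model: two workloads that peak at different times need the sum of their peaks when each gets a dedicated instance, but only the peak of their combined demand when they can share. The demand curves here are invented:

```python
# Toy hourly CPU demand: one workload busy mornings, one busy evenings.
mike = [2 if 8 <= h < 12 else 0.2 for h in range(24)]
chuck = [2 if 18 <= h < 22 else 0.2 for h in range(24)]

# Dedicated instances: each must be sized to its own peak.
sum_of_peaks = max(mike) + max(chuck)  # 4 CPUs bought

# Shared capacity with over-commit: size to the peak of combined demand.
peak_of_sum = max(m + c for m, c in zip(mike, chuck))  # 2.2 CPUs bought

print(sum_of_peaks, peak_of_sum)
```

Because the two peaks never coincide, sharing needs roughly half the capacity — exactly the efficiency that disappears when each workload is parked on its own instance.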
So the cloud is very cost-effective if you use it well when you're using it, and then don't use it when you don't need it. That's the paradigm. If you architect an app to work that way, it's going to be quite efficient. If you take a legacy app — some banking app that has transactions all day long, that ebbs and flows — that can be very expensive in the cloud if you just put it in as-is, because again, you have to size to the peak, like month-end, and the rest of the time you're buying resources you're not using. So I don't think you can criticize the cloud providers; it's the architecture of the app. A cloud-native app is going to be very efficient in the cloud, a legacy app may not be, and I think that's where you see some shock: people start putting stuff in the cloud, and all of a sudden they get this big bill, because they're running legacy apps there. What about apps running on OpenShift, though? You folks have an operator — how does that operator help with configuration management of workloads being predictively analyzed by Densify? So, the way that works: a container environment, again, sits in between the virtual world and the cloud. You do ask for resources, and they are dedicated to you, but you can also run multiple workloads on one node, so it does over-commit. How well it over-commits, though, depends on the settings — the request and limit values on the containers. The way the operator works is that it automatically connects to the container environment and pulls back all the data for the pods and the deployments and the nodes — all the supply and all the demand data. It then analyzes the patterns and says, well, you don't need to give your container 2000 millicores; it just doesn't need that to operate properly.
You look at the whole system, all the containers that are running. And that gets back to the organizational problem: if you're an app owner deploying an app, you want to be safe, so you're going to ask for 2000 millicores for your web server. We'll look at it and say, you know what, we've watched this thing over time — it can be perfectly safe with 500 millicores. That's where this comes in: people will make decisions based on risk mitigation, which from their perspective is the right decision. We would say people shouldn't be making those decisions at all. The resourcing decisions should just be made through machine learning and predictive analytics; humans shouldn't be touching these values at all. They should just focus on the apps. And just to add to that: with the dynamics of DevOps and the way functionality is built today, you may have hundreds of individuals in a large entity specifying their desires for resources. So it's a much more distributed problem, versus the older way of managing it in a more central fashion. That's another element. Yeah, that's a great point. And Michael, we sometimes use the phrase "micro-purchasing" to describe that: you've gone from a legacy environment where you buy or lease stuff every three to five years, to one where a junior engineer can put a value in a Terraform file and something is purchased. And to Chuck's point, there can be thousands of these files floating around, made by different people, so the actual specification of resources has become very, very distributed. Even if everybody is making the right decision — which they probably are — you still end up with tremendous inefficiency, because all the decisions are made in isolation. There's not one kind of answer saying, this is how all of this should work together.
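A rough sketch of what history-based sizing looks like versus a human guess: size the request from a high percentile of observed usage plus headroom. The percentile-plus-headroom policy and the numbers are assumptions for illustration only, not Densify's actual analytics:

```python
# Illustrative only: recommend a CPU request from observed usage history
# rather than a developer's guess. Policy (p95 + 20% headroom) is assumed.
def recommend_request_millicores(samples, percentile=95, headroom=1.2):
    """samples: observed CPU usage in millicores over a long window."""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(len(ordered) * percentile / 100))
    return round(ordered[idx] * headroom)

# A web server whose owner asked for 2000m, but which rarely exceeds ~400m:
history = [120, 150, 200, 180, 390, 410, 160, 140, 220, 300]
print(recommend_request_millicores(history))  # well under the 2000m requested
```

Even this crude policy lands far below the risk-averse human number, which is the gap between "what feels safe" and "what the data says is safe."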
And this is how the resources should be used. Okay, so your title — and I'm kind of not following a script here, I was just bombarding you with questions because I was curious — basically says three important areas to think about in container resource management. When we went through our dry run, there were three areas you wanted to make sure we addressed on this call. Looking at my notes: the first was containers behaving like VMs in some ways; the second was leaving resource specifications to manual, human effort; and the third was the DevOps/FinOps gap. Let's talk about the first one: containers behave like VMs in some ways, but there are critical differences when it comes to setting resources. So what is the impact of incorrectly setting resources? I think I know the answer, but what's the answer from you folks, being the experts in the space? Well, it's like we just talked about. In a virtual environment, you can set resources, you can get them wrong, and you can over-commit your way out of it: I can give you a lot, but I can give those same resources to somebody else. The app owners are asking for certain things, and the central group can turn the crank and say, I'm going to drive up the density of the environment by sharing those resources with other people. That's what happens in a virtual environment. VMware environments, for example, also have what's called a reservation, where you can say this VM needs to get a CPU, or two CPUs — but people rarely use it because it's so draconian: it locks up that resource for only that VM. So it's a feature that's very rarely used, because of what it does to efficiency.
Now, containers have requests and limits, and the request is like a hard reservation. If you ask for 1000 millicores, you get a CPU. If Chuck asks for 1000 millicores, he gets a CPU. Once all the CPUs on a node are given out, Kubernetes starts scheduling onto a new node — that one's full. It doesn't matter whether you're actually using them: you own a CPU, Chuck owns a CPU, there are only two CPUs in the box, so we move on. That directly impacts the resourcing. Once you've given capacity out and moved on, you just keep consuming nodes without them actually being utilized — you've allocated them to workloads but not used them. We see this all the time when we do analyses. A lot of the time there'll be a cluster — an OpenShift cluster, say — where 80% of the resources are given out to the containers, but they're only 7% utilized. I'm thinking of one recent example. So by making these numbers too big, you directly drive down utilization and cause your organization to buy more gear. It's pretty much a direct relationship. And again, it comes back to the fact that they're not quite like VMs: for over-commit to work, you have to set the resource request lower, and the human behavior of a developer, or anyone responsible for an app, will always be to err on the side of generous. It's just the way it goes. Yeah. So Michael, there's an interesting thought exercise I did at one point, where I went through my laptop and said, if I were going to put Outlook and Word and all these things in containers, what would I give them for resources? So I said, my email — let's give that half a CPU, because it doesn't use much but it gets busy sometimes. And Zoom — well, let's give Zoom four CPUs' worth, because it gets really busy.
So if you go through PowerPoint and Explorer and Chrome and all these things — just me personally, giving the resources what I think they should get based on how I use them — the exercise added up to about eight laptops' worth of capacity. It's kind of that simple. Like PowerPoint — I think we talked about this in our dry run — how many millicores do you give PowerPoint? I don't know; I've been using it for 15 years and I have no idea how many millicores I would give PowerPoint, but I've seen it get pretty busy, so let's give it at least a CPU's worth, right? That's the rational thought process, but it's not based on the numbers. When you take all those numbers and run them in a real-world scenario, none of those apps need anywhere near that amount of earmarked resources. And that's the heart of the problem, which leads to one of the other points: this shouldn't be left to humans. Humans doing their best job, rationally, will still not get it right, because the answer is in the analytics, not in opinion. Okay, so how does your magic work? Is there an agent that sits on every virtual instance or node out there and phones home information to the mother ship? Is this a SaaS offering with an agent, or do people deploy it inside their infrastructure? How does it all work? Given that we only have 26 minutes left on our call, maybe not too detailed a response. Well, it's agentless — we piggyback off existing data collection. OpenShift is nice because it bundles Prometheus, so our operator is seamless: you just run the operator and it gets the data. There's one component that runs and gets the data from Prometheus. It is a SaaS offering, so it all goes up into the cloud, where it gets analyzed. And again, there are multiple levels.
There are the container requests, the node-level data, the kube-state-metrics — all that goes up. And then it also depends on what it's running on. If you're running in the cloud, you're probably running on a scaling group, and that data goes up as well, because those have to be married together: the containers are running on nodes that are scaling in and out. We get that side from things like CloudWatch, as an example. So for containers — say OpenShift on cloud — we get those two levels. If it's on-prem, then of course it might come from VMware data collection, because the nodes are VMs, or they might be bare metal. So it depends on the container deployment scenario, but it's all agentless. We basically interface with Prometheus, or interface with the APIs that are available, pull it all up into the cloud — the lights dim, we chew through all the numbers — and then we come up with recommendations that come back down. Those make all kinds of nice reports, like app-owner reports, and they can go to Slack or Teams. And then it also goes to things like Terraform, which actually makes changes. So if you want to automate it, it goes up to the cloud, the answer is generated, it comes back down, and it can happen fully automatically if you want: it can just take over the settings in a Terraform file, for example, to automatically optimize the containers. We did a webinar last week on integrating with Helm and related frameworks to drive those changes back into the infrastructure — that's available as a recording on our site if people are interested. And this is where we get back, Michael, to the predictive part, because all of that is predicated on having the right answer. One of the things we really heavily focus on is that we don't say, you might want to make that bigger, or you might want to make that smaller.
We say: no, we've looked at all of this data and all of these rules and all of these different things, and this is exactly what it should be. And we've taken into account the fact that you don't use burstable instances, and you don't use that, and you can't do this, and you have to run that. There's a lot of policy involved, so that when we say "do this," it's actually an answer you can automate. We have customers on our website, for example — there are videos — who have been taking OpenShift environments, using our APIs to get the recommendations out, bringing them through Groovy and Jenkins, and just automating the whole process of optimization. So everything we're talking about just happens automatically. You don't have to worry about it; you just deploy the containers and they get optimized seamlessly. Now, you've used the word OpenShift specifically at least six times since we started today. Am I paying you for that? Clearly there must be other container orchestration platforms that you folks work with or run on — is that right? Yeah, for sure. Kubernetes has obviously risen as kind of the de facto standard, and there are lots of variants of Kubernetes: roll-your-own Kubernetes, EKS, AKS. For all those variants, it's the same. There's also ECS, which isn't Kubernetes, but we support that too. What we find is that there are a lot of variations of Kubernetes that people are using — either grabbing their own versions, or as a service in the cloud, or OpenShift — and they're all cut from the same cloth, so those are all supported pretty much the same way. Again, the major variation is the supply side — what they're running on. Things will look different depending on whether you're running on-prem versus in the cloud, and different optimization answers will come out, because one is very dynamic...
...and one is more of a capacity planning answer — I need to buy more gear because I'm running low on-prem. But all these different variants are supported the same way. Okay. You could have just said OpenShift is the only platform we support, period, if you wanted to. I'm not going to give you the chance to say that, but that's fine. Well, we do love OpenShift. We certainly see it most commonly in our enterprise customers. We do like it because it bundles Prometheus — it's a much simpler thing, a much more known quantity. We find it's a great solution because it removes a lot of the variables, for the customer and for us. So we really like working with it; again, that operator is a clear example of how fast it is to install and get up and running. You can't do that on any other platform that easily. It's great. I don't recall — and I'm completely off script, so I apologize, Heidi; you had provided some questions here, and we'll make sure we get to them. But I'm just curious. I forget if it was on a webinar that we did with you — no, actually, it might have been with Kong — and we were talking about API gateways and service mesh, and the whole concept of containers getting smaller and smaller and becoming microservices, and the importance of service mesh in that space. And then somebody chimed in on chat saying, we're seeing just the opposite: we're seeing the size of our containers getting bigger and bigger, measured in — I forget if he said gigabytes in size, or maybe even larger. How would you talk about what you folks know, based on your analytics engines, about containers turning into microservices? When can we not live without service mesh to manage everything? Or is it going the other way, and containers are actually just getting bigger inside production environments?
Well, that's a very interesting observation, because certainly we see a lot of customers running things like Java in containers, and they're not small. They're not blipping in and out; those are heavyweight. The way I view it is, if you write an application, say a big application like you're describing, it usually has a thread pool inside it that dispatches the work and does the work. And microservices are just taking that thread pool and moving it outside the app, where everything runs in kind of shorter bursts, scheduled by the container scheduler instead of inside the app. So it's really whether you expose the innards or not. And some apps benefit from having all the threads have access to the same memory. So again, it's basically an application architecture question. Blowing things out into microservices is really good if all those pieces are independent and run completely independently; maybe Google Maps serving requests runs very efficiently that way. It may be that a banking app that needs access to common data is better off in the same memory space. So I think it all comes down to the same thing, just exploded out to a different level: whether your threads become microservices or whether they stay inside the app.

And microservices are a funny thing from a resource perspective, because what we find is that just because something runs for a very short period of time doesn't mean you can be sloppy with it. For example, we see a microservice that runs for, let's say, even a minute, something that runs for a minute to do something. And it's given a lot of resources, and everybody says, well, who cares, because it only runs for a minute. What's it really going to impact? Well, if a million of those things run, it really impacts things.
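The arithmetic behind that point is worth making concrete. All the numbers below are invented for illustration, but they show how a "harmless" one-minute over-allocation multiplies out at scale:

```python
# Back-of-envelope: a microservice requests 1 full core but only uses
# a tenth of it, runs for one minute per invocation, a million times a day.

requested_cpu_m = 1000   # millicores requested per invocation
actual_cpu_m = 100       # millicores actually used
runtime_min = 1          # each invocation runs for one minute
runs_per_day = 1_000_000

# Reserved-but-idle CPU, in core-minutes per day.
wasted_core_minutes = (requested_cpu_m - actual_cpu_m) / 1000 * runtime_min * runs_per_day

# Spread over a 24-hour day, that's the equivalent number of whole cores
# held continuously by capacity nobody is using.
wasted_cores_continuous = wasted_core_minutes / (24 * 60)
print(round(wasted_cores_continuous))  # 625
```

So even though each run "only lasts a minute," the fleet is effectively holding hundreds of idle cores around the clock, which is exactly the false sense of security being described.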
If you see what I'm saying, there's a false sense of security that something running for a short period of time doesn't need to be correct. What we find is the opposite: if the microservices are incorrect, it multiplies out very quickly and you need a lot of hardware to run them. It's the same story; it just isn't intuitive to people. And in that sense, running things as one bigger element is kind of easier, because you can measure it and actually specify it more easily than a million small things running for short periods of time. So we see both things happening. I think people will probably find that the big things are easier to manage than trying to manage all the small ones in aggregate, but it really comes down to the app you're deploying.

Okay. Well, I invite the people watching on Twitch and YouTube, and even here on the bridge, to share your thoughts. If you have a question, or care to comment on how you feel about predictive analytics and resource management in a multi-cloud world, put them in here. I get the sense that Andrew and Chuck can address just about any type of question that may come up. So let's play stump-the-Andrew, let's play stump-the-Chuck here today.

Maybe I'll have a camera problem if the question is too hard.

Yeah, that was very convenient earlier, Chuck. So, DevOps and FinOps gaps. What kind of gaps? Who's responsible for setting resources in container environments? And is this a return to old-school capacity management functions?

It's interesting you mention the capacity part, because we see a lot of organizations that had developed very mature capacity management and capacity planning processes when they were buying a lot of on-prem gear.
So we see some extremely sophisticated stuff being done on that front, predictive forecasting, that type of thing. But with the cloud disruption, what we found happened was people just dropped that entirely and focused on the bill. There's almost this gap where resources are treated as magic: we're just buying them in the cloud as we need them, we're going to focus on the bill, understanding the bill, producing chargeback reports, all that kind of thing. And no planning whatsoever, because how can you possibly plan? It's all dynamic, right? What we found is that went on for long enough that people started to realize, okay, maybe we do need to pay attention to this, for the reasons I mentioned: I have this huge spend and I need to look at the resources. And then containers, of course, make that even more complicated. So we do see a return to the need for that type of function, although it may not look the same. It may not look like people trying to plan out three-year purchasing strategies. It's different, more operational, and we joke about "cap ops." Is there a cap-ops kind of function? We don't want to coin that phrase, it's not a great phrase, but is there some more operationalized capacity function that needs to occur? Because you clearly need to pay attention to this, and it's falling through the cracks right now. There are people looking at the bill, and there are people writing the app code. Who's looking at this? That's the big gap. And Chuck, you work a lot with the FinOps foundations and groups like that, and there's some really great stuff being done there. But does it get to the level of specific resources? I think it stays elevated at the financials and doesn't necessarily get into the details.
Yeah, the initial wave seems to be more around how do we allocate out the cost of this infrastructure to the various consumers inside an enterprise. But those efforts generally don't focus on getting the actual resource specifications correct, from either a risk or an efficiency perspective. It's more about, okay, you're paying your fair share, and how do we account for things that are shared services versus dedicated to a business service. That's not to trivialize those pure finance challenges, because enterprises have to allocate cost and make sure the right parts of the business are budgeting and paying for things. But it doesn't answer the question of, is my supply chain introducing risk and/or waste? That's the detailed-level stuff.

Hey, maybe this is a question for Chuck, but Andrew, in his last statement there, was talking about chargeback reports, and I took that to mean that Densify provides chargeback reports for the managers so they can take action on the information that's shared from your technology. Isn't that something that would already come from the cloud providers themselves? Why do they need Densify?

I think you need to separate the pure cloud consumption cost from the complexities of containers as Andrew was describing them. The precise representation of what a business service or an application is actually consuming is a much more exact type of answer, and one that we deliver, versus the general sense of what something is costing at a high level, which is what we often find from the cloud providers. They tend not to drill down into the detail. And our point of view is that they're not entirely motivated to drive cost efficiency when they're the ones actually selling the capacity.
Yeah, well, I sometimes compare it to the cellular companies. The telcos are getting better at providing data, but they're certainly not calling you up and proactively telling you whether there are savings to be had. I do think the cloud providers, specifically AWS, are playing a long game, and they are genuinely interested in helping companies use infrastructure efficiently and manage the cost, but they're really not driving down to that next level.

Andrew, do you want to talk to this diagram?

Yeah, I thought this was relevant, so I pulled it up while you were speaking. We view cost as kind of a pyramid, where you can think of the top as how you're buying and the bottom as what you're buying. And I'll just build this out. We saw the cloud market progress where there was a huge focus on understanding the bill: figuring out who needs to pay the bill, I'm doing my job if I'm just recovering the cost from all the lines of business that are buying those services, finding anomalies, and then of course how you buy: reservations, savings plans. It's like, picture you have a fleet of vehicles and everybody's driving them everywhere, and you think, well, the first step is let's get a discount program with the gas station, a points card or whatever. Well, that doesn't really fix the problem. The problem is actually that everybody's driving everywhere. It partly fixes the problem; really, it just gives you a cheaper version of the same problem. But we see this line you cross where you get into not just how you're buying and what you're spending, but optimizing what you're actually using. So I think this captures the dividing line nicely. We do both sides of this line, but we tend to be focused on the bottom area, because we're not focused on the taxonomy of the bill and all that kind of bill analysis. What we're saying is, no, you're buying the wrong things.
You should be buying compute-optimized instances that are half the size, or your containers should all be smaller, or different. And we draw it as a pyramid because that's where we see the bulk of the inefficiency coming from: not that you're buying it wrong or don't understand the bill, but that you're using the wrong resources. You mentioned somebody did a study showing most containers are sized wrong. Absolutely. And most cloud workloads are on the wrong types of instances, too. There are cases where we recommend a customer go onto, say, an i3, and they had never heard of an i3, because we look at the whole portfolio of resources. So we find this diagram useful, and the cloud providers, to what you were mentioning, Chuck, kind of dwell on the top part of it. No cloud provider is going to say, yeah, your scale group nodes are the wrong family and should be scaling differently. They're just not getting into that, which is really the gap.

And the other dynamic is that in the bottom part of this diagram, the personas there, the engineer, the app owner, generally don't want to take recommendations for infrastructure change from a finance view. It's the number one challenge in a recent FinOps survey. The FinOps Foundation ran a survey in February, and the number one finding, the number one issue, was getting engineers to actually make the change in order to drive efficiency. A lot of that problem is just resistance, the sort of talk-to-the-hand reaction, when a bill-generated suggestion for efficiency is brought forward. There needs to be a foundation of precision analytics that tells you an actionable change, so that over time you trust it and you'll actually make those changes. That, in our experience, is the way to deal with that number one challenge.
And as you get into containers, it gets more and more obscure, because there's no way a finance person is going to know, or even have the information to realize, that all the containers are mis-specified, that the request values are too big, causing the nodes to inflate, causing there to be more nodes. You don't see that in the bill; you just get a big bill. So we see people say, let's make a container chargeback report to figure out who's spending the money, which is useful, but it doesn't solve the root cause. The root cause, in that case, is coming from the actual DevOps tool chain. So that's the disconnect between the DevOps, FinOps, and capacity sides; there's still that gap. The big win in finance is, okay, everyone's paying what they should, the cost is covered. It's not answering the question of, is what we're paying appropriate for what we actually need?

Hmm. Okay, one final question I have, then maybe we'll get to some of the script that Heidi provided if we have time. Are there apps that don't jive well with Densify? There are a lot of apps built on Kubernetes, for Kubernetes; they call themselves cloud-native apps. There are other companies, or other developers, trying to forklift-upgrade legacy apps into the cloud, or kind of trying to bring the cloud to them. I don't want to point fingers at any big giant Oracle, excuse me, database vendors. That actually was a slip of the tongue. Are there any apps that you guys can't handle? Are there some workloads that are better suited for Densify, and are there other workloads that people should just stay away from and give up on?

Well, that's a good question. I don't know if there are any specific apps, but I'll characterize it as types of apps or types of workloads. And the one thing that, and it's not just us,
what I think is very difficult to deal with is when you have a lot of churn: systems, containers, or cloud instances that come and go a lot, whether because of microservices or something else. What we find is that you can see that, and you can actually tell if it's wrong or not, but if it's not tagged, you don't know what to do with that answer. So it's the whole ephemeral-instance problem. Picture a big monolithic app churning along for days: that's easy. You see it, you know what it is, and if you should optimize it, you can identify a recommendation, identify the Terraform file it came from, and say, put this line in the Terraform file and it'll be automatically fixed. If stuff comes and goes, let's say like a grid — we see customers running grids, either in containers or just in the cloud, and the nodes come and go, but they're all identified as part of that grid — that's great. We can say, okay, all your grid nodes, you should do this with them, and they'll be better optimized. But then you get into stuff that nobody knows what it is. We still have that problem, and it's shameful that we still see it at this point in the progression of the IT industry: nobody tagged it, nobody knows what it is, something just blips in and out, and who knows where it came from. Tagging is still a problem. It's always been a problem, right through all the progression we've talked about. So if somebody doesn't tag it and say what it is, you can look at it and say, well, that's all wrong, but you have no idea what to do with that answer. So what trips us up is something we can give an answer for, but the answer has nowhere to go, because organizationally people haven't identified what things are. It's like this dark matter running in the cloud: nobody actually knows what it is except the person running it. And that still exists.
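The tag-compliance idea being described can be sketched in a few lines. The inventory shape and the set of required tags below are invented for illustration; the point is simply that any workload missing the tags that route a recommendation back to an owner gets flagged:

```python
# Toy tag-compliance check: flag workloads that can't be traced to an owner.
# REQUIRED_TAGS and the inventory structure are hypothetical examples.

REQUIRED_TAGS = {"owner", "app"}

def untagged(inventory: list[dict]) -> list[str]:
    """Return the IDs of workloads missing any required tag."""
    return [
        w["id"]
        for w in inventory
        if not REQUIRED_TAGS <= set(w.get("tags", {}))
    ]

inventory = [
    {"id": "i-grid-001", "tags": {"owner": "risk-team", "app": "grid"}},
    {"id": "i-mystery-042", "tags": {}},  # the "dark matter" nobody can place
]
print(untagged(inventory))  # ['i-mystery-042']
```

A report like this is only the first step; as discussed below, some shops go further and automatically terminate anything that stays untagged.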
So you can see it, you can quantify it, you can tell if it's wrong, but you can't fix it, because you have no line of sight to where it came from. And Chuck, you might have comments on other challenges, but that one, I think, is a big one. If you don't have the discipline to describe what's running in the cloud, it's like building a car without a blueprint. If nobody knows what the pieces are, you can't really fix it, you can't allocate the cost to anybody. It's just a mystery.

I think you've also pointed out that certain applications that are single-purpose, with built-in scaling, architected from the ground up to operate that way — maybe Facebook is a good example — where it's just a native part of the way it runs, may also not be a case where we can add value.

Yeah, that's a good point, Chuck. There are cases, and we use the Facebook example, where you run one giant business service, the same app, serving a lot of people, highly scaled. People might hand-tune that, down to a fine level of detail, as opposed to, again, a bank that has 3,000 apps that are all different, which becomes very difficult to optimize. The ones that get the kid-glove treatment still benefit from optimization, they still benefit from the telemetry, but they usually have people who wrap their heads around them and are more proactive about optimizing. It's the diversity that creates the problem.

So I had here in my list of things to talk about customer aha moments. And for me, I just had an aha moment when you were talking about having a car with no blueprint, and customers being able to use something like Densify to get this type of analytics.
But if they're not managing their development chain and they don't actually have a way to take action on the feedback that your tooling can provide, how often do you see that inside a customer, and how does it get resolved?

So we do have a report we call the tag compliance report, which will say, here's a bunch of stuff out there that isn't described well enough to get back to whoever created it. And that's one of the most effective ways: you give that to a customer, and the customer will take it and say, okay, let's now go and make sure everybody tags. We've seen some customers that will even kill things if they're not tagged properly. We had one customer that had an auto-killer: if something comes up for more than five minutes and isn't tagged, it just gets killed. So some of them are very disciplined, but others need to go out and reach out and say, who's running this, and could you please tag it as you deploy it.

So do developers have to rebuild their software to accommodate being analyzed by Densify? Or is it just like, hey, make sure you have all this metadata somewhere so that Densify can understand who it's talking to? How much education needs to happen for the DevOps team or developers to get their containers into the production environment so that you folks can actually work with them?

It doesn't change the app logic or the way they're developing. Somebody's writing an app in whatever language, and they go to deploy. What happens is there's a point, with Terraform for example, where they're going to make a file that says, run my app, give it 1,000 millicores, give it so much memory, and do this. So that's the point right there. They don't need to change how they write the app, but at the point where they say how to deploy it and what resources it needs, what you can do is not put the number there.
Just put a line of code there that says, insert the analytics answer for the millicores for this thing, and it'll automatically resolve it. If you're in Terraform, for example, it'll automatically know what this thing is and resolve it back. So it doesn't require developers to change the way they're doing things, as long as they put it in the right place. And this is the big difference from legacy environments like VMware, where you throw stuff out into a VMware environment and then it's just running off into the distance, and to change it you go and change the object, you go and change the VM. You don't do that in container environments or cloud environments. You change the code that created the instance or the container. So as long as the developer puts in that line of code to link it, they're done. They don't need to do any more. It just says, when the recommendation comes up and the engine finds the recommendation, insert it here. And it actually makes the developer's life better, because they've got someone pounding their fist saying, get the Java code done for our release next Friday. They don't care about millicores. They shouldn't be burdened with that. We've seen customers put spelling mistakes in these files. They shouldn't be doing that; they should just put that line of code in there and then worry about their apps.

Okay, we are out of time. I'm surprised that Chris Short hasn't started yelling at me yet in the background. What do we got here? I'm going to share my screen. This is our closing slide. How can people find out more? What do you want them to do? You gave us some QR codes here. Certainly they can go to your website as well, and presumably there are ways to get engaged with Densify. Any final closing words?
Normally I ask people, hey, what are the two things that your CMO would want to make sure you talked about that you didn't, so you can prevent that phone call immediately after. But given that Chuck is on here, I can't ask that question. Or should I?

Well, you would expect to get a decent answer. I think a couple of things to take away would be our trials. Yes, we call them free, but they're also very low effort as far as seeing what our analytics would tell you — what would the radiology report say about the health of your patient? As Andrew said, it's agentless, very low impact as far as load on your infrastructure. We can stand it up very quickly and give you that radiology report very quickly as well. So that's an open door. And if someone just wants to explore some of these concepts with us conversationally and consultatively, we'd love to do that as well.

Very cool. Well, Andrew and Chuck, thanks so much for joining today. If anybody who's watching on Twitch, YouTube, Facebook, LinkedIn, or any of those other places wants to get in touch with any of the folks at Densify and doesn't know how to do it, you can always send me an email — it's just waite at redhat.com — and I'll be sure to get whatever connections made you need. Great. We're done. Chris Short hasn't started yelling at me, but I'm sure we're eating into somebody else's time here. We're going to hang it up for the OpenShift Commons Briefing Operator Hour show for this Wednesday, and we're not going to be on next Wednesday, because Red Hat Summit is going to be consuming every resource inside all of Red Hat. So see you at the Red Hat Summit. Thanks for joining here today. Thanks.