The adoption of container orchestration platforms is accelerating faster than almost any category in enterprise IT. Survey data from enterprise technology research shows that Kubernetes specifically leads the pack in both spending velocity and market share. Now, like virtualization in its early days, containers bring many new performance and tuning challenges. In particular, ensuring consistent and predictable application performance is tricky, especially because the very flexibility and portability that make containers attractive mean things are constantly changing. DevOps pros have to wade through a sea of observability data, and tuning the environment becomes a continuous exercise in trial and error. This endless cycle taxes resources and kills operational efficiency, so teams often capitulate and simply dial up and throw unnecessary resources at the problem. StormForge, a company founded in the middle of the last decade, is attacking these issues with a combination of machine learning and data analysis. And with me to talk about a new offering that directly addresses these concerns is Matt Provo, founder and CEO of StormForge. Matt, welcome to theCUBE, good to see you.

Good to see you, thanks for having me.

Yeah, so we saw you guys at KubeCon, which is where we first introduced you to our community, but add a little color to my intro there.

Yeah, well, you semi-stole my thunder, but I'm okay with that. I absolutely agree with everything you said in the intro. You know, the problem that we set out to solve, which is tailor-made for the use of real machine learning, not machine learning as a marketing tag, is connected to how workloads on Kubernetes are managed from a resource-efficiency standpoint. A number of years ago we built the core machine learning engine, and we have now turned that into a platform for managing Kubernetes resources at scale.
And so organizations today, as they're moving more workloads over, sort of drink the Kool-Aid of the flexibility that comes with Kubernetes and how many knobs you can turn, and developers in many ways love it. But once they start to operationalize the use of Kubernetes and move workloads from pre-production into production, they run into a pretty significant complexity wall. This is where StormForge comes in: to help them manage those resources more effectively, implementing the right kind of automation, automation that empowers developers in the process rather than automating them out of it.

So you've got news, a hard launch coming, to further address these problems. Tell us about that.

Yeah, so as with any machine learning, we think about data inputs: what kind of data is going to feed our system so it can draw the appropriate insights for the user. Historically we've been single-threaded on load and performance tests in a pre-production environment. There's been a lot of adoption of that, a lot of excitement around it, and frankly amazing results. My vision, however, has been for us to close the loop between the data and associated optimizations coming out of pre-production and the data coming out of production, and our ability to optimize that. A lot of our users along the way have said, "These results in pre-production are fantastic, but how do I know they reflect the reality my application is going to experience in a production environment?" And so we're super excited to announce the second core module for our platform, called Optimize Live. Its data input is observability and telemetry data coming out of APM platforms and other data sources.

So this is like Nirvana. I wonder if we could talk a little bit more about the challenges this addresses. I mean, I've been around a while, and I used to ask technology companies all the time:
Okay, so you're telling me beforehand what the optimal configuration and resource allocation should be; what happens if something changes? And then there's always a pause. And Kubernetes is a more rapidly changing environment than anything we've ever seen. So this is specifically the problem you're addressing. Maybe talk about that.

Yeah, so we view what happens in pre-production as the experimentation phase: our machine learning allows the user to experiment and scenario-plan. What we're doing with Optimize Live, adding the production piece, is what we also call our observation phase. You need to be able to run the appropriate checks and balances between those two environments to ensure that what you're actually deploying and monitoring, from both an application-performance and a cost standpoint, aligns with your SLOs and SLAs as well as your business objectives. That's the entire point of this addition: to let our users experience the Nirvana associated with that, because it's an exciting opportunity for them, and really something nobody else is doing in terms of closing that loop.

So you said up front, machine learning not as a marketing tag. I want you to sort of double-click on that. What's different about how other companies approach this problem?

Yeah, I mean, part of it is a bias of mine, a frustration as a founder, and the reason I started the company in the first place. Machine learning, or AI, gets tagged onto a lot of stuff. It's very buzzwordy, it looks good. I'm fortunate to have found, from the outset of the company, a number of folks with PhDs in applied mathematics and a focus on actually building real AI at the core, connected to solving the right kind of actual business problems. And so for the first three or four years of the company's history, we really operated as a lab, and that was our focus.
We then decided to connect a fantastic team with differentiated technology to the right market timing. And when we saw all these pain points around how fast the adoption of containers and Kubernetes had taken place, and the pain that developers were running into, we found that this was the perfect use case.

So how specifically does Optimize Live work? Can you add a little detail on that?

Yeah. Many organizations today already have a monitoring, APM, or observability suite in place, and they've also got a metrics source; this could be something like Datadog or Prometheus. Once that data starts flowing, there's an out-of-the-box piece of Kubernetes called the VPA, the Vertical Pod Autoscaler. Really, less than 1% of Kubernetes users take advantage of the VPA, mostly because it's really challenging to configure, and it's not very compatible with the ecosystem of tools in a Kubernetes environment. So our biggest competitor is the VPA. What's happening in this world for developers is that they're having to make decisions on a number of different resource metrics, typically things like memory and CPU. They have to decide what requests they're going to allow for an application, and what the limits are: the thresholds they're okay with, so that they can hit their business objectives and keep in line with their SLAs. And to your earlier point in the intro, it's often guesswork. They either rely on out-of-the-box recommendations that ship with the databases and other services they're using, or it's a super-manual process to go through and configure and tune this. And so with Optimize Live, we're making that one click.
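To make the request-and-limit tuning concrete, here is a minimal sketch of the kind of heuristic such a tool might apply: pick a high percentile of observed usage and add a safety margin, then derive limits with extra headroom. This is an illustrative approximation in the spirit of the Vertical Pod Autoscaler's target estimation, not StormForge's actual algorithm; all function names and parameters are hypothetical.

```python
def percentile(samples, q):
    """Return the q-th percentile of a list of usage samples (simple index method)."""
    s = sorted(samples)
    idx = min(len(s) - 1, int(q * len(s)))
    return s[idx]

def recommend_resources(cpu_millicores, mem_mib, q=0.9, margin=0.2):
    """Illustrative request/limit recommendation from observed usage:
    requests near the 90th-percentile demand plus a safety margin,
    limits with extra headroom on top of that."""
    cpu = percentile(cpu_millicores, q) * (1 + margin)
    mem = percentile(mem_mib, q) * (1 + margin)
    return {
        "requests": {"cpu": f"{round(cpu)}m", "memory": f"{round(mem)}Mi"},
        "limits":   {"cpu": f"{round(cpu * 2)}m", "memory": f"{round(mem * 1.5)}Mi"},
    }

# Example: five observed samples of CPU (millicores) and memory (MiB) usage.
print(recommend_resources([210, 240, 180, 400, 260], [300, 320, 310, 450, 330]))
```

In practice the hard part is exactly what the interview describes: choosing the percentile, margin, and headroom per application so the numbers line up with SLOs rather than guesswork.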
And so we're continuously and consistently observing the data that's flowing through these tools, and we serve recommendations back to the user. They can choose to let those recommendations automatically patch and deploy, or they can retain some semblance of control over the recommendations and manually deploy them into their environment themselves. And we really believe that the user knows their application. They know the goals that they have; we don't. But we have a system that's smart enough to align with those business objectives and ultimately provide the relevant recommendations.

So the business objectives are an input from the application team, and then your system is smart enough to adapt and address those?

Application by application, right. And so the thresholds in any given organization, across their different ecosystem of apps or environments, could be different. The business objectives could be different. So we don't want to pre-define that for people. We want to give them the opportunity to build those thresholds in, and then allow the machine learning to learn and to send recommendations within those bounds.

And we're going to hear later from a customer who's one of the largest Drupal hosts. It's all do-it-yourself across thousands of customers, so it's very unpredictable. I want to make something clear, though, as to where you fit in the ecosystem. You're not an observability platform; you leverage observability platforms, right? So talk about that and where you fit into the ecosystem.

Yeah, this is a great point. We're also a startup, and growing, and we've made the choice to be very intentionally focused on the problems that we solve, and we've chosen to partner or integrate otherwise. And so we do get put into the APM category from time to time, but we're really an intelligence platform.
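The "recommendations within those bounds" idea above can be sketched as a small gating step: clamp each recommendation to the user-defined thresholds, then decide whether the change is small enough to auto-deploy or should go to manual review. This is a hypothetical illustration of the concept, not StormForge's implementation; the bounds and the 25% auto-apply cutoff are invented for the example.

```python
def gate_recommendation(recommended_m, current_m, bounds, max_auto_change=0.25):
    """Clamp a CPU recommendation (in millicores) to user-defined bounds,
    then decide whether the relative change versus the current setting is
    small enough to auto-deploy or should be surfaced for manual review."""
    clamped = max(bounds["min"], min(bounds["max"], recommended_m))
    change = abs(clamped - current_m) / current_m
    action = "auto-deploy" if change <= max_auto_change else "manual-review"
    return clamped, action

# A large jump beyond the user's ceiling gets clamped and held for review;
# a modest adjustment is applied automatically.
print(gate_recommendation(900, 500, {"min": 100, "max": 800}))
print(gate_recommendation(550, 500, {"min": 100, "max": 800}))
```

The design point matches the interview: the user encodes their objectives as bounds, and the system only acts autonomously inside them.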
And the intelligence and insights that we're able to draw come from the core machine learning we've built over the years. We also don't want organizations or users to have to switch away from tools and investments that they've already made. We were never going to catch up to Datadog or Dynatrace or Splunk or AppDynamics or some of the others, and we're totally fine with that. They've got great market share and penetration, and they solve real problems. Instead, we felt users would want a seamless integration into the tools they're already using. And so we view ourselves as kind of the Intel Inside for that scenario. We take observability and APM data and insights that are visualized but somewhat reactive, and we add a proactive nature on top: the insights and, ultimately, the appropriate level of automation.

So when I think about cloud native, I go back to the origins of the CNCF. It was a handful of companies, and now you look at a landscape of participants that makes your eyes bleed. How do you deal with all those companies, and what's the partnership strategy?

Yeah, it's so interesting, because even the CNCF landscape has exploded. Not too long ago it was as small as or smaller than the FinOps landscape today, which, by the way, is also on a neck-breaking growth curve. Although there are a lot of companies and a lot of tools, we're starting to see a significant amount of consistency, a hardening of the tool chain, with our customers and users. And so we've made strategic and intentional decisions on deep partnerships, in some cases OEM uses of our technology, and certainly intelligent and seamless integrations into a few. So we'll be announcing a really exciting partnership with AWS, specifically around EKS, their Kubernetes distribution and services.
We've got a deep partnership and integration with Datadog, and then with Prometheus, specifically with a few cloud providers that are operating managed Prometheus environments.

Okay, so where do you want to take this thing? You're not taking the observability guys head-on, a smart move with so many of them entering the market now, but what is the vision?

Yeah, so we've had this debate a lot as well, because it's super difficult to create a category. On one hand, I have a lot of respect for founders and companies that do that. On the other hand, from a market-timing standpoint, we fit into AIOps; that's really where we fit. We've made a bet on the future of Kubernetes and what that's going to look like. And so from a containers and Kubernetes standpoint, that's our bet, but we're an AIOps platform. We'll continue getting better at the problems we solve with machine learning, and we'll continue adding data inputs. We'll go beyond the application layer, which is really where we play now, and add whole-cluster optimization capabilities across the full stack. The way we'll get there is by continuing to add data inputs that make sense across the different layers of the stack. And it's exciting: we can stay vertically oriented on the problems that we're really good at solving, but we can become more applicable and compatible over time.

So that's your next concentric circle. As the observability vendors expand their observation space, you can play right into that. The more data you get, the better, because you're purpose-built for solving these types of problems.

Yeah, so right now we're taking things like telemetry data out of observability. Pretty quickly, you can imagine a world where we take traces and logs and other data inputs as that ecosystem continues to grow. It all feeds our system; we are reliant on data.

Excellent, Matt, thank you so much.

Yeah, thanks for having me on.

Okay, keep it right there.
In a moment, we're going to hear from a customer with the highly diverse and constantly changing environment I mentioned earlier. They went through a major replatforming with Kubernetes on AWS. You're watching theCUBE, your leader in enterprise tech coverage.