Welcome to this special CUBE conversation here in Palo Alto. I'm John Furrier, host of theCUBE. We're here talking about Kubernetes, cloud native and all things cloud and cloud enterprise. Amir, VP of Product at Opsani, is with me. Amir, great to have you on theCUBE. Thanks for coming on. Appreciate you taking the time.

I appreciate it, John. Good to be here.

You know, cloud native obviously is super hot right now as the edge is around the corner. You're seeing people looking at 5G, looking at Amazon's Wavelength and Outposts. You've got Azure. You've got a lot of cloud companies really pushing distributed computing. And I think one of the things people are really getting into is, okay, how do I take the cloud and refactor my business? That's the business side. Then there's the technical side: okay, how do I do it? It's not that easy, right? It sounds really easy, oh, just move to the cloud. This has been a big problem. So I know you guys are in the center of all this, with microservices and Kubernetes at the core. Take a minute to introduce the company and what you guys do. Then I want to get into some specific questions.

Of course. Well, Opsani is a Silicon Valley startup, and what we do is automate system configuration. That's typically work that an engineer does; it's lengthy and often done incorrectly, and it leads to a lot of errors, cost overruns and user experience problems. We completely automate that using an AI and ML backend, so that the engineer can focus on writing code and not worry about tuning all the little pieces to work together.

And I love that. I was talking to a really prominent VC on our last cloud startup showcase, and he was talking about down-stack and up-stack benefits. He said, if you're going to be a down-stack provider, you've got to solve a problem, and it has to be a big problem that people don't want to deal with.
So let's get into systems and configuration. When you have automation at the center of this as a table-stakes item, problems are cropping up as new use cases emerge. Can you talk about some of the problems that you see and solve for developers and companies?

Of course. The problem expresses itself in a number of domains. The first one is that he who pays the bills is separate from he who consumes the resources. It's the engineers that consume the resources, and their incentives are to deliver code rapidly and deliver code that works well, but they don't really care about paying the bills. Then the CFO's office sees the bills, and there's a disparity between the two. The reason that creates a business problem is that developers will over-provision to make sure everything works, because they don't want to get called in the middle of the night. The bill comes due at the end of the month or the end of the quarter, and then the CFO has smoke coming out of his ears because there have been cost overruns. Then the reaction happens: all right, let's cut costs, and an edict comes down that says reduce everything by 30%. So people go across the board and give everything a haircut. What happens next is the system is out of balance. There's resource misallocation, and systems start suffering. So the customers become unhappy; understandably, if you're not provisioned correctly, customers start suffering. And that leads to a revenue problem down the line if you have too many unhappy customers. So you have to be very careful about how you cut costs and how you apportion resources, so that both the revenue side and the cost side are happy, because it all comes down to the product experience and what the customers consume.

That's something everyone who's done cloud development knows: whose fault is it? It's his fault.
But now you can actually see it. You leave a service running, and I'm oversimplifying, but as you experiment with services, the bills can have massive overruns, and then you've got to call the cloud companies, and you've got to call the engineers and say, why did you do this? You've got to get a refund, or one bad apple can ruin it for everyone, as you highlighted with the bigger companies. So I have to ask you, and everyone lives this: how do companies get cost overruns? Are there patterns that you see, that you wrote software for, to automate away the obvious ones? Are there certain things that always happen? Are there areas that give some indication? Why do companies have cloud cost overruns?

That's a great question, and let's start with a bit of history, where we came from. In a pre-cloud world, you built your own data centers, which meant you had an upfront CapEx cost. You spent the money and you were forced to live within the means your data center provided; you really couldn't spend any more. That provided a predictable expenditure model. It came in big chunks, but you knew what your budget was going to be three or four years out, and you built for that. With cloud computing, your consumption is on an on-demand basis and it's API-enabled, so the developer can just ask for more resources. Without any kind of tools that tell the developer, here is the amount of CPU or memory that this particular service needs to deliver the right performance for the customer, the developer is incentivized to give it a lot more than the application needs. Why? Because the developer doesn't want to pick up service tickets. He's incentivized to deliver functionality quickly and move on to the next project, not to optimize costs.
So that creates an agency problem: the person who actually controls how resources are consumed is not incentivized to control the consumption of those resources. We see that across the board in every company. The engineering organization is separate from the financial organization, so the control point is different from the consumption point, and it breaks down. The pattern is over-provisioning, and what we want to do is give engineers the tools to consume precisely the right amount of resources for the service level objectives they have. Given that you want a transaction rate of X and a latency of Y, here's how you configure your cloud infrastructure so the application delivers according to those SLOs with the least possible resources consumed.

So with this tool and the software you guys have, how do you go to market? Do you target the business buyer or the developers themselves? And how do you handle the developer saying, I don't want anyone looking over my shoulder, I want a blank check, I'm going to do whatever it takes? How do you roll that out? Because obviously the business benefits of controlling the budget are significant, I get that. How do people engage with you? What's your strategy?

Right. Our buyer is the application owner, the person who owns the P&L for the application. It tends to be a VP-level or senior-director person who owns a SaaS platform, and he or she is responsible for delivering good products to the market and delivering good financial results to the CFO. In that persona everything is rolled up, but that persona will always favor the revenue side, which means consuming more resources than you need in order to maximize customer happiness and therefore faster growth, and they do that while sacrificing the cost side.
So by giving the product owner the autonomous optimization tools that Opsani has, we allow him or her to deliver the right experience to the customer with just-sufficient resources, addressing both the performance side and the cost side of the equation simultaneously.

Awesome. Can you talk about the impact CI/CD and cloud native computing are having on the optimization cycle? Obviously we hear a lot about shifting left for security. You're seeing a lot more microservices being spun up and spun down automatically, and Kubernetes clusters are going mainstream. You're starting to see a lot more dynamic activity in these new workflows. What is the impact of CI/CD and cloud native computing on the optimization cycle?

CI/CD is there to enable fast delivery of software features, basically. We have a combination of GitOps, where you can pull down repositories, libraries and open source projects from left and right, and using glue code, developers can deliver functionality really quickly. In fact, microservices are there in service of that capability: deliver functionality quickly by building functional blocks and then putting everything together through APIs. So CI/CD just accelerates software delivery, and between the time the boss says "give me an application" and the time the application team plus the DevOps team plus the SRE team puts it out in production, we can now move really quickly. The problem is, nobody optimizes in the process. So when we deliver 1.0 in six months or less, we've done zero optimization. That 1.0 becomes the way we go through QA in many cases, unfortunately, and it also becomes the way we go through optimization: the customer screams, the UI is laggy, the throughput is really slow, and we tinker and tinker and tinker, and it typically takes a 12-month cycle of maturation before we get system stability and the right performance.
With the AI and machine learning that Opsani has enabled, we can shrink that time considerably. In fact, what we're going to announce at KubeCon is something we call KiteStorm: the ability to install our product in a Kubernetes environment in roughly 20 minutes, and within two days you get the results. Before, you had an optimization cycle that went on for a very long time. Now it's shrunk down, and because of CI/CD you don't have the luxury of waiting, so the system itself can become part of the way of configuring the system. The system, meaning the AI/ML service that Opsani delivers, can be part and parcel of the CI/CD pipeline that optimizes the code and gives it the right configuration from the get-go.

So you guys are really getting down in there, injecting instrumentation and metadata around key areas, is that right? Is that how it's working? Are you getting in there with code to watch? How does it work under the hood? Can you give me a quick example of how this would play out and what people might expect?

Of course. The way we optimize application performance is, we have to have a metric against which we measure performance. That metric is an SLO, a service level objective. In a Kubernetes environment we typically tap into Prometheus, which is the metrics-gathering place, the metrics database for Kubernetes workloads. And we really focus on the RED metrics: the rate of transactions, the error rate, and the duration, or latency. So we focus on those three metrics, and what we do is inject a small container, an open source container, into the application workspace. We call that container Servo. Servo interacts with Prometheus to get the metrics and then talks to our backend to tell the ML engine what's happening. The ML engine does its analysis and comes back with a new configuration, which Servo then implements in a canary instance.
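As a rough illustration of the RED metrics Amir describes, here is a minimal Python sketch that computes rate, error ratio and a latency percentile from a window of request records. The `Request` shape and the p95 choice are assumptions for the example, not Opsani's implementation; in practice Servo reads these values from Prometheus rather than computing them itself.

```python
from dataclasses import dataclass

@dataclass
class Request:
    timestamp: float   # seconds since the window started
    status: int        # HTTP status code
    latency_ms: float  # request duration in milliseconds

def red_metrics(requests, window_s):
    """Compute the RED metrics (Rate, Errors, Duration) over a time window."""
    n = len(requests)
    rate = n / window_s                                            # requests/sec
    errors = sum(1 for r in requests if r.status >= 500) / n if n else 0.0
    durations = sorted(r.latency_ms for r in requests)
    p95 = durations[int(0.95 * (len(durations) - 1))] if durations else 0.0
    return rate, errors, p95

# Example: four requests over a two-second window, one server error.
reqs = [Request(0.1, 200, 12.0), Request(0.5, 200, 18.0),
        Request(1.2, 500, 250.0), Request(1.9, 200, 15.0)]
rate, err, p95 = red_metrics(reqs, window_s=2.0)
print(rate, err, p95)  # 2.0 0.25 18.0
```

An SLO check then reduces to comparing these three numbers against the targets the application owner has set.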
So the canary instance is where we run our experiments, and it's compared against the mainline, which is what the application is doing. After roughly 20 iterations or so, the ML engine learns which part of the problem space to focus on in order to deliver the optimal results, and then it very quickly comes to the right set of solutions to try. It tries those inside the canary instance, and when it finds the optimal solution, it gives the recommendation back to the application team, or alternatively, when you have enough trust in Opsani, you can auto-promote it into the mainline.

That's awesome. That's the learning in there. This is a great example of some cloud native action. I want to get into some examples with your customers, but before we get there, I want to ask you, since I have you here, Amir, if you don't mind: what does cloud native mean these days? Cloud native has become kind of a catch-all, like, oh, cloud computing, which essentially means go move to the cloud. But as people start developing in the cloud, where there are real new benefits, people talk about the term cloud native. Could you take a quick minute to define cloud native? What does that even mean?

What does cloud native mean? I'll try to give you my understanding of it. We could get into a bit of philosophy.

Sure, that's good. Yeah.

But basically, cloud native means your application is built for the cloud and takes advantage of the inherent benefits that a cloud environment can give you, which means you can grow and shrink resources on the fly if you've built your application correctly, you can scale the number of instances up and down very quickly, and everything takes advantage of APIs. Initially that was done inside a VM environment; AWS EC2 is a perfect example of that. Kubernetes shifted cloud native to containerized workloads because it allows for more rapid deployment.
And it even takes advantage of a more rapid development cycle. As we look forward, cloud native is more likely to be a serverless environment, where you write functions and the backend systems of the cloud service provider just give you that capability, and you don't have to worry about maintaining and managing a fleet of any sort, whether it's VMs or containers. That's where it's going to go. Currently we're in the containerized phase.

So as you get into the serverless model, you get Lambda, which everyone's been playing with and loves. As you get into that, that's going to accelerate more data. So I've got to ask you: as you get into more of this monitoring, or observability, however you want to look at it, you've got to get at the data. This becomes a critical part of solving a lot of problems, and also of making sure the machine learning is learning the right thing. How do you view that over there? Because I think everyone gets that cloud native is good; that's not a hard sell. But, you know, the expression is that when you create the ship, you create the shipwreck; there's always a double-edged sword here. So what's the downside if you don't get the data right?

Well, for us the problem is not too much data; it's lack of data. If you don't get the data right, you don't have enough data, and the places where optimization cannot be automated are where the transaction rates are slow, where you don't have enough throughput coming into the application, and it becomes really difficult to optimize that application with any kind of speed. You have to be able to profile the application long enough to know what moves its needle in order to hit the SLO targets. So it's not too much data; it's not enough data that tends to be the problem. And there are a lot of applications that are expensive to run but have low throughput.
In all those cases, actually in every customer environment I've been in where that's been the case, the application is just over-provisioned. If you have a low-throughput environment and it's costing too much, don't use ML to solve it; that's the wrong application of the technology. Just take a sledgehammer and cut your resources by 50% and see what happens. If nothing breaks, cut again, until you find the breakage point.

Yeah, exactly. First you over-provision, then you bang it back down again. It's like the old school. Now, with the cloud, take me through some examples where you guys have had success. Obviously you're in the right area right now; you're seeing a lot of people looking at this area, in some cases sledgehammering their whole data center and refactoring their business. But as you get in with customers on the app side, what are some successes? Can you share some of the use cases where you're being successful with your customers? Can you give some examples?

Yeah. So, a well-known financial software vendor for mid-sized businesses that does accounting. Their customers run on a large fleet, and this product has been around for a while. It's not a containerized product; it runs on VMs, and Java is a large component of it. The problem for this particular vendor has been that they run on a heterogeneous fleet. The application has been around for a very long time, and as new instance types on AWS have come in, developers have used them, so the fleet itself is quite heterogeneous. And depending on the time of day and what kinds of reports organizations are running, the mix of resources the application needs is different. So when we started analyzing the stack, we looked at three different tiers: the database level, the Java mid-tier, and the web front end.
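The sledgehammer approach Amir describes for low-throughput apps, halve the allocation until the next cut would break the service, amounts to a simple loop. A sketch, where the breakage threshold stands in for observing a real health check:

```python
def rightsize(allocation_gb, breaks_below_gb):
    """Halve an over-provisioned allocation until the next halving would
    break the service, then stop at the last size that still worked.
    `breaks_below_gb` is a stand-in for a real breakage observation."""
    current = allocation_gb
    while current / 2 >= breaks_below_gb:  # would the service survive another cut?
        current /= 2                        # apply the 50% sledgehammer
    return current

# A service provisioned at 64 GB that actually breaks below 5 GB
# settles at 8 GB: one more halving (to 4 GB) would break it.
print(rightsize(64, breaks_below_gb=5))  # 8.0
```

In a real environment each halving would be followed by a soak period while you watch the SLO metrics before cutting again.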
One of the things that came out was counterintuitive: the ML discovered that for the mid-tier, using larger instances, but fewer of them, allowed for better performance and lower cost. Typically your gut feel is to go with smaller instances and more of them, a larger fleet, if you will. But in this case what the ML produced was completely counterintuitive, and the net result for the customer was a 78% cost reduction while latency went down by 10%. Think about it: the response time is 10% less, but your costs are down almost 80%, 78% in this case. The other thing that happened in the Java mid-tier is that we improved garbage collection significantly, because whenever garbage collection happens on a JVM, it takes a pause, and from a customer perspective that reflects as downtime, because the machines are not responding. So by tuning garbage collection on JVMs across this very large fleet, we were able to recover over 5,000 minutes a month across the entire fleet. These are substantial savings, and this is what the right application of machine learning on a large fleet can do for a SaaS business.

So talk about this fleet dynamic. You mentioned serverless. How do you see the future evolving for you guys? Where are you skating to where the puck is going, as the expression goes? Obviously with serverless you're going to have essentially unlimited fleets, potentially. That's going to put a lot of power in the hands of developers and people building experiences. What do the next five years look like for you guys?

So, looking at it from a product perspective, the serverless market depends on the mercy of the cloud service provider, and typically the algorithms they use keep very few instances warm for you until the rate of API calls goes up, and then they start turning on VMs or containers for you, and the system becomes more responsive over time.
One place we can optimize the serverless environment is by giving predictability about the cyclicality of load, so we can pre-provision those instances and warm up the engine before the load comes in, and the system always stays responsive. You may have noticed that some of the apps on your phone, when you start them up, have a startup lag of a minute or two, especially if it's been a while since you used them. What's happening in those cases is an API call goes in and containers are being started up to serve that instance; not enough of them are warm to give you a rapid response. And that can lead to customer churn. So by analyzing what the overall load on the system is and pre-provisioning it, we can prevent that startup lag, and on the downside, when usage goes down and it doesn't make sense to keep that many instances up, we can talk to the backend infrastructure and decommission those VMs to prevent cost creep. That's one place we're thinking about extending our technology.

So it's like the classic example where people say, during Black Friday, everyone surges to e-commerce. You guys are thinking about it on a level that's a user-centric use case, where you look at the application, be smart about what the expectation is in any given situation, and then flex the resources on that. Is that right? Am I getting it right? And your example of the app is a good one: if I want it to load fast, that's the expectation. It better load fast.

Yes, that's exactly it. But I'm more of a romantic, so I use Valentine's Day and flowers in my example. It doesn't have to be annual cycles; it can be daily cycles or hourly cycles, and all those patterns are learned by an ML backend.

All right, so Amir, I've got to ask you. I love this new concept, because most people think auto-scaling, right? That's a server concept, like an auto-scaler on a database.
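The pre-provisioning idea Amir outlines, learn the load's cyclicality and warm instances ahead of it, can be sketched without any ML at all: average the observed load for each hour of the day across past days and size the warm pool from that. All numbers and parameter names below are illustrative, not Opsani's model.

```python
import math

def prewarm_plan(hourly_rps_history, per_instance_rps, headroom=1.2):
    """Average the observed requests/sec for each hour slot across past days,
    then size the warm pool ahead of the expected load. A real system (or an
    ML backend) would also model trends; this captures only the cyclic part."""
    hours = len(hourly_rps_history[0])
    plan = []
    for h in range(hours):
        expected = sum(day[h] for day in hourly_rps_history) / len(hourly_rps_history)
        plan.append(math.ceil(expected * headroom / per_instance_rps))
    return plan  # warm instances to keep ready for each hour slot

# Two days of request-rate history, compressed to four slots for readability:
history = [[10, 80, 200, 40],
           [14, 120, 240, 40]]
print(prewarm_plan(history, per_instance_rps=50))  # [1, 3, 6, 1]
```

The same plan read in reverse covers the decommissioning side Amir mentions: when the expected load drops, the pool shrinks and cost creep is avoided.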
Okay, I'm going to scale up. You're getting down to the point where, okay, we'll keep the engines warm; it's more detailed. How do you explain this versus a concept like auto-scaling? Is it the same? Are they cousins?

It's basically the same technology, but the way they're expressed is different. In a Kubernetes environment, the HPA is your auto-scaler. In response to need, it spawns more instances and you get more containers going. What happens in a serverless environment is that you're unaware of the underpinnings that do that scale-up for you, but there is an auto-scaler in place that does it. So the question becomes, where in the stack, from the customer's perspective, are you operating? If you're managing your instances, we're dealing with the HPA. If you're managing at a functional level, we have to make API calls against the service provider's infrastructure to pre-warm the engine before the load comes.

I love it. I love this under-the-hood stuff, the new dynamics. It's kind of the same wine in a new bottle, but it's still computer science, still coding, still cool and relevant to making these experiences great. Amir, thanks for coming on this CUBE conversation. I really appreciate it. Take a minute to put in a plug for the company. What's your status in terms of funding, scale, employees? What are you looking for? And if someone's watching this who should be a customer of yours, what's going on in their world that tells them they need to be calling you?

Yeah, so we're a Series A. We've been privileged to have very good success with large enterprises; if you go on our website, you'll see the logos of who we have. We will be at KubeCon, and there we're going to be actively targeting the mid-market, the smaller Kubernetes instances.
As I mentioned, it's going to take about 20 minutes to get started, and we'll show results in two hours. Our goal is for our customers to deliver the best user experience in terms of performance and reliability, so that they delight their customers in return, and to do so without breaking the bank. Deliver excellent products, do it in the most efficient way possible, and deliver good financial results for your stakeholders. That's what we do. So we encourage anybody who's running a SaaS company to come take a look at us, because we think we can help them and accelerate their growth at lower cost.

And the last thing people need is someone breathing down their neck, saying, hey, we're getting overcharged, why are you guys screwing up, when they're not; they're trying to make a great experience. I think that's where people really want to push the envelope and not have to go back and revisit the cost overruns. Which is actually a good sign if you get some cost overruns here and there, because you're experimenting. But again, you don't want it to get out of control. You don't want it to be habitual, like the U.S. debt.

Exactly.

Amir, thank you for coming on. We'll see you at KubeCon. TheCUBE will be there in person; it's a hybrid event, so KubeCon's going to be awesome. Thanks for coming on theCUBE. Appreciate it.

John, it's a pleasure. Thank you for having me on.

Okay, I'm John Furrier with theCUBE, here in Palo Alto, California, with a remote interview with Opsani, a hot Series A startup. I'm sure they're going to do well. They're in the right spot in the market, really well poised in cloud native. Thanks for watching.