Lisa Martin: Hey, everyone. Welcome to theCUBE's presentation of the AWS Startup Showcase: Analytics and Cost Optimization. This is season three, episode two of our ongoing series covering exciting startups from the AWS ecosystem. I'm your host, Lisa Martin, and today we're excited to be joined by one of our CUBE alumni, Suresh Mathew, founder and CEO of Sedai. Suresh is here to talk about Sedai's AI-powered autonomous cost optimization. Suresh, welcome back. Thanks so much for joining me today.

Suresh Mathew: Glad to be here. Thank you, Lisa.

Lisa: Give the audience a little bit of an overview. We talked about this last time, and I want them to really understand Sedai. I know you guys are a Gartner Cool Vendor for 2023, but give us the backstory.

Suresh: Sedai is an autonomous cloud management platform that helps you run your applications with the lowest cost, the highest availability, and the best performance for modern applications. It covers ECS, EKS, and Lambda too. We have seen customers like Freshworks, Fabric, and KnowBe4 saving 30 to 50% with absolutely zero manual intervention. And of course, as you said, we are a Gartner Cool Vendor and an AWS partner.

Lisa: Nice. Let's talk a little bit about cloud cost optimization. You mentioned some great customer names, and we'll get to some customer examples later on. But talk about why you think cloud cost optimization is such an important problem for customers in any industry to solve.

Suresh: Sure. First, there is a direct cost to the bottom line for companies with significant cloud spend. When you save that money, it can impact your earnings themselves. We have seen a 10% positive impact when companies report to Wall Street. The second aspect is that if you get it wrong, for example if you over-optimize, it might hurt your revenue itself, because your peak could suffer because of your optimization. And peak is gold for any company. They care about it. So you have to do it neat and clean so your peak doesn't suffer.
Lisa: Those are the two main things: the direct cost to the bottom line, and the fact that not having cloud optimized can impact revenue for organizations. As organizations focus more on modern applications, that space is obviously growing rapidly. How are cost management challenges different for modern apps? I imagine there's still the impact on the bottom line and revenue, but tell us how that's different for modern apps.

Suresh: Yeah. With modern apps, the best part is you can innovate quickly, faster, right? The one challenge there is that ten years ago you used to have tens or hundreds of services. Now you have hundreds or thousands of services. So that makes it complex. Now there are a lot of dependencies out there. When you modify something here, you may really be impacting something six layers down. It's a complex ecosystem. On top of it, you release frequently. It used to be once a month, then once a week. Now it is once every minute, once every five minutes. It's impossible for people to keep their production optimized. That's why modern systems are hard to manage with just automation.

Lisa: A services explosion, an explosion of releases, and the frequency of the cadence has changed so much. So what is Sedai's approach from an AI perspective, using machine learning to help customers solve this as modern apps, as we said, are growing so rapidly?

Suresh: Yeah. So why did we use this approach? Let's start there. Why was there a need to have AI or autonomous systems in play, right? If you look at the wastage today, it is 30%. Thirty percent of what is spent on cloud is going to waste. It is not that we don't have automation or that we don't have skilled people out there. The only reason is that it's extremely hard for companies to stay optimized. When you optimize, it could still go unoptimized with the very next release, right? Now, if you look at why automation or automated systems still fail to optimize to, say, 90%, there are two aspects here.
One, someone has to take a look at these alerts and execute on them. And these are not just alerts saying "reduce it"; they can carry some risks too. You need a system that can do most of that for you. And for us, the learning aspect defines the rule itself. With an automated system, you initially say: if CPU goes above 50%, do this. And this 50% can change; it goes stale quickly. What an autonomous system does is remove that rule entirely. There is no rule, so there is nothing to go stale. The system learns what that number is, updates it as it learns, and the action that has to be taken is always reinforced with learning. So that's how it is different, and that's why it is more effective.

Lisa: Much more efficient as well. What about organizations that are adopting containers and really modernizing their overall infrastructure? How does Sedai work in practice in that case?

Suresh: Yeah. So the terminology is different in these two spaces. If you look at containers, it's about: how do you find the right size? Where do you deploy these containers? What exactly is your purchase plan? How do you make everything glue together? Those exactly are the things out there. In the container area, the first aspect is what we call the size: the vertical size and the horizontal size of a container. The second aspect is the infrastructure choice. There are tools like Karpenter out there; we make them a little more efficient. The third aspect is really the purchase. Should it be savings plans? Should it be reserved instances? And can you use spot? This is the key there. Now, if you do the purchase in isolation, you might pick spot because it gets you 70% to 90% off. But there are applications that can really make use of spot, and there are some that cannot. And if you choose the wrong application, you might hurt yourself. So that is how containers generally work when it comes to an optimization platform.
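The contrast Suresh draws between a stale hand-written rule and a learned one can be sketched in a few lines of Python. This is an illustrative toy, not Sedai's actual algorithm; the percentile-plus-headroom threshold and all numbers are assumptions made up for the example.

```python
import statistics

def static_should_scale(cpu_pct):
    # Classic automation: a hand-written rule with a hardcoded
    # threshold that goes stale as the application changes.
    return cpu_pct > 50.0

def learned_threshold(cpu_history, headroom=1.25):
    # Toy "autonomous" version: derive the threshold from observed
    # behavior (95th percentile of recent CPU plus headroom) and
    # recompute it as new samples arrive, so nothing goes stale.
    p95 = statistics.quantiles(cpu_history, n=20)[-1]
    return min(p95 * headroom, 95.0)

def autonomous_should_scale(cpu_pct, cpu_history):
    return cpu_pct > learned_threshold(cpu_history)

# A workload whose normal CPU has drifted up to ~70%:
history = [60, 65, 70, 68, 72, 66, 71, 69, 64, 73, 67, 70]
print(static_should_scale(71))               # the stale rule fires
print(autonomous_should_scale(71, history))  # the learned rule adapts
```

With the drifted workload above, the hardcoded 50% rule keeps firing even though 71% CPU is normal for this application, while the learned threshold has moved with the data.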
Lisa: So it's a five-step approach: right-sizing workloads, on-demand savings, spot optimization, release intelligence. You're taking all of this off the customer's hands, and you showed some significant savings, up to 90%. Now, compare and contrast containers with serverless and how Sedai works in that environment.

Suresh: Yeah. On the serverless side, the platform typically lets you control only the memory, and CPU becomes an indirect impact there. So the first aspect is the right size of the serverless function, in this case Lambda only. And the second aspect is your concurrency: your provisioned concurrency and your reserved concurrency versus normal Lambda invocations. We call it autonomous concurrency. We look at the traffic and we manage it for you, and when you release, we adjust these two things continuously. And in fact, there is one other topic there. Provisioned concurrency is infamous for being expensive. People think provisioned concurrency is expensive, but in fact there are very specific scenarios where it becomes cheaper. In certain traffic conditions, it is a lot cheaper to use provisioned concurrency than a normal Lambda, and Sedai chooses this for you. You don't have to worry about whether provisioned concurrency is the right choice for a Lambda; we will do it for you.

Lisa: Offloading that from the end user. Sorry for interrupting you, but let's focus on right-sizing and explain how it actually works for workloads and for infrastructure, and how it optimizes costs, as you shared in those two slides.

Suresh: Sure. When it comes to right-sizing, there are two aspects: the vertical size and the horizontal size. Traditionally, horizontal size takes care of your surge of traffic, and vertical size is decided by your application's behavior. What we do is figure out your vertical size in production, so we know the best size for your application in production. The next aspect is to learn your traffic, understand your pattern, and look at your peak.
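The provisioned-concurrency point above can be made concrete with a back-of-the-envelope comparison. The rates below are illustrative AWS Lambda us-east-1 x86 prices that may have changed, and the break-even logic is a simplification for the sketch, not Sedai's pricing model; check current AWS pricing before relying on the numbers.

```python
# Illustrative per-GB-second rates (assumed us-east-1 x86; verify
# against current AWS Lambda pricing before use).
ON_DEMAND_PER_GB_S = 0.0000166667       # on-demand duration charge
PC_DURATION_PER_GB_S = 0.0000097222     # duration charge when provisioned
PC_PROVISIONED_PER_GB_S = 0.0000041667  # charge for keeping capacity warm

def on_demand_cost(gb_seconds_used):
    return ON_DEMAND_PER_GB_S * gb_seconds_used

def provisioned_cost(gb_seconds_provisioned, gb_seconds_used):
    return (PC_PROVISIONED_PER_GB_S * gb_seconds_provisioned
            + PC_DURATION_PER_GB_S * gb_seconds_used)

def breakeven_utilization():
    # Utilization of the provisioned capacity above which
    # provisioned concurrency becomes the cheaper option.
    return PC_PROVISIONED_PER_GB_S / (ON_DEMAND_PER_GB_S - PC_DURATION_PER_GB_S)

# Steady traffic keeping the provisioned capacity ~80% busy:
prov = 3600.0          # GB-seconds provisioned over an hour
used = 0.8 * prov
print(provisioned_cost(prov, used) < on_demand_cost(used))  # True: cheaper here
```

At these rates the break-even sits around 60% utilization, which is why steady, predictable traffic can make provisioned concurrency the cheaper choice despite its reputation.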
Suresh: Let's take an example. Let's say 11 o'clock is your peak, and by 12:30 the peak is over. We will prep you at 10:55 so you can handle your peak better, and your peak customers are extremely happy. By 12:30, we will scale you down. Now, the thing here is that you could put this in as a rule, but the peak keeps changing. The number keeps changing. The application keeps changing. You could have a sudden surge outside the peak: you announce a deal, and that causes a surge. A rule has nothing to handle things of that nature. So we do it autonomously.

Lisa: So the customer doesn't have to worry about that. You're taking that burden off their plates.

Suresh: Exactly. The rules are now decided by Sedai, with guidelines from engineering and FinOps.

Lisa: Let's talk about purchasing optimization. Explain how that works. And from a step perspective, why are you doing it after the workload and infrastructure right-sizing? Why is that important?

Suresh: Yeah. Purchasing optimization is where the largest impact is in most cases, but it's also where you tend to make some wrong choices. I've seen a lot of enterprises over-commit there. You got something for 70% off, so you committed to infrastructure that you will never use in the next year. Now the problem is that you're sitting on a lot of money in commitments. How do you repurpose it? That's when you have to look at your applications and see which ones could really consume this if you adjust them slightly. This cannot be an afterthought. You have to think about it and do the purchase based on what you know about your applications. That's why independent decision making would be extremely risky. So we learn the application, we learn the infrastructure, and we figure out, if the situation changes and an application starts getting lower traffic, what kind of purchase would still be beneficial for you. We make the purchase decision as a cohesive decision.

Lisa: That's a cohesive decision.
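The "prep you at 10:55" idea can be sketched as a toy predictive scheduler: learn the historical peak hour from traffic samples and emit scale actions a few minutes ahead of it. The function names, the fixed five-minute lead, and the 90-minute peak window are assumptions for illustration, not Sedai's implementation.

```python
from collections import Counter
from datetime import datetime, time, timedelta

def learn_peak_hour(request_log):
    # request_log: list of (timestamp, request_count) samples.
    # Learn which hour of day historically carries the most traffic.
    by_hour = Counter()
    for ts, count in request_log:
        by_hour[ts.hour] += count
    return by_hour.most_common(1)[0][0]

def schedule_for(day, peak_hour, lead=timedelta(minutes=5)):
    # Pre-warm shortly before the learned peak; scale down after
    # an assumed 90-minute peak window.
    peak_start = datetime.combine(day, time(hour=peak_hour))
    return {"scale_up_at": peak_start - lead,
            "scale_down_at": peak_start + timedelta(hours=1, minutes=30)}

# Traffic history with a consistent 11:00 peak:
log = [(datetime(2023, 5, 1, h), 100 if h == 11 else 10) for h in range(24)]
plan = schedule_for(datetime(2023, 5, 2).date(), learn_peak_hour(log))
print(plan["scale_up_at"])   # 2023-05-02 10:55:00
```

As the surrounding discussion notes, a static version of this rule goes stale; the point of the learning step is that `learn_peak_hour` is re-run as new traffic arrives, so the schedule tracks a moving peak.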
Lisa: You talked about responding to traffic and that Sedai is doing all that learning on behalf of the customer. Predictive autoscaling: how does that work? What are some of the key advantages there that Sedai is delivering to its customers?

Suresh: Sure. Peak, as I mentioned, is gold. For any customer, the revenue stream comes mainly at their peak time, and that is where you want your best, most important customers to have the best experience. If you over-optimize, the biggest problem is that most of the time you may be running fine, but at peak you may have a deteriorated experience. So Sedai looks at your production, learns your seasonality, and always gives you the recommendation that works well for your peak. In some cases that would be just horizontal scaling; that would be easier. But in some cases it would mean vertical scaling as well. So your peak customers will get the best experience with Sedai. If you optimize in a purely automated fashion, it's almost guaranteed that your peak will have a slightly deteriorated experience. That's why we care about peak; peak is most important for Sedai.

Lisa: Most important. Okay. When you were walking us through the steps of Sedai's approach for containers and serverless, you talked about release intelligence. And you also talked about how, with modern applications, the release cadence has changed so much. How does Sedai keep a customer optimized when there are so many releases, so many changes in the application? What's your secret sauce?

Suresh: Yep. So, releases. They say 70% of incidents are caused by changes or releases into production. That is the number out there. When you release a new version, there are a few people who worry about the release. The first is the developer himself: was it a good release? Did it impact anything? That's the first thing that we check as well. Did this release go well? Availability is good, performance is okay, cost is not spiking.
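A toy version of the post-release check just described, grading a release on availability, performance, and cost, might look like the sketch below. The thresholds, field names, and simple snapshot comparison are assumptions for illustration, not Sedai's actual grading logic.

```python
def release_went_well(before, after,
                      max_error_rate=0.01,
                      max_latency_growth=1.10,
                      max_cost_growth=1.10):
    # `before`/`after` are metric snapshots taken around the release.
    checks = {
        "availability": after["error_rate"] <= max_error_rate,
        "performance": after["p95_ms"] <= before["p95_ms"] * max_latency_growth,
        "cost": after["usd_per_day"] <= before["usd_per_day"] * max_cost_growth,
    }
    return all(checks.values()), checks

# A healthy release: all three dimensions hold steady.
before = {"error_rate": 0.002, "p95_ms": 120, "usd_per_day": 1000}
after = {"error_rate": 0.003, "p95_ms": 125, "usd_per_day": 1050}
ok, detail = release_went_well(before, after)
print(ok)  # True: availability, performance, and cost all held
```

Keeping the per-dimension `checks` dict around is what lets a release trend point at exactly which release broke which dimension, in the spirit of the $1,000-a-day versus $10,000-a-day example that follows.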
Suresh: The next aspect is: did you violate an SLA or SLO in the process? When you make a release, you might have increased the latency. That's okay if it is small. But if you cross the point where your SLA is not met, now you have a problem. So you need a system out there that can guarantee you will always meet the SLA. If you go above it, Sedai will scale you up or out to make sure you stay within that SLA. The next thing is, maybe six months after a release, you see: you know what, my cost is going up. What changed? Which release cost us? It was running at, let's say, $1,000 a day, and now it's running at $10,000 a day. What changed? We have a release trend, and we grade releases, so you can point to the exact release that caused the problem. That's how we handle releases.

Lisa: So you're giving the customer incredible visibility into those release trends to help them really understand, fine-tune, and obviously optimize the way their modern applications work. You also mentioned that cost optimization is so important because it can directly impact the bottom line and revenue. So I'm sure you must have some great customer stories that really speak to the cost optimization that Sedai is enabling with its approach to containers and serverless. Give me a few customer stories that you think really articulate the value that you're delivering.

Suresh: For one customer, our focus was really cost optimization. Generally, when you optimize cost, you would see that things either keep running in the same state or are slightly deteriorated. For this customer, the story is very interesting. We optimized them by 43%, if I'm not wrong. Yes, it was 43% from a cost perspective. Then I got a call in the evening saying, my application is running a lot faster now. Their customers were saying, my checkout is running a lot faster now.
Suresh: And they knew that we were saving a lot of money there. So they called us: you know what, what changed? We didn't release anything. What changed? We said, it was Sedai. So Sedai is not just about saving costs; it's about giving you the best customer experience. And that's the best part about my job too. It's not just about building Sedai; it's about helping you build your company. That was the most exciting story for me. For a public company, a public cybersecurity company, we saved 36% of cost with absolutely zero manual intervention. They turned it on, and within a couple of weeks they saw their monthly bill go down by 36%. And for a large cosmetics company, we have their Lambdas running 80% faster, with a 60% reduction in cost.

Lisa: Big impact there in those customer stories; no wonder they made you smile. As long as everything impacts the customer experience positively, that's what we all want these days. Big savings, a lot of speed. I'm curious whether there's a cultural impact that Sedai is making. When we talk about FinOps, engineering, and business folks, they don't necessarily speak the same language, and you and I have talked about this before. Is there a cultural impact on your customers? Does Sedai help bridge the gap between the engineering folks and the FinOps folks so they can get on the same page?

Suresh: Yeah, 100%. So, DevOps and FinOps: the most important aspect is that they are all after the exact same goal. They just have two different perspectives. The table stakes for engineering are performance, availability, and the customer experience; for FinOps, it is cost. Rather than executing it themselves, they should be able to control the autonomous system, saying: this is what I want you to do, and this is your constraint from the engineering side. And you bring the same kind of constraint from the FinOps side.
Now you have a cost constraint and application-characteristic constraints, and an autonomous system that takes both of them in. With these, it creates the dynamic rules for you and executes them in production. So it's really constraints from both sides, and an autonomous system executing them in production.

Lisa: You've done such a great job of talking about why Sedai, what you're able to do, the cost optimization impact, and the cultural impact that you're making for customers, I imagine across every industry. So take us home, Suresh. Give us a little summary for the audience about the approach, and tell them where they can go, because I know they're going to want to learn more about Sedai.

Suresh: Sure. Again, it is not about an if; it's about a when. For any company trying to optimize cost, you have to take your teams out of harm's way. For that, you need an autonomous system that understands your goals and the constraints from your teams, and that can evaluate safety and execute in production. With an autonomous system, the best aspect is that there will not be these cost-related or performance-related incidents in production, because when you execute, you keep all the safety alarms in mind. The second aspect is that when you start saving, it is important to keep it there, which means you have to watch your releases, you have to continuously optimize, and you have to give feedback back to the developers. To learn more about autonomous systems, please come visit our website, sedai.io. You can schedule a call with us, or even use self-service, which is super easy.

Lisa: Awesome. Suresh, great to have you back on the program talking about what you guys are doing: the autonomy, taking people out of harm's way, and maintaining the savings that you're able to find for organizations. This has been a great presentation as part of the AWS Startup Showcase on analytics and cost optimization. Thanks so much for coming back and joining me.

Suresh: Thanks a lot. It was a lot of fun.
Lisa: I agree. We want to thank you for watching, and remind you to keep it right here for more action on theCUBE, your leader in live tech coverage.