The applications that we have seen in real life have hundreds of microservices. Literally, imagine I have to go to each microservice and set its CPU threshold. First of all, I'll go mad doing that. Secondly, every time there's a release, and nowadays you get releases daily, that threshold can change. So you have to keep re-evaluating. This is not practical. It's certainly not something a human being should be meant to do. This is perfect for a machine. This is why we used AI for this.

Hi, this is Swapnil Bhartiya, and we are here at KubeCon + CloudNativeCon in Chicago. Today we have with us Raj Nair, CEO of Avesha. Raj, it's great to have you on the show.

Thank you. I'm happy to be here.

It's my pleasure to host you here today. You have a presence at this event; your folks have a booth here. Talk a bit about what your experience has been so far.

It's been very, very good. We've had some excellent conversations with potential customers and partners, so we're looking forward to more. It's been a great show so far.

It is a great show. Did you folks make any announcements here?

We did announce our partnership with Dell, and we put out a press release to that effect. We are very happy to be here. We also have a new product called Smart Scaler that we are unveiling at this show. It's been a great product, especially at a time like this, to help you bring your cloud spend under control. Smart Scaler is a new category of product in the cloud optimization space. It takes application-level metrics and infrastructure metrics and then uses AI techniques to do predictive scaling of your pods. This is new because other tools can give you recommendations, but they don't actually do any scaling. We actually do the scaling, and it uses some advanced AI.
In fact, the same AI that was used for training ChatGPT, the exact same reinforcement learning algorithm called PPO, Proximal Policy Optimization, is what we use to train this. So it's a really advanced tool that can do amazing things for your cloud spend, and it gives you visibility into how your applications are behaving and performing from release to release. You have all of that in a dashboard. So it's an awesome product.

You said that there are tools which will give recommendations but will not autoscale. Does that mean you folks are the only ones doing it? I just want to understand the state of the ecosystem.

There are other tools, but they're not as advanced as what we are doing, particularly in the way we're using AI to do the scaling. The beauty of it is that ours completely removes that work from the DevOps folks, so it's very useful. We also have some very interesting features, like an event scaler. You can set up a calendar of events and say to what level you want to be scaled up, and right before the event our system will scale your pods up to the level that is needed. And just in case you didn't guess right, say more than double that traffic showed up, no problem, we can handle that. It's adaptive. The beauty of that is somebody can get this product and start saving money today. Why? Just think of it: according to Gartner, at least 50% of clusters are underutilized or over-provisioned. With this, you can simply have a lights-on, lights-off schedule and that's it. You set it up, you say these are my working hours, and you don't even have to be accurate, because the system will cope with errors. But you immediately stop paying for idle servers that are just sitting there burning money in the cloud.

Of course, we are here at KubeCon, so we have to talk about the elephant in the room, which is Kubernetes complexity.

Yes.
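The lights-on, lights-off schedule and the event calendar Raj describes can be sketched as a small schedule-driven scaler. This is my own illustration of the idea, not Avesha's API; the window type, function name, and replica numbers are all hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class ScalingWindow:
    """One calendar entry: hold `replicas` pods between start and end."""
    start: time
    end: time
    replicas: int

def desired_replicas(now: datetime, windows: list[ScalingWindow],
                     baseline: int = 1) -> int:
    """Return the replica count the schedule asks for at `now`.

    Outside every window we fall back to the baseline ("lights off"),
    which is where the idle-server savings come from. Overlapping
    windows take the max, so an event layered on working hours wins.
    """
    t = now.time()
    active = [w.replicas for w in windows if w.start <= t < w.end]
    return max(active) if active else baseline

# Working hours 09:00-18:00 at 10 replicas, plus a launch event at noon.
windows = [
    ScalingWindow(time(9), time(18), 10),
    ScalingWindow(time(12), time(13), 25),
]
print(desired_replicas(datetime(2023, 11, 7, 12, 30), windows))  # 25
print(desired_replicas(datetime(2023, 11, 7, 22, 0), windows))   # 1
```

A real system would layer the adaptive, metrics-driven scaler on top of this, so a wrong guess about event size gets corrected from live traffic.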
Talk a bit about the impact of this complexity on cost.

Well, just take this example. The applications that we have seen in real life have hundreds of microservices. Literally, imagine I had to go to each microservice and set its CPU threshold. First of all, I'll go mad doing that. And secondly, every time there's a release, and nowadays you get releases daily, those thresholds can change. So you have to keep re-evaluating. This is not practical. It's certainly not something a human being should be meant to do. This is perfect for a machine. This is why we used AI for this.

Let's talk about one of the hottest topics these days, generative AI. Do you think it's just another hype, or is it a reliable technology like the Linux kernel or Kubernetes? I have been talking to a lot of companies that are putting generative AI into the products they're building for enterprise customers. And when a company like Avesha bets on it, that kind of validates these technologies. So I want to hear from you: how much are you relying on generative AI in your products and services?

Yeah, it's a very good question. I think there is some hype to it, definitely, because even from our own experience using ChatGPT, we can very quickly see its limitations. After you use it for a week or two, you'll start running into problems where it says something that is not necessarily true. So you cannot use these tools blindly. You have to use them with common sense. What we have done is take the good parts, which is the fact that you can train it with a human in the loop, and that model can then be used to do some of the automation. But we are not entirely relying on that, nor do we recommend it. Which is, for example, why in the case of our event scaler we allow you, the human, to put in the schedules and so on, and the system will then compensate if there are any errors. A balanced approach is always the best approach.
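The "perfect for a machine" framing maps naturally onto reinforcement learning: the agent chooses replica counts, and a reward balances pod cost against SLO violations. The toy reward below is my own illustration of that trade-off, not Avesha's actual objective; the constants are arbitrary:

```python
def reward(replicas: int, latency_ms: float, slo_ms: float = 200.0,
           cost_per_replica: float = 1.0, slo_penalty: float = 50.0) -> float:
    """Toy RL reward: every pod costs something, and missing the latency
    SLO costs far more. An agent maximizing this (e.g. trained with PPO)
    learns to run as few pods as the SLO allows, per microservice, with
    no human tuning thresholds by hand."""
    r = -cost_per_replica * replicas      # pay for capacity
    if latency_ms > slo_ms:
        r -= slo_penalty                  # pay much more for an SLO miss
    return r

print(reward(5, 120.0))   # -5.0: SLO met, pay only for the pods
print(reward(3, 350.0))   # -53.0: fewer pods, but the SLO miss dominates
```

The point of putting this in a learned policy rather than a hand-set threshold is exactly the one above: when a release changes a service's latency profile, the data changes the policy, with no human re-evaluation.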
And I would not recommend it any other way. Even when it is running, we keep you informed. For example, if it were scaling up for an event, we actually connect to your GitOps workflow so that you get a PR, which you can approve. For any deviations, any abnormal behavior, an alert is always sent so that the human knows what's going on. I would not recommend leaving the human out entirely. The human should be involved to oversee it. But it's a great tool for reducing the tedium for humans. That's the way I look at it.

Can you talk about how Avesha fits into this cloud native ecosystem? Talk a bit about how you make developers' lives easier so that they can continue to focus on their core job, which is to write the business applications that add value to their businesses.

The interesting thing about technologies like Smart Scaler, and even our other product KubeSlice, which is a way for you to handle a multi-cluster deployment (think of an application that has to live in different locations, either full stack or even what they call poly-cloud, where some portions are in different clouds), is that in both these products we are reducing the tedium of very non-productive work that an application developer has to go through. In the case of KubeSlice, right now they don't have any visibility. First of all, connectivity is a problem; they have to wait for the networking folks to set it up. Secondly, even after that, they don't get visibility, and they have to pull up multiple dashboards and correlate them; it's a lot of work. With Smart Scaler, the issue is that whenever there's a new release, you have no idea how it's performing. We have a view for the developer where they can look at how their performance changed over time. Has it suddenly decreased? They wouldn't know this unless they did lots of testing.
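The human-in-the-loop gating Raj describes, auto-applying routine changes while routing big or abnormal ones through a GitOps pull request plus an alert, can be sketched roughly like this. The threshold, field names, and function are hypothetical, not the product's actual interface:

```python
def propose_scaling(service: str, current: int, proposed: int,
                    auto_limit: int = 2) -> dict:
    """Decide whether a scaling change is applied directly or needs a
    human to approve it, e.g. via a pull request in a GitOps repo.
    Anything outside the auto-apply band also raises an alert."""
    delta = proposed - current
    action = {"service": service, "from": current, "to": proposed}
    if abs(delta) <= auto_limit:
        action["mode"] = "auto-apply"
    else:
        action["mode"] = "open-pr"   # a human reviews and merges the PR
        action["alert"] = f"{service}: unusual jump of {delta} replicas"
    return action

print(propose_scaling("checkout", 4, 5)["mode"])    # auto-apply
print(propose_scaling("checkout", 4, 20)["mode"])   # open-pr
```

The design choice here is the one Raj argues for: the automation does the tedious work, while anything surprising is surfaced for human oversight rather than silently applied.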
As part of Smart Scaler, there's a component that continuously evaluates each microservice's performance by looking at the data, and then presents it to you in a very visual way. So you don't need to do all of that tedious work. It tells you that something is wrong, and you can take action from there. So we believe we've added a lot of value from a productivity standpoint.

Let's also talk about cloud cost. Of course, there are companies and tools that help with cloud cost management; many help with metering. Talk a bit about the role that Avesha is playing in this space.

You'll always need tools, and there are many in the market, that can tell you what your current cloud spend is in AWS, Google, and so on. We are not in that category, but those tools work very well with us, because after you do your optimization with Smart Scaler, if you want to see whether it made a difference in the spend, you can go back to them. What we show you in a dashboard is everything about the optimum behavior of your applications, and if there's any deviation from that, you will know immediately. Just think of it: there are many dimensions to this problem. This is why it is a data science problem. You look at the application: there's application latency, there are errors. Then there's infrastructure: CPU, memory. Then there are requests per second. All of these things have to be taken into account, and then of course the utilization you want to achieve, and the pod spec you want to have, because that's another factor. It's too complicated for humans. This is why you need a tool like this. It takes all this data, munges it, and gives it to you in a form you can understand easily. It also takes actions, or gives you the ability to override them if you want. So these are all the benefits of this tool.
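The "many dimensions" point, latency, errors, CPU, and requests per second all feeding one decision, is the crux of why Raj calls this a data science problem. A deliberately simplified decision rule shows the shape of it; the thresholds and signal names are illustrative, not the product's logic:

```python
def scale_decision(latency_ms: float, error_rate: float, cpu_util: float,
                   rps_per_pod: float, slo_latency_ms: float = 200.0,
                   target_cpu: float = 0.65,
                   max_rps_per_pod: float = 500.0) -> str:
    """Fold several signals into one up/hold/down decision.

    Each signal contributes a 'pressure': >1.0 means it wants more pods.
    The most stressed dimension wins, so a CPU-light service can still
    scale up on latency or error-budget burn alone.
    """
    pressures = [
        latency_ms / slo_latency_ms,
        cpu_util / target_cpu,
        rps_per_pod / max_rps_per_pod,
    ]
    if error_rate > 0.01:        # burning error budget forces scale-up
        pressures.append(1.5)
    p = max(pressures)
    if p > 1.1:
        return "scale_up"
    if p < 0.7:
        return "scale_down"
    return "hold"

print(scale_decision(300.0, 0.0, 0.5, 100.0))   # scale_up (latency)
print(scale_decision(100.0, 0.0, 0.3, 100.0))   # scale_down (all slack)
```

A hand-tuned CPU threshold captures only one of these axes; that gap is what the multi-signal, learned approach described above is meant to close.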
Going back to the point of Kubernetes complexity, do you see things getting simpler and easier, or do you see this complexity becoming even more complicated?

We actually see it becoming more and more complex, mostly because Kubernetes is, after all, just starting to be adopted at scale. Now we see uses of it in a lot of areas, even for critical infrastructure. So there's a need to tame the complexity. For example, multi-cluster and multi-cloud are a foregone conclusion, and Kubernetes itself didn't have native support for that. So now you're left with a problem, right? And these are just the beginnings of it. With scale, we see issues like: how do you scale this? How do you scale the pods? Before, you could have gotten away with just a little bit of over-provisioning, but now, if you over-provision all of your 100 clusters, you're going to go broke. So I think it's very important to pay attention to the scaling aspects and to cost for Kubernetes to be successful. It's a great tool, but it needs this additional automation to help bring it under control. And this is why we did this.

Now let's talk about the evolution of Avesha. If you look at the evolution of the whole ecosystem, the whole market, how are you folks evolving so that you are there with the customers wherever they are in their journey?

Our founding principle is to be a SaaS, at least from the point of view of the product. Now, KubeSlice can be completely installed on your premises without any connection to our system, which is fine. We support that, but it's a subscription model. The reason we did this is that we want to grow with the customers' needs. They have been the ones asking for features, and that's how we have been evolving this product. And we have stayed true and committed to that.
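The "go broke at 100 clusters" point is easy to make concrete: blanket over-provisioning scales linearly with cluster count. The node price, cluster sizes, and over-provision fraction below are assumed for illustration only:

```python
def overprovision_cost(clusters: int, nodes_per_cluster: int,
                       node_hourly_usd: float, overprovision_frac: float,
                       hours: float = 730.0) -> float:
    """Monthly spend on capacity that was bought but never used.

    A habit that is cheap on one cluster multiplies by the fleet size:
    the waste is (total nodes) * (fraction idle) * (price) * (hours).
    """
    wasted_nodes = clusters * nodes_per_cluster * overprovision_frac
    return wasted_nodes * node_hourly_usd * hours

# One cluster, 10 nodes at $0.20/hr, 30% headroom: tolerable.
print(round(overprovision_cost(1, 10, 0.20, 0.30)))    # 438
# The same habit across 100 clusters: real money every month.
print(round(overprovision_cost(100, 10, 0.20, 0.30)))  # 43800
```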
We want to, at all costs, make our customers delighted. That is our only goal. So, to answer your question: initially we had some beliefs in our heads about workloads, a panacea where workloads would just move around everywhere. We found that that's not the case. Then we learned that there are specific use cases: there is the migration use case, there's the resilience use case. These are actual customer use cases for, for example, KubeSlice. And here's an interesting thing about Smart Scaler, which is again an evolution of thought. We were able to show dramatic savings on the cloud spend itself, in some cases as much as 70%. But if I told you that that is not what is interesting customers, you might be shocked, because they're more interested in the automation. They have a small DevOps team, and they don't always trust it to keep up with this. They know this is the problem we solve, so they like the fact that we are solving it. It is not the raw cost savings, which was a shock to us in the beginning. But we understand where they're coming from, because these are institutions that are growing very well and are not worried about that. Saving some money on the cloud is not their focus. They want their DevOps folks to be more efficient, because that will help their top line. Makes sense, right?

Right. And also give us a glimpse, or a teaser, of what things are in the pipeline. What should we expect from Avesha next?

We do have another product in development. We call it Smart Traffic Director. It allows you to move your workloads to different clusters in response to load, latency, and things like that. That is in development, and we hope to have it by the end of the year or early next year.
The other thing, of course, is that as we see more opportunities for automation, there are more things we can bring into Smart Scaler to do its own magic of learning on its own: better learning, for example, on the VPA side, because that's another complicated area. So we want to use AI for more and more complex things in the Kubernetes space, to essentially fulfill the original motive of Kubernetes, which was to make this whole DevOps process simpler and easier. We want to make sure that stays true, and AI is the magic you can use for that.

When we talk about developers, we have to talk about developer experience, and we have to talk about culture. Talk a bit about the importance of culture for enterprises. Because of culture, they sometimes choose tools that fit perfectly into that culture. At the same time, there are tools and services that become a catalyst for bringing about the cultural change that these organizations need.

That's a good point. What we have seen is that the DevOps folks at all our customers are super busy. They don't have the bandwidth to do anything beyond the easiest, push-button type of operations. What that means is we are seeing a shift to more and more of a SaaS model. Including with KubeSlice: one of the things we're trying to do is introduce a managed control plane, because the more we can do to offload them, the more they want to adopt. That is the trend we are seeing from a culture standpoint. The other thing, of course, is that a lot of customers still have VMs. We can't ignore that. How do we help them in this transformation? So we are looking into connecting with VMs, on the KubeSlice side at least.
And there are some explorations with a couple of customers who want to see if they can use our Smart Scaler with VMs, where they would put the VM into some kind of wrapper, and there are some things we can do there. We are looking into those areas. So there are such things happening. Again, I'm not saying it's a culture thing exactly; it is the legacy.

Raj, thank you so much for taking the time out today to talk about Avesha and all the things that you folks are building and working on. I can see there are a lot of things to talk about, so I look forward to talking to you again soon.

Thank you. Thank you very much, Swapnil. I enjoyed talking to you.