Today we have with us Zbyněk Roubalík, founder and CEO of Kedify. Zbyněk, great to have you on the show.

Thank you. Thank you for having me on the show. It's a pleasure.

First of all, thanks for joining me today. Today we are going to talk about KEDA's graduation. But before we get into graduation and the whole project, talk a bit about what KEDA is. What is the origin of the project? What problem is it solving for the larger CNCF cloud native ecosystem?

In short, KEDA tries to make Kubernetes application autoscaling as simple as possible. Basically, we are enabling users to scale their workloads based on various different metrics, because with the default built-in Horizontal Pod Autoscaler (HPA) in Kubernetes, you can scale your applications only by resource consumption, so by CPU or memory usage. But there are certain use cases where you would like to scale your application by a different set of metrics: maybe some custom metrics, maybe some events happening in an external system. And this is what KEDA provides. We are trying not to build something new, but to extend the capabilities of Kubernetes and the HPA, so we are building on top of it. The simple use case I usually give is this: imagine that you have an application that is a consumer, consuming and processing messages from some event or messaging system, for example a Kafka topic. And you would like to autoscale that consumer based on the number of unprocessed messages in the Kafka topic. This is what KEDA does. Plus it enables you to scale to zero, because the HPA cannot scale to zero, but with KEDA you can scale your applications to zero. So if you don't have any load, the application is scaled to zero.
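The Kafka consumer scenario described here maps to KEDA's `ScaledObject` custom resource. A minimal sketch, assuming a hypothetical Deployment, broker address, consumer group, and topic:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: kafka-consumer-scaler
spec:
  scaleTargetRef:
    name: order-consumer        # hypothetical Deployment to scale
  minReplicaCount: 0            # scale to zero when the topic is drained
  maxReplicaCount: 20
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: kafka.svc:9092   # hypothetical broker address
        consumerGroup: order-processors    # hypothetical consumer group
        topic: orders                      # hypothetical topic
        lagThreshold: "50"                 # target unprocessed messages per replica
```

KEDA creates and manages the underlying HPA for the target, and handles the zero-to-one and one-to-zero transitions itself, since the HPA alone cannot scale to zero.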
If there is load, the application is scaled appropriately. So it saves costs, and it addresses the problems we are all facing these days with Kubernetes: trying to mitigate the costs of our applications while also handling all the load that is coming to them.

Kubernetes is of course mature at this point. Folks have been running it in production for a while now. As it has moved into production, what are some of the day-two pain points? There are challenges that come with a complex system like Kubernetes itself, but there are also a lot of challenges that folks start to see once things move into production. You did talk about some, but from your perspective, from Kedify's perspective, from the KEDA community's perspective, what are some of the large challenges you are seeing that you feel these projects need to address?

As I mentioned, the costs, because you can build a big platform, but cost is something that you should consider all the time. And also, when you are building these large applications on Kubernetes, complexity comes with it, because you can have multiple modules, multiple components, and managing all these pieces together can be painful. So I think it is important to have a good observability and monitoring stack.
So then you can monitor your application and your platform, and based on that you can decide: okay, maybe this part of the platform needs to scale out and we need more replicas; maybe this component is over-provisioned, we have a lot of replicas of it but we don't need them. So I think this is the challenge we can see right now: to get the correct metrics about the usage of the different components of our application and then configure the autoscaling based on that.

Now, can you also talk about the origin of the project? We talked about why the project started; let's talk about where it started, how you were involved with the project from the early days, and how you are still involved with it.

I'm still involved with the project; I'm one of the maintainers. The project was started in 2019 by Microsoft and Red Hat as a joint operation. It began as a POC on autoscaling; I joined early on, and then we thought, okay, this is nice, this is something that can be beneficial for the whole community. So later on, after a couple of months, the project was donated to the CNCF as a Sandbox project, and then we kept building it with the help of the CNCF, getting maintainers from other companies. And now we have progressed, I would say nicely progressed, to the graduation phase. For us that means the project is mature enough. We have a lot of users of the project, including some big companies: Reddit and Xbox are using it in their internal infrastructure. We have a lot of user stories on the website. So graduation for us was really, let's say, the signal to the community that we are mature enough for production setups.
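The earlier point about getting the correct metrics from your monitoring stack and scaling on them is, for instance, what KEDA's Prometheus scaler enables. A sketch, assuming a hypothetical Deployment, Prometheus address, and query:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: api-prometheus-scaler
spec:
  scaleTargetRef:
    name: api-server                 # hypothetical Deployment
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring.svc:9090  # hypothetical address
        query: sum(rate(http_requests_total{app="api"}[2m]))  # hypothetical metric query
        threshold: "100"             # target query value per replica
```

Here the same observability data used to spot over- or under-provisioned components directly drives the replica count.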
As you said, the project was started in 2019 as a collaboration between Microsoft and Red Hat. Talk a bit about what kind of community there is today around the project, how the community is growing, and what adoption you are seeing.

Regarding the traction around the project, I can see a lot of new users coming, and I would say the trend is pretty stable; we still see increasing interest in the project. For the goal that we are trying to achieve, I would say there is no other project that offers the same capabilities on Kubernetes, so KEDA is the number one solution for autoscaling applications on Kubernetes. We see a lot of users, a lot of asks for new features, requests, and so on. So I would say the community is healthy, but the more contributors we have, the better, so I encourage everybody to reach out and work with us on the community and on improving KEDA, because there are still a lot of things to do and to achieve.

Can you also talk a bit about who is in the community around the project? Of course it was Red Hat and Microsoft that initially started the project, and of course your Kedify is also there, but talk about how diverse the community is today. Who else is involved?

Regarding the community as a whole, we see a lot of different users from different companies, because they come and say, okay, we need a fix for this problem, or we need these features. So we see a lot of contributions from different companies, but usually these are one-off contributions rather than sustained ones, I would say. At the core, what I can see, I would say, is Microsoft, Kedify, and SCRM Lidl International Hub, which is the technology provider for Lidl and similar grocery stores.

What does graduation mean for a project like KEDA now?
What does it mean for the project, and at the same time, what does it mean for the users who are either consuming it or looking at consuming it? What does it also mean for the community around KEDA? I want to understand the impact on these three constituents of the project.

Graduation, from my point of view, basically means for the users that they can be sure the project is well maintained, because to comply with the requirements for a graduated project you need to pass a security check, a security audit; you need to have the code ready; you need to have the licenses in order on the project. To put it simply, the users and the community know that the project is healthy, it is well maintained, and it is mature enough to be used. You don't have to be afraid that in two months the project will be suspended or something like that. So I would say this is the assurance for users that the project is in good shape and mature enough to be used.

And being graduated, how does your day-to-day life as a maintainer change in the project? What does it mean for the maintainers of the project or the community?

I wouldn't say there is a big change in day-to-day life, because it's more like a mark of the project's maturity rather than some change in the processes or something like that. I hope that graduation will bring us more contributors, more users, maybe some people interested in becoming maintainers of the project. We always welcome new faces. So I would say maybe just that: getting even more traction from the community.

Can you also talk about the cadence, the release cycle of the project?

We usually try to release a new version every three months. It may be a couple of weeks off, but basically we are trying to stick to this cadence. And as a community, if I'm not mistaken, we support the last three releases, I suppose. Maybe I'm wrong on that, but something like that.
So we try to release every three months, and we always try to bring something new to the project, new features and new stuff.

Can you also talk about what is in your pipeline? If you look at the future of KEDA, of course it's an open source project and it will evolve over time, but there are certain things you can point at: these are the things you folks are working on.

Yeah, sure. There are a couple of exciting features that we would like to add to KEDA. As I mentioned, KEDA is built on top of the HPA, the Kubernetes Horizontal Pod Autoscaler, and we are extending its capabilities. But there are some limitations in the underlying stuff that we are trying to mitigate from the KEDA side, because our main goal is to make it as simple as possible from the user's perspective. So there are some features I would like to mention. First, we are trying to extend the monitoring and observability stack. This is the stuff I talked about before, which is important. We would like to expose more information about what is happening with the application: let's say this application has been scaled out, scaled in, and so on. And the other cool feature I would like to mention is what we call formula-based evaluation of metrics. Imagine that you are scaling your application based on some metric coming from one system. But maybe you have multiple different systems, and you would like to scale the application based on multiple metrics, with some kind of logic between the metrics. Let's say: if this metric is above 100 but this one is below 50, do this. It's more of a query approach, where you can specify a formula and the final scaling will be the result of that.
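For illustration, this formula-based evaluation later landed in KEDA as the `scalingModifiers` block, where each trigger gets a name that the formula can reference. A sketch, assuming hypothetical trigger names, target, and metric sources; the exact syntax should be checked against the KEDA docs for your version:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: multi-metric-scaler
spec:
  scaleTargetRef:
    name: worker                       # hypothetical Deployment
  advanced:
    scalingModifiers:
      # illustrative formula combining the two named triggers below;
      # conditional logic ("if above 100 and below 50") is also expressible here
      formula: "(queue_depth + req_rate) / 2"
      target: "10"
      metricType: AverageValue
  triggers:
    - type: kafka
      name: queue_depth                # name referenced in the formula
      metadata:
        bootstrapServers: kafka.svc:9092   # hypothetical
        consumerGroup: workers             # hypothetical
        topic: jobs                        # hypothetical
        lagThreshold: "50"
    - type: prometheus
      name: req_rate                   # name referenced in the formula
      metadata:
        serverAddress: http://prometheus.monitoring.svc:9090  # hypothetical
        query: sum(rate(http_requests_total[2m]))             # hypothetical
        threshold: "100"
```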
So these are, I would say, the two main areas we are trying to improve at the moment: observability and monitoring, and these more advanced configuration capabilities regarding metrics.

Zbyněk, thank you so much for taking the time today to talk about this milestone. I would love to chat with you folks again.

Thank you. Thank you very much. It was a pleasure to talk. Thank you.