My DevNation friends, I hope you are doing well. It's a pleasure to be here for a new Tech Talk. I'm your host today — two weeks ago it was Natali, I think, and this week it's me. I'm Sebastien Blanc; maybe you know me from DevNation: The Show or from the Deep Dives. Today we have a really exciting topic with an awesome speaker, so without further ado, let me bring Daniel to the stage. Hey Daniel, how are you?

Hi Sebby — thanks for having me. It's really good to see you again after Devnexus. I'm doing pretty good.

Okay, and where are you based again?

Sure, so I'm based in Boston, on the East Coast. Today it's a little bit chilly. Last week it was almost 75, almost 80 — pretty much summer weather — but today it's 55, a little chilly.

Well, I'm live from the South of France, and I can tell you it's been summer here for two weeks already; it's really hot. I have my small air conditioner here, but I turned it off because it makes a lot of noise, so it's getting really warm. And you're going to present a really exciting topic: event-driven autoscaling using KEDA and Knative integration. I'm really excited about it. Daniel, the stage is yours — I'll hand you the screen share, step away, and see you at the end for the questions. Good luck.

Thanks for the great introduction, and hello everybody — good morning or good afternoon, depending on where you are. Today I'm going to talk a little bit about event-driven autoscaling using KEDA and the Knative integration. Maybe some people have already heard about what KEDA is: Kubernetes Event-Driven Autoscaling. Combined with Knative, it gives you serverless function capabilities. So let's get started.

First, real quick, who am I? My name is Daniel Oh. I work for Red Hat as a technical marketing manager slash developer advocate, specializing in cloud-native runtimes — for example Quarkus, Spring Boot, and Node.js. I spend a lot of time putting cloud-native runtime applications into serverless environments on Kubernetes, as serverless functions. I'm also a CNCF ambassador, trying to help people build cloud-native applications and architectures on a DevOps platform with GitOps practices, serverless practices, et cetera. Here's some contact information if you want to reach out to me: my Twitter, and my YouTube channel, Daniel Oh TV. Feel free to reach out if you have any questions about these technologies, or about CNCF projects.

Okay, let's take one step back and understand what autoscaling architecture looks like in Kubernetes. You might have multiple microservice applications running as pods on Kubernetes — for example, an inventory application and a catalog application. Imagine a business application like online shopping, with order, cart, and payment services. Each microservice is deployed on Kubernetes as a pod with a Deployment manifest. If you want to scale an application automatically, you define a Horizontal Pod Autoscaler, aka HPA. Each HPA resource decides when your application pods will be scaled out and in based on resource usage — CPU and memory. This is a perfectly good architecture for autoscaling on Kubernetes.
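For reference, a standard CPU-based HPA like the one he describes might look like this — a minimal sketch, with an illustrative deployment name:

```yaml
# Hypothetical HPA for the inventory deployment: scales on CPU only,
# and can never go below one replica.
# (autoscaling/v2 on current clusters; older ones use autoscaling/v2beta2)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: inventory-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: inventory
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75
```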
But what if you have external services that affect your applications running on Kubernetes? For example, you might have Prometheus aggregating metric data, a Kafka source processing event-driven messages from an external system, Azure Monitor, or AWS monitoring services. Integrating external services with applications inside Kubernetes is a common practice. In that case, autoscaling should be — must be — designed around event-driven messages or event-driven metrics rather than CPU and memory, because it takes time to measure metrics based on CPU and memory.

To handle those external services with your existing microservice applications, you could add a custom-metrics autoscaling layer on top: you can see an Amazon CloudWatch metrics adapter, a Prometheus metrics adapter, or an Azure Monitor metrics adapter. You could have multiple custom metrics adapters, but the problem is that you can only serve custom metrics from one adapter at a time. Say the inventory application needs to scale on metric data from Amazon CloudWatch rather than Prometheus or Azure. You can autoscale the other application pods — for example, order and cart — against the Prometheus metrics adapter, which consumes event messages from a Kafka server, but you cannot autoscale inventory at the same time, because you cannot enable the Amazon CloudWatch metrics adapter alongside it. That's a real problem if you want to autoscale your application based on event-driven metrics rather than standard autoscaling on CPU and memory resources.

This is the reason KEDA was designed. Once again, KEDA is Kubernetes Event-Driven Autoscaling. It was invented by Red Hat and Microsoft a couple of years ago and donated to the CNCF; it became an incubating project last October. The current version is 2.6, and you can go to keda.sh to find all the details.

A little more detail about KEDA. KEDA allows developers to scale standard Kubernetes resources — for example Deployments, Jobs, or existing custom resources. And there are almost 50 built-in scalers to handle external services: for example Kafka, Prometheus, RabbitMQ, Amazon services, and Azure services like Azure Monitor, et cetera. All resources are scaled based on event messages rather than just CPU. For example, if a bunch of messages land on a Kafka topic and your application needs to consume them, you now have a criterion for scaling your application based on those queued messages rather than the actual CPU and memory utilization of the pod. One good thing is that KEDA does not manipulate the data itself.

So with KEDA, you can improve your scaling infrastructure. Behind the scenes, the KEDA scale controller triggers the built-in Horizontal Pod Autoscaler, so you don't need HPA resources for each application pod. In the middle here, you can see, you define a ScaledObject. And remember, you can install KEDA either with an operator or with a Helm chart.
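For reference, the Helm route he mentions looks roughly like this (the namespace is a common choice, not mandated):

```bash
# Add the official KEDA chart repository and install KEDA into its own namespace
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda --namespace keda --create-namespace
```

On OpenShift, the operator route he uses later in the demo is the more typical path; the chart is handy on vanilla Kubernetes.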
Each ScaledObject goes through the scale controller, and the scale controller calls your backing HPA to scale to maybe 10 or 100 pods based on event metrics. The scale controller itself only handles zero-to-one and one-to-zero; if you need to scale beyond one pod, the HPA intervenes and scales your application. So now you can have multiple resources and multiple external services, one per ScaledObject: the inventory application can have a ScaledObject that gets metric data from AWS CloudWatch, KEDA can integrate with Apache Kafka for the order scaling, and the cart application can scale on metric data from Azure Monitor.

Here's a little more detail on how KEDA works. KEDA is, of course, built on Kubernetes, and you define ScaledObjects or ScaledJobs. A ScaledObject manages standard resources — for example, Deployments — while a ScaledJob is for batch-style Jobs. KEDA activates the workload from zero to one and one to zero, and behind the scenes the scaler calls the Horizontal Pod Autoscaler to scale your application further.

Okay, so here's a quick example of how to define a ScaledObject — see the sketch after this paragraph. The important parts are the minimum replica count, the maximum replica count, and the trigger. In this example I want to autoscale my application based on a Kafka source, so I set the trigger type to kafka and then specify the Kafka server details: the bootstrap server URL (the service name and port), the consumer group, and the topic name. That's all I need for a Kafka source to autoscale my application based on the metrics coming from the Kafka server.

So what about serverless autoscaling — why talk about serverless at this point? A lot of people say event-driven applications fit serverless really well, because a serverless application scales up and down depending on traffic; in other words, event-driven applications behave a lot like serverless ones. So if you want to scale a serverless application as part of an event-driven architecture, how do you do that? One interesting project you may already know is Knative. Knative is built on Kubernetes, and it allows developers to manage and deploy applications as Knative Services, which provide almost exactly the same capability as serverless offerings like Lambda.

But there are some differences between KEDA and Knative. KEDA only scales standard Kubernetes resources, like Deployments; Knative deploys a different resource type, the Knative Service — not a standard Service, not a standard Deployment. KEDA keeps monitoring your application's metrics and scales it out — a pull model — while Knative has its own autoscaler, not the HPA, and that autoscaler triggers the application scaling from zero to one and one to zero — a push model. There are a few more differences I'll explain in the second demo.

So I'm going to showcase four different demos today, but my Kubernetes — my OpenShift cluster — is having some issue.
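Here's that ScaledObject sketch — names and addresses are illustrative, not his exact manifest:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: quarkus-eda-demo
spec:
  scaleTargetRef:
    name: quarkus-eda-demo        # the Deployment to scale
  minReplicaCount: 0              # KEDA can scale all the way down to zero
  maxReplicaCount: 10
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: my-cluster-kafka-bootstrap.kafka-keda:9092
        consumerGroup: my-group
        topic: my-topic
        lagThreshold: "10"        # target consumer lag per replica
```

Under the hood, KEDA creates and manages the HPA for this target itself, which is why no separate HPA resource is needed.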
Let me try to log in again in a minute — but even if I can't log in to my OpenShift cluster, don't worry: I've recorded videos of these demos, so I can play those and show how they work.

The first use case: we'll autoscale a standard application with the HPA using CPU or memory utilization. This is not event-driven autoscaling. The reason I want to show it is that it's really helpful for understanding the difference between event-driven autoscaling and plain standard autoscaling, even when the application is consuming masses of event messages from a Kafka topic.

Okay, let's go into the demo, in my OpenShift cluster. Here we go — oh, thank goodness. I've already installed a bunch of operators, as you can see: Red Hat Integration AMQ Streams, which allows me to deploy a Kafka server; KEDA — as I mentioned earlier, you can install KEDA as an operator, like I did, or with a Helm chart; and OpenShift Serverless, which is based on Knative and lets you deploy applications as serverless services. With those three operators installed, let's go to the developer console. I've already created a Kafka topic in my first project here, kafka-hpa. Let me log in to this OpenShift cluster from my local environment — luckily, OpenShift provides a login token. Okay, here we go: I'm logged in from my local host, and my current project is kafka-hpa, which is good. To save demo time I already created the Kafka topic — the topic name is my-topic — and the cluster here, with ZooKeeper and three pods for high availability.

Okay, over to my terminal, and I'm going to open my application here. I'll make it bigger — hopefully that's better for you. Here's my Quarkus application. Just in case you've never heard of it: Quarkus is a Kubernetes-native Java framework that really fits cloud-native application development, and serverless applications too, because Quarkus provides a lot of developer-experience features — for example live coding, continuous testing, and Dev Services, which automatically spin up containers for you: a database, Keycloak for single sign-on, a messaging broker, and lots of other integrations stand up automatically as containers. Quarkus also provides two build strategies: one is just-in-time compilation, producing a JAR file you run on the JVM; the other is native compilation, which lets developers package and run the application without a JVM, because it's based on an ahead-of-time strategy running on GraalVM — giving super fast response and startup times as well as a tiny memory footprint.

So this is my sample application; it consumes event messages from the Kafka topic named my-topic. It's pretty simple. If you have experience developing reactive applications with, say, Spring Boot, there's a bunch of different methods and APIs you have to implement with Spring Reactive — a quite different architecture and development practice compared to a regular Spring Boot application.
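The whole consumer he's describing boils down to a few lines; here's a hedged sketch (the class name is illustrative; the channel name matches the topic used in the demo):

```java
import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.reactive.messaging.Incoming;

@ApplicationScoped
public class MyTopicConsumer {

    // Consumes each record arriving on the "my-topic" channel, which the
    // SmallRye Kafka connector binds to the Kafka topic in application.properties
    @Incoming("my-topic")
    public void receive(String record) {
        System.out.println("Received: " + record);
    }
}
```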
But one of the beauties of Quarkus is that you can develop imperative and reactive code in the same class; you just write the proper annotation to make a method imperative or reactive. For example, the @Incoming annotation lets this method simply consume the event-driven messages from my-topic, my Kafka topic. It's really simple — and the rest of the application is pretty simple too; I just print out the message from each record.

Here are my resources; there are a bunch of them. First of all, I'm going to deploy this application to my OpenShift cluster. To do that you can use the Maven command line, or Gradle, but today I'm going to use the Quarkus CLI. The Quarkus CLI is really simple for developers: you don't need to remember all the Maven command-line parameters and arguments, and it gives you command completion — a much easier and better developer experience. So I'm going to build the Quarkus application — I'll skip the unit tests this time — and it automatically packages it... oh, I need to go to the right directory first. Okay, so `quarkus build` will build my application as a Java app first, then containerize the application using Docker or Podman, and lastly push the container image to a registry. In this case we'll push the image to the internal container registry inside the cluster, but you can also push it to an external container registry — for example Docker Hub, Quay.io, or Google Container Registry, et cetera. In the last step, OpenShift pulls the container image and runs it on one of the available worker nodes, just like a standard Kubernetes feature.

You can see there are multiple steps to containerize this application. When you go back here, there are already-generated Dockerfiles, which is one of the nice features of the Quarkus framework: when you generate a Quarkus project — the CLI downloads it from code.quarkus.io — these Dockerfiles are generated automatically to build your application as a container, whether for the JVM or native, et cetera.

Okay, back here: the build succeeded, and here we go — my Quarkus application is running. Go to view logs and you can see my-topic is available, the Quarkus version is 2.9.2, and there are a bunch of extensions like the Kafka client, the Kubernetes deployment extension, and reactive messaging.

Now, one thing to notice: you can see I can manually scale up and down. For autoscaling, I need to set up the autoscaler here for the Quarkus app: minimum 1, maximum 10, and CPU utilization percent at 75%, which is a pretty normal value. I set that up, go back to the console, and you can see the manual scaling arrows are gone, which means this pod will be autoscaled by Kubernetes — not by a human like me.

Okay, back to the IDE. I've created a few job YAML files for load generation.
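Those load jobs wrap Kafka's bundled producer performance tool; a rough sketch of such a Job follows (the image, tag, and figures are illustrative — not necessarily exactly what he ran):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: kafka-load
spec:
  parallelism: 5                  # run five producer pods at once
  completions: 5
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: producer
          image: quay.io/strimzi/kafka:latest-kafka-3.1.0
          command:
            - bin/kafka-producer-perf-test.sh
            - --topic=my-topic
            - --num-records=1000000
            - --record-size=100
            - --throughput=-1     # no throttling: send as fast as possible
            - --producer-props=bootstrap.servers=my-cluster-kafka-bootstrap:9092
```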
This actually runs the Kafka producer performance test script that ships inside the Kafka servers — I just use that. With the parallelism set here, it will run five jobs that keep generating dummy messages in 10,000-record batches — maybe a million messages generated every single second. I copy this YAML, go back to my OpenShift console, import the YAML, and back in the topology view you can see the job creates five pods. Click on one of the job pods and you can see the records being generated and sent to the Kafka topic. Back in the topology view, go to the Quarkus app and view logs: now you can see so many messages being consumed from the Kafka topic by this application. Under observe you can see the metrics, or go to resources — here we go, HPA: the target is 75%, but the current value... naturally, it takes some time to aggregate the CPU utilization. In the meantime your Quarkus application keeps consuming and processing messages, which may sometimes show up as latency for the end users. So it takes some time to autoscale this application — give it a moment and it will scale out.

Meanwhile, let's go back to the slides and move on to the next use case. In the second use case we redesign this autoscaling around KEDA. As you can see, we now have KEDA here: KEDA pulls the metrics from the Kafka topic, and in the end KEDA triggers the application autoscaling — specifically the Kubernetes Deployment. This is event-driven: we're not going to scale based on CPU or memory utilization.

So let's go to that demo as well. Here's the first application, still at just one pod, which shows it takes some time to scale out based on CPU and memory utilization. Let's switch to another project, kafka-keda. I created the same Kafka topic here, and here's the same application, but I'm going to use this YAML file to set up the Kafka topic and the same application. Because, as I said, KEDA only works on standard Kubernetes resources — for example Deployments — I just create a new Deployment here. Back here, I need to create a new ScaledObject, and the name matches our deployment name, quarkus-eda-demo, just like here. You can also change the label here — app.openshift.io/runtime=quarkus — so the console shows the Quarkus icon. Okay, now you can see that.

Then I create that ScaledObject, and once I create it, the app automatically scales down to zero, because there's no traffic at this moment — no event-driven messages — so it just scales down to zero, as you'd expect. Then I use the same load job YAML based on the Kafka producer performance test, create it, and back in the topology view it starts up. Click on the job pods: messages are being generated and sent to the Kafka topic. Back in the topology view, it will scale up soon.

In the meantime, let's go back to our kafka-hpa project. As you can see, the job is done, and our Quarkus app has scaled to 10 — but the load testing already finished, and it's still scaled out to 10. Go to the logs file and it's still processing, because it takes some time to
scale out, and it also takes some time to scale in as well. Go to the autoscaler, the HPA: the current value is now only 2%, which means the scale-out work is done, and now it will scale back in after some time period.

Okay, back to the kafka-keda project: you can see it has already scaled out to 10 — much faster than just using the HPA. This is actual event-driven autoscaling on Kubernetes with KEDA. And one more thing: in the end, when the job is done, the application will be scaled down to zero by KEDA, but when you go back to the HPA-based namespace, it scales down to one, not zero. That's one of the big differences — and how much time it takes to scale out and subsequently scale back in is the other big difference between HPA-only and KEDA.

Okay, moving on to the next use case, back to the slides. The next case is serverless. As I already mentioned, when you look at the right-hand side, this is a newly designed architecture: we have a Kafka topic here, and we need to create a KafkaSource, which is one of the integration features of Knative Eventing with Kafka. When you create the KafkaSource, Kafka messages sink into the KafkaSource and are sent on to the actual application, which is your Knative Service. The Knative autoscaler then manages your Knative Service and decides when it will be scaled up and down.

One of the good things is that you can handle CloudEvents messages. For example, you might have lots of different message formats — I want to use JSON, I want to use a binary format, I want to use a protocol-buffer-style format — for your event-driven messaging, and sometimes you need to wire together old applications that each use a different format. It's pretty complicated to handle all of that at the same time. CloudEvents was designed for this: it provides a standard messaging format regardless of your application runtime or your messaging server. CloudEvents is really good, specifically in cloud-native distributed environments, for managing multiple types of event-delivery architectures and tools. But the caveat is that you have to change your application code to consume CloudEvents messages — that's the one drawback when you use Knative Eventing to manage CloudEvents.

Let's get back to the demo and see how this works. Okay, I'm going to switch to another project here. Go to the one simple Java application — you can see it's not CloudEvents, just a simple event stream — and the same pre-built resources. We're going to deploy to Knative: in the previous project, you could see, we deployed to OpenShift, but now there's a new property here — we're going to deploy as a Knative Service instead, straight from Quarkus, which is a really nice feature for developers. There's a whole bunch of application properties here to deploy this application to OpenShift. So I'm going to switch to another project, kafka-knative — same as before, I already created the Kafka topic there — so I change my project to kafka-knative, change the project in the properties here too, and then I'm going to build and deploy the application.
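The property he flips is the Quarkus deployment target; a hedged sketch of the relevant application.properties lines (the group value matches the demo namespace; the rest of his file will differ):

```properties
# Generate a Knative Service manifest instead of a plain Deployment
quarkus.kubernetes.deployment-target=knative
# Push the image under the demo namespace in the cluster registry
quarkus.container-image.group=kafka-knative
```

With that target set, `quarkus build` generates a Knative Service manifest rather than a Deployment, which is why the app shows up as a serverless service in the console.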
Here we go — it's almost the same process. We build this application as a JAR file; I'm not going to use native compilation this time, but if you need higher-performance results for your deployed application, specifically for serverless, I would suggest using native compilation. It does take longer, because after packaging the application we need to create the native executable and then package the container. So in real-world practice you might want to run the native compilation as part of your GitOps or DevOps pipeline, rather than every single time you change code in your local environment. Once you deploy this application to OpenShift or Kubernetes, you'll find a slightly different icon in the OpenShift dev console, which represents that this is not a normal application but a Knative serverless application.

Okay, I've deployed it; back here, we now have a new application that looks a little different. You can see the Knative Service here, and a revision. Maybe I'll change the label a little to show that this is a Quarkus application, not Node.js or Python. All right, now it shows Quarkus. Go to view logs: the same version, 2.9.2, running on the JVM, the same dependencies we had. One interesting part: it took about three seconds to start; with native compilation it would be maybe 30 milliseconds or so.

Okay, back to the topology view. Now I'll go back to my IDE, because there's one more thing I have to do: I need to add a KafkaSource, as I mentioned earlier. Back here, in the meantime, my application has automatically terminated — you can check it's the same behavior as serverless platforms like AWS Lambda. The default time period for scaling down to zero is 30 seconds; you can change that to one minute or even 15 seconds. I never invoked the application in the meantime, which is why it's terminating and scales down to zero, just like here.

So I'm going to add the KafkaSource, which consumes event messages from the Kafka topic and links to the Knative Service — quarkus-eda-knative-demo, the same service name as my serverless app. Back in the topology view, you can now see the KafkaSource linked to the application. Then I go back to my IDE, take the same performance load script, and create it — and the application starts up automatically. Actually, you already saw it start automatically, because I tested this a couple of minutes ago, just before this DevNation talk: my Kafka topic still had a bunch of dummy messages in it, which were automatically sent to the KafkaSource, and the KafkaSource sent them on to my serverless Quarkus function — that's why the application started automatically when I added the KafkaSource. Go back, click on view logs, and you can find a bunch of event messages being consumed from the Kafka topic.

Okay, one interesting part here: you can see there's only one pod. However many messages you send, it has only scaled from zero to one, so at some point it can overflow your pod, or show a little latency for things like web rendering or RESTful API responses.
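For reference, the KafkaSource he wires up looks roughly like this (a sketch; the bootstrap address and consumer group are illustrative):

```yaml
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-source-demo
spec:
  consumerGroup: knative-group
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka-knative:9092
  topics:
    - my-topic
  sink:
    ref:
      # Deliver events to the Knative Service deployed above
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: quarkus-eda-knative-demo
```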
So let's go back and quickly check the HPA project to see if it has scaled down: as you can see, it's scaled down to one — just one pod. Then go to kafka-keda, and you can see there are no running pods. Two different behaviors between HPA and KEDA. Now back to kafka-knative: it's still one pod, and the job is still running, generating maybe five million event messages, sending them to the Kafka topic, which are consumed through the KafkaSource into our Quarkus-based Knative Service application. It takes some time to consume five million messages, and it will eventually scale down to zero.

Now I'm going back to my slide deck, on to the last use case, which is really what I wanted to showcase today: what about event-driven serverless autoscaling? The previous case wasn't technically event-driven — it consumed events, but it didn't scale out multiple serverless pods. Here is the sandbox project that integrates KEDA and Knative. Some people say Knative is a little competitive with KEDA, because KEDA autoscales applications and Knative autoscales applications — but Knative is more focused on serverless, and KEDA on normal applications. From the autoscaling perspective, the two projects provide the same capability, even though behind the scenes they use different autoscalers and different algorithms and mechanisms. But what if we could integrate KEDA and Knative and provide an even greater feature? For example, KEDA can be used to autoscale the Knative eventing infrastructure — the Knative Eventing source or channel — so in the end it autoscales the infrastructure itself, and then the application as well. This project is still being worked on by Red Hat, Microsoft, a lot of vendors, and individual contributors, and one of the goals of the KEDA/Knative integration is for KEDA to eventually scale the Knative Service itself, just like it autoscales a normal Deployment's pods.

So here, finally, is the redesigned, ultimate-goal architecture: autoscaling event-driven applications on top of Kubernetes with KEDA and Knative. It may look really complicated, but the point is that you have two components, Knative and KEDA. KEDA is responsible for scraping the event metrics from Kafka and scaling the KafkaSource — that is, the eventing infrastructure. Based on the increased KafkaSource capacity, your Knative Service is automatically scaled by the Knative autoscaler. This is really good. In a previous version, the KafkaSource was actually deployed as pods; in the latest version of Knative the source is not deployed as pods but is integrated into the Kafka infrastructure, though on the future roadmap it will go back to being deployed as pods once again.

So let's go back to our demo environment and try deploying KEDA and Knative at the same time. Okay, I'm going to switch to my last namespace, kafka-keda-knative. We have the Kafka infrastructure here. Back in my IDE, there's one thing I need to change in my application: the image group, which is the namespace, kafka-keda-knative, plus a bunch of Knative configuration that's actually related to KEDA — this controls how to scale, the target, the min and max, et cetera. Then back in my terminal, I change my project to kafka-keda-knative.
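The annotation he's about to point out — handing KafkaSource scaling over to KEDA — looks roughly like this, as I understand the knative-sandbox eventing-autoscaler-keda project (the annotation names are assumptions from that project; check its README for the current set):

```yaml
# Assumed annotation names from knative-sandbox/eventing-autoscaler-keda;
# verify against the project README before relying on them.
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-source-demo
  annotations:
    autoscaling.knative.dev/class: keda.autoscaling.knative.dev  # hand scaling to KEDA
    autoscaling.knative.dev/minScale: "0"
    autoscaling.knative.dev/maxScale: "10"
spec:
  consumerGroup: knative-group
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka-keda-knative:9092
  topics:
    - my-topic
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: quarkus-eda-knative-demo
```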
Okay, here we go. Then I run the Quarkus build, skipping the unit tests again. And just remember: previously, in the kafka-keda project, we created the ScaledObject manually — we opened the import-YAML editor and pasted the Kafka ScaledObject by hand, and it ended up scaling our application through the KEDA controller. But with the Quarkus configuration I want to show here, the ScaledObject is set up automatically when you deploy this application into the kafka-keda-knative project.

Back to the terminal window: this is the same process — package the application, and you can see the multiple steps based on the Dockerfile to package the application as a container, then push that image into the internal container registry — done — and deploy the application as a Knative Service, serverless. Okay, and now we have the application, and we also create the KafkaSource for Knative. The one different thing is here — let me make it bigger — there's the service name, and here is the KEDA autoscaling annotation: this is what makes this KafkaSource scale based on KEDA. With that annotation in place, I create it, go back, and the application here has already started automatically. Check the logs — once again, I was testing just a couple of minutes ago, which is why the application scaled automatically. So let's create the job once again, and the scaling is super fast, even though this is serverless: when you look here, it has scaled to three pods, we have a new job running and keeping messages flowing to the Kafka topic, and it scales out and in really flexibly based on the event-driven messages. Let me change the label once again to show the Quarkus icon one more time. All right, you can see the pod count is now four... and now nine. It's really super fast, and in the meantime it scales down to five and back out — it flexes depending on how many event messages come from your Kafka topic, not just a one-shot scale-out, which is really different from how the HPA handles it. So this is how to redesign autoscaling for event-driven applications on Kubernetes.

All right, let me summarize today's topic, then go back to my demo environment to see how it scales down to zero, and I'll be more than happy to address any questions. Here's the Knative and KEDA sandbox integration effort. You can use the KafkaSource, which is what I showed today, because Kafka is one of the most popular pieces of event-driven architecture and infrastructure — but you can also use AWS SQS, Redis Streams, the RabbitMQ broker, and more. There are many related technical videos on these topics that I've already pushed to my YouTube channel, Daniel Oh TV — you can scan the QR code. Feel free to watch, and don't hesitate to add comments or suggestions on my YouTube channel, and feel free to subscribe as well; it really helps me keep creating valuable technical tutorials and demos.

So here are your four takeaways from today. KEDA is Kubernetes event-driven autoscaling;
it's super simple. KEDA uses a pull model and Knative a push model. KEDA only influences standard Kubernetes resources, while Knative deploys Knative Services — its own resource type, rather than a Deployment or a Job. And KEDA can autoscale the Knative Eventing infrastructure through the Knative integration, with event-driven autoscaling enabled by a KEDA annotation.

Okay, that's what I have today. Now let's go back to the demo: you can see the pods have scaled down to five, and one last thing — go back to kafka-knative, where there's still only one pod, which means it will eventually scale down to zero. Okay, I'll give it a moment — let's take some questions.

Hey, thank you Daniel, that was awesome. There was one question, not directly related to what you showed — more of a question about Quarkus compared to Spring Boot. I gave some answers: especially, if you're doing scale-to-zero autoscaling and you want a really fast startup time when your first event comes in, that's another use case where you really want to use Quarkus in combination with Knative or KEDA.

Yeah, absolutely. As long as you use native compilation with this new architecture — for example KEDA plus Knative — Quarkus with native compilation gives a much faster startup time when the first new message comes in.

Yeah, that's a really good point, and that's what we wanted to showcase today, but it takes some time for the native compilation to complete. And I have a question about KEDA: it comes with a whole set of scalers that are already available, but can you also write your own custom scaler — custom trigger, I don't know what the correct name is?

Yeah, you can actually do that. The one thing is, as I mentioned earlier, you can create a custom scaler and metrics for KEDA itself, but you cannot use that custom scaler with the Knative and KEDA integration at the moment.

Yeah, okay, that makes sense. I really liked the four different use cases. I guess you have a GitHub repo with all those examples?

Sure, yeah.

Maybe you can put it in the private chat here in the Restream and I'll share it. Then we'll have to give the channel back, but if you have a last question, now is the moment, audience — don't be shy, ask your question now. Of course the replay will be available; the URL you see in YouTube is also the URL of the replay. I'll put the GitHub repo in the description, where at the beginning I also put Daniel's Twitter handle, so if any other question pops up later, don't bother waiting — just send a tweet, and he'll be more than happy to reply.

Let me see — no other questions?
No? Okay, we're a bit over time, so I have to give the channel back. But again, it was really awesome — four really comprehensive use cases that give the complete panorama of how we can do modern autoscaling with events, with awesome projects like KEDA, which I didn't know that well and will definitely play with, and Knative, which is one of my favorite technologies out there in the Kubernetes ecosystem. Hey Daniel, thank you so, so much. We'll meet in a few weeks, in real life, in Poland for Devoxx Poland — that will be awesome. I'm really looking forward to that. In the meantime, thank you everyone for attending; see you again. I think next week will be The Show, not with me as host but with Anna, I think, and an awesome guest, and the week after — or at the end of the month — we'll be back for another tech talk. Until then, please stay safe, enjoy the rest of your day, and see you later. Bye bye everyone. Bye.