Hello, everyone, and welcome to the next session. It will be about serverless with Camel K and OpenShift. The stage is yours. OK, yeah. Thank you. So, myself, Shailendra Singh, I am a consultant at Red Hat. And along with me, I have Varsha — I think she should be back shortly. OK, fine. So yeah, here we are to start with serverless with Camel K and OpenShift. Let's see the agenda for this talk. We'll first try to understand what serverless is, then serverless with OpenShift — or it can be any Kubernetes cluster — and then what Camel K is. Varsha will give you a walkthrough of Camel K and its architecture, and we'll have a demo on top of that. So let's see what serverless is. The first time I heard the word serverless, I thought: is it something like an application without a server? And I was totally wrong. So what does serverless mean? It means you do not have to manage your servers, and you do not have to worry about resource allocation for your servers; all of that is taken care of on demand. And what does the CNCF say about serverless? It says serverless computing refers to the concept of building and running applications that do not require server management. You do not have to do server management; you do not have to worry about the CPU consumption or the RAM consumption of your application server — the demand will always be fulfilled, and you focus only on developing your application. So how does serverless work with OpenShift? Serverless on top of OpenShift is provided with the help of Knative. Knative is an open source project. What does Knative do? It provides a layer between the infrastructure and the developer.
Whenever I talk to developers, the first challenge they raise is: hey, I'm developing my application, but why should I learn all the configuration required for the infrastructure — how my application runs, how I have to configure it? Why should I worry about that configuration at all? There has to be an easier way. And that is what Knative helps to do. Knative creates all that configuration for you, and the developer focuses only on their development. On the right side, what you see is the OpenShift platform, and on top of that you get OpenShift Serverless, which basically provides Knative Serving, Eventing, and Functions, where you can run your application. Where can I use serverless? This is the question, and this is the confusion I assume a lot of enterprise customers have as well: can I run my database workload as serverless? I would say no, because a database, or any application that requires 24x7 availability, is not a good fit. You have to choose your applications carefully, and one of the best choices for a serverless application is a stateless application. Applications with an unpredictable or bursty number of requests, where you cannot predict the nature of the incoming traffic, are a good fit. So are cases where you want to do A/B testing or canary deployments, or where you run a seasonal or periodic workload — for example, at the end of every season or every quarter you have to generate some report. That's where serverless will help you build an application that utilizes resources only when you need them. Microservices or containers can also leverage serverless. So let's see some of the serverless patterns.
So what you see here: I have a browser sending requests, and I have one container up and running. As my requests keep increasing, my containers keep increasing too — this is the Knative Serving pattern. Alongside Knative Serving, I can have an eventing pattern, where some event is generated, it triggers my application to scale up, and the application produces some result. So, moving on to what Camel K is — I will hand over to Varsha for Camel K. Hello, am I audible? Yes, thank you, Shailendra. So I'm assuming everyone is aware of Apache Camel and how it has helped us integrate various components till date. But nowadays cloud computing is constantly evolving, from bare metal to container technologies, and the latest trend in this process is the serverless computing model. As Shailendra explained, with the help of Knative we can make a developer-friendly serverless environment. To make Apache Camel work with Knative, the Apache community developed Camel K. Camel K is a lightweight integration framework that runs natively on Kubernetes, and it is specifically designed for serverless and microservice architectures. Using Camel K with OpenShift Serverless and Knative Serving, containers are created only as needed and are auto-scaled based on the load. That means the container count increases with the number of requests and becomes zero when there are no requests on the pod. It reduces cost by removing the overhead of server provisioning and maintenance, and it enables a developer to focus on application development instead of building and taking care of the underlying environment. Camel K is implemented in the Go programming language and uses the Kubernetes Operator SDK to automatically deploy integrations in the cloud. This includes automatically creating services and routes on OpenShift. Apache Camel K is a community-driven project.
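To make the serving pattern above concrete, here is a minimal Knative Service sketch; the service name and container image are hypothetical placeholders, not taken from the demo:

```yaml
# A minimal Knative Service: Knative scales the pods for this container
# up with request load, and back down to zero when the traffic stops.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                                   # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/hello:latest   # hypothetical image
```

Knative creates the Deployment, Revision, and Route for this Service itself, which is exactly the configuration burden it takes off the developer.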
Users can write code in various languages like YAML, Java, XML, Groovy, JavaScript, Kotlin, JShell, et cetera. Next slide, please. Thank you. With 300 connectors and built-in integration patterns, a developer can connect to almost everything in a flexible and scalable way. Here in this diagram, we can see that with the help of a Kafka topic, a developer can send messages to any remote endpoint. So Camel K is not a replacement for Apache Camel; it's an extension — we can say a subset or a sub-project of Apache Camel — which runs on Kubernetes and supports serverless features. Let's see how it works. Next slide, please. Thank you. The developer can write a route using any supported Camel domain-specific language. Here, let's take the example of a route written in Java. Camel K provides the kamel command line (CLI) as the main entry point for running Camel K integrations on OpenShift. The developer needs to install the kamel CLI in their local environment. With the help of the kamel run command, the developer can deploy their integration on Kubernetes or OpenShift. As soon as the integration is deployed, the OpenShift or Kubernetes cluster starts the pod, and based on the requests, it autoscales. Next slide, please. Thank you. Using Camel K with OpenShift Serverless and Knative, we can manage how components in our system communicate in an event-driven architecture for serverless applications. This provides flexibility and creates efficiency through a decoupled relationship between event producers and consumers, in a publish-subscribe or event-streaming model. When running Camel K in developer mode, we can make live updates to the integration DSL and view the results instantly on the OpenShift pod, without waiting for the integration to redeploy.
Whenever you make any changes, the pod automatically gets reloaded — it destroys the older pod and creates a new one — and we can see whatever changes we made in the DSL. When the integration runs for the first time, Camel K builds an integration kit for the container image, which downloads all the required Camel modules and adds them to the image classpath. It resolves all the dependencies itself when the developer deploys the route. As explained, based on the incoming requests, pods and resources are used by the application. This provides a much faster turnaround time when deploying and redeploying integrations in the cloud. Let's see in a practical way how Camel K works. Over to you, Shailendra. Thank you. Thanks, Varsha. That was very informative on Camel K. Let's move on to the demo. So how does the demo architecture look? I have my development machine, on which I'm running the kamel CLI, and my OpenShift cluster — it can be any Kubernetes cluster — which has two operators running: the Camel K operator as well as the Serverless operator. The kamel CLI creates an Integration custom resource, and that custom resource is the important part which the Camel K operator takes care of for the developer. The developer focuses on developing their code, and with the help of the kamel CLI, this Integration custom resource is created, which triggers the Camel K operator, with the help of the Serverless operator, to spin the running pods up or scale them down. The key feature in serverless is scale down to zero. Let's see some live examples. So here is the setup: this is a recent OpenShift cluster which I'm running, but it can be any Kubernetes cluster. And I have installed two operators — I hope people are aware of operators: an operator is a way to package and deploy your application on any Kubernetes cluster.
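The Integration custom resource that the kamel CLI creates from a source file looks roughly like this sketch (field layout per the Camel K `camel.apache.org/v1` API; the embedded source is elided):

```yaml
# Sketch of the Integration CR that `kamel run sample.java` submits.
# The Camel K operator watches this resource, builds an image with the
# needed Camel modules, and hands the workload to Knative Serving.
apiVersion: camel.apache.org/v1
kind: Integration
metadata:
  name: sample
spec:
  sources:
    - name: sample.java
      content: |
        ...
```

This is the handoff point between developer and operator: the developer only writes the route source, and everything below this resource is generated.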
So it simplifies a lot of the installation work. I have my Camel K operator as well as the Serverless operator running. Once you deploy the Serverless operator, on the left side of the console you get a Serverless section, where you can see the Services, Revisions, and Routes tabs, which are part of Knative. In terms of deployment with Camel K, no integration has been created yet, and we will use the CLI running on my laptop to create this integration and the application. So let's start with a very simple Java application. I have this Java application written in a sample.java file — the language I'm using is Java. I'm just creating a route, setting a body of "Hello DevConf", and printing a log. And I'm using platform-http — it's an HTTP server component that I can use as a consumer — and I'm exposing the endpoint at /test. And how simple is it to deploy this application? What do I have to do? I will just check that sample.java — yes, this application is over here. So I will just run `kamel run sample.java`, because I have already set up the context of the cluster which I'm running; I am already logged in with that context. So as soon as I say `kamel run sample.java`, the integration gets created. If I do `oc get integration`, I can see my integration "sample" has been created. So what does that integration create? Let's go back and see. If I go to the Integrations view, I have my sample integration created. Is there any running pod? Yes, I have my sample application, which is running from that `kamel run`. So how can I identify what this integration contains? Let's see that part.
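For reference, the sample.java file narrated above might look roughly like this sketch — the endpoint and body text are from the demo, but the exact file contents are an assumption:

```java
// sample.java — a minimal Camel K route sketch (not the verbatim demo file)
import org.apache.camel.builder.RouteBuilder;

public class Sample extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // platform-http consumer exposing the /test endpoint;
        // it replies with a fixed body and writes a log line
        from("platform-http:/test")
            .setBody().constant("Hello DevConf")
            .log("Hello Camel");
    }
}
```

Note there is no build file and no dependency list here: Camel K scans the DSL, sees the platform-http component, and pulls in that dependency automatically.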
If I go and see this integration, I can see its YAML, and the whole content of my file is there. OK, perfect. And as you have seen, I just ran it from one Java file; it can be any supported language, as I explained — Java, Python, Kotlin, and so on. I don't even have to define the dependencies. By default, Camel K identifies which components I'm using and defines those dependencies itself — yes, camel-platform-http has been defined. OK, fine. So I have my application running. Now you can see the application just got terminated, because there is no load — there are no users of this application — so it automatically scales down. Now let's go and see the Serving section, because as I said, the Serverless operator will create the Knative Service — the serving part. It has created a Service, a Revision, as well as a Route. Fine, next let's try to hit this route. As soon as I hit this route — I have my command prompt running in the next tab — you can see the sample started up within 14 seconds. And I can hit /test, and I got "Hello DevConf". OK, perfect. So this is how simply I can deploy an application on any Kubernetes cluster, without knowledge of all the configuration I would otherwise have to create. The next important thing: if I have to see the logs, I can always say `kamel log sample`, and yes, I can see all my logs. As soon as I try to hit something, the logs get printed — let's see that. If I refresh again, yes, I can see "Hello Camel" in the logs. And if I have to delete everything, it's very simple to delete the integration in one line: I just say `kamel delete sample`. That's all I have to say.
And let's go back and check — yes, everything has been deleted from the Serving side. What about the integration? The integration itself is deleted. The next feature — one of the major features which developers will like — is running in dev mode, because I want to keep changing my route and see the behavior, which was explained by Varsha in her previous slides. For example, I would run the same thing with `kamel run sample.java --dev`, so I'm running in development mode. And what you see — OK, it's coming up, and it has started. So what is the difference in dev mode? For example, what I will do is start pumping load. I have this load.sh script; what it does is simply run curl in a for loop, so that I can continuously keep hitting my application. So I'm doing that — I'm continuously hitting my application, and I can see "Hello DevConf". Now I go back, change my application to say "Hello DevConf 2022", and hit Ctrl+S to save. Because I'm running in dev mode, you can now see that a new container has started up and is running. And without any downtime, and without any manual changes to the deployment, I can see those changes reflected on the server. That is the advantage I get when I run in dev mode. Again, the most important feature: if there is no traffic coming in, it automatically scales down to zero. Now the next question is: will it scale up when my traffic increases?
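The load.sh script itself isn't shown on screen; a minimal sketch of such a load generator, with a hypothetical route URL and request count, could be:

```shell
# load.sh — fire repeated requests at the Knative route so it scales up.
# The URL and count below are placeholders, not the demo's actual values.
load() {
    url="$1"
    count="${2:-100}"
    i=1
    while [ "$i" -le "$count" ]; do
        curl -s "$url"   # hit the route; -s keeps curl's progress output quiet
        echo             # newline after each response body
        i=$((i + 1))
    done
}
# usage: load http://sample-myproject.apps.example.com/test 200
```

Running this against a scaled-to-zero service also demonstrates the cold start: the first request waits while the pod comes up, then the rest are served normally.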
Yes. So what I will do: I will come out of this, and for that I have to update some parameters — for example, the concurrency. Now what I'm saying is: the autoscaling metric is concurrency, and I'm setting the autoscaling target to 5. While the old pods are terminating, let me just show this command which I ran. So what I'm doing is setting autoscaling with a target of 5, which means five concurrent requests should go to one pod, and the metric is concurrency. OK, fine, this is what I have already executed. Now my application has just started up; if I give it some time and there are no requests coming in, it will automatically scale down — again, I'm repeating, one of the features of serverless is scale down to zero. Now let's go back and see what changes it has made. If I go into my Service YAML, it has added annotations for that same metric, concurrency, and the target of 5. So what I will do is try to pump some load. I will use the same script, and I have also asked Varsha, who has a similar script running on her side, to execute it. What will happen? We'll see that instead of one pod starting up, multiple pods get created. So I'm just waiting — let it terminate completely, because it's in a terminating state — and yes, now it has completely terminated. If I go and check, no pod is running, which means no resources are being utilized.
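The annotations pointed at in the Service YAML are standard Knative autoscaling annotations; on the revision template they look roughly like this (the metric and target values are from the demo, the service name is assumed):

```yaml
# Knative Service with per-revision autoscaling annotations:
# scale on concurrent in-flight requests, at most 5 per pod.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sample
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/metric: "concurrency"
        autoscaling.knative.dev/target: "5"
```

With the kamel CLI, these settings can be passed through the knative-service trait, e.g. `kamel run sample.java -t knative-service.autoscaling-metric=concurrency -t knative-service.autoscaling-target=5` (trait and option names per the Camel K documentation).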
So what I will do: I will run the same load test — and yes, I have executed it. Now you can see the number of containers has increased, because I have pumped up the load, and because I reduced the concurrency target to 5 earlier. By default the concurrency value is 100; I reduced it to 5, which means only five parallel requests can be taken up by one container. This is how autoscaling works, along with scaling down to zero when there is no traffic coming in. Now I have stopped the load, so after some time it will automatically scale down to zero. It will take some time — I think the timeout value is 60 to 90 seconds. You can tune those values, they are tunable, but you do not want to keep them too low, because the deployment should not fluctuate with every spike of traffic. So as soon as it stops receiving much traffic, it scales down slowly, and you can see all those pods getting terminated. And yeah, that's all from the demo perspective. Thank you. So let's see if there are any questions for us. Let me stop my sharing here. "How does serverless handle requests in case of failure?" — OK, so Vikas has a question on that. So, Vikas, to answer your question about how serverless handles the requests: on the Knative side, I have a diagram for that part, but I think it's not in this deck. One diagram will definitely help, so let me share my screen. Do we have just 30 seconds left? OK, so please quickly move to the question. Let me take this one; I will quickly go through it, and we can discuss more afterwards. The activator — the activator is where you have the buffer.
This is the Knative component where all the incoming requests are handled by the controller: the activator buffers the requests. That's how requests get buffered when your application is scaled down to zero. I think the time is up, but we will be around, so we can take those questions there. Thank you. Thank you very much for the presentation. If you would like to discuss anything further, please go to the Work Adventure — it's a virtual platform where you can interact with each other. So feel free to go there and enjoy the discussion.