Hi everyone. My name is Gaurav Gahlot. I am a developer advocate at InfraCloud Technologies. Outside work, I am a Docker community leader and I also organize multiple meetup groups in Pune. If you'd like to connect with me, I'm available on LinkedIn, Twitter, and my own website. Great. My name is Vishal Biyani. I'm CTO and founder of InfraCloud Technologies. I'm one of the Fission maintainers and I'm also an active organizer of the Pune Kubernetes Meetup. I'm usually found on Twitter and LinkedIn. Cool. So today we are going to talk about autoscaling event-driven applications with Fission and KEDA. Now, before we go into the actual demo and the application, let's understand what Fission is. Fission is a serverless framework on top of Kubernetes. It allows you to write functions and focus on code, without having to worry as much about the underlying details of infrastructure or Kubernetes. While it tries to hide the details of Kubernetes and Docker from you, it doesn't completely limit you. You can get into as much detail as you want, but if you don't want to, you can always stay abstracted from it. And it supports a lot of integrations; we'll look at those in the subsequent slides. Now, what can Fission do for you? Fission is inherently a framework written on top of Kubernetes. It's a bunch of CRDs and controllers, and it relies heavily on the KEDA project for integrations with event sources, message queues, and databases. As a developer, you might want to write functions or microservices. Sometimes you might want to just give Fission the source code and let it figure out how to build, package, and deploy it as a container, but sometimes you might say, hey, I will give you a container and not the source code. In all of these scenarios, Fission can help you deploy and run microservices or functions.
Today, we support a whole bunch of languages: Java, Golang, Node.js, Ruby, Perl, Python. We also have something called a binary environment, so people actually run shell scripts for some of their operational work using the Fission binary environment. Now, once you've deployed your functions and microservices in any of these languages onto Fission, you will of course want to call them. You can call them over HTTP, obviously. You can also call them on a schedule; there is a cron-style timer built into Fission that allows you to have functions invoked periodically. It also integrates with a whole bunch of message queue sources using KEDA. Today we support, for example, Amazon Kinesis, Amazon SQS, NATS, and Kafka, and more connectors are being added. Now, when you have a platform like this, executing a whole bunch of functions and microservices invoked on the fly, you want pretty detailed observability. Fission integrates with almost all the major observability tools, like Elastic, Prometheus, Jaeger, and Grafana, to give you visibility into what is happening in your cluster when you execute these functions and microservices. On GitHub, you can find us at fission/fission. Please star us and follow us. And if you run into any issues trying something out, do check out the documentation, join us on Slack, and ask questions. Now, let's look at a very simple hello-world version of Fission. In the first line, we are creating an environment with the Node.js runtime, using the fission/node-env image as the base image. In the second line, we are creating a function called hello, which uses the Node.js runtime we declared in the first line, and we simply point it at the code (hello.js) in a GitHub repository. This code is just a simple hello world.
And once you have created these two, we can simply call the function with fission function test --name hello, and we get back a hello world. Now, I'm not going to walk through this simple example; you can go and check it out later on your own. It just gives you an idea that, without having to understand all the details of deployments and other Kubernetes objects, you are able to run a simple piece of code on Fission. Great. So let's talk about the demo we are actually going to show you today. This is a simple diagram of it. There is one function that produces messages and writes them into a Kafka topic called the request topic. The request topic is subscribed to by another function via a trigger. A trigger is Fission terminology, and we'll explain what it means in subsequent slides. Now, this second function receives the message body directly, without having to understand or talk to Kafka at all. This function processes the message. If there is an error, Fission puts the message into an error topic; the function doesn't have to know about that, it just has to return a response code and a response body. If the processing is successful, the function returns a 200 response and a message body, which is put into a Kafka topic called the response topic. On the response topic there is another function subscribed via another trigger, and that trigger will invoke the function whenever there is a message on the response topic. This function does some more processing on top of the message, and it has the code to actually write the message to a RabbitMQ queue. As soon as it writes into the RabbitMQ queue, called publisher, there is another function which has subscribed to that queue.
And when there is a message, this trigger ensures that the message is read from the queue and the function is invoked with the body of the message as the payload. This last function then does something more on top of that. So the first one we are calling the Kafka producer, the second one the Kafka consumer, the third one the RabbitMQ producer, and the last one the RabbitMQ consumer. Two of them are written in Golang and two in Node.js, and we'll look at the code shortly. But first I want to explain the trigger part; I think it's crucial to understand. Normally, when you have a message queue, you have some service listening to it, and that service is running all the time. But with KEDA, the beauty is that you don't run a pod all the time just to listen for messages. As long as KEDA is installed and running, it watches based on your configuration: which topics to listen to, which message queue to listen to, and so on. Only when there is a message does it spin up a pod, and in this case that pod is a KEDA connector; I'll talk about KEDA connectors in the subsequent slides. This KEDA connector will go and read the message from the queue and call the next function over HTTP. And the best part is that KEDA not only spins up a pod only when there is a message, it also scales out the pods when there are more messages. The scaling criteria differ from message queue to message queue; for example, in the case of Kafka, the scaling is based on the number of partitions, while in the case of RabbitMQ, I'm not sure, it is based on some other parameters. So now, let's double-click into the KEDA connector part of it.
A connector is nothing but a component which reads from one source and drops the message into another destination. So when we say Kafka HTTP connector, it is reading from Kafka and dropping the message as an HTTP payload. And the good part is that these connectors are not specific to Fission. They can be used in any context, any environment, as long as you create a pod and set all the right parameters: the source queue, the destination to call, and so on. You can always deploy such a pod on its own, but the great part is that if you deploy KEDA alongside it and define the scalers for that specific message queue, these connectors will be scaled out only when there are messages. This is the same mechanism we use in Fission to read messages when they appear in a queue and then scale out the number of functions as the number of messages in the queue grows. Today we have about five or six connectors: Kafka, AWS SQS, RabbitMQ, AWS Kinesis, NATS HTTP, and a bunch more are being actively developed. So I suggest you go and check out the repository called keda-connectors; it is very useful if you're doing anything with data processing between message queues and microservices or functions. Coming back to the previous slide: that was the part about autoscaling based on events happening in the message queue. You're not consuming any resources when there are no messages; you only scale out on demand when there are messages in the queue. That is one part of the autoscaling. The second part is the actual functions. All of these functions, two written in Golang and two in Node.js, are not running all the time; they are only invoked and scaled out when more messages come in.
So what we do is, in Fission there is a concept of an environment, and each environment gets a small pool of pods. For example, for both the Go functions, we get a small pool from the Go environment, with, say, two or three idle pods running there. For Node.js, there are again two or three idle pods, which can be used by both the Node.js functions. Now, as messages start arriving, or rather, as we invoke the first function manually, it would be scaled out if more requests were coming in, but initially just one pod will be specialized. The message then goes to the Kafka topic and the trigger fires. This trigger is, again, autoscaling the actual KEDA connector pods, which call the second function, the Kafka consumer. The Kafka consumer function will be scaled from that pool, first to one pod and then eventually to more pods as more messages come in. Then the message goes on to the second Kafka topic, and from there the next trigger takes over. Again, that function is scaled only based on the flow coming in from the queue, and so on and so forth until the end of the pipeline. The idea is that when there are no messages, when there is no activity, none of the connectors are consuming any pods, so zero there, and between the Golang and Node.js environments we are consuming about five or six pods. You can configure this pool size to be just one pod or more. And when there are messages, it might happen that each of the KEDA connector deployments scales from zero to, let's say, one, two, or three pods, and each of the functions scales from nothing to, say, two, three, four, five pods. So, so to speak, there could be maybe 25-odd pods running to process all the messages. Once they are done, everything scales back down to zero.
And that's the demo we are going to watch today. Cool. So let me switch the screen here, go first of all to VS Code and show you some code; then we can go to the terminal, try it out, and then look at the RabbitMQ console and actually see messages coming in. This is our first function, the Kafka producer. It's a simple Golang function. It uses a specific contract for defining the function: a handler function which gets a request and returns a response. We connect to Kafka here, create a few random messages with a timestamp and a message ID, and then simply write them to the message queue. So that's the producer function. Now, there is no code for a trigger, but there is a spec for it, which I can show very quickly. We are defining all of our functions in specs. So if I show the K2K trigger, for example: this trigger defines, first of all, the function reference, so consumer is the function to be called. Then it says which Kafka server to talk to, which consumer group to use, and which topic to listen to. You also define a polling interval, maximum retries, and so on. You can define all of this on the Fission CLI as well, but you can also define it as a spec; it's a message queue trigger custom resource. Cool. We've looked at the producer and one of the triggers. Now let's look at the consumer. The first consumer receives the message body directly as part of the request payload. You don't have to connect to Kafka; you don't need to know where the message is coming from. The trigger did the job of listening to the request topic.
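A spec like the K2K trigger described here might look roughly as follows. This is a hedged sketch: the field names and values are approximated from memory and from the talk's description, so check the Fission documentation for the exact MessageQueueTrigger schema in your version.

```yaml
apiVersion: fission.io/v1
kind: MessageQueueTrigger
metadata:
  name: k2k                       # hypothetical trigger name
  namespace: default
spec:
  functionref:
    type: name
    name: consumer                # the function to invoke
  messageQueueType: kafka
  mqtkind: keda                   # use KEDA-based scaling
  topic: request-topic            # topic to listen to
  resptopic: response-topic       # where a 200 response body goes
  errortopic: error-topic         # where failures go
  maxRetries: 3
  pollingInterval: 30
  metadata:
    bootstrapServers: my-kafka-bootstrap:9092   # hypothetical server
    consumerGroup: my-consumer-group            # hypothetical group
```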
As soon as there is a message, the trigger reads it, converts it to a specific format, and posts it to this function as the request body. So all we do is get the body, add another field, a status, saying something like Kafka-processed, to the original message, and simply return that message. Again, when we get the message here, we don't know where the message is going. That is configured in the trigger: if this function processes successfully, send the message to the response topic; if there is an error, that is, if the status code is anything other than 200, it goes to the error topic. So the function is completely abstracted from how it gets the message and where the message goes; it's very loosely coupled. Now, when the message goes to the second Kafka topic, the response topic, we are looking at the second trigger, K2R. We are saying: read from Kafka, but call the RabbitMQ producer. Here again we have configured which function to call, where to read messages from, which topic, and so on. So that will eventually call the RabbitMQ producer. Within the RabbitMQ producer, we simply get the request, and the message is in the request body. For the writing part, of course, we have to connect to RabbitMQ and provide some credentials, and that is defined in the function specification. If you go and look at the rabbit producer spec, here we are defining which RabbitMQ cluster to connect to and so on. The function writes the body to the RabbitMQ queue. Once it is done writing, the next trigger, R2F (rabbit to function), will fire. Again, it has defined what to do and all that.
And the last function doesn't know anything about where the message is coming from either. It just gets a message, appends one more string to it, and returns it. So that was the overall flow: two functions in Golang listening to Kafka topics, the last one listening to RabbitMQ, and so on and so forth. Great. Now, before we actually go into the demo, let me clear all the screens so that we see things a little more clearly. So if I go and look at the deployments in the default namespace, these are for the three different connectors: Kafka to Kafka, Kafka to Rabbit, and Rabbit to function. If you look at the available replicas, they are zero, because there are no messages coming in; all the replicas are zero. Similarly, if I show you the HPAs, all of them target specific deployments, and the replicas are currently zero. Great. Secondly, let's look at the pool I was talking about. I'm going to quickly get the pods from the fission-function namespace. These are the pool pods: I have three pool pods for the Go environment and three pool pods for the Node.js environment. But if I look for function-specific pods, say I search for the rabbit producer function with managed set to false, which marks the pods actually doing the work, there is no pod right now. Similarly, if I search for the consumer, there is no pod either. So right now, all the resources are at zero. What I'm going to do now is call the producer. In the producer code, when we produce messages to the Kafka request topic, we produce about 10-odd messages, so I'm going to call this function a couple of times so that it produces 40 or 50 odd messages. Now let's look at the deployments here.
As you can see, the Kafka-to-Kafka connector deployment has scaled from zero to one, because messages have arrived in the Kafka queue. Similarly, if I look at the functions, the producer function has been called, which is the first function in the pipeline. Let me look at whether any consumer function pods have been created. There is one. If I look at the RabbitMQ consumer, the last function, there are already four or five of them. And lastly, the RabbitMQ producer: there is just one. And at the bottom, if you see all three of the triggers, K2K, K2R, and let's see if R2F is still at zero. So now K2K has scaled back to zero, K2R has scaled back to zero, and R2F has scaled up to one. These are all connector pods, by the way. And as you might have seen here, the rabbit producer is still at one, but if you look at the rabbit consumer, about six odd pods are working together, and for the Kafka consumer there is still just one pod running. Now, as the messages get processed, these six will come back down to zero, and the R2F KEDA connector pod will also come back to zero. Now let's go and check the RabbitMQ console. As you can see, we see a spike: there were about 30-odd messages received and queued, and then others were consumed as well by the different consumers. And I think there is nothing left in the queue anymore right now, but that gives you a sense of the spike that happened when we invoked the function. If I go back again and look at the rabbit consumer, those were the most pods: 1, 2, 3, 4, 5, 6, still six, still working on it. And on the connector side, the KEDA connector pod is still at one here, so I can actually go and look at it; there is just one pod for R2F.
Yeah, it's reading from RabbitMQ. Cool. So the idea is: when you are executing workloads, things should scale up on demand, and when you're not doing anything, things should scale back down to nothing. Now, of course, I'm talking about pods. You might ask: what about the underlying nodes? I didn't set up a node autoscaler for this demo, but you could very well back this with one. When there are no messages arriving from your sources, you could run with a one-node cluster, and as messages arrive, you could scale out to two, three, or as many nodes as you want, and process the messages accordingly. That's going to take a while, I think. Now the pods are going into the terminating state. There you go: from running to terminating, because it has probably processed all of them. I don't know why this one is still scaled out to one; it should go back to zero, hopefully in a minute or so. So the consumer pods for RabbitMQ have already gone back to roughly the original state, and the consumer pod for Kafka has also gone back to more or less the original state. If I look at the producer, it is terminating as well, and the rabbit producer is terminating as well. So all the function pods are pretty much in the terminating state. There is just one pod, for a connector, which is still running and should go to terminating in a while. Cool. So that was a brief demo, the code walkthrough, and how this whole thing works: how it is truly autoscaled, not just the actual workload-processing units, but also the units which read messages and supply them to those processing units. They are also autoscaled using KEDA. So this is a truly autoscalable, only-on-demand kind of setup with KEDA and Fission. Hey, Gaurav, do you want to take it from here?
So if you're happy or delighted with what we have seen, this is just one example, which Vishal has demoed. There are more examples available in the blogs, in the documentation, and also in the Fission examples repository. What you can do is try another example, which has six functions and plays with Kafka, Redis, and a database, and also has a web UI. Vishal has already written a blog post about it, describing all the functions. This is something we would really like you to try, and give us your feedback on how you like it. If you run into any issues, we'll be more than happy to help you on Slack. Whether you are starting your journey as a fresh contributor out of college, or you're already part of different communities and helping on different projects, we'll be happy to have you contribute to Fission as well. And definitely, contributing code is just one side; contributing documentation, raising issues, and helping with questions also count. In fact, asking questions is a marvelous way of contributing. So yes, we would like to connect with you on Slack or Twitter, wherever you feel like, and have you help us out with contributions to Fission. In fact, if you're interested in contributing beyond just Fission, you can also start contributing to BotKube. BotKube is another project which we started at InfraCloud. It is a ChatOps way of interacting with your Kubernetes cluster. It not only allows you to monitor your cluster, it also allows you to do some fancier things like creating deployments, creating pods, and so on. So yes, this is also an open source project you can start your journey with. If you have any questions, we'll be more than happy to take them. And if you have any questions even after this talk, we'll be available on Slack. So thank you very much for your time, and we'll see you around.