Hello everybody, welcome to our session about vendor-agnostic serverless functions. I will be presenting today with Mohit, so first let's introduce ourselves. My name is Zbynek Roubalik, I'm based in the Czech Republic, and I work at Red Hat on the OpenShift Serverless team. I'm a member of the Knative community and of the Knative TOC, and I'm also a maintainer of the KEDA project, so if you have any questions about those two projects, feel free to ask me. Good afternoon everyone. My name is Mohit, I work at Red Hat as a senior product manager around Red Hat developer tools, and I'm based out of India. Today we are going through some of the cool stuff we do around serverless, Knative, and functions. So let's pass it on to Zbynek. Cool. This is the short agenda: we will do some introductions, which we have just done, then we will talk about serverless functions, so maybe we will first say what serverless actually is, and then we will show the demo. OK, so what is serverless? I bet that the majority of people, when they hear "serverless", think: OK, that's AWS Lambda, right? But there are different ways to achieve similar capabilities. For example, this is the definition from the CNCF. I won't read the whole definition, but the main point is that serverless applications do not require server management, they have a very simple deployment model, and they should be executed and scaled based on demand at a very specific moment. If I can summarize this in a few bullet points: serverless means autoscaling, and scale to zero is an important part of that; a simplified development and deployment model; and, by its nature, serverless services or functions are event-driven, so they should be asynchronous. We need to take this into consideration when we are building a solution based on serverless approaches.
Also, with serverless I don't want to deal with server configuration or infrastructure configuration; it should be very simple, with a very nice UX. When we are talking about serverless, there are also serverless functions, or functions as a service. There are different opinions on these terms, so this is how we see it. For us, serverless is a deployment model: it abstracts the way we deploy the application on the infrastructure, and it provides the capabilities to scale the workloads, handle the configuration, and all that. Functions, on the other hand, are for us more of a programming model, because there is a certain function signature that you need to match to actually deploy the function. Once you build this function, you can deploy it as a serverless workload. So with serverless you can take basically any of your containers and deploy it as a serverless workload, but functions are more like glue: you have a strict contract to follow, and it's usually a smaller service. So what is the ideal serverless workload? It should be stateless, because with serverless we would like to scale very fast, so we don't want to keep state inside the application. We can keep the state outside the application in some key-value store, for example Redis, but the application itself should not hold state. It should ideally be short-running: if your application is short-running, it's an ideal candidate for a serverless workload, because it will do its job and then scale in. And if your application is HTTP-based or event-driven, it is also a good candidate for a serverless workload. So let me explain more with what we call the serverless pattern. On the left-hand side you can see an event.
It could be an HTTP request, it could be a Kafka message, it could be some other event; this event triggers our application, the application does some work and produces some results based on the event, and once the job is done, the application should be scaled in. If there are more requests, the application will scale out. That is serverless in a nutshell. And since we are at KubeCon, let's take a look at the serverless approach from the Kubernetes side, because this can have several benefits. For example, if you have your traditional or legacy applications already deployed on your Kubernetes cluster and you would like to connect a bunch of services and make them serverless-ready, you can use Knative to have all the workloads in a single environment, in a single cluster. If you use, for example, AWS Lambda, you might have some services in that runtime while others are in another cluster. Having everything in one cluster means you approach all the workloads in a similar way: you can have the same CI, et cetera. OK, so let's get started with what we have with Knative. It became a CNCF incubating project in March 2022, and we have seen a lot of community growth. We have a dedicated booth and a slot in the project maintainers' track for Knative, and these are the sites where you can read more about Knative and how to get started. The demo will focus on the three specific aspects of Knative: Serving, Eventing, and Functions. Serving basically allows you to run your containerized applications and scale them from zero, and Eventing is the eventing infrastructure: if you have some events in your infrastructure and you want to drive your applications through those events, you can do that.
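To make Serving a little more concrete before moving on: deploying a workload as a Knative Service needs little more than a manifest along these lines (the name and image are illustrative, not from the demo), and the annotations show how scale bounds, including scale to zero, are expressed:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                  # illustrative name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"   # allow scale to zero
        autoscaling.knative.dev/max-scale: "5"   # cap the scale-out
    spec:
      containers:
        - image: registry.example.com/hello:latest   # illustrative image
```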
And Functions is something we are focusing on a lot in the demo today, so we'll get more understanding of how you can build serverless functions with a vendor-agnostic approach. So let's see what exactly we mean by vendor-agnostic. If you want to deploy a serverless function, it should not be tied to a specific cloud provider. It can work on any Kubernetes platform: it can be raw Kubernetes, or it can be in a hybrid cloud, say running on Red Hat OpenShift, Microsoft Azure, AWS, or GCP. And the big advantage is that any runtime you select to implement your functions will work out of the box on any cloud provider. That's the major advantage we are trying to showcase here, and in the demo you will see how we can replicate the same logic across multiple clouds. To help developers further, we have also integrated this serverless workflow in multiple ways: you can do everything using the CLI, or you can do it through your IDE extensions. As a developer, you don't need to switch between multiple tools to perform one task. The idea is that if you are a developer who lives in a specific IDE, let's say you are comfortable with Visual Studio Code or with IntelliJ, you want to do all those serverless tasks directly from your IDE. You have an application open in a workspace and you want to deploy it; creating a function, building that function, and deploying it can all be done directly from your IDE. We have dedicated extensions for this, known as VS Code Knative and IntelliJ Knative; they go by the community name of Knative. We have a lot of community participants contributing to these extensions, and there are releases every three weeks. So we are continuously working on that, and this is some of the stuff we will also be focusing on in the demo.
So now let's get started with the demo, because that's the most exciting part. Right now, we have OpenShift running on AWS, and we have our application already deployed for the demo. This is the frontend application, written in React and already deployed on OpenShift. In this view you can see the application is deployed; if you click on it, you can see all the routes are already created and it's currently running. Let's go and see how the application looks. This is where we have hosted it; you can see the URL, it might be a bit small, but this is how the application looks. It's a simple React application we have written, and here's the functionality: I want to search for any neighborhood in Amsterdam and see the latitude and longitude of the area. Right now we are at the Amsterdam RAI, so let's type that and see. OK, we get multiple places that mention it, so let's pick that one. So we have got it; you can see it automatically fetches the result, and here is the latitude and longitude. This application is deployed, and the next step is to connect multiple functions to it, so we can see, for this location, what the specific weather is. And once we know the weather, we have a second task which we will showcase later on. So let's go back to the slides. In this demo, as I mentioned, we have a React application with a Node.js backend, already running as a Kubernetes deployment. This deployment is currently on OpenShift on AWS, but it will work seamlessly on any Kubernetes platform you have.
The frontend application communicates with the Node backend over socket.io, and we have a CloudEvent emitter and REST APIs which work together to report, for a given street, the latitude and longitude and the weather, which will come from the functions we have. The REST APIs are what communicate back to the frontend. When I say CloudEvent, here are some examples of what a CloudEvent looks like: you can specify what type the event is and what content it carries, for example an application/json payload, and you pass the actual data, say a name, a quantity, a price. There is an SDK available across languages, from Go to Node.js. The data field basically contains whatever payload we want to pass. The next part is Eventing. There are three specific pieces we use here. One of them is the broker, which is an event mesh; in the demo we will use the in-memory broker, but you can also configure a Kafka or RabbitMQ instance. The second is the trigger, which you use when you have a workload that subscribes to the broker to receive those CloudEvents. The third one, which is pretty important here, is the SinkBinding: it connects your standard application to the broker, so your application can take part in all the communication flowing through those CloudEvents. You can read more about this in the Eventing documentation. This is how the application looks now: the frontend app already has the backend connected to it, and now it will emit CloudEvents to the broker. So let's go to VS Code and see how that works. This is my application, already open in my Visual Studio Code instance, and I have my broker.yaml already open.
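For readers following along, a broker plus SinkBinding manifest of the kind shown in the editor can look roughly like this (the resource names here are illustrative, not the demo's exact file); the SinkBinding injects the broker's address into the bound workload as the K_SINK environment variable:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default                # in-memory broker class by default
---
apiVersion: sources.knative.dev/v1
kind: SinkBinding
metadata:
  name: app-binding            # illustrative name
spec:
  subject:                     # the workload that should emit events
    apiVersion: apps/v1
    kind: Deployment
    name: frontend-backend     # illustrative name
  sink:                        # where emitted events should go
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
```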
Here you can see the configuration we need to define, and this is the SinkBinding which will be added to it. So let's apply this, and the SinkBinding is created. If you go to the Knative extension, this is the Knative IDE extension we have for Visual Studio Code, and a similar experience exists for IntelliJ, so if you are a developer who is more comfortable on the IntelliJ side, you will have the same experience. You can see I have already deployed my broker, so it shows up here, and my SinkBinding is also listed. And if I go to my topology view, you can see that where previously we just had the React application deployed, it now also has the broker and the SinkBinding connected to it. So I'll pass it on to Zbynek to discuss what we can do next with the functions experience. So what you just saw is the standard application, and as I mentioned, you can mix your standard applications with serverless functions. So let's build a function: a function that will tell us what the weather actually is at that location. So what can I do?
I can open the Knative view and just say: OK, let's create a function. I will call it "weather". I need to select a runtime; with Knative Functions we support multiple runtimes, and I will write this function in Go. We also have several templates. When I was talking about Knative Functions as a programming model, this is it: we offer different templates so developers can just create a function and it will serve requests. A template can be based on HTTP, so it will accept incoming HTTP requests, or it can parse CloudEvents. Because our standard application emits CloudEvents, we will use the CloudEvents template, and we will create the function in our directory. Let me select the location and create the function. OK, and I'll add it to my workspace. As you can see, it's a pretty standard Go project: there is a go.mod, and the most important file is handle.go. This is all I need to touch to implement the business logic: this is the function signature, it accepts a CloudEvent, and I just need to write my code here. I will need to fetch the weather information from some endpoint. I will cheat a little bit because we don't have much time, I will just copy-paste the source code, but I will explain it a little. And now I will actually deploy this function. I can do it through the UI or through the command line; all I need to do is run func deploy, which is our CLI tool, and it will start building the function image and then deploy it. What the function does is basically call a weather API endpoint for the given location and send a new CloudEvent back to the broker with the weather information. Let me go back to the slides. This is what I was just talking about: Knative Functions provides the developer experience, we have the templates, we support multiple runtimes, and you can trigger the functions via HTTP or CloudEvents. So how do we build the function from
the source code? We are using Buildpacks, or we can use the S2I strategy, and we are also planning to add more strategies for building the image. If you don't know what Buildpacks is: it's a cool CNCF project that automates the process of building a container image based on the runtime that you have. With Functions we provide a set of runtimes and a set of prepared Buildpacks builders that will automatically package our application and produce a container image that is ready to go. The image is then pushed to a container registry and deployed as a Knative Service. Knative Service is one of the components of Knative; it's the serverless deployment model for serverless applications. What it provides, basically, is that I can just say "this is my container", it will deploy the container as a Knative Service, and it will automatically scale my application based on incoming HTTP requests. I would like to pause here a little bit, because scaling based on incoming HTTP requests is not a simple task. Imagine you have your application accepting HTTP requests, and let's say I scale this application to zero replicas. And guess what, now I would like to reach this application. If I make a request, something needs to catch the incoming request, hold it for a while, scale out my application, and then forward the request on. This is all handled by Knative for you, so you don't need to take care of it. There is also a separation of code and configuration: let's say I deploy my application as version 1, then I deploy version 2, and Knative automatically stores a snapshot of the configuration, so you can easily roll back between versions, or you can even do traffic splitting, say 80% of traffic to my previous version and 20% to my new version. So our function is built by Buildpacks as a container and then deployed as a standard Knative Service. This is what the application looks like now: the application should send a CloudEvent to
our broker with the coordinates, we should connect the weather function to the broker, and then it will reply back. Our function has already been deployed, as you can see here, and it has already been scaled to zero replicas because there is no traffic. I just need to connect it to the broker. I will connect this function to the broker, and because I know that the standard application emits CloudEvents, and a CloudEvent carries some metadata plus the payload, including a field called type, I can filter these CloudEvents based on the type. This is very useful when you are connecting different event providers: it could be Kafka, it could be some custom stuff, and everything is bundled as a CloudEvent. You can imagine a CloudEvent as a wrapper around your arbitrary payload, and you can access all the CloudEvents in a unified way, which is a huge benefit. So let's filter based on type; I know the type is "coordinates", and now my function receives the coordinates, queries the endpoint, and gets the weather data. Now I need to talk back to my standard application. I can do it from the same function, or I can write another function that will do that for me. So let me go back to VS Code and create another function. This time it will be a Node function; I will call it "responder", because it will respond back to the application, and it will also accept CloudEvents: it will accept the CloudEvent from the weather function and reply back to my standard application. Let's create this function and add it to the workspace. Again, this responder is a very standard Node project; there is an index.js where you implement your business logic. Again I will cheat, because we don't have much time, so I will just copy-paste the code into my index.js. What are we doing here? This is the handle function, which receives an event, and if the event is
of type "weather", which is the one from the weather function, it will send it to the backend endpoint, and that's it. So let's deploy this function. Again, all I need to do is run func deploy for the responder, and it will build the function and deploy it on the Kubernetes cluster. If we go back to the slides, this setup should be clear: we are building this responder function that takes the event from the broker and makes an HTTP request to our app. I did the deployment through the command line, but we also have the extension where you can do this for functions through the IDE: you can build the function, deploy the function, or invoke the function. Invoking the function means, for example, that you would like to test that your function accepts a certain CloudEvent, so you can send a test payload directly to the function, and the function can run on your cluster or it can run locally. For example, this is a Node application, so you can just run npm start and it will start on your localhost, or you can run it locally in a container. I can see that my second function has been deployed, so let's quickly check. OK, my responder function is here, so I will also connect it to the broker, and this time I'm interested in the CloudEvent type "weather". That's it. Now, if I search for an address, I hope we will see some weather information in between; and you can see that we now get the weather information for a specific location. I just want to highlight that this is a simple demo; I know you could do all of this in a single application, but we want to showcase how you can use this event-driven approach: you can have multiple functions, and each function is scaled based on its specific needs, because maybe some part of the application requires a higher load and some a lower load. So this is the current state. What can we do now? It looks like very good weather with a good wind speed, so I want to rent
an e-bike. Can you help me with that? Sure, let's write a function for this. The city of Amsterdam provides open data at this endpoint. It's in Dutch, so I needed to use a translator because I don't speak any Dutch, but this particular endpoint tells me which e-scooters are available in Amsterdam. We will just query this endpoint, get the list of scooters, and try to find the scooters closest to the coordinates we specify in the application. So we will implement another function that receives the very same coordinates from the broker and responds back. Let's go again: I will use Go because I love Go. Let's call it "scooters", use Go again, use CloudEvents, pick the directory, create the function, and add it to our workspace. If you look at scooters, it's again a Go function, and again I will cheat and paste the implementation in here. If we look at the function, it's simple: it receives the coordinates, talks to the endpoint, checks whether each scooter is available, because the response also contains unavailable scooters, sorts them by distance, and gives me the five closest scooters. And again it responds back with a CloudEvent, this time of type "scooters". So let's deploy this function. Again, it's very simple: just provide the business logic, and Buildpacks build the application and deploy it as a Knative Service. We need to wait a little bit, so in the meantime let's quickly check the extension again. You can see that I have all the functions here; I can see whether they are local or already deployed on the cluster, I can deploy them, I can build them here, and I can see all the eventing infrastructure for plugging everything together. And we can see that our function has been deployed, so let's quickly check. OK, it's
here, perfect. I will also connect it to my broker; I can do it through the UI, through the command line (we have the kn CLI for this), or I can use a YAML file. This time I would like to receive the "coordinates" type. I also need to update my responder, because if you recall, my responder function only accepts CloudEvents of type "weather"; I need to extend it with the type "scooters", which my new function provides. So I will need to update this function and build it. Is there any other way to do this? Well, yes. At the moment we are building the function locally, but maybe I don't want to run a builder on my machine, I don't want to run Podman on my machine, or maybe my company doesn't allow me to run this kind of stuff on my machines, or I just want to use CI or something like that. What we can do is build the function on the cluster, because we have the power in the cluster. So let's use that. How do we achieve it? With an approach we call on-cluster build. The on-cluster build takes my source code, forwards it to the cluster into a volume, and initiates a Tekton pipeline. If you don't know Tekton, it's a Kubernetes-native CI, let's say; the pipeline builds the function again and again deploys it as a Knative Service. What I want to show you today is something a little more advanced: pipeline as code. The pipeline definition lives with the source code of my function, and I can push it to the Git repository, and the function will be automatically built with each commit or each action against this repository. So I can easily build the function on the cluster, triggered from GitHub. Let me update the function; again, I will cheat a little bit. I just need to add this extra case, so now I have two cases, and I will talk to a different endpoint on my application. So what I
need to do is configure this function to actually do this. I can run func config git on the function, but before I can do that, I will initialize a Git repository for my responder function. So I go to the responder directory; basically what I'm doing is initializing a new Git repo, and I've already created a GitHub project for it. So I initialized the Git repo, and now I run the Git configuration with func config git. It will ask me a couple of questions: it finds the correct URL for the function; are we targeting the main branch? Yes, we are targeting the main branch, but we could target multiple branches, it could be a subpath, whatever. And it asks me if I want to configure the webhook, which means it will automatically build the function for me, so yes, I would like to do that, and I need to provide my access token. I will copy it and just provide the access token. Sorry, I need to copy-paste it properly; the layout is a little bit broken. Here it is. So what's happening: in the source repo there are new pipeline definitions, on the cluster the pipelines are created and the cluster is configured to talk to the GitHub repo, and it has hopefully been set up for the action. I can see some error, but I hope it will work. Now I just need to commit these changes to my Git repo, so I will commit this and push. If we go to the GitHub repository and refresh, we should see the new code, and here is a small action running on the latest commit: it is running the build of the function on my cluster. I will go back to my cluster and switch to Pipelines, and under Pipelines we can see there is a new pipeline running with three tasks. The first task pulls the Git repo, the second task, which just finished, builds the function using Buildpacks, and the third step is to deploy
the function. We just need to wait a little bit, hopefully. The deploy step is still running, and once the function is updated I will see a new revision of my function. It seems like the demo gods are not with us today, but trust me, this actually works. Let me fall back to my backup solution, which is the very same application deployed in a different namespace, and I will show that it actually works. So if I select some address now, and we go back to the topology view, let me refresh, maybe the internet connection is not good: it has already shown me the closest scooters to this location. And really, all it does is talk through these functions and do all the work. So this is the final solution. Thanks for listening, and if anyone has any questions, feel free to ask. Are there any questions? Yes, there is a question. Can you be louder? [Question inaudible; it concerns how fast a scaled-to-zero function can respond.] It is Kubernetes, so it's down to the Kubernetes scheduler and how fast we can schedule the pod. But what we can do is set the minimum replicas to one, so we always have one instance of the function running, and we just scale out from there. Since we are using Kubernetes, there are some drawbacks here, and they relate to the scheduler. Any other questions? We are also available at the Red Hat booth, so if you have more questions around this, feel free to drop by and we will be glad to answer. So thank you, guys!