Hello, Daniel from GitLab here. Today we're going to walk through the Serverless feature in GitLab. GitLab Serverless uses Knative, a Kubernetes-based platform, to build and deploy serverless workloads on a Kubernetes cluster. We'll be setting this feature up from scratch, switching between the docs and our GitLab project as we go. So let's get started.

In the docs, I see that I have the option to either deploy functions on my serverless implementation or deploy a serverless application. I'll deploy the functions project to get started. Heading over to GitLab, I'm going to create a new project, choose "Import project", and use "Repo by URL". I'll name my project "serverless demo", and I'll use the same project slug just for consistency. I'll make my project public and click "Create project". All the assets for the sample project are imported here.

Our very next step is to add a Kubernetes cluster to the project. I'll do that by going to Operations > Kubernetes. I'm going to create a cluster on GKE, which is simpler for me, and I'll name this cluster "serverless demo". I'll leave the environment scope as "*" so it covers all environments, select the relevant project, and keep the defaults: three nodes on an n1-standard-2 machine type, with RBAC enabled. It will take a moment to create the cluster, and then we'll deploy the necessary applications.

Okay, so after a couple of seconds, my cluster is ready to go. In order to deploy applications to it, I'll first install Helm; that way I'll be able to deploy the rest of the applications, in this case Knative, of course, and Prometheus for monitoring. Now that Helm has finished installing, I will move on.
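For reference, the GitLab UI performs the cluster creation for you, but doing the equivalent by hand from the gcloud CLI would look roughly like this sketch. The zone here is a hypothetical choice, and the command assumes an authenticated gcloud CLI:

```shell
# Sketch of the cluster the GitLab UI creates: three nodes on n1-standard-2.
# The zone is hypothetical; pick one close to you. Falls back to a message
# if the gcloud CLI is not available.
command -v gcloud >/dev/null \
  && gcloud container clusters create serverless-demo \
       --num-nodes=3 \
       --machine-type=n1-standard-2 \
       --zone=us-central1-a \
  || echo "gcloud CLI not available; create the cluster from the GitLab UI instead"
```

Again, none of this is required for the walkthrough; adding the cluster through Operations > Kubernetes does the same thing.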
Note that I do not need to install an Ingress, as Knative comes with its own Istio-based ingress, which is what we'll be using. So next we'll set up Prometheus for monitoring. Okay, now that Prometheus has finished installing, I'll go ahead and install Knative. Before I click that install button, I need to provide a domain name, for which I will then set an A record; this is where all my functions will be served. After Knative has finished installing, the Knative IP address field will display a question mark until the external IP is retrieved, which we will then use to configure that A record.

Okay, now that the IP address has been generated and is displaying here, I'll copy it to my clipboard and head over to my domain settings. Here I'll enter an A record, and I'll just enter "*" as the host so that any subdomain will route to this IP address. I'll click "Add", and that is done.

Heading back to my project, let's go over the repository structure of the sample project. There are two templates provided here. The first one is the .gitlab-ci.yml file; here we specify a deploy stage that makes use of the TriggerMesh CLI to do the build and deploy to Knative. The second one is the serverless.yaml file; here we specify the function's attributes, such as its name, the runtime it uses, and a brief description. And finally, we have the directory where the function's code lives. Here we're using an echo function that simply echoes back whatever data I submit to it.

Okay, so we're ready to trigger our function. In order for the service to be deployed to Knative, we need to run a CI pipeline in GitLab. It's important to note that you need a runner set up for this to work. My project already has one, so I can simply run the pipeline. If your project does not have one set up, you can deploy the GitLab Runner to your Kubernetes cluster, and that will work just fine.
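To make that repository structure concrete, here is a rough sketch of what those two templates contain. The exact image reference, CLI flags, and field names vary by version of the sample project, so treat both files as illustrative rather than exact:

```yaml
# .gitlab-ci.yml (sketch): a single deploy stage that hands the build and
# deploy off to the TriggerMesh CLI. The image tag and tm flags are illustrative.
stages:
  - deploy

functions:
  stage: deploy
  environment: production
  image: gcr.io/triggermesh/tm:latest   # illustrative image reference
  script:
    - tm deploy --wait                  # build and deploy the function to Knative
```

```yaml
# serverless.yaml (sketch): the function's attributes described above.
# Field names here are approximate.
service: functions
description: "GitLab Serverless demo functions"
functions:
  echo:
    handler: echo
    runtime: ruby
    description: "echoes back whatever data is submitted"
```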
So let's go ahead and run the pipeline. I'll create it for the master branch with no variables and click "Create pipeline". Okay, once my pipeline is done, it will have deployed the Knative service, which in this case is a function. To view its information, we can head over to Operations > Serverless. Here we see all the functions that have been deployed to our cluster, and in this case, we have the HTTP endpoint that we need in order to test our function.

Heading back to the docs, I see that towards the end of the deploying-functions section there's an example of how to test the function using either curl or a web-based tool like Postman. So let's test it using curl from the command line. The two pieces of data I'll need are, first, the input that will be echoed back, and second, the URL of my function, which I now have available in the Serverless tab in GitLab. I'll copy the example, paste it into the terminal, then copy the URL of my function and substitute it in. The data we're passing in is simply "GitLab fast", so let's press Enter, and I see that it returned "GitLab fast". Let's test it again with a different value to validate that everything's working properly. This time we'll say "hello from GitLab serverless", and it echoed back "hello from GitLab serverless".

Now that we have some traffic going through our function, we can click the function name to get more detail. Here we see two things: the URL and the scale of the function. Right now only one pod is in use for this function, because it has only received a few calls. As the function gets more traffic, you'll see the number of pods increase, and as the traffic decreases, you'll see the pod count come back down. And that is it for this walkthrough of the GitLab Serverless feature.
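The curl test described above can be sketched like this. The hostname below is a placeholder, so substitute the URL shown in your own Serverless tab:

```shell
# Placeholder endpoint -- replace with the URL from Operations > Serverless.
FUNCTION_URL="http://functions-echo.functions-1.example.com"

# POST a JSON payload; the echo function should send the same payload back.
curl --silent --header "Content-Type: application/json" \
     --request POST \
     --data '{"GitLab":"fast"}' \
     "$FUNCTION_URL" \
  || echo "request failed: replace FUNCTION_URL with your function's URL"
```

If everything is wired up correctly, the response body is the same JSON payload echoed back.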
I hope you've enjoyed it. You can find me at @DanielGruzo on GitLab and also on Twitter; please feel free to reach out should you have any questions. Thank you.