Okay. Yeah. I'll start off by saying thank you for joining me today. I know there are a number of other great presentations going on, so I appreciate you showing up for my talk. So let's talk about serverless. I'm not sure there's another word in the cloud native space that evokes as much confusion. Just the word serverless invokes a lot of opinions. Over the last couple of years, talking about serverless and the work that I do, I've met people across a whole spectrum: hardcore serverless evangelists and users, people who don't really know what it is, and people who think it's just a bunch of hype, total nonsense.

So the question really is: what is it? Is it an architecture? A framework? Some sort of service? Does it involve no servers at all? Is it just functions, functions as a service? Or is it pure hype? I'm going to attempt to deconstruct it a little bit so that we can have a common framework of understanding for what serverless is. In my opinion, serverless is a collection of technologies, not one particular thing: a number of things working together that let you write and deploy code without worrying about the underlying infrastructure. It's generally auto scaling and highly available, and it usually has a different pricing model, per execution, per invocation, per record, per transaction; the cost basis is different from typical servers.

To recap, the characteristics of serverless: there's a decreased need to manage infrastructure. There's usually dynamic capacity allocation, so the platform scales as your application scales. Fault tolerance and redundancy are essentially built in. And ideally you have no idle infrastructure, so less waste and more efficiency.

That's all well and good; we roughly understand what serverless is, but how can we practically put it to best use? Let's go over a few use cases that might be interesting. Auto scaling APIs: imagine you build an MVP API and it gets more popular than you ever expected, very quickly. It scales on its own, up and down; it's elastic, and you don't have to worry about it. Multimedia manipulation: whenever an image or video gets uploaded, it's automatically split or cropped, some manipulation of that media. Scheduled events or cron jobs, where a function executes on a particular schedule. Mobile backends, IoT backends, and single-page apps. There are a lot of applications for this, and I'm going to go into some examples that will help suss this out and provide more detail.
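To make one of those concrete before we move on: with the Serverless Framework, which we'll use later in this talk, a scheduled cron-style function is just a few lines of configuration. This is a minimal sketch, not production config; the service name, handler, and runtime are illustrative, and it assumes an AWS deployment target:

```yaml
service: nightly-report               # illustrative service name

provider:
  name: aws
  runtime: nodejs12.x
  region: us-east-1

functions:
  generateReport:
    handler: handler.generateReport   # hypothetical handler in handler.js
    events:
      # Cron-style schedule: run every night at 2am UTC.
      - schedule: cron(0 2 * * ? *)
```

The point is that the schedule, the scaling, and the infrastructure underneath are all the provider's problem; you only declare when the function should run.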
So to wrap it up in one statement: serverless architectures are perfect for building lightweight, flexible applications that can be expanded, scaled, or updated quickly. One thing to note, against the hype you see around serverless: in my opinion, serverless is not going to replace monoliths or microservices at all. It's just another tool in the tool belt that you can pull out and utilize when it fits your use case. I really want to stay in that pragmatic frame of mind as we think about what serverless is and how it can be useful for you.

Now I'd like to go into a few things I'm finding exciting in the serverless landscape. There's a lot happening, but I picked out three things to dive into. The first is Knative. Knative is basically a project for deploying serverless applications to Kubernetes. There are other ways to do serverless with Kubernetes; I'm not going to say Knative is the winner, but it definitely has a lot of backing and a lot of energy behind it. What makes it exciting right now, jumping to my last point, is that it's moving towards GA: it's going to be production ready for deploying serverless workloads on Kubernetes. Another exciting thing is that, unlike something like AWS Lambda, which is obviously AWS specific, Knative is provider agnostic. Once you create a Knative deployment, you can deploy it to EKS, to GKE, or to your own Kubernetes cluster running in your own infrastructure. It also supports a number of different runtimes, so you can bring your own, whether Ruby, Python, Golang, whatever you're running.

So why would you want to use it? Maybe you're already using Kubernetes a lot, and it makes sense to add serverless functionality on top of your existing Kubernetes architecture. Maybe you want to be cloud agnostic and avoid the vendor lock-in you might feel with something like Lambda. Or maybe the cost structure makes sense for you: because you're deploying within a Kubernetes environment, you have to have a cluster running, so it's a different pricing model from per-execution billing, but if you're doing really high, consistent volumes of function executions, something like Lambda might be cost-prohibitive. Those are reasons to look at it.

So let's get into an example. We're going to talk about a dogfooding project we're using at GitLab, where we triage our issues using a Knative deployment. The goal of this project was to enable auto-assignment of issue labels; in GitLab, some issues have a very colorful menagerie of labels, and we're trying to automate assigning them based on webhook events. The application responds to webhook events and then makes requests to the GitLab API to trigger the assignment of issue labels.
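For context, a Knative deployment ultimately boils down to a Kubernetes resource. A minimal Knative Service manifest for an app like this might look roughly like the sketch below; the names, image path, and secret are illustrative, not our actual project:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: issue-triage                # hypothetical service name
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: registry.gitlab.com/example/issue-triage:latest  # illustrative image
          env:
            # Token the app uses to call the GitLab API (hypothetical secret).
            - name: GITLAB_API_TOKEN
              valueFrom:
                secretKeyRef:
                  name: triage-secrets
                  key: api-token
```

Knative takes a manifest like this and handles routing, revisions, and scale-to-zero for you; that's the piece you'd otherwise be wiring up by hand.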
To break out what you're going to see in the example: we've previously created the Kubernetes cluster, and you can actually do that through the GitLab UI; I'll show you a couple of screenshots of that. We created it on GKE through the GitLab UI. The application itself is a Ruby Rack application. Every time we push to that repo, GitLab CI/CD builds images with Kaniko, those images get pushed to the GitLab registry, and further down the deployment pipeline, GitLab CI/CD deploys to the Knative cluster.

So this is the project, just an overview of the repo. I wanted to highlight something, because some people don't know this: on the side of your GitLab instance there's an Operations tab, and under it there's a whole bunch of interesting stuff, to quickly pitch what we do in the configure and monitor stages. We're doing a lot of work here with serverless and Kubernetes, as well as monitoring, tracing, things like that. You can click on Serverless, and from there you can create a Kubernetes cluster on EKS or GKE, or potentially bring your own by adding an existing one. Once you've created it and the cluster is spun up, within the GitLab UI you'll see a few things: the integration status and the environment scope of that cluster, as well as the Knative endpoint.

This is an example of the serverless YAML. As part of the project, and to roll back a little bit, we have a number of examples if you want to try this yourself, and each one comes with a pre-baked serverless YAML you can leverage. In this one we're utilizing the TriggerMesh provider; TriggerMesh essentially wraps the Knative client and extends it a bit in a way that makes it easier for us to use. You can see we're also defining a function and its handler, we're using the Ruby runtime, and we're setting up a couple of environment secrets that we need to pass to the Kubernetes cluster.

Within the GitLab CI file, we have a number of stages. I'm not going to go into super detail, but there are a few different stages where we're testing, building, deploying the secrets, and then doing the final deployment. This is what the pipeline looks like. We split deploy-secrets and deploy-triage because we have two different instances of this function running. Within the build stage, I don't have a pointer here, but if you can read it, it says kaniko executor on line 31: within the build job, Kaniko is actually building the container image and pushing it to the GitLab registry. Within the secrets job, we're again pushing those tokens out, and then we finally get to the serverless deployment in the final job.
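To give you a feel for that build stage, a Kaniko job in a .gitlab-ci.yml generally follows the pattern below. This is a minimal sketch using GitLab's predefined CI variables; the job name and tag scheme are illustrative:

```yaml
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # Give Kaniko credentials for the GitLab container registry.
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    # Build the image without a Docker daemon and push it to the registry.
    - /kaniko/executor --context "$CI_PROJECT_DIR" --dockerfile "$CI_PROJECT_DIR/Dockerfile" --destination "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
```

The nice property of Kaniko here is that it builds and pushes container images entirely inside the CI runner, with no privileged Docker daemon required.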
And this is the result of all that work: essentially, when certain criteria are met, when certain webhooks are triggered, those initiate a request to this application, and it assigns labels, for example this customer label, to a particular issue. So that's Knative.

The second thing I'm excited about in the serverless space is serverless at the edge. It sounds really amazing and fancy, and it kind of is. Basically, serverless at the edge lets you leverage the providers' CDNs and deploy your functions right to the edge, so those functions execute much closer to your users: less of an issue with latency, better performance. There are a few examples of this: Lambda@Edge, Cloudflare Workers, and Fastly Compute@Edge are three providers enabling you to do this.

Why would you want to use it? Maybe you have a requirement for lower latency. Maybe you want to do some A/B testing. Or, for example, say you have some custom security routing you want to do, catching a certain IP range and doing something specific with it at the edge routing level. You can generate dynamic content, and you can manipulate the request and response headers of requests coming in. You can also build single-page apps and deploy those to the edge, and that's what we'll do.

So the example is building a single-page app and deploying it to a Cloudflare Worker. It's going to be a to-do list application running within Cloudflare, and it's going to connect to Cloudflare Workers KV, a key value store that they provide. You get a specific namespace, so you can connect the application directly to a data store, and this is all happening at the edge. We're going to deploy it using the Serverless Framework, an open source project that allows you to do this type of deployment, and we're going to invoke the Serverless Framework through GitLab CI.

This is the serverless YAML file, the configuration that the Serverless Framework is going to use. We're utilizing the Cloudflare provider and configuring it; obviously, we need to give it our account credentials so it knows which account to use. We're leveraging a plugin called serverless-cloudflare-workers. Further down, we're defining our function, and I also wanted to note that under the resources we're assigning the KV namespace; that's what actually creates the key value data store for us to leverage. And further down still, we're assigning a route to it.

I'm not going to show you all the single-page application code, but here is a block I wanted to highlight. Lines 85 and 86 are where, within the code, I define that namespace and assign it to a variable, so I can leverage that variable within my single-page application and get access to that key value store. That's what's happening on lines 85 and 86: I'm setting up that cache to leverage.

This is the GitLab CI. We've got a couple of stages, and this is obviously very basic: we're installing the Serverless Framework and all the plugins.
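Before we get to the deploy jobs, here's a rough sketch of what a serverless.yml for a Cloudflare Worker like this could look like. This isn't our exact file: the names and route are placeholders, and the KV resources block follows the serverless-cloudflare-workers plugin's pattern as I understand it, so double-check it against the plugin docs:

```yaml
service:
  name: todos-app                     # illustrative service name
  config:
    accountId: ${env:CLOUDFLARE_ACCOUNT_ID}
    zoneId: ${env:CLOUDFLARE_ZONE_ID}

provider:
  name: cloudflare

plugins:
  - serverless-cloudflare-workers

functions:
  todos:
    name: todos
    script: todos                     # the Worker script (todos.js) serving the app
    resources:
      kv:
        # Creates the Workers KV namespace and binds it to a variable
        # the Worker code can reference (hypothetical names).
        - variable: TODOS
          namespace: TODOS_NAMESPACE
    events:
      - http:
          url: example.com/todos      # placeholder route
          method: GET
```

The KV binding is the piece that makes those lines 85 and 86 in the application code work: the plugin creates the namespace and exposes it as a variable the Worker can read and write.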
Back in the GitLab CI file, I'm doing a few checks to make sure my Cloudflare credentials are actually available; if not, it's going to blow up. And then we're doing the serverless deploy. So we've got a couple of jobs to do that. Once the production job runs, you can see at lines 48 and 49 that we've got a successful deployment. Once that goes out, when you log into Cloudflare, you'll see they have a Workers section. Each Worker is kind of interesting: it has what's basically an editor, where you can preview what the actual application, or the response you want, is going to be. So we've deployed it here; you can see some of the code, and this is a preview of what it's going to look like. Our deployment also creates the Workers KV data store: it sets up that variable name and assigns it to a particular ID. And then, voila, we have a single-page application running. Luckily, I was able to get most of the things done on my to-do list, but there was one thing that didn't happen. So that's serverless at the edge. I think it's really interesting and has a lot of potential applications worth looking into.

The final thing I'm excited about is serverless integrations. Serverless integrations are essentially where you leverage cloud service integrations to stitch together dynamic serverless applications without writing a lot of functions. These are built-in integrations between different services, and they save you a lot of boilerplate. You're linking these services together, but you need to define the linkages between them, and you can do that using things like Terraform, the Serverless Framework, or AWS SAM. Another cool thing, and we're going to get uber-hyped here, is that functionless serverless is possible with this, and we'll actually do an example of that.

Why would you want to use serverless integrations? Again, you can create direct connections between services, and tons of them are supported already: queues, NoSQL, data streaming, things like that. It lets you do this with fewer functions, so you don't have to manually shuttle the requests or the data around; it just happens. It reduces complexity and lets you easily build powerful applications.

So let's take a look at an example. This is a functionless serverless application that allows us to push data directly into AWS SQS via API Gateway. There's no Lambda involved in this at all. We're going to make a direct proxy integration from API Gateway to SQS, and we're going to do that utilizing the Serverless Framework again and deploy it through GitLab CI/CD. This is the entirety of the project: there's only the GitLab CI file, which does the testing and deploying, and the serverless YAML, which is literally just setting up AWS resources. There are no functions involved. So here we have the serverless YAML. You can see I'm utilizing a plugin called serverless-apigateway-service-proxy, and this plugin is kind of fancy.
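To show roughly what that looks like, here's a minimal sketch of a serverless.yml using that plugin. The service and queue names are illustrative and the region is an assumption; the proxy block follows the plugin's documented pattern:

```yaml
service: sqs-proxy                    # illustrative name

provider:
  name: aws
  region: us-east-1                   # assumed region

plugins:
  - serverless-apigateway-service-proxy

custom:
  apiGatewayServiceProxies:
    # Direct API Gateway -> SQS integration: a POST to /queue lands
    # straight in the queue, with no Lambda in between.
    - sqs:
        path: /queue
        method: post
        queueName: { 'Fn::GetAtt': ['ProxyQueue', 'QueueName'] }

resources:
  Resources:
    ProxyQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: proxy-queue        # hypothetical queue name
```

Notice there's no functions block at all; the whole application is declared as resources and a proxy mapping.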
As you can see, the plugin is specifically for creating API Gateway configuration and setting up those proxy services. Within that, I'm assigning a particular endpoint and an HTTP method. Further down, I'm creating an SQS queue and giving it a name, and then I use that name up in the API Gateway service proxy section.

Within my GitLab CI, again, this is similar to the last one; it's pretty simple, and obviously we could get much more complicated with this, but there are a couple of stages. We're doing an npm install, I'm checking to make sure my AWS keys are present, and then I'm actually doing the deployment utilizing the Serverless Framework. Within that production stage, if it's successful, you can see the stack output at the very end: we have a service endpoint, and right there on line 117 you can see the serverless-apigateway-service-proxy outputs. On line 119, it wrapped a little bit, but you can see that my endpoint is there.

So let's take a look at AWS. What did it do? It created a resource with an endpoint, and in the middle there you can see the integration request, which connects directly to SQS. Then if I go into SQS, you can see it created my queue for me. I'm going to test that by running a curl request against my API Gateway endpoint with a simple block of JSON, and you can see it goes straight into SQS, right into the queue. This is obviously a very simple example, but as you can see, you can start to connect these services together in a dynamic way without a lot of functions involved. Now, if I wanted to, I could add a function to this; for example, every time something went into this proxy queue, that could trigger a Lambda function to do something else with it. That's a pretty common pattern. But the important thing I wanted to point out is that there are these service integrations you can leverage that don't have anything to do with functions at all. So when people talk about serverless, when they try to define it, I'm trying to expand the definition to be much more than just functions, much more than just executing a cloud function. That's the point.

So, key takeaways. For serverless, we tried to come to a more refined definition: it's highly scalable, it's cost-efficient, and it lowers your infrastructure overhead. We talked a little bit about Knative; I'm excited that it'll be production ready soon, and if you're using Kubernetes already, it might be a solution to look into for deploying serverless workloads to your Kubernetes clusters. Serverless at the edge, I think, is really exciting; if you're dealing with, for example, horrible VCL in Fastly, this might be something to investigate, because you can write JavaScript or another easier, higher-level language rather than doing a lot of work in VCL. And finally, the serverless integrations we covered: I think they're really cool, and they empower developers to stitch together cloud resources and create interesting dynamic applications.
So, thanks for attending today. I appreciate your time, and if you have any questions, feel free to come on up, and we can have a conversation.