Hello all. Welcome to Migrating AWS Lambda Functions to an Open Source Serverless Solution on OpenFaaS. I'm Burton Rutan, currently a principal engineer at Dell Technologies. Until recently I was one of the members of the OpenFaaS core team. I've since stepped down for the time being, as the work at my main job has taken a lot of my time and I haven't been able to contribute as much as I'd like, but I'm still a part of the project. I was once a BMW-certified technician and spent some time in the US Navy, so I had a very mechanical background, but now I'm in the software world and in open source as much as I possibly can be.

Quickly, to set some expectations for this talk: what it is, what it is not, and what I hope you can get out of it. First, this is not a bash on AWS Lambda. AWS is one of the main leaders in the cloud computing world, with a lot of open source projects and contributions; there's no knock on them at all. Also, this is not intended to sell you OpenFaaS as a solution to your problems. You may find that it helps, but this is hopefully not a sales pitch. It's also not a direct walkthrough, and there is no live coding. I've found that the virtual platform here makes it difficult to share my screen, type commands in a terminal, and answer your questions all at the same time, so I've decided to just show screenshots of the code and the commands I've been running. That shows you what was happening while giving you the opportunity to use the Q&A section as we go. This is a real-world example, something I've seen happen in the past, so I'm going to explain the scenario, go through all the changes, and show you the screenshots as I said. What I hope you take away is that you can have serverless and open source together. I'll hopefully show you some new concepts, and at the end I'll give you some links to try things out yourself if you're interested.

A quick background on serverless, which I'm sure everyone's aware of by now: serverless really means that the server the code runs on no longer needs to be a consideration. You don't have to log in somewhere and install packages or anything of that nature. It starts when it's needed, runs for a very short time, disappears, there's no state sticking around; it just executes code and gets out of the way. AWS Lambda is really what introduced the concept to the general public back in 2014. It has a ton of integrations with other AWS services, which makes it really easy to just plug in and use. Lambda is cheap, easy, very popular, and really common. They advertise 20 cents for one million requests, though the price of course varies depending on what you're doing with it. It's super easy: there are drop-down menus in most of their other services that let you plug into your Lambda functions. Of the people who responded that they used serverless in the CNCF survey last year, 53% said they used Lambda. And since they pretty much introduced the idea, most of the time when you speak about serverless, people assume you're talking about AWS Lambda; it's almost synonymous with the word at this point. However, it is a product. There are limits, and it only works with AWS. It's cheap at 20 cents per million requests, but it still makes Amazon money at some level.
This means that changes and upgrades are ultimately determined by revenue potential at some level. There are limits; they're pretty liberal, but they're decided by AWS alone: a 15-minute maximum runtime, 3 gigabytes of memory per function, and it only runs on the Amazon Linux operating system. Also, since it relies on AWS to run, your local environment is never going to be an exact representation of what's in production. There are plenty of tools out there to simulate the environment, but it will never be exactly the same. The LambCI Docker image is one great tool, and AWS's own SAM CLI is another for simulating the environment locally.

On the other side, OpenFaaS is open source. It's developer-first, operator-friendly, and it's very familiar and portable. It's MIT-licensed, which means it's free for commercial use and free to fork and change as you need. It's also part of the CNCF serverless landscape. It was first introduced back in 2016 as a Docker hack, a pet project, and now it has almost 300 contributors, most of whom are active or multi-commit contributors. It's very familiar: it uses a lot of the other CNCF projects as part of the stack. We call it the PLONK stack, which stands for Prometheus; Linkerd (or you could substitute Linux, with containerd); OpenFaaS, of course; NATS; and Kubernetes. We have a custom CLI to interact with the functions, but it also works with kubectl ("kube control" or "kube cuddle," however you prefer to pronounce it). It's also portable: you can run the same production containers locally with Docker Desktop, MicroK8s, Minikube, k3s — whatever is comfortable for you in your local environment. Your local environment uses the exact same technologies your production environment uses.

On the other hand, it's not hosted. There are no definite SLAs, and it's a bunch of building blocks that come together as a whole. You'll need to bring your own cluster, or it also runs on a Linux VM with our somewhat recently released faasd. It has very minimal impact; the footprint is so small you may not even notice OpenFaaS itself is running, although depending on the load on your functions you might start to see some impact there. And although there is no support team, we have a really large Slack community that's pretty active at all times of day, with people from all over the world in all different time zones, so somebody's almost always around. There is also commercial support via OpenFaaS Ltd, just throwing that out there. If something does go wrong, it's not just one thing; it could be one of a few different things because of the stack's nature, which is not uncommon for software applications these days.

So this is our scenario. It's a realistic situation that could easily happen to any one of us. We're working at a startup that's getting ready to launch. The development team is focused on the product; we want to make sure the thing we're selling is ready and working as we expect. While we, the development team, focus on the product, the founder or owner outsources a sign-up page for new customers. The contracted team sets up a web page that calls an AWS Lambda function, which stores the customer sign-up information in a DynamoDB table that the sales team can use later to contact those customers. Great news: shortly after launch, everything's really successful and we get a lot of funding. They hire a CTO, but the CTO's opinion is that all of the development should be managed by the company.
The existing product we've been working on is running in a Kubernetes cluster hosted by DigitalOcean; that's where the product lives. The CTO wants the customer sign-up page you see here to also be part of that product stack, so we need to move the AWS Lambda function and the sign-up page into the Kubernetes cluster, possibly on DigitalOcean. The CTO is also in talks with Google Cloud and Azure about the possibility of larger clusters and increased load and all that, so we want to bring everything together into one form. Sorry, just catching up on the questions that have been coming in.

This is the AWS Lambda function that was created and is out there running right now, handling our sign-up page. If you're not familiar with the platform, you can resize the slide window if you can't read the code exactly, although it's not that important. It's a fairly standard Lambda function: we receive the event from the API Gateway via a POST request, read the data from the body, maybe do some validation to make sure they have a legitimate email address, and then save the data to a DynamoDB table. If you're not familiar, DynamoDB is AWS's NoSQL database-as-a-service, and its document model is very close to MongoDB's, which is really nice: a hosted database solution.

If we take a step back real quick and look at the bare-bones hello-world examples: from AWS Lambda, this is the function that is created when you make a new Lambda function with no other input, just clicking through the web UI. It gives you an event object as a parameter; you create a response object with the status and the body, and then you return that object. Really straightforward, very simple. Similarly, for OpenFaaS, we created a brand-new function using our Node.js version 12 template. Again, it takes an event parameter, but we also get a context object coming in; I'll show you what both of those include in just a few minutes. Then we return the context directly, chaining on the status code, indicating a successful response, and including the body of the response we want to send back to the caller.
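To make that comparison concrete, here's roughly the shape of those two bare-bones handlers. This is a sketch from the descriptions above, not the exact code on the slides:

```js
// AWS Lambda: approximately the default handler the web console generates
exports.handler = async (event) => {
    const response = {
        statusCode: 200,
        body: JSON.stringify('Hello from Lambda!'),
    };
    return response; // Lambda serializes this object into the HTTP response
};

// OpenFaaS node12 template: the handler also receives a context object,
// and the response is built by chaining calls on it
module.exports = async (event, context) => {
    return context
        .status(200)
        .succeed({ message: 'Hello from OpenFaaS!' });
};
```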
Now we have to install OpenFaaS. Again, we're assuming there's a Kubernetes cluster already available that we, or perhaps the operations team, have access to. What we're going to use is the official OpenFaaS installer, called arkade. It's another open source project on GitHub that you can search for; it's basically a package installer for Kubernetes, and it executes upstream Helm charts with CLI flags rather than the values.yaml files you might be familiar with. At the top of the screenshot (and again, you can resize the window to read this better), we're running the install command for OpenFaaS, passing the --load-balancer flag because we're deploying into a cloud cluster. You can see it's found the kubeconfig file on my system and that Helm 3 is installed. About halfway down, where the orange and yellow colors start to show up, that's the actual Helm command being executed for us by arkade, with all of the flags set to the sane defaults decided in the arkade package; of course, you can change some of those with flags on the command line. Then at the end you'll see a kubectl command that you can copy and run to verify that OpenFaaS has been installed. There's a lot more helpful information there that I've trimmed off just to make the picture fit on the slide. The whole process took about a minute, though it could be longer if arkade has to download Helm or kubectl for you first.

Next, we install the OpenFaaS CLI, faas-cli, on our local system to make managing the functions easier. At the bottom, after we've installed it, we try to run a command and it automatically tries to connect to a local instance; by default, everything is ready to run locally. If we go back and run one of those commands from the arkade output, we can see the deployed pods at the top. In the bottom section, we run another of those arkade-suggested commands to get the external IP address of our load balancer. After we get that public IP address, we set it in an environment variable so the faas-cli knows where to connect, and we use that going forward so we don't have to pass it in on every command. Then we get the login credentials from the Kubernetes secrets: the arkade installation generated a random password for us, so we pull that out of the secrets, put it into a variable, and forward it into the login command. At the bottom there, you see we're now connected to the remote cluster; there are no functions yet, but we're getting there.

If we go back real quick to the pods that were deployed, you can see the PLONK stack as it is deployed. We have Prometheus and Alertmanager, which are responsible for metrics and the scaling of functions. Then we have the NATS queue-worker for the queuing system and asynchronous invocation of functions, and the faas-idler for scale to zero. Finally, we have the gateway and basic-auth for accessing the functions. Again, since this is not a live coding session and we're just going through the commands that were executed, please feel free to put your questions into the Q&A section and I'll do my best to catch them as they come in.

So now we've got the faas-cli installed locally and OpenFaaS installed on our cluster. Let's create a new function to replace our existing Lambda. Using the faas-cli, we use the new command, give it a name — we'll call it signup — and tell it which language template to use, in this case Node version 12. There are several available; I'll come back to that in a few minutes. I'm also passing the prefix flag with my username to prefix the image that gets created, so I can push it to Docker Hub later. Now if we change into the function's named directory and list the files, we have a handler.js and a package.json, so it looks very much like a bare-bones Node.js application. And since we're migrating this AWS Lambda function, which is still connected to DynamoDB, we're going to need the AWS SDK. So here in this directory we can just run npm install as normal and get that dependency installed.
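Collected in one place, the setup just described might look something like this — a sketch, with the gateway IP and Docker Hub username as placeholders:

```sh
# install OpenFaaS via its upstream Helm chart, with a cloud load balancer
arkade install openfaas --load-balancer
# install the faas-cli locally
curl -sSL https://cli.openfaas.com | sudo sh

# point the CLI at the cluster and log in with the generated password
export OPENFAAS_URL=http://<EXTERNAL-IP>:8080
PASSWORD=$(kubectl get secret -n openfaas basic-auth \
  -o jsonpath="{.data.basic-auth-password}" | base64 --decode)
echo -n "$PASSWORD" | faas-cli login --username admin --password-stdin

# scaffold the replacement function and add the AWS SDK dependency
faas-cli new signup --lang node12 --prefix <docker-hub-user>
cd signup && npm install aws-sdk
```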
Behind the scenes, the node12 template uses Express.js for application routing. You don't need to be concerned with that inside the function, but there are some features available because Express is there behind the scenes.

While we move and migrate the code over, we need to make sure everything keeps working, so we use the faas-cli to build the function into a container image. We run faas-cli build, and it takes the function code in the handler.js and the package.json and bundles it with the template code that includes Express, as I mentioned; it brings those two things together. As you can see about a quarter of the way down, at step 1 of 29, it pulls in the OpenFaaS watchdog, which allows the functions to be invoked through events. At the end, it spits out a Docker image that's ready to run as a container. I also wanted to point out, right there in the middle, that unit tests, if they exist for your function, get executed as part of the build, so it protects you from mistakes you may have made while working on it. That's not available in all of the templates, but it is a nice feature of a lot of them. At the very end, you see the image is built with my username as the prefix and signup as the name.

So now we can use Docker to run the container that was built — no other dependencies, just the image — with port forwarding so we can invoke it locally. In this example, I've simply console.logged the context and the event the function receives, and I was using the Insomnia API client to invoke it so you can see the host and things like that. In the event, we have the body, headers, method, query, and request path; you'll note that the body is an actual JSON object, not a string as AWS likes to provide, though it's very similar in structure to the AWS Lambda event that comes in. The context, in turn, is an object that gets serialized into a response by the template code, and you have access to the headers and status codes separately. The cb, or callback, is what serializes the object you want to return as the body.
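To reproduce that local test, something like the following would do. The image tag is whatever faas-cli build produced (a placeholder username here), and the payload is illustrative:

```sh
# run the built image locally; the watchdog listens on port 8080
docker run --rm -p 8080:8080 <docker-hub-user>/signup:latest

# then, from another terminal, invoke it the way the sign-up page would
curl -i http://127.0.0.1:8080 \
  -X POST -H "Content-Type: application/json" \
  -d '{"name": "Test User", "email": "test@example.com"}'
```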
All right, so now that we have the function running and everything's ready to go, let's see what actually changed. If we had made all of these changes in place in a Git repository, this is what the git diff would look like: Lambda is on the left, OpenFaaS is on the right. At the top, we now require the fs module to access the file system and read the AWS secrets — I'll get back to that in a minute — and we add a new function called configureAWS that sets the DynamoDB variable; again, I'll show you that in just a few minutes. We no longer need to create the default response object, since that's now handled by the context directly, so we don't have this one object being modified as we go through the code. The event body then needs to be checked for its keys: since it's not a string anymore, we need to make sure a body was actually included in the request, just as a safety measure. And on lines 14 and 15 of the right-hand side, the status code is passed to the context and we call the fail extension. That automatically logs errors and, by default, returns a status code of 500 if you don't specify anything. We could have just written context.fail and been done with it; it would have returned an empty 500 to let the caller know this didn't work, without our having to pass anything additional.

If we scroll down a little further, the main parts of the function remain mostly intact. The change here is replacing the modification of the response object that Lambda created at the beginning with calls to extensions on the OpenFaaS context. At the top, on line 19, you see we no longer need the JSON.parse call, since the body is already a JSON object, so we can skip that potential point of failure right there. And this feels a little more natural for a Node developer, I think: calling extensions rather than setting a property and passing an object to a callback directly. Now we're just using the framework that's given to us.

Then, finally, here's that new function we were talking about earlier: configureAWS. Since we're no longer running within the Lambda environment, we need to provide the AWS SDK with credentials in order to access DynamoDB and whatever other services the function might be using. Lambda handles all of this automatically: when you create a Lambda function, you select your IAM permissions from a drop-down and so on. Now that we're running on OpenFaaS, which doesn't have that, we're going to use native Kubernetes secrets, and I'll show you those in just a moment. By default, the secrets are mounted at /var/openfaas/secrets/, followed by the name of the secret — that's why we needed the fs module earlier. So we read those secrets, set the AWS config, and pass the credentials in so the SDK can access DynamoDB with the same permissions the Lambda had — arguably a little more securely, since we're using Kubernetes secrets, which are well tested and secure. Then finally, in the sign-up page itself, we just change the URL from calling the Lambda function on the left to the OpenFaaS URL and function path on the right, with our new function name.
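Putting the pieces of that configureAWS helper together, a minimal sketch might look like this. The secret names, region, and return value are illustrative, not the talk's exact code:

```js
const fs = require('fs');
const AWS = require('aws-sdk');

// OpenFaaS mounts each Kubernetes secret as a file under this directory
const SECRETS_DIR = '/var/openfaas/secrets';

function configureAWS() {
    // read the credentials that the faas-cli / kubectl secrets provide
    const accessKeyId = fs.readFileSync(`${SECRETS_DIR}/aws-access-key`, 'utf8').trim();
    const secretAccessKey = fs.readFileSync(`${SECRETS_DIR}/aws-secret-key`, 'utf8').trim();

    AWS.config.update({
        region: 'us-east-1', // illustrative region
        credentials: new AWS.Credentials(accessKeyId, secretAccessKey),
    });

    // the same client the Lambda used, now configured explicitly
    return new AWS.DynamoDB.DocumentClient();
}
```

From here, the DynamoDB client can be used exactly as it was inside Lambda.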
Before we deploy the function, we need to make sure those secrets exist. We can use the faas-cli to do this with faas-cli secret create, giving it a name and passing in the value. Alternatively, if you're more comfortable with kubectl, you can create the secrets in just the same fashion; the faas-cli is just trying to make things easy. Now we can publish the container image — in this case to Docker Hub, but you could also pass a registry URL for a custom or private registry. Then we go to deploy the function, again using the faas-cli, and you'll see it failed at the bottom there. I wanted to show this real quick because it's a small but, I think, important feature: the faas-cli will not deploy a function that requires secrets which aren't yet available in the cluster being deployed to. It's just a little safety net to make sure you're not missing something, or possibly attempting to deploy a function in the wrong place.

So if we go back and change that unknown secret name to the proper name and deploy again, we get a 202 Accepted, along with the URL and path to invoke the function directly. Now that the function is out there and deployed, we might want to see what's happening and make sure it's working the way it's supposed to. To view the function's logs, we can use faas-cli logs with the name of the function — or again, if you're more comfortable with kubectl, we can use that just as well. And if you remember from earlier, the logs when we were running locally look exactly the same, just to show that everything works in the deployed or production environment exactly as it did in your local environment.

Now that our function is deployed, our web page is updated to point at the OpenFaaS URL. But that's ugly, right? It's an IP address. We could fix that by deploying a reverse proxy somewhere, setting up DNS, giving it a proper domain — all the things that are traditional for a web application — or we can use the OpenFaaS ingress-operator. It creates Ingress records for you, creates certificates with cert-manager and Let's Encrypt, supports several different ingress options for your cluster, and also works on ARM and Raspberry Pi. So again we use the arkade installer: first we install ingress-nginx, and again you'll see the Helm command there. Then we install cert-manager separately, with the Helm command that's being run shown for verification. Then we install the OpenFaaS ingress-operator; you can see at the top that we passed in the registration email and the domain as flags, which is a nice feature of arkade. It checks that our cluster is prepared with NGINX and cert-manager, and when it finishes, you'll see a handful of commands with notes about what they're for, including checking the certificate status, getting the ingress and certificate logs, and all of that.

All right, that was successful. Now that we have a proper domain, we can update our faas-cli credentials and associate the login with the actual domain address. And if we do a list, we see the signup function we deployed earlier. So, one more quick update to the website, changing the IP address to the nice domain URL for the gateway. The gateway URL changed, but we still have that path to the function: gateway, slash function, slash function name.

But if you remember, the CTO said they'd like to have all of the services together, and at this point the site is still being hosted as an AWS S3 static site — which is nice, don't get me wrong, but the CTO wanted everything together. And we can do that. Since OpenFaaS functions are nothing more than containers, why not have NGINX and the static assets as a function? There's an existing custom template from Alex Ellis, the creator of OpenFaaS, that is just that. It's still early; it exists mostly as a proof of concept, a work in progress. But you can use the faas-cli to download this template — or any custom template, for that matter — and create a new function from it. You see at the bottom there: faas-cli new signup-site, with the static site NGINX template as the language, and again passing my username as the prefix. And we have our function code created.
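In command form, that works out to something like the following. Both the template repository URL and the template's language name are placeholders here; use the ones from the template repo shown on the slide:

```sh
# pull the custom static-site template, then scaffold a function from it
faas-cli template pull <static-site-template-repo-url>
faas-cli new signup-site --lang <static-site-template-name> --prefix <docker-hub-user>
```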
So now, the first thing we do is copy the existing site's bundled and minified code from the dist directory into the new function's directory. Then we do the faas-cli build, and we can run it again just as we did before, with docker run and the port published so we can access it from outside. And you see the logs are just NGINX serving our static assets. So now we have a function that is our static site, which is really nice, but it still has the same path shape as the signup function: the gateway URL, slash function, slash function name. And that's no good to hand to customers as a website.

So we introduce FunctionIngress, which is ingress per function: you can directly access a function through an Ingress. Now, this is still in incubator status, so you'll need to clone the incubator repository for the ingress-operator and then run kubectl to apply those artifacts. You'll see in the logs that it sets up the ingress, and at the bottom it gives you a TLS route by default to access the function you've specified. This is the FunctionIngress YAML file that was applied on the previous slide: we define the custom resource with a domain name, then the name of the function to route to, and finally the certificate issuer to use — in this case, letsencrypt-prod.
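A sketch of that resource, with an illustrative domain and field names following the ingress-operator's FunctionIngress CRD:

```yaml
apiVersion: openfaas.com/v1alpha2
kind: FunctionIngress
metadata:
  name: signup-site
  namespace: openfaas
spec:
  domain: "signup.example.com"   # illustrative; use your real domain
  function: "signup-site"        # the deployed function to route to
  ingressType: "nginx"
  tls:
    enabled: true
    issuerRef:
      name: "letsencrypt-prod"   # the cert-manager issuer from the talk
      kind: "Issuer"
```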
And so now our site has a nice domain name. The site is hosted in our cluster, so it's part of the rest of the projects and applications, and you see the little lock in the address bar indicating the certificate has been verified. If we use kubectl to get the pods in the openfaas-fn (function) namespace, you can see that both the previous Lambda function and the new static site are there.

So, great, everything's deployed. Now what? People familiar with Kubernetes operations will feel comfortable with OpenFaaS, and developers and new hires will feel right at home as well. Operators can continue to use kubectl, if they're familiar with it, to manage functions and ingress with CRDs and pods and so on. Since Prometheus is part of the stack, metrics and monitoring are included, so your SREs are going to love you. It's Kubernetes-native: containers, CRDs as I mentioned, ingress operators. And developers can be productive almost right away, since the results are just standard containers. There are community-driven stores for functions and templates, but you could also build your own company-private template store or function store to make things easy, so you could do a faas-cli new with your company's sign-up template as the language.

The final step of this whole scenario — moving everything into open source and out of AWS entirely, if that was your direction — would be to migrate the data to a MongoDB instance. As I mentioned earlier, DynamoDB's document model is very close to MongoDB's, so it would be a pretty straightforward operation, but it goes a little beyond what I was trying to show here with the migration to OpenFaaS. If you did do that, you could then go back to your OpenFaaS function, uninstall the AWS SDK, remove the AWS secrets from the cluster, and switch to using mongojs or something similar to access the new data store. The AWS account could then be closed. The entire solution could then easily be moved between Kubernetes clusters, simply using arkade to install OpenFaaS and the faas-cli to deploy those functions into the new cluster. It'd be almost no effort to move to either Azure's or Google's hosted Kubernetes, as the CTO had mentioned earlier in our scenario.

So I hope you enjoyed going through this effort and can see it's not as big a challenge as it may have sounded at the beginning. We have a fully open source serverless solution. If you'd like to learn a little more about OpenFaaS, the official workshop is maintained by the team; it demonstrates all of the features of OpenFaaS, and it runs entirely on your own system using Docker Desktop, Docker for Mac, MicroK8s — whatever suits your fancy for local Kubernetes. We also have a large Slack community, so join, introduce yourself, and let them know you saw this presentation. Everyone's always happy to answer questions and give you help if you get stuck or encounter some sort of error, and to hear new ideas and projects as they come along. Finally, as was mentioned in yesterday's keynote, this is another one of those open source projects that is completely independently funded. It's not built by a large corporation; it's a bunch of volunteers working in their free time to make this a solution that works well for everybody. So we do appreciate your support; it keeps the project moving forward and adding new features as they come along.

So if there are any questions, the Q&A section is to the left of the screen; I'm happy to answer anything. Somebody asks: other than FaaS and OpenFaaS, what are good Google search phrases to learn more about this? I think the word "serverless" is good, although there is an actual company and framework by that name that uses AWS Lambda, so you might get conflated results with that company. But FaaS is a well-known term for functions-as-a-service, and OpenFaaS, of course, is the OpenFaaS project, so that would definitely be a good search term.

Somebody asks: does OpenFaaS do anything to solve the cold start problem? By default, all of the functions are always at the ready, sitting idle, and that's a configuration you can change: with a minimum of one replica, your functions are always "warm," as they call it. Beyond that, the templates, the languages, and all of that are highly tuned to make sure the cold start is as low as possible. Of course, it won't be zero unless you keep one around, but since that's just an idle container, you likely wouldn't notice much of a difference. We do put a lot of work into keeping that cold start time as low as possible. Some languages, like Java, we spent quite a bit of time struggling with because of the JVM start time, but I think we've gotten that, with a few additional libraries, down to a very reasonable level.

Someone else says: as a complete beginner, I need to learn a variety of stack layers before I can even start to learn OpenFaaS. I assume I should study VirtualBox, Docker, and Kubernetes — what else, and in what order? I don't think that's necessary. You don't need to understand all of the layers, per se, to get started; that understanding will come as you use it more and more often, and you can, as they say, dip your toes into these new concepts as you go. To get started, if you're running Windows or Mac, there's Docker Desktop; Ubuntu has MicroK8s; and there's also k3s and kind, which is Kubernetes in Docker.
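For instance, a minimal local setup — a hypothetical sketch, assuming Docker, kind, and arkade are already installed — might be:

```sh
kind create cluster                # local Kubernetes running in Docker
arkade install openfaas            # install OpenFaaS with sane defaults
arkade get faas-cli                # fetch the CLI

# forward the gateway so the CLI can reach it on localhost
kubectl port-forward -n openfaas svc/gateway 8080:8080 &
```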
So all of these things are great tools to just get you started with a Kubernetes cluster, and then using the arkade installer you can easily install OpenFaaS onto that cluster with almost no work on your part. From there you can step through the workshop and grow your understanding of all of the features in OpenFaaS. What OpenFaaS is trying to be, as was mentioned earlier, is developer-friendly: we're trying to make it easy to deploy workloads into Kubernetes. It can be as small as a function or as large as a full API if you need. As you saw in the log output from earlier, you can access a function with a path and everything else inside the function, so it could actually be as big as a full API in a single function, with the autoscaling and so on included.

Someone asks: do pods spin up per function, or are there pods waiting to do work? There are projects that scale Kubernetes clusters automatically, adding and removing nodes, and most of the hosted Kubernetes offerings also include some sort of cluster autoscaling. OpenFaaS itself does not do that; it scales out the functions. Depending on how you set it up and which configuration options you use, a function will scale when it receives so many requests per second, adding another replica of that function to distribute the load between the two. Behind the scenes, your Kubernetes cluster could then scale out and add nodes as they become necessary, and the new function replicas would be scheduled onto them. Then, when the load goes down, the functions scale back, and behind the scenes the cluster can scale those newly added nodes back in — again, depending on how you have it set up.

Next: how is your function accessing DynamoDB when that exists only on AWS? Is this some kind of hybrid cloud, local data center implementation? That's a good question; let me see if I can quickly go back to that slide. Because we've included the AWS SDK in our function, and we're passing the credentials here to AWS.config — updating the SDK's configuration with the region and the credentials — the DynamoDB client accesses that region with those credentials, and IAM then authorizes the connection to the database. So it's just the AWS SDK with those credentials, working over the public internet: this is actually deployed on DigitalOcean, and it's accessing the DynamoDB instance in AWS. It's a good question, though.

Another one: I see that it is more lightweight than Knative — could you compare a bit between OpenFaaS and Knative? Knative and OpenFaaS are very, very similar; they follow the same sort of paradigm of a function as a container, which makes it really easy to distribute across implementations. Knative is a really great open source serverless project that came out of Google, so there are a lot of similarities, and a lot of differences. As the questioner mentioned, OpenFaaS is a lot more lightweight. If you absolutely need all of the knobs and buttons to twist, Knative might be more the way to go, but as I understand it personally, Knative was built as a platform for other projects to build on top of, to abstract away the complexity that is Knative.
Whereas OpenFaaS is built on top of Kubernetes directly, and it's built with the developer and the operator in mind. So we intentionally leave some things out, but try to make it as easy as possible to get started and get using it. That's not to say it's lacking features; it's that some things, we decide, are too complex and don't really fit the OpenFaaS model.

Someone asks: what is your favorite thing about OpenFaaS? Nice. We're running out of time, but: I got started with the project because I was really curious about serverless, and I didn't want to set up an account with AWS and possibly run my credit card into the ground by spending too much money. So I went searching for how to do serverless locally, and I found OpenFaaS. I really came to love it because it's all open source; it all runs on my laptop if I want it to, and I can deploy it into a production environment and still have all of the great features of Kubernetes built in, while still being able to play around with it locally.

So it looks like our time is up, and I think I've answered all of the questions that came in. If not, I will be available in the Slack community, and I look forward to seeing you all there. Thank you.