I'll be talking about serverless computing, right? How we can manage serverless workloads, which is great because many of us here are, one way or another, involved in trying to enable developers to do stuff, right? And serverless is one of the great examples of that. Now, a quick note before I start: when I say serverless, I don't mean exclusively functions as a service, right? I don't mean Lambda only. I mean serverless: I don't think about servers. I don't know how they run. They just exist. Things just happen, right? So it's not only about lambdas. That's the important thing. Now, a quick introduction. This is me. This is my Q&A section, so ask me questions. I work at Upbound, but that doesn't really matter. And this doesn't matter either. What does matter is serverless computing, right? There are many ways you can describe it, but my way to describe serverless computing is a developer saying: here's my source code, run it, and don't ask me any questions, right? Or at least no unnecessary questions. This is the code I want to run. And I want it not only to run; I want it to scale up, and I want it to scale down, because we all want to be efficient. We don't want to waste resources for no good reason, and so on and so forth, right? So: run my code, don't ask me questions, scale up and down, and probably quite a few other things. Now, if you were to assemble a serverless solution yourself today, meaning not using ready-to-go services like Google Cloud Run or Azure Container Apps or AWS Lambda or whatever you're using, you're probably going to make some default choices. You're going to use Kubernetes for serverless, because you're at KubeCon. You're probably going to choose Knative, because it's an engine that scales your applications automatically, queues requests when it scales to zero, and does quite a few other things. If you're not familiar with it, let's start with this.
Feel free to interrupt me at any time if you're not familiar with something I'm talking about. Just let me know; you don't need to wait for the end of the talk. What else, right? You need a way to build your applications, to create those container images that will be run by Knative. You're probably going to pick something like Kaniko, maybe Shipwright, something like that that works in the Kubernetes cluster, takes your code, and builds container images. You're going to try to figure out how your applications can communicate with each other and with other things, like databases. You're probably going to pick Dapr; I think there was at least one session about Dapr earlier, and so on and so forth, right? Now, since I'm too lazy, I'm not going to assemble all those tools myself. What I'm going to use today is something called OpenFunction. Any of you familiar with OpenFunction? No? Only a few? Anyway, think of OpenFunction as a wrapper and some glue between the projects I mentioned and quite a few others. It collects best-of-breed tools that collectively provide serverless functionality. And here is my definition using OpenFunction. It might look scary, more than five lines of YAML, but it's actually relatively simple considering what it does. Among the important things there, I'm saying: OK, you see that image over there? I'm not saying "run this image"; I'm saying "build this image from my source code and then run it", right? Here are the image credentials, because you need to push that image to some registry, and you cannot do that without credentials. We will later need to figure out how we're going to get those credentials, but I'm getting there. Then there is the build section, where I'm saying: OK, for the build, use the Go builder, because this is using Buildpacks underneath. There are many different ways you can configure it.
And since my application is written in Go, I use the Go builder to build my container image. A couple of environment variables; those don't matter. What does matter here is: use this repository, that's where the source code of your application is, get it from there, and build the image. This is the branch: main. A few more things: I want to use Dapr to communicate with, let's say, a database, which is good; it simplifies everything. And nothing else really matters much. OK, maybe a port: 8080. The code of my application will be listening on port 8080, right? There are a few other things, but that's the gist of it. This thing is doing a lot, and those thirty-something lines of YAML should be relatively straightforward for anybody to define, no matter how experienced you are with those tools. It's relatively easy considering what it does. Now, that's kind of great, but then we have a couple of questions, because here's the secret: serverless still has servers, right? It's a very unfortunate name. So we need to figure out where that application will run. I need a place where it will run: a Kubernetes cluster, in my case. But it's not only about having a Kubernetes cluster. That cluster needs to have stuff installed. It needs to have Knative. It needs to have Shipwright, and so on and so forth. A bunch of things need to be configured. Some secrets: you remember that push secret? I need secrets so that I know how to push to a registry, how to pull from a registry, how to do this, how to do that, right? So it's not just having a cluster. It's about having, let's say, a serverless-ready cluster with everything that is required for that simple YAML to run somehow. And another thing that is important: there is no such thing as a stateless application, right? Such a thing does not exist.
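As a rough sketch of what a manifest like that looks like, here is a hedged example of an OpenFunction `Function`. The repository, registry, image, and secret names are placeholders I made up, and field names may differ between OpenFunction versions, so treat this as an illustration of the shape, not the exact demo manifest:

```yaml
apiVersion: core.openfunction.io/v1beta1
kind: Function
metadata:
  name: my-function            # placeholder name
  namespace: production
spec:
  version: "v1.0.0"
  image: ttl.sh/example/my-function:latest   # image to build AND push (placeholder)
  imageCredentials:
    name: push-secret          # Kubernetes secret with registry credentials
  port: 8080                   # the application listens on 8080
  build:
    builder: openfunction/builder-go:latest  # Buildpacks-based Go builder
    srcRepo:
      url: https://github.com/example/my-function  # placeholder repository
      revision: main           # branch to build from
  serving:
    runtime: knative           # serve the result through Knative
```

The point from the talk stands: compared to hand-wiring Shipwright, Knative, and Dapr yourself, thirty-ish lines like these cover build, push, run, and scale.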
A part of your application might be stateless, but state always exists somehow, somewhere. So we're probably going to need some form of database, or maybe a file system, right? So we need to figure out how I'm going to enable developers to do that too. How can they say: not only do I want to run this serverless application, I want it to connect to a database. And by the way, that database should be, let's say, Postgres or something. I want to make it easy for everybody. So, and I'm repeating myself, this is not using Google Cloud Run; this is us assembling a serverless solution. And for us to have a serverless solution, we need a way to run serverless applications, we need a place where we're going to run them, because there are servers, and we need some kind of database as a service, because the database is always there, one way or another, right? And to make it slightly more complicated, all of that needs to be somehow secure. At least a bit secure. Just a tiny bit of security. Anyway, by the way, I don't like speaking over slides; I like showing things. So this will be more a demo than a talk, but you can ask me anything. Let's start with: where are we going to run the application? And remember, I'm making everything as a service, right? I want everybody to be able to do whatever they need to do. So I have here a YAML file: my definition of a Kubernetes cluster. And I'm now a developer, right? Everybody can write this, because somebody else, some other me, defined this as a service as well. Essentially, by the way, I'm using Crossplane for this. If you're not familiar with Crossplane, then you should be ashamed and leave the room immediately. Kidding, don't leave. Be ashamed only. So what I'm saying here is that I created a new API, interfacing my Kubernetes cluster, in a CRD that is called a cluster claim.
Anybody can create a cluster claim. And with labels, that somebody can define where they want that cluster. In this case, I want it in AWS, and I want it to be EKS. I have implementations, we call them compositions, and it could be in Azure, or in Google, or whatever I, as the operator, the other me, chose to implement, right? Now, I need certain things in that cluster, or parameters of that cluster. I want it to be medium-sized, because I don't know AWS; I don't know what t2-something-something is. But I do know that it should be bigger than small and smaller than big. I want to start with three nodes. I want to have those namespaces in that cluster. And now comes the important part: I want to specify which applications should be automatically running in that cluster, without me doing anything additional. And I need OpenFunction; that's what I showed you before. And I will need External Secrets. Now, External Secrets is important because, in that newly created cluster, I need some credentials. You remember that registry, right? We have to push to it somehow. I need that secret to automagically be there. So I want the cluster to use the External Secrets Operator, or ESO. The short version is: get the registry credentials from a secrets manager, AWS Secrets Manager in this case; convert that into a Kubernetes secret in that cluster that does not yet exist; put it in this namespace; and make it type dockerconfigjson, because that's how you make a secret container-image-registry friendly. And that newly created cluster should get AWS credentials from the existing cluster, from the management cluster itself, right? Now, if you look at the slides, I don't have much time, but you will see here all the magic, all the YAML. There is a repository that I used to define all of that, everything that happens behind the scenes. It's thousands of lines of YAML. I don't want to depress you this early in KubeCon.
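The claim itself looks roughly like this. The API group, claim kind, and parameter names below are whatever the "other me" defined in the composition, so every identifier here is a made-up placeholder; only the pattern (labels select a composition, parameters stay provider-agnostic, apps get installed automatically) is the point:

```yaml
apiVersion: example.org/v1alpha1   # made-up API group defined by the operator
kind: ClusterClaim                 # the custom claim kind exposed to developers
metadata:
  name: cluster-01
  namespace: a-team
  labels:
    provider: aws                  # selects the EKS composition
    cluster: eks                   # could equally be aks or gke, if implemented
spec:
  parameters:
    nodeSize: medium               # abstracted away from t2-whatever
    minNodeCount: 3                # start with three nodes
    namespaces:
      - production
      - dev
    apps:
      openfunction:
        enabled: true              # install OpenFunction automatically
      externalSecrets:
        enabled: true              # install External Secrets Operator (ESO)
```

The developer only ever sees this interface; the thousands of lines of VPCs, subnets, and node groups live behind it in the composition.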
But you can check it afterwards, right? What does matter is that I defined what is behind that interface: all the subnets and VPCs and the cluster and this or that. So what I'm going to do, just a second. OK. So what I'm going to do is create the cluster: kubectl --namespace... Right now I'm looking at the management cluster. This is not where my application will be running; this is just a server from where we go and do stuff. I want to apply whatever is defined in this file, the file that I showed you. Boom. I injected it into the management cluster as a custom resource. And I can do something like crossplane beta trace on that cluster claim, with its name and namespace. There we go. And you can see that all the stuff that is required for that cluster, both the AWS resources and the applications in that cluster, all of that is happening right now. And it will eventually be done. Now, I don't have enough time; it takes approximately 20 minutes to create the AWS cluster itself. So I will skip this. Actually, I will delete what I just created. And the reason is very simple: I already created a cluster last night before I went out drinking. It's the same thing, but fully operational, except for OpenFunction, which is going insane for reasons I cannot explain. But imagine that that's not happening. My cluster is up and running. It's ready. It's waiting for me. Brilliant, right? And I can prove it to you. I can do something like export... Oh. What's missing? No, no, it's here. Over there, that's not my problem. I have no idea what's going on. It will eventually work; eventual consistency. It wouldn't be the first time I present without a screen. Hey, there we go. Brilliant. Thank you. OK. So I'm going to point my kubectl to the newly created cluster, the one that I created before I went drinking.
And if you take a look at the crossplane-system namespace, exactly, if you get the secrets, you will see that my AWS credentials were automatically copied. I can communicate with my secrets manager, which is great. If I say kubectl get clustersecretstores, this is External Secrets; it's already configured to communicate with my AWS Secrets Manager, which is also great. If I go to the production namespace, kubectl --namespace production get externalsecrets: external secrets were created. There are two of them; ignore the first one, that's a surprise for later. The push secret was created, and the secret was pulled from the secrets manager. So I can now push my container images. I'm ready. I'm ready to deploy that OpenFunction manifest to this cluster. Almost. Another thing I need is a database. There is always a database, somehow, somewhere. And I prepared another claim. It's still Crossplane, and it follows a similar pattern: hey, developers, whenever you want to create a database, you don't need to open Jira tickets, because I hate Jira. I believe that Jira should disappear from the planet. Create something called a SQL claim. It's easy. Define some parameters. Here, say which database you want: Postgres. Easy. Excellent. Define the version and the size. Which databases do you want inside of that PostgreSQL server? Secrets, this is important: this database will be managed from the management cluster, but I don't need the authentication there; I need it somewhere else, in that newly created cluster. So: move the secret from the management cluster over there. And finally, create the schema for my database. Everything is a simple manifest. By the way, for the schema I'm using the Atlas operator. A cool tool as well; I'm using many tools today. Anyway, if I go back to the management cluster and I do kubectl apply... I don't have time.
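A sketch of such a SQL claim, again with placeholder names. The exact kind and fields depend on the composition the operator defined, so treat every identifier as an assumption; the shape, though, mirrors what the talk describes: engine, version, size, databases, where the connection secret should land, and a schema applied by the Atlas operator:

```yaml
apiVersion: example.org/v1alpha1   # made-up API group defined by the operator
kind: SQLClaim                     # the custom claim kind for databases
metadata:
  name: my-db
  namespace: a-team
spec:
  parameters:
    engine: postgresql             # which database you want
    version: "14"                  # placeholder version
    size: small
    databases:
      - my-db                      # databases created inside that server
    secrets:
      cluster: cluster-01          # copy credentials into the new cluster,
      namespace: production        # not the management cluster
    schemas:                       # applied via the Atlas operator
      - database: my-db
        sql: |
          create table if not exists videos (
            id varchar(50) not null,
            title text,
            primary key (id)
          );
          create table if not exists comments (
            id serial,
            video_id varchar(50) not null,
            description text not null,
            primary key (id)
          );
```

That schema is what produces the `videos` and `comments` tables shown later with `\dt`.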
You know that I would show you how to apply it, and then tell you that I created the database last night as well. So I'm going to run crossplane beta trace on the SQL claim. Imagine that I applied that manifest right now; I actually did it last night. And you get something like this: all the AWS resources, all the secrets moved around, everything you need for the database, ready for you as a developer. You will never look at this if you're a developer, right? This is for ops people, just to know what's happening behind the scenes. Now, to prove to you that all of that happened, I'm going to export, what am I... PGUSER. There we go. I'm going to get the username and the password and the host from the secret that was generated after the database was created. And I'm going to kubectl run a psql client in a container inside my cluster, and execute... No, actually, you know what? I'm too lazy. I'm going to copy the command here. There we go. Look at that. I'm going to connect to the database server from this container. And if I do \l, you remember how I specified the schema and everything? The database I specified was created in that server. That's great. I didn't have to do anything. If I connect to that database inside that Postgres, there we go, and if I do \dt, you will see the tables: comments and videos, the two tables created, the tables that the schema specified. Everything just works. Very easy for developers, very easy if you want to shift left, whether you're a platform engineer or whatever you are. So, OK. Now I'm ready, right? I have a cluster that people will pretend doesn't exist, because I'm doing serverless. And I have a database that could have been created by anybody. And the only thing I need now is to deploy that function that I showed you at the very beginning: function.yaml. This is what you saw before. Very easy. The image doesn't exist yet, right?
I'm just giving it my code and nothing else. So I'm going to say kubectl... No, which cluster am I in now? My main production cluster? Probably not. Export KUBECONFIG, switch to the new cluster, just to be sure. OK. kubectl --namespace production apply, whatever is defined in function.yaml, right? Actually, you should get fired if you ever execute a command like this. You should literally be fired. If you were in my company, I would fire you, if they allowed me to fire people. You should be pushing it to Git, right? You should be doing GitOps: Argo CD, Flux, all that jazz. Don't apply directly to the cluster. This is for demo purposes only; don't do this at home. kubectl --namespace production get functions, right? You will see that it's now building. It's building the image, because it already fetched the source code from my Git repository. I'm a developer again now. It is building my image. It will take probably a couple more seconds, maybe a minute, something like that. Eventually, it will work, probably. And until that happens, you will need to enjoy an uncomfortable silence, because I don't know what to tell you while we're waiting. I'm just going to go bold and pretend that you're not here and repeat the same command over and over and over. I'm very skilled in Linux, so I could just watch it, but then I run into a problem: I cannot watch the screen and watch you at the same time, and I don't know what to say. OK, let's do this. I have a couple of commands prepared. I'm going to copy them. I'm going to define the URL of the application; it follows a certain pattern, cool. And I'm going to see now: is it working? Come on. This is very, very disappointing. Eventually, when it works, I should be able to execute this command. This command will go to my application, and the application will store something in the database.
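About that "certain pattern": by default, Knative exposes a service at `http://<name>.<namespace>.<domain>`. A tiny sketch of composing such a URL; the function and the example values are made up for illustration:

```python
def app_url(name: str, namespace: str, domain: str, scheme: str = "http") -> str:
    """Compose the default Knative service URL: <scheme>://<name>.<namespace>.<domain>."""
    return f"{scheme}://{name}.{namespace}.{domain}"

# Example with placeholder values:
print(app_url("my-function", "production", "example.com"))
# → http://my-function.production.example.com
```

Knowing the pattern is what lets you script the request without waiting to copy the URL from `kubectl get` output.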
And if it all works... By the way, what I'm not showing you, I'll show you later: that OpenFunction application, among other things, has Dapr. It already knows how to connect to the database, so as a user I don't need to think about secrets or any of those things. Let's see whether it works. It works. I just don't know how to design applications: if the output is empty, it means no errors. Don't get confused by that. What can I say? And there we go, right? I send another request to the application to retrieve all the data from the database, and you can see that it retrieved what I put there. So it works. And I did very little. I mean, I showed you a lot. What makes you think that I'm finished? I'm kidding. No, you can go; that's fine. So let's see something more interesting. In this namespace, get pods: how many pods of this application are running right now? One, right? So what I'm going to do is this: echo... I forgot, what's the app URL? What is the URL of the application? This one. I'm going to take this URL, and I'm going to go to Ddosify. I don't know if any of you use Ddosify. It's a performance testing tool. Very cool; you should be using it if you're not. I'm going to create a new performance test. I'm going to call it, is this big enough? Do you see it? Yes, you do. Test one. I'm going to send requests to this address. And I'm going to say that I don't want your help. How many? 100,000. Going bold here. I'm going to send requests from everywhere in the world, mostly because they gave me limited credit, so I don't need to pay for it myself. And I'm going to start the... no, I need to give it a name: test one. There we go. And I'm going to start the test, right? And you will see that it is now sending requests to my application. This is a very poorly written application; it will ultimately fail for sure. If it doesn't, I will be pleasantly surprised.
Anyway, it's sending requests. And the reason why I'm doing all of that: I want to show you the pods. You remember, there was one pod of the application. Now there are many pods of the application, right? It scales up and down. Now, the fact that it's crashing, that's my code; that's not the fault of Knative or OpenFunction, right? But it scales up automatically, and eventually it will scale down when the requests stop coming. It just goes up and down. And what else can I show you? Yeah, those are the pods, cool. I can show you the code of the application, because it's very, very complex. No, it's not, really. But what I do want to show you is that the code of my application is using Dapr. So it just says: I don't know where the database is and stuff like that; just connect to it, wherever it is, because the secret was already mounted when I created the cluster and all that, right? And now I'm finished. I don't know if I confused you or helped you in any form or way, probably the former. But anyway, for whatever time is left, you can ask any questions. I'm probably supposed to tell you that I work at Upbound. I'm giving away books tomorrow; if you come to the happy hour, you get the book for free. I have a YouTube channel. That's it. I was probably supposed to tell you also something about Upbound, but I forgot. Anyway, questions? Somebody? Nobody? Anybody? Close the doors, don't let them out. Any questions? Who? You? Go. There is the application; for sure, I think there are two containers, if I'm not mistaken: the application and Dapr. Dapr attaches itself, and when the application sends a request to the database, it actually goes to the sidecar, and then Dapr figures out whatever that is. Kind of magic-type stuff. Yes, please. Oh, when a question starts with "so", it never ends well. I'm kidding. Go. Lambda. Yeah, yeah, yeah. So there are reasons.
So the question, just in case, for the others: why did I do it myself instead of using a service like Lambda? My first option is always to use something as a service. So if you can afford it, don't use Lambda; use Google Cloud Run or... no, I'm kidding. Lambda is fine. If you can afford something as a service, like I'm using database as a service, always do it. Now, there are some reasons why you might want to set it up yourself: you have more control over what's happening, it tends to be cheaper, and it tends to integrate better with what you have inside your cluster, because not everything might be serverless, and so on and so forth. But today I'm not trying to judge self-managed versus a service; I'm just showing that you can do it as a self-managed option as well, for whatever reasons, especially if you're not in the cloud. But if you're not in the cloud, go to the cloud. Yes, definitely. In a private cloud, you're definitely not going to use public Lambda. Yes. Anybody else? Yes, I see a hand there. I'm blinded by the light, so it's not that I'm ignoring you. So the question is, and correct me if I'm wrong, what are the limits of scalability and so on of Knative? Basically, you can configure it yourself, to begin with. By default, for example, it creates a replica for every 100 concurrent requests; not immediately, but over a certain period of time. You can configure that, and you can configure quite a few other things. You can specify things like: never go below this number, or never go above that. So you can say, 50 replicas is my maximum, don't go over that. Knative itself is highly configurable. And then the limits are also on the hardware side, but your cluster is probably less of a limit, because your cluster is probably auto-scaling, at least if you're using clusters as a service. And that's important, right? Because unlike functions as a service, this is containers as a service.
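The knobs mentioned in that answer map to per-revision annotations on the Knative Service template. Here's a hedged sketch with placeholder names and an assumed image; the annotation keys are Knative's, but verify the defaults against your Knative version:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-function            # placeholder name
  namespace: production
spec:
  template:
    metadata:
      annotations:
        # target ~100 in-flight requests per replica (Knative's default)
        autoscaling.knative.dev/target: "100"
        # never go below this number of replicas
        autoscaling.knative.dev/min-scale: "1"
        # never go above this number: "50 replicas is my maximum"
        autoscaling.knative.dev/max-scale: "50"
    spec:
      containers:
        - image: ghcr.io/example/my-function:latest   # placeholder image
```

Cluster-wide defaults for the same knobs live in the `config-autoscaler` ConfigMap in the `knative-serving` namespace.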
So even if you need to wait a minute until a new node is created for additional capacity, your existing pods can probably still handle that additional load for a while, so it should be quite fine. Anybody else? No? I'm not allowed to speak anymore. Thank you so much.