All right, so quick question, developers, how many developers do we have in the room? Okay, that's a good 50%, maybe 60, yeah? And operations people, like, you know, upgrading clusters and all that stuff? Good, good, good. How many Knative users do we have in the room? Okay, okay, interesting. Who does nothing? Who is on holidays here, yes? Let's go hang out after. Yeah, there you go. All right, so we will be doing some Knative project updates and then I will be doing some demos, like our own functions. So if you're a developer, maybe that's your thing. We are looking for more developers. So if you're a developer in the room and you want to get involved with different languages, that would be great, right? Yep, so that's one of the things that we're doing. Yeah, are we ready? Not yet, I think we need to wait. This is very on time, like, yeah. Can't start early and leave early. No, because there is the live stream. Oh, I see. So, yeah. Unfortunately, as you can see, the two of us are here, but look, he's not here. He's based in Japan. He's pretty much the mastermind behind the functions and stuff that I will be showing. So, yeah, if you can go to the Knative Slack and give some appreciation there, that will be highly appreciated. Yeah, CNCF Slack. CNCF Slack, there you go. Ah, well. And thank you for coming. If you are not a Knative user and you want to learn more about Knative, I think this is the right place. Yeah. We'll be showing kind of how it works. And if you're really close with your neighbor that you're sitting with, you could stack on top of each other and let some more people in. That's a really interesting suggestion, I would say, yeah. Unfortunately, we cannot auto-scale the room, right? Ah, yeah. That's a sad thing. We'll scale it to zero after the talk. Afterward, yeah, there you go. Not the thing for rooms.
Let me put a timer. How many here, is this your first KubeCon ever? Oh, wow. Great. That's really good. So, if you haven't been in a maintainer session before, this is a very specific kind of session around projects. We basically try to provide an update of where the project is, what we are up to, and what's coming, and also try to show and highlight the latest features and stuff that has been released already, and the maturity process inside the CNCF as well. Yeah, we've done other talks at other KubeCons, like Intro to Knative and things like that. We'll do a high-level thing, but ultimately, go back and you can find resources on the CNCF YouTube channel. There you go. And we have worked pretty hard on the getting started guides, so it should be pretty easy to get started with Knative. If you check the site, you can get started there. That's good. One more minute and we will get started. And it's funny, because we're waiting, but then the time will go so fast and we will probably not be able to do everything that we wanted to do. But it is what it is, until the next one. All right. I'm looking at the stream. Should we start? Yes. Let's do it. Hey everyone, welcome to the Knative Functions deep dive. Let's jump in. Yeah. So my name is Mauricio Salatino. I have been working with the Knative team for almost two years. Big fan of the project since it was announced. I work for a company called Diagrid; we work on Dapr, the project that I will be showing a little bit here with Knative. And I'm also on the social networks if you want to reach out. I do a lot of open source work, I have a lot of friends in the open source community, and it's a pleasure for me to be here presenting about Knative with Dave. Another legend. Look at that picture. Yes, that is me. I was much younger, and that's why I have... it's not a garbage bag, it's a poncho. Hey, my name's Dave.
I've been a Knative contributor for many years now. I am the Serving lead and I'm on the Technical Oversight Committee. Currently I'm at Broadcom, which previously was VMware; that's why I have Knative, VMware, Broadcom up here, because I don't know who knows what. I also added my social stuff there, so feel free to add me, or just harass me if you care to. And that's Luke. He's working as a principal engineer for Red Hat. He's based in Tokyo, Japan, very, very far away. We wanted him to be here, but he couldn't make it this time. So we send a lot of appreciation to him, because he's kind of the mastermind behind the functions project. Yeah, we didn't want to delete the slide. Yeah. So, quick agenda, just so you know: we're going to do a quick Knative overview, then some quick project updates. You'll see that Knative has a bunch of sub-projects, functions being one of them, but there are a few others. Then a functions overview, the demo, and if we have time, I'll cover how functions pair well with the Serving sub-project that we have. So to start, Knative was founded in 2018. It's been incubating in the CNCF since 2022. The way we frame it is that it's a bunch of open source building blocks you can use to build serverless things, and you'll see what that really means when I dig down into it. I highlighted, I think, the main sub-components here, and the way to think about it is that they can work independently, or they can work together. If you use essentially everything together, then what you get out of the box is essentially a function-as-a-service running on Kubernetes that can scale to zero. So Serving will scale your workloads based on traffic.
Eventing does the connecting of sources to those running containers, and it doesn't necessarily have to work with Knative Serving; you can target and ship events to Kubernetes Deployments, Kubernetes Services, or any other kind of workload that is addressable. The client is the CLI for all the function work, to create and update those things. And functions is essentially what we're highlighting here. This is a programming model that's not opinionated about the framework you use when you develop these functions, but we do try to make it easier to deploy, build, and test these things locally. Quick project updates. The user experience group is something that's been kicked up again. They are looking for user interviews. So, even if you're a first time user, I highly recommend you go to this URL and sign up. It would be great to see the roadblocks people hit when they read our docs, go through our tutorials, and things like that. So can people take a screenshot of this slide? Yeah, that would be super, super appreciated too. We're really trying to make the onboarding super smooth and straightforward. And it's kind of funny... there you go. As part of that, one fun thing they did was design a new mascot. So this is our new mascot, Quack. There you go. We don't have stickers for it yet. Next time we will have stickers, that's for sure. Yes. And if you contribute, you get stickers too. And maybe a T-shirt. We're trying to figure that out. Yeah, and the user working group is also trying to source contributors to help with the initiatives they have. Good, good, good. The client update. The client, like I mentioned, is the CLI for Knative and the function stuff. One thing they did in the last quarter was a 48-hour functions hackathon.
So the structure of that was: we set up a whole bunch of issues that were tailored to new users to contribute, and then there was a cool presentation that everyone did afterwards. And it was all distributed and remote. So if you're interested in getting involved in Knative, I think events like this that we hold are a good way to segue into the community. And we purposely do these things so that you have support, where it's not just some random issue you pick up, but it's actually like, hey, we'll have Zoom sessions, and we can do mentorship that way. For Eventing, like I mentioned, this is what connects event sources, does enrichment, and then ships to your Knative Serving workloads or regular Kubernetes workloads. One thing they're adding is TLS encryption between what is brokering the events, figuring out where to send them, and the actual workload. And they're really working on event discovery; they're trying to do this event catalog. That's always been an issue in eventing: discovering what events exist and what attributes you can route things on. And one thing I'm really happy about is we have this RabbitMQ integration. Before, it was maintained by some people on my team at VMware; now it's going to be maintained by the actual team that works on RabbitMQ, which is awesome. Do we have any RabbitMQ users here? Okay, so we have... oh, okay. Wow, a lot. There you go. I will let them know. So if you go to the repo, it'll say deprecated, but we're onboarding them right now, which is awesome. You were also involved with the initial writing. Just for people to know, right? That integration allows you to exchange events across different systems without actually using the RabbitMQ client.
So you are just emitting events to HTTP endpoints, to brokers, and then the routing happens on the Knative Eventing layer. Yeah, apologies that this isn't an intro talk that explains all this. Rabbit isn't the only solution there; Kafka is the other one, and then I think there might be another bus. Okay, yeah. Yeah, good. I think Kafka and Rabbit are the main ones. For the security working group: as part of being in the CNCF, they do a security audit. We've done that, and we've addressed, I think, at least two issues. Yeah. There's a blog post that details it. Security is also one of the things we could use contributors for, but I'm not an expert on that. Our security expert is actually giving an intro talk, but he's outside. And for steering: if folks aren't familiar, Knative actually has a governance structure. We have a Technical Oversight Committee, we have working groups, those working groups have leads, and Steering is responsible for the success of the project overall. They were essentially the ones who helped get Knative into the CNCF and into the incubation stage. One thing we're doing now: they submitted the proposal for us to become a graduated project, so that's on its way. Yeah. Cool, so. Should we get going? Yeah. Okay, let's do... no, it's the painful part, right? All right, so. Yeah, I'm handing it off to Mauricio; I did the easy part. No, no, no, you need to help me here. You need to help me. Okay, so what I wanted to do... I don't know, is that big enough for the back? Yes. Okay, I can see people saying yes. So what I wanted to show is a little bit of the functions work that has been going on lately in the project. There has been a lot of work into trying to improve the experience for function developers.
And Knative Functions is basically this CLI tool that helps you create functions in different languages and deploy them into running clusters. The idea of going from creating a function to a function running in a cluster is the main idea behind this CLI. What I wanted to show you here is a little bit of the experience that we have today, and the new changes and new approaches. For that, we will be creating some functions, in Go in this case. And that's why I was saying before that we need some developers: we created this experience optimized for Go for now. It can be implemented in many other languages and many other frameworks, but we need help from the community, from those languages, to make sure that the integration actually feels natural with the tools that you are already using. The whole idea, again, is that I will be creating functions and we will see the abstraction layer that we have built to make this experience a little bit better, right? I need to run a couple of commands; I will just copy them. What I'm doing here is creating a function inside a directory, and I'm doing that by running this command, func init -l, for language, and I'm using the go template here. Yeah, and as Mauricio was saying, out of the box we have Node, Java, Go. So if you are an expert in some language we don't support... like, I've been wanting to learn OCaml, but I don't know anything about that. It would be cool to have a function template for that. So if you're an expert, that would be awesome. Yeah, 100%. So that's how you create a function; basically, that's how you initialize it. And you can see here in the editor that I now have a bonjour directory and it has a bunch of files, including a func.yaml file.
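For reference, the func.yaml manifest the CLI generates is a small file that tracks the function's name, language, and image. A sketch of roughly what it contains — the field values here are illustrative, and the exact schema varies by CLI version, so check your generated file:

```yaml
# Illustrative sketch of a generated func.yaml; exact fields vary by version.
specVersion: 0.36.0    # schema version written by the CLI (illustrative)
name: bonjour          # function name, derived from the directory
runtime: go            # the language template chosen with `func init -l`
registry: ""           # image registry, typically set on first `func deploy`
image: ""              # fully qualified image name, filled in by builds
```

The CLI reads and updates this file on build and deploy, so you rarely need to edit it by hand beyond setting the registry.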
And then by default, we create a simple function inside it with this handle method, which is basically receiving HTTP requests, right? But there is no HTTP server or anything here; that's what's being provided by the function framework. I will delete this basic implementation — in this example it's basically just echoing the request that we are sending — so we can create a function from scratch. And I will do that by running this command here. Yeah, and one thing to add: when you saw the handler, we don't have any external dependencies on any sort of Knative library tooling. What does that mean? There are a lot of function frameworks out there that require you to rewrite your app to be a function using their proprietary libraries. We try to keep it straightforward: in this example, we're using just the standard library, Go's http.ResponseWriter and Request, so it looks like a plain HTTP handler. The other bit is, we are a little bit opinionated in our eventing module, where we use CloudEvents. That's a standard that's in the CNCF, another project. Really, all it is is just a payload with some specific headers; what those headers let you do is route traffic based on the content, essentially. So in addition to the go template that Mauricio used, you can also specify a go CloudEvents template, and then it'll generate a special handler if you're doing CloudEvents. If you're doing CloudEvents, exactly. So what I'm doing here is I've just created a new file here, bonjour.go, and what I'm copying here is the function code again, a very, very simple function that just prints to the output, right? I need to do some imports here. I've initialized the go module here, bonjour, and you will see that there is nothing much more in the function besides that handle method, right?
Yeah, so he's got some extra stuff there. Yeah, exactly. So I can highlight that: if you were to get rid of this alive and ready stuff, all you really need is that handle method. What the alive and ready stuff does is hook into — if you're familiar with Kubernetes — pod readiness and pod liveness. So, again, it's similar: if these functions are present, we will wire them up to those lifecycle hooks. Yeah, but the handle function is pretty simple. It just receives an HTTP writer and a request, and then you can do whatever you want in there. And as Dave mentioned, there is no dependency here on the functions tooling at all. It's just a simple Go program, right? And what that basically allows me to do is run the function locally, using the host builder. We can just run it locally without the need for containers, and I think this has been one of the main changes that we are making now. So I can do something like func run, and let's see if this runs. You can see that the function started on port 8080, so I can basically do HTTP localhost 8080. And there you go: that's the function running locally, without any container. But we all know that if we want to run this in Kubernetes, we need to create a container and deploy it into a cluster, creating some manifests to configure how this function will run inside the cluster, right? With functions, we can just stop this and run a single command, func deploy, to deploy this into a Kubernetes cluster. I didn't mention it, but I have a cluster already, of course — a kind cluster — and I have no pods running in there, right? So if I do func deploy, and if everything works as expected, we should have a function deployed after running that command. So one thing to add: if you look at the details here, Mauricio mentioned he's using the host builder. What does that really mean?
It means he's using the local toolchain on your machine to build. If you need something more hermetic, then you have additional options. Originally there were Buildpacks, which will build locally using, I think, the pack CLI. And Red Hat has Source-to-Image tooling, which is very similar. So in addition to having no opinions about the handle method, we have optionality in which builder you use. Yeah, and it's kind of interesting: just by default, you get all the architectures for that function built already and published — to my local registry in this case. This has been published into that registry; you can configure your own registry, or Docker Hub, or the GitHub Container Registry too. And then, at the end, you have that URL where you can just call it and get back the function payload, right? That's pretty straightforward: two commands, and you created a simple Go file. You got a soft clap over there. I was clapping on his wrist. It's a soft clap, because we need to hold our applause. We will do more stuff. That's a hello world, of course, so that's not even impressive at all. One thing to notice here — and I think this is really important, at least for me as a developer — what is that HTTPS thingy there? I'm usually starting things locally on port 8080, plain HTTP, but here Knative is already providing me certificates and an HTTPS connection out of the box for the function. And again, for me as a developer, this is great. Yeah, it's kind of funny: when we did the survey of who's new to KubeCon and who's new to Knative, in hindsight, we should have done an intro to Knative. Yes, exactly. But I guess this shows, interestingly enough, that we are good.
Like, the thing to add: the one thing that Serving does — not only does it scale to zero — is allow the operator to configure a whole bunch of stuff, and that separates that responsibility from developers. One example is network programming. If you set up Istio, or Contour, or something else, are you making your developers write Istio resources, Contour resources? Exactly. That's something Knative Serving abstracts away. Likewise, we do the same with certificate management. If you install cert-manager, Knative can do the programming of cert-manager, and the operator just has to configure a few things, like the domain and that kind of stuff. And then when you do the deployment — like, for example, Mauricio just did with the function — it provisioned the certificate and wired up all that network programming. Exactly. That's why, like I mentioned earlier, these things can work independently. You can technically not deploy a function and bring your own container, like Nginx or something. But when you do use a function, we've integrated everything well together. Yeah, exactly. And as you can see, this is all about Knative Serving autoscaling, right? The function has been downscaled to zero; I don't have any more pods running in my cluster after calling the function and not calling it again for a period of time. And I wanted to show one more thing, so I will just call the function again, so it will scale up again. And now, of course, the autoscaler is kicking in, creating a new instance of that function, which is already running. And now I can do, for example, logs, just to make sure that we see the lifecycle endpoints that we have now, right? So this is Kubernetes checking with the function whether it's ready and whether it's alive. And I think it's important to mention again that these hook points into the function lifecycle are important, because we are building more and more on them, right?
Now we have hook points for when the function starts, so you can execute some code — maybe connecting to infrastructure, maybe loading some data into the function's memory — so it can actually be used whenever the user calls this handle method from outside. Yeah, and if you're not familiar with this stuff, especially readiness: you don't want your container receiving traffic before it can actually process it. Exactly. Otherwise you'll drop it, and this really happens, especially when people update a deployment, right? So I would definitely consider that a best practice for Kubernetes. Yeah, exactly. Again, no dependencies on the function framework here at all. And that basically means we can start doing things with these functions that are a little bit more interesting, right? Things that you really want to do with functions — maybe you want to call another function. So let's copy some more text here from my file, because I can live code, but writing all of this by hand live, I would just mess it up for sure. So what we are doing here is just calling another function, using the standard HTTP client from Go. And what we are calling here, using, again, normal Go libraries — the net/http package — is another function that's called croissants, because we are in Paris, so we need some croissants in the session. I've had too many, honestly. You have too many? In the morning I eat at least two or something. I cannot have too many croissants. So again, it's just a client calling another function. As you can see, I don't have any pods, but there is a function there in the cluster already — get the Knative services, and you can see that there is an endpoint there waiting for requests. So if I do func deploy here, what we can see is that, okay, it will build the function again, because we made some changes.
It understands that we need to build again and then just publish the next version of the function, which we can call again. So if I do HTTPS and I call the function — there you go, we get some croissants, right? And this is basically chaining... too many croissants there. Yeah, I can see that. Well, the main idea here is that I call the function that was just started, and this one is calling another function that needs to start, and it actually works fine. Of course, now that we have two instances, it's going to be faster, right? We don't need to wait for the cold start of the functions, but it's kind of chaining the things. And when you can start doing these things, the next thing that I wanted to show you is that you can start adding other frameworks to do common stuff. Maybe you want to store croissants in a database, because that's your thing. I haven't done it before in my life, but maybe we can just try that out. What do you think? How do you plan on doing that? Well, there are different options, right? This is a very good question. There are different options: I can come here and add my Redis client or my PostgreSQL client and just connect to it, or I can use some other CNCF project that allows developers to do the same by just calling APIs. We know that we can call REST endpoints already, right? So what I'm going to show now is the Dapr project, which is another project that I'm working on, which allows us to do that: we just call endpoints, and then we interact with infrastructure without pushing and adding dependencies into the function runtime here. So let's try to do that. It's not as complicated as it sounds. I will just install a Helm chart that gives me the Dapr control plane in my cluster.
Dapr works by injecting sidecars into my applications — in this case my functions — and we have built this integration between Knative Functions and Dapr so that, by default, Knative Functions are Dapr-enabled. That basically means that if Dapr is installed in the cluster, the function will have access to the Dapr APIs, and I will show you that in a second. The main idea here is that Dapr exposes, you know, HTTP or gRPC endpoints, so you are free to choose which transport you want to use. But because I've just shown you how to get croissants using HTTP, we will keep using HTTP for now — of course, you can use gRPC if you want to. Yeah, the other thing to add: so Knative Serving, like I mentioned, lets you bring your own container, but I really should have said bring your own containers. So in theory you can bring your own sidecar — not necessarily Dapr, but something else. Any other sidecar, yeah. As an example, a lot of people use a sidecar to pull in some extra information. If you do AI stuff, you can pull your model in as a sidecar to get cacheability. Exactly. And then you reach out to that helper. Exactly. The thing that I want to do now is install a Redis instance in my cluster, and configure Dapr — the Dapr sidecar — to be able to connect to that Redis instance. And I'm doing that with, basically, three files that are here. The first one, of course, is a Redis service: this is just a normal Kubernetes Service for Redis, and then I'm creating a Deployment for the redis-stack image. That will give me a Redis instance, but again, I don't want my application, my function, to connect to that Redis instance directly by adding the Redis client — which you can do, of course.
I want to use the Dapr abstraction, so I can call an HTTP endpoint from my function to store some data into Redis — or some croissants, I should say. So in this case I have one Dapr component that's called statestore, and this is the Redis implementation. You can swap these for other state stores, like PostgreSQL or managed services. And this is where my Redis instance is running: in the cluster, the Redis service, and the default password that Redis creates. So let's apply that. Again, that's Dapr; if you're interested, just check it out. What this is doing is basically giving my functions the possibility to access this infrastructure by just doing HTTP requests, and that's what I'm going to do next. I'm going to go to my function. Yeah, I was just going to add: like I mentioned before, because we don't want to be opinionated with our function handlers and libraries, this essentially gives you portability, right? Between your data stores. Exactly, and we want to demonstrate that you can do whatever you want inside your functions, right? But one common thing to do is to store data from functions and then retrieve it — just to store some state. And again, I just wanted to make sure that I can connect to my local instance, to show you that this was just installed. Let me see... there you go. Oh, I've never seen this dashboard. Yes, it's really nice, to be honest; I'm happy. I'm not sure about the license change that they announced yesterday, but let's not go there. No politics here, please. Yeah, let's not go there. So okay, what I'm going to do now is add to my Bonjour function the possibility to store data into Redis — which is something that you probably want to do; not Redis necessarily, but storing data somewhere. And for that, I'm just defining some variables for where the Dapr APIs are, and the name of the component that I'm using.
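The state-store component described above is a standard Dapr Component resource; a sketch of roughly what it looks like. The service name and password handling here are assumptions based on the demo's description (an in-cluster Redis service with Redis's default setup), so adjust for your own cluster:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore            # the name functions use in the state API path
spec:
  type: state.redis           # swap for state.postgresql etc. for other stores
  version: v1
  metadata:
    - name: redisHost
      value: redis-service.default.svc.cluster.local:6379  # illustrative service name
    - name: redisPassword
      value: ""                # set to your Redis password if one is configured
```

Swapping the backing store is a matter of changing `spec.type` and its metadata; the function's HTTP calls to the sidecar stay identical.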
And then I'm just doing an HTTP request to those APIs. Let me show you this, which I think is important, just to highlight the note Dave made about having the sidecars alongside our functions, right? So I think that should be it. Do I need to import these too? Yes. So what I'm doing here is another HTTP request, to localhost in this case, because, again, this is a sidecar, so it runs in the same namespace as my function — the same network as my function. So: localhost, the port for Dapr, and then I'm doing a POST request to /state/ plus the name of the state store that I'm using, and I'm sending a JSON payload here, right? It's like a key-value store. So I'm sending Paris as the key, and my croissants as the value. And let's see if that actually works. So I will just save the Bonjour function and then func deploy. And then, you know, this is where the demo can just go super, super wrong. We're just doing an HTTP request; it shouldn't be that hard. Overloading it with croissants. Yeah, maybe too many croissants for my machine, but let's see. All right, so after the function gets built, I should have a URL again for the function, and I should be able to call it and see if it's working or not. There you go. So I have the function, and I have the new instance there running. The old instance was probably downscaled already, so I will just call it again. And of course we have the same message, but now this state is going to be stored in my Redis instance. Let's refresh here. Where's the refresh button? Yes, there you go. So we have the state store, and I don't know if you can see that, but we have some croissants — some croissants between quotes there. I don't know why the quotes; probably something silly that I messed up, but it's good. Now we have the croissants inside the database. I'm surprised it actually renders.
It renders the emojis, yeah. Funny, funny business. Hey, I've seen this for the first time too. But okay, just to finish the demo and highlight this, I also wanted to show that you can add libraries to your functions — because, again, you might want to retrieve these croissants to eat them at some point, probably after the session. So what I'm going to do here is create a second function, a second function that's called Au Revoir, of course, and I will just run all these things: creating a directory, creating a function, removing the default handler, and we will create a new one. So the scaffolding is there for your benefit, not for the demo's benefit. Yeah, exactly. So there you go. I think that's pretty much it. I'm inside the new function now, and I can go here into my VS Code and see it. Yes — I don't have any Go files, so I will just create a new one, and this is where I suffer typing... yes, my French is not good enough, I would say. So, the next thing: again, a basic function. You can see here I am not defining the liveness and readiness hooks, but that's fine. Yeah, they're optional. They are optional. I will just deploy the first version, to make sure that I have a function that's okay, and then I will add a library and see if I can retrieve the croissants back from my Redis state store. Mind going back to the code for a minute? Yes, let's go here. Yeah, I just want to highlight: one thing I haven't covered yet is there's this new method. Yes, that's important. In theory, you can get rid of the struct and just have a handle method. The reason why we added the optionality to have this instance is so that you're not necessarily using global state.
So if you have a long initialization time, for example if you have a Handle method and you need to go and fetch something when your function starts, that's essentially why we added this New method. So it's part of the function lifecycle. Yeah, exactly, which is pretty important when you're building tons of functions that do a lot of initialization. As an example, you could initialize your sidecar client, a Dapr client, or a Redis client in that New method, because I think when you create the client, it pings the server. Yeah, exactly. So you can avoid doing that on every request. Good point there. Let's call this new version, and then let's add a library to get the croissants out of the Redis state store. Okay, that function is working. And then, just to finish this off, let me show you this. Again, this can be any library you want; there is no restriction, from the Knative Functions framework point of view, on what you include. What I'm doing here is using the Dapr Go SDK, which is a simplified way of consuming the Dapr APIs. So instead of doing that HTTP request with all the boilerplate to configure, I can just use the SDK to do the same thing. And it's a little bit more performant, because it uses gRPC behind the covers. And it saves you from dealing with details you might not want to worry about when interacting with the Dapr APIs: all the headers and everything you need to set up. So the only thing I'm doing here is GetState, which goes and retrieves state from the state store I'm referencing, and I'm just defining the key I want to pick up, right? So let me deploy this again, func deploy, with one more thing we want to add there.
Of course, I need to do go mod tidy to fetch the dependencies, because I'm adding a new library to my Go function. And when the library is there, everything is fine: all the architectures get built and pushed to the registry, and a new version of the function is up and running. So when I call this, we should be able to get our croissants back. And that's pretty much the demo, I think. What do you think? Awesome. It was really tough having to commentate while you did all the work. So now you can finish up with all the slides. Good, good. Thank you, folks. So yeah, there are still some slides I can run through in the last five minutes. We highlighted the function lifecycle: liveness, readiness. We have the unopinionated function runtime, which means you're not pulling in any third-party dependencies in order to run the function. We used the host builder, but there are also Buildpacks and source-to-image, and build and deploy are independent steps. So if you want to take your function but not run it on Knative Serving, that's possible; you can deploy it anywhere you want. Yeah, I didn't show that, but you can do func build to just create the container, and then maybe write your own manifests to deploy it to a cluster, right? It's completely independent. Here I was showing the entire flow, but behind the covers we are creating all the manifests to deploy the function to the cluster. Yeah, and I'm going to quickly run through some Serving slides. So we covered all this: you have a container behind a URL, it auto-scales, it does certificate provisioning. What you didn't see is revision management and traffic splitting: every time you modify a Knative Service, it spits out a revision. That lets you do canary rollouts and then roll back.
And we could do automatic health checks, but it's better to have these liveness methods, since they actually tie into your application logic if you need to do initialization. At a high level, if you didn't use Knative Serving, you'd have to create HorizontalPodAutoscalers, Deployments, Services, Ingresses, Certificates; there are probably five other things. As an example, under the hood, what func deploy did was create this Knative Service, put your image in with the container port, and then you get the whole URL with HTTPS. Okay, one interesting thing: if you write your functions using global state, you don't want concurrency. So for Knative Services, you can set concurrency to one, and then we limit the number of active requests that container handles at a time. Okay, I'm going to run through this quickly. What does that mean? When you have concurrency one and three requests come in, I need three pods. If I have concurrency 80, one pod can handle all three requests. I stole a slide from Google Cloud Run, which is also Knative API compatible, so you can run Knative-on-Kubernetes workloads there. I don't work for Google, I work for VMware. But you can see that with concurrency one, the green is the request count and the blue is the instance count. A higher concurrency means fewer pods handling the same traffic. And there's some art to tuning concurrency. If you use low values, it's going to spin up more instances, and you can hit Kubernetes limits much faster, because you have limits on the number of pods per node and so forth. But if you use a value greater than one, it's going to use more memory and CPU, and you can't use global state. So you really want to tune the concurrency based on your app. Let's see what else. Scale to zero I kind of highlighted: we have this component, so when everything has gone away, it goes "SOS, help me," and then we scale things back up.
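As a rough sketch of the kind of manifest func deploy generates under the hood (the service name, image, and annotation value here are placeholders, not the demo's actual output):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: bonjour
spec:
  template:
    metadata:
      annotations:
        # optional knob: headroom the activator keeps for traffic bursts
        autoscaling.knative.dev/target-burst-capacity: "200"
    spec:
      # 1 = one in-flight request per pod (safe with global state);
      # higher values let one pod serve many requests concurrently
      containerConcurrency: 1
      containers:
        - image: docker.io/youruser/bonjour:latest
          ports:
            - containerPort: 8080
```

This one resource is what replaces the Deployment/Service/Ingress/HPA/Certificate stack mentioned above; Knative Serving expands it into all of those, plus a revision per change.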
But one thing the Serving side also does: when you have a ton of requests coming in, our autoscaler detects this, and if you also use a knob to tune what we call target burst capacity, you can have the activator, the component that scales things up from zero, act as a shield, like a buffer. That gives Kubernetes time to scale up your pods. This is very useful for apps that have a long startup time. I won't say which language. Which language? Don't say the language. Yeah, and I highlighted the annotation there. And there's a blog post that came out this week that demystifies how this activator component works. It's a very technical deep dive, so if you go to the Knative website blog you'll be able to see it. And thank you, and leave feedback. We have 10 seconds, so let's count down. 10. Thank you so much. Thank you so much. If you have any questions, come and say hello; we are still around. Yes, and for those that don't do anything, I'll be outside. Yeah, let's go and not do anything. Good stuff.