OK, so welcome to Cloud Native Live, where we dive into the code behind Cloud Native. I'm your host, Mohamed Sharia, I'm a CNCF ambassador, and I will be your host tonight. So every week we bring a new set of presenters to showcase how to work with Cloud Native technologies. They will build things, they will break things, and they will answer your questions. In today's session, I'm stoked to introduce Stephen and John, who will be presenting on deploying a Go Web application to Kubernetes. So this is an official live stream of the CNCF, and as such, is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of the Code of Conduct. Basically, please be respectful to all of your fellow participants and presenters. With that, I am going to hand it over to Stephen, and after that, John will join me. So I will hand it over to Stephen, and he will start the session. So hey, Stephen, how are you? I'm doing great. How are you? Yeah, I'm fine as well. So yeah, I guess we can start right now, and let's hope for the best. Awesome. Let's start. OK, so I'm sharing my screen. So I'm guessing that you have to flip it. Yeah. Done. There we go. Thank you. So, everyone, thank you very much. Very first thing is I want to thank you sincerely for attending this session. This is our very first webinar — the very first webinar of XigXag. The mission of XigXag is to simplify Kubernetes. Our first product, KubeFox, helps simplify many of the aspects of the pipeline, from development, to CI/CD, to testing, even extending to enterprise facilities like Zero Trust and versioning, all without requiring DevOps. A couple of items. CNCF does not provide us with a list of attendees, so it would be wonderful if you would take a moment to provide us with feedback via info at xigxag.io. Again, this is our first webinar, and we'd be interested in any and all suggestions for improvement.
And believe me, I know there will be areas where we can improve. If you experience any problems, please let us know and we'll try to help. If you like what you see and you'd like to participate in our journey, please let us know that as well. We'd be interested in engaging with you and welcome community involvement. To illustrate some of KubeFox's capabilities, John is first going to kick off with a hands-on walkthrough of building, deploying, and running a simple Go Web application natively. I'll then do the same thing, but with KubeFox. We'll have a Q&A session afterwards, but please reach out to us at info at xigxag.io if questions remain. We promise to get to each and every one, even if we run out of time during the webinar. So without further ado, John, I'll pass it to you. Okay, awesome. So let me add John to the screen. Hey. Okay. So, yeah. Hey, John. How are you? Good, good. So, yeah, I guess we can start, right? Yep. Perfect. Let me add the screen to the stream. Okay. Great. Great. Yeah. So, hi, I'm John. You know, Stephen and I are both big advocates for Kubernetes. We've used it at our last company, and the power behind it is amazing. It gives you a way to deploy and scale very complex software. But I think its real superpower is that it provides a standard API. And it's an API not only around your deployment of software, but also around your infrastructure. And what that allows people to do is build these incredible tools on top of Kubernetes to help people do software development. But what we've found is there's a lot going on. There's a lot of configuration. There's a lot of getting tools integrated. And then you have to maintain everything. So what KubeFox is trying to do is simplify all this.
So we're trying to build a very, very simple, opinionated, end-to-end tool that goes from development to testing to telemetry to versioning and helps you along the way in one simple, unified product. So I'm going to start off and show you a little web app that we're going to deploy natively with Kubernetes resources, right out of a Dockerfile. And then I'll give an example of trying to deploy it to two environments and show you where things start getting a little bit stickier and harder. All of this is available in our examples repo on GitHub. So if you fall behind, there's a readme there with all the commands that I'm going to be running and a little bit of info about what's going on. Additionally, we have a discussion page on our GitHub. So if you want to ask questions or get in touch with us, that's another great way to do it. And if you run into problems, please open an issue on GitHub. So here is the app. First thing I'm going to do is I'm going to start a kind cluster, and we'll be using this local cluster to do this demonstration. We really enjoy kind — we use it for development all the time. It's Kubernetes inside of Docker. So I'll go ahead and get that cluster going as I walk you through everything. So the app is broken down into two components. We have a front end and a back end. The front end is very simple. It's listening on a path that is defined dynamically by an environment variable. So we're going to feed it a subpath, and then it's going to listen on hello. And when you hit that endpoint, all it's going to be doing is calling out to the back end, getting a who to say hello to, creating that message, and returning it to the caller either as JSON, HTML, or just a plain string. The back end is similarly very simple. It's just a Go app running an HTTP server. When it starts up, it reads the environment variable who.
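The pattern John describes — a path built from an environment-supplied subpath, and a greeting built from an environment-supplied who — can be sketched in Go along these lines. The variable names (`WHO`, `SUBPATH`) and the port are assumptions from the walkthrough, not necessarily the exact names in the examples repo:

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

// greeting builds the message the frontend returns after asking the
// backend who to say hello to.
func greeting(who string) string {
	return fmt.Sprintf("Hello, %s!", who)
}

func main() {
	// The demo wires these two values in through a ConfigMap; the exact
	// variable names here are illustrative.
	who := os.Getenv("WHO")
	if who == "" {
		who = "World"
	}
	subPath := os.Getenv("SUBPATH")

	// The frontend listens on a path derived from SUBPATH, e.g. /qa/hello.
	http.HandleFunc("/"+subPath+"/hello", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, greeting(who))
	})

	fmt.Println(greeting(who)) // prints "Hello, World!" when WHO is unset
	// In the real app this would block, serving requests:
	// log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The real frontend also negotiates JSON, HTML, or plain-text output depending on the caller; this sketch keeps just the env-var plumbing that matters for the two-environment discussion later.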
And then whenever someone calls it, it just returns that to the caller. So the first thing that we need to do is containerize these apps. And the way we're going to do that today is with a Dockerfile. The Dockerfile is just a set of instructions on how to build the app and how to put it into an image, so that Kubernetes knows how to run it when we deploy it. So let's go ahead and build our two apps. So this will build the back end, and this will build the front end. Great. So if you look at these commands, what's going on here is we're using the Docker build command. We're passing in a working directory and the Dockerfile to use. And then we're just giving the image that we're creating an easy name that we can reference later when we go to do the deployment. Now, normally what would happen is you'd need to push these images somewhere that Kubernetes would have access to — a container registry. Most people are familiar with Docker Hub. GitHub also has a container registry. But for the sake of simplicity, what we're going to be doing instead is we're just going to side-load these directly into kind. So kind is able to pull each image directly from the Docker daemon that's running and load it inside of the Kubernetes cluster that it's hosting. So we'll go ahead and do that for the two images we just built. Now they're both available to our Kubernetes cluster. So the next thing we need to do is define the deployment. We basically need to tell Kubernetes how to run our app. And the way that's done in Kubernetes is a resource, which you see here. This is a Deployment resource. And in this resource, we're telling Kubernetes the state that we want it to realize. In this particular case, we're telling it that we want to run a container using this image. We want to mount a config map so that it becomes environment variables. And we define some information about the port that it's going to be using and its resource limitations.
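A Dockerfile for one of these Go components would look roughly like the multi-stage build below — a sketch of the idea, not necessarily the exact file in the examples repo (base images and paths are assumptions):

```dockerfile
# Build stage: compile a static Go binary.
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app ./

# Runtime stage: a minimal image that just runs the binary.
FROM gcr.io/distroless/static
COPY --from=build /bin/app /app
ENTRYPOINT ["/app"]
```

The build and side-load steps John runs would then be along the lines of `docker build -t hello-world-backend backend/` followed by `kind load docker-image hello-world-backend` (image names assumed), which makes the image available to the kind cluster without any registry.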
In addition to that, we're going to define a service. Very loosely, what this is going to be doing is giving us a well-known DNS name so that we can communicate with the app. So for example, if we go back to the front end, you'll see that when it's calling the back end, it's using this name, which is just the name of the service. So we can just take this resource, apply it, and then Kubernetes is going to know how to run this container. The last thing we're going to be using is the config map. And this is just providing our environment variables. So you can see we have the subpath and who defined here. And the front end and back end both mount these so that they can access those variables. So what we're going to do is we're going to create a namespace to put our containers and config maps into. So we'll create a namespace named hello-world-qa. And we'll go ahead and deploy — sorry, apply — the config map so our environment variables are available. And then we'll apply the two deployments to run our front end and back end. So you see it created our deployments and services for both components. So let's take a quick look at what's running in that namespace now. Great. So you can see our two pods are up and running, each one containing the one container for the front end and for the back end. At this point, normally what would happen is you'd set up a load balancer with a public-facing endpoint. And that would route in, using Kubernetes' network overlay, into your container to serve the Hello World app. For simplicity, though, we're going to just use a port forward. So what this is going to do is kubectl is basically going to go out to the Kubernetes API and request a special kind of connection. And all the information we send through that will get forwarded out to the pod container — sorry, the container running on the pod. So that gives us a nice easy way to communicate with that container locally. So we'll go ahead and start that.
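Put together, the resources John is applying for one component have roughly this shape — a sketch with assumed names (`hello-world-qa`, `backend`, the image tag), not the literal manifests from the repo:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: hello-world-env
  namespace: hello-world-qa
data:
  SUBPATH: qa
  WHO: World
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: hello-world-qa
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: hello-world-backend
          envFrom:
            - configMapRef:
                name: hello-world-env   # becomes environment variables
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 100m
              memory: 64Mi
---
apiVersion: v1
kind: Service
metadata:
  name: backend          # the stable DNS name the frontend calls
  namespace: hello-world-qa
spec:
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
```

The Service is what gives the frontend a well-known name to dial; the `envFrom`/`configMapRef` stanza is what "mounts the config map so that it becomes environment variables."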
And now we can test the app to make sure it's doing what we expected. Great. So exactly what we wanted. It's saying Hello World, which is the environment variable in QA. And so we're done. We built out the Dockerfile and the resources, deployed it, and everything's running. But now let's imagine that we have another environment, and some of those environment variables need to change. How are we going to take care of that? Well, we could go in and just override this config map with our production environment configuration. Of course, that would mean we would no longer have QA. Instead, a common practice is just to create a new namespace and deploy your new environment there. So we'll go ahead and do that instead. So we'll create a new namespace named prod. And we'll do the same thing. We'll apply the config map with the prod environment variables. And we'll apply the two deployments to get our components running. So now let's take a look at what's going to be the output for this guy. So again, we're just going to start another port forward. This time we're going to use the local port 8889 and use the prod namespace to connect to the front end. So now we should be able to curl that and see that it is, in fact, using prod environment variables. And it is. So it's using the prod subpath. And it's coming back with universe as the who. So great. Now we have our two environments. And let's take a look at everything that's running. So in this case, when we do our get pods with kubectl, we're going to use a selector so we can see pods across namespaces. And there you go. So you can see our two namespaces, each running our two components. So, in review, what have we done to get all this running? There's actually quite a bit going on, right? I've kind of skimmed over a lot of things. So first, we had to develop our apps. Second, we had to go through and write the Dockerfile.
And that means you need to learn Docker and understand how a Dockerfile works and how the build process works. Then you need to go in and define all your Kubernetes resources. And you have to deploy those and make sure everything's running properly. And then you have to deal with having multiple deployments for your environments. So it uses a lot of resources, especially if you have a lot of environments or a lot of components. So Stephen's going to take it from here. And he's going to show you basically running the same app, but instead of doing it natively, he'll use KubeFox to help you deploy it, and show you some of the things we've been working on to help with some of these problems — the complexity and the resource utilization of running multiple environments. So I'll pass it over to Stephen. OK, great. So yeah, let me add Stephen to the stream then. So I'll move John to the backstage then. OK. OK. So we're just going to continue on with the same cluster you were using with John. The first thing I'm going to do is I'm going to apply the KubeFox Helm chart. So let's do that here. So the KubeFox Helm chart starts the KubeFox operator on the Kubernetes cluster. The KubeFox operator is a standard Kubernetes operator, and it manages KubeFox platforms and applications. Loading KubeFox into its own namespace, kubefox-system, reduces clutter when we're looking at the application components. And by the way, I'm walking through pretty much by rote from our quick start. And obviously, that's something on which we've spent time. And we want to make sure that you're successful as you walk through this exercise. If you get into a situation where you're a little behind or you've lost track, you can always revert back to the quick start at docs.kubefox.io and walk through it on your own time. And again, please don't hesitate to reach out if you experience any problems or if you have any questions. We'd be more than happy to help.
So the next thing I'm going to do is I'm going to set an environment variable that gives us a little more information as we walk through some of our commands. And I'm going to create a working directory and just change to that working directory. Now I'm going to start the first part of the magic. I'm going to do a fox init. So when you get this question — we are going to use it with a local kind cluster, so just answer yes. The cluster's name is kind, so you can just hit Enter there. And we want to initialize a Hello World app. Fox realizes that, given the way you're answering the questions and where you're at, it's likely that you're working through the quick start, so it's trying to facilitate that. You can just hit Enter at the remote Git repo. We're not going to be doing that; we're going to be creating a local Git repo. And along those lines, let's take a look at what we have. So indeed, we have a Git repo. We've got a readme, which is largely blank. We have an app YAML. We have a components directory. We have a hack directory. The components directory has the Go components, similar to the components that John was working with. So basically, these applications are highly analogous. And Fox is very sensitive to our repository. The idea behind KubeFox is to link together Git, deployment, CI/CD, and runtime, so that we really facilitate this entire process and get you up and running much more quickly than you otherwise would be. Let's take a quick look at the components. So we have back end and front end. And we can take a look at the front end. And if you see here, we've got a message. We're going to be modifying this message and doing some interesting things to show you what versioning in the KubeFox world looks like. So we'll be modifying that later. For now, let's drop back into our working directory.
And we're going to do the next command in the quick start, which is we're going to take a look at the environment YAML for our application. And we're going to apply that environment YAML. So, from the perspective of the KubeFox operator, these are actually CRs that are being applied. That's what you see down here when you see it being created. We did a kubectl apply -f to write those into the system. And let's just take a quick look at them. So we have a prod YAML. And the prod YAML has a subpath of prod. So when we hit the endpoint, we'll be hitting it with slash prod slash hello. And the who variable is set to universe. And then we have a QA environment. And the QA environment has a subpath of QA and a who variable that's set to world. Now we're going to do our publish. And this is really the major part of the magic. So what's a publish doing? The publish does all of this without you going out and building the infrastructure for it yourself. It's building the containers for our components. It's setting the state of Kubernetes with respect to the desired components. It's loading the containers into pods. And it's starting the platform that was just created, if it's not already started. The platform start is a one-time operation. So the next time we do a publish, it's not going to be starting the platform. If we were running in the cloud, Fox publish would push the containers to GHCR or any container registry that you specified. It's important to note that when we did the publish, we published a name. So our name is alpha. And when you get this question, we do want to create a KubeFox platform. We're going to call it demo. And we're going to accept kubefox-demo as our namespace. So that alpha, the published name, is the name of the deployment. And we can actually dynamically route requests to alpha at runtime. And that's part of how we're doing some of the things that we're doing. We allow you to run different versions of a release side by side.
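The environment YAMLs Stephen applies have roughly this shape. Be warned that the `apiVersion` and field names below are reconstructed from memory of an early KubeFox release and may not match the current CRD — docs.kubefox.io has the authoritative schema:

```yaml
apiVersion: kubefox.xigxag.com/v1alpha1   # assumed; check the quick start
kind: Environment
metadata:
  name: qa
spec:
  vars:
    subPath: qa
    who: World
---
apiVersion: kubefox.xigxag.com/v1alpha1
kind: Environment
metadata:
  name: prod
spec:
  vars:
    subPath: prod
    who: Universe
```

The key point is that each environment is just a small CR holding variables — no namespaces, config maps, or duplicate deployments per environment.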
And all you're doing is putting in a query parameter to route a request to a specific deployment. So right now, we're deploying alpha. A little bit later, we're going to deploy beta. And we're going to be able to dynamically route to those different deployments. Let's take a look at the pods. So what we've got here is our demo broker. The broker is really the heart of KubeFox. That's the thing that's looking at all the requests and performing zero trust. And then we have NATS. And NATS is the default event bus for KubeFox. Finally, we have the two pods for our application. We have the front end and the back end. I'm going to save that so that we can take a look at it later, and we're going to do our next command. Now, the next command is a Fox proxy. Make sure that you start a new window to perform this operation. Can't tell you how many times I've just donked it into the existing window. It's a long-running process, since it's performing port forwarding. The Fox proxy is actually a clever little utility to facilitate networking setup. It finds the pod of the KubeFox broker, establishes port forwarding from the kind cluster to that pod, and then binds the localhost port, in this case 8080, to the kind cluster. After launching the Fox proxy, return to the prior terminal window. Now we're going to start doing some curls. So the first one we're going to do is we're going to hit our alpha deployment with our QA environment. And so we're getting hello world. If you remember our environment YAML, which again is sitting as a CR in the cluster right now, the QA value for the who variable was world. So when we hit the alpha deployment with the environment of QA, we get hello world. And then we'll do the same thing for prod. And when we do that, remember the who variable for the prod environment was universe.
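The query-parameter routing Stephen describes can be sketched as a small Go routine: a request may name a specific deployment and environment in its query string, and falls back to the released defaults when the parameters are absent. The parameter names (`kf-dep`, `kf-env`) are illustrative assumptions, not necessarily KubeFox's exact ones:

```go
package main

import (
	"fmt"
	"net/url"
)

// release records which deployment and environment default traffic goes to.
type release struct {
	deployment  string
	environment string
}

// route picks the deployment and environment for a request: explicit query
// parameters win, otherwise the released defaults apply.
func route(rawURL string, released release) (dep, env string, err error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", "", err
	}
	q := u.Query()
	dep, env = released.deployment, released.environment
	if d := q.Get("kf-dep"); d != "" {
		dep = d
	}
	if e := q.Get("kf-env"); e != "" {
		env = e
	}
	return dep, env, nil
}

func main() {
	released := release{deployment: "alpha", environment: "qa"}

	// Explicit routing to a specific deployment/environment:
	dep, env, _ := route("/qa/hello?kf-dep=beta&kf-env=qa", released)
	fmt.Println(dep, env) // beta qa

	// No query parameters: traffic defaults to the release.
	dep, env, _ = route("/qa/hello", released)
	fmt.Println(dep, env) // alpha qa
}
```

This is why releasing later in the demo makes the query parameters optional: the release just changes what the defaults resolve to, with no redeployment.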
So when we hit slash prod slash hello with the deployment of alpha and the prod environment, we get the hello universe response. The next thing we're going to do is a release. The release, which is very fast, is doing a few things. So we're saying that we want to release testing. This would be very similar to what you would do in your shop. You have a release. You want to give it to QA. You want them to work on it. And we're saying we want it to use the QA environment. OK? It's important maybe to note at this point that KubeFox environments are incredibly lightweight. You can think of them as overlays. So this is not a conventional environment. Instead of us creating a physical construct that we call QA, or a physical construct that we call dev, we create an overlay and route the traffic dynamically. So even individual developers could easily create and leverage multiple independent environments. There are a lot of implications here productivity-wise. As you'll see, we don't need to redeploy if we make changes. So we did the release. By default, Fox is grabbing the latest commit and building a release. Don't worry, by the way — you can specify a particular commit or a particular tag, et cetera, to say, hey, I want you to generate a release with this commit, with that environment. Now we can hit the QA hello endpoint without query parameters. I'm going to do that again so that you can see it right at the top. And without the query parameters, we're getting Hello World, OK? That's because we released it. And now KubeFox knows that we want the default traffic that hits the endpoint slash QA slash hello to go to the QA environment for the Hello application. And we get our Hello World again. Let's take another look at the pods. Remember that when John performed this operation, he actually created two different config maps and he deployed again. So he ended up with four pods running. We actually have the same two pods running. So this is our back end.
It's got the same hash, and our front end has the same hash. So we're dynamically shaping the traffic so that we don't need to go back and perform a redeployment. Absolutely nothing has changed other than the fact that we can run around and access what we want to access. Now I'm going to dig into versioning a little bit. And this is a very simple example of versioning. But as we're going through this, I want you to imagine what the potential implications would be for even a prod environment. I'm going to go in and look at our components. I'm cheating — I was sitting at the directory where I knew it was going to be populated. Remember, all you need to do is open the KubeFox Hello World folder with VS Code, and it's going to show you the same thing that I'm seeing right now. And let's make a change to this message on line 22. So I've saved that. And that's a super simple change. And I'm just going to commit it and publish beta. So again, that name that we're using for the publish command — it's a really powerful thing. Once a version is published, we can direct traffic to it simply by specifying the deployment in the query parameters, just like before. The other thing that you'll see here is that the back end is not being rebuilt. Fox knows by looking at the repository — because we're sensitive to what's transpiring in the repository — that nothing changed in the back end component. So it's skipping the build for the back end component. It's building only the front end component for the beta version. Now, when I curl the beta version — I'll stick that back at the top for you. When I curl the beta version with the environment of QA, I'm getting what I expect. I'm getting the new version of the code, and I'm getting the QA environment. So here's my new message for the beta version. Now I'll do the same thing for prod beta. And when I look at that — the deployment I'm hitting is the beta version of the product with the prod environment.
So I'm getting hello universe with my new message. But what happened to alpha? Well, alpha is still there. So I can also hit the prior version of the product. So these things are running side by side. How is that possible? Well, let's take a look at the pods again. I must have forgotten to save the second version. But the second version is the same as the first version. We did take a look at them, if you recall. So I've got the same back end, right? Same hash. And the alpha version of the front end is still running. And I have a new version of the front end running. That's my beta version. So what we've got here is the system running these two versions side by side as if they were in their own independent sandboxes. I can interact with the alpha version as if the beta version didn't exist, and vice versa. And that's an extraordinarily powerful thing. So imagine the implications for something like a canary deployment or even an A/B deployment. I can set up another DNS name for a particular customer and direct their traffic to a particular version that's running in UAT or even prod. It's an extraordinarily powerful capability. Let's do one more thing. What we're going to do here is we're going to tag the new version and release it with the QA environment, which is similar to what you would do in a normal shop, right? We're basically saying, hey, this is my testing version. This is my release candidate. Release it to QA. And when they hit the slash QA slash hello endpoint, because it is a release, they should see the new version of the product — the beta version of the product. Then we're going to revert. What we're doing here is we're going through and we're checking out the prior version of the product. We're going to release that into production with the prod environment. So let's go ahead and do that. And as you see, that's very, very fast. Now, what I should see here when I hit QA hello is the new version of the product for QA. And that's exactly what I've gotten.
Again, I didn't have to add query parameters this time, because once I perform a release, I'm defaulting traffic for the release to that new version of the product with the QA environment. And I say with as opposed to in, because KubeFox environments are really dynamic things. Now, if I hit the prod environment, what do I get? Well, I get the prior version of the product. And again, it's a release, so I didn't have to add query parameters. I could, but I'm hitting the slash prod slash hello prior version of the product with the prod environment. And so that really concludes the code walkthrough. I actually thought this was going to take longer than it did. But hey, we're new to the deal. So now we'll jump into a Q&A, so I'll pass it back to Nitul. Yeah. So yeah, I guess if you guys have any questions, feel free to drop them into LinkedIn or our YouTube live chat. So awesome. So also, in the meanwhile, I guess I can add John to the stream as well, because it's the Q&A. Awesome, right? OK, so the same question as always goes out to all of the guest speakers. Is there any way to contribute to the project, basically, since it's open source? And is there any opportunity for anyone to contribute? Yeah, absolutely. So it's an open source product. You can see the repositories. You can just go to github.com/xigxag/kubefox. We welcome community involvement. And in fact, I should have done a little spiel to reiterate what I said at the beginning: we're very interested in people being involved in the community. We made a very conscious choice to make this an open source project. We welcome not only your contributions, but any suggestions, criticisms, any ideas that you may have. We're very much interested in people getting engaged with us. Yeah, awesome. OK, so I guess there's a question popping up. So yeah, let me add it.
So the question from Brian is: can you talk a little more about telemetry collection in KubeFox and how it might interact with other trace collectors like Jaeger and New Relic? You want me to take this one, Stephen? Yeah, go ahead. OK, so telemetry is really important to us. And part of what you're getting out of the box with KubeFox is that we're taking care of injecting all the parameters you need to correlate the telemetry. So we're injecting the trace IDs into your logs, into your traces, et cetera, et cetera, allowing you to link everything together. And we're using the OpenTelemetry spec. So we're able to pump out all of our telemetry in the OpenTelemetry format. So anything that is able to take that format in and use it is going to be functional with this information. So for example, New Relic has a collector for OpenTelemetry that converts it all into New Relic's format. Right now, we're working on an integration with SigNoz to pump telemetry there. And just so you know, our product is very alpha. We're kind of working through this — telemetry is going to be our big focus for our next release. But our long-term goal with telemetry is something a little bit more interesting as well. So not only the traditional use cases for telemetry of viewing logs and seeing what's going on with your system, but because we're so tightly integrated, and because you're going to be using our SDK to develop your software, and because we're event sourcing and recording events and archiving them for you, what that allows us to do is, if something goes wrong in our web app, you'll be able to click a single button. It'll pull up the traces and logs for all the components that are involved with that request, be able to tie you into the specific version of the code that was used for that request, and help you link the spans to the code that was running.
And of course, if you have permissions to do all of this, you'll actually be able to see the events that occurred. You'll actually be able to see the requests and responses that are passed between the components, down to what the database came back with, et cetera. So it gives you this really powerful interface to figure out what's going on and understand the problem. The other side of that, too, is for QA. Because we're recording events, and the way we're recording events, one of our big plans for QA — and again, this is a tool that we're currently developing — is to allow you to do kind of a replay of a set of events, to help you diagnose issues or to help you do integration testing with some of the new components as you develop them. So I hope I answered your question, Brian. Yeah, so I'm gonna just piggyback on that a little bit. So the first thing that we'll be providing is span-based data that will show you, in the context of the component, in the context of the application, in the context of the overall ecosystem, exactly what's transpiring within the scope of each individual component. And as John said — and I just wanna reiterate, I'm not gonna really say anything different than what John said — KubeFox is event-driven, but because of the way that we're coping with events, and because we archive events, we can flip that whole thing around and turn it into an event-sourced system. So as he mentioned, we can replay events, and you can even use those types of things for your testing. Okay. So I guess that answers your question, Brian. So yeah, if there are any questions, feel free to add them here, because it's your chance to interact with the team, right? Okay. So, as we are going to wait for one or two minutes more, if any questions pop up, we're going to take them.
But other than that, what I would like to ask you guys is: if someone wants to use KubeFox, is there anything they should keep in mind? And who should actually use it — what are the use cases, basically? You have shown us KubeFox kind of from scratch, but who should use this, and is there anything they should keep in mind while using KubeFox? Yeah. Yeah, so I'm really glad that you asked that. It's a really good topic. I tend to think that, especially in the beginning, and maybe even as we mature, KubeFox provides some of the greatest value in dev environments. So with some of the people with whom I've spoken, the challenges that they face are: I have team A that's working on the product and team B that's working on the product. And team A is working on version one and team B is working on version two. And even if there's just slight divergence, they quickly get to the point where they think, forget it, I'm just gonna build some new nodes, build a new cluster, deploy a new product. And this quickly gets out of control, because it makes it very easy for divergence to occur, right? It takes a great deal of orchestration to try to rein that in. On the one hand, you could say, okay, I can work with my teams and I can have DevOps heavily involved, and I can figure out, hey, what components can I share? What components can't I share? What's the blast radius, et cetera, et cetera. And orchestrate all that, right? Create config maps that map to particular instances of whatever, okay? Or, if they use KubeFox, they simply don't worry about those things, instead of trying to orchestrate these things manually — and it really is a manual process.
What really happens is KubeFox goes and figures out, hey, I've got a 50-component deployment and three of the components have changed. What do I actually need to deploy? And it's gonna deploy three pods, okay? It's gonna deploy three containers that manifest in pods. And in each one of those ecosystems, I simply specify which deployment I want to use. You know, I want to use the new one, or I want to use the old one. And KubeFox is gonna dynamically handle the routing at runtime. There's another thing that I neglected to mention. Everything that you saw when I was going through the quickstart was zero trust, okay? All the component-to-component interactions are zero trust. Each one of them is independently validated. So not only did you build and deploy multiple versions of an application very easily and simply, but you also built and deployed them in a zero-trust fashion, without doing any DevOps work and without having any DevOps expertise. And that's really where we want to be: we want to make this process simpler and simpler for people and for teams. And from the beginning, John has been motivated to stay focused on the developer mentality. He's right, and that's where we've been focused. Okay, that's a really good answer as well, right? So, awesome. I guess we can continue with another question, which has come from Brian as well. The question is: for large-scale systems that can generate millions of events for actions in the system, the data volume for events is going to be massive, as you were mentioning. So what are some mechanisms to reduce storage costs here? Thanks, Brian. Yeah, I mean, luckily storage costs are usually pretty cheap, but a couple of things are going on. Obviously, you're able to set limits on how long you want to retain these events. By default, we hang on to them for three days.
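The "deploy only what changed" behavior Stephen describes could be sketched like this. The component maps and content hashes below are purely illustrative, not KubeFox's actual data model:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// hash stands in for however a build identifies a component's content.
func hash(src string) string {
	sum := sha256.Sum256([]byte(src))
	return fmt.Sprintf("%x", sum[:4])
}

// changedComponents compares the previous and next desired state of an
// app (component name -> content hash). Only components whose hashes
// changed need new pods; everything else is reused in place.
func changedComponents(prev, next map[string]string) []string {
	var changed []string
	for name, h := range next {
		if prev[name] != h {
			changed = append(changed, name)
		}
	}
	sort.Strings(changed)
	return changed
}

func main() {
	prev := map[string]string{"api": hash("v1"), "worker": hash("v1"), "ui": hash("v1")}
	next := map[string]string{"api": hash("v2"), "worker": hash("v1"), "ui": hash("v2")}
	// Only the changed components need new pods; "worker" is reused.
	fmt.Println(changedComponents(prev, next))
}
```

With the unchanged components shared across versions, selecting "new" versus "old" becomes a routing decision at request time rather than a redeployment.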
But you could adjust that number; maybe you only want to store them for a couple of hours. In the future, you're going to be able to set attributes on the events. So maybe you only want to store the events if an error occurred, or for a particular customer. Or maybe you don't want to store the events at all; maybe that's a part of the KubeFox functionality that you're not really interested in. In those cases, we're able to just not store those events at all. And I think longer term, if there are customers who are really interested in long-term archiving, it would be very easy for us to basically just replay that stream out to an archiver and have those events get packaged, compressed, and uploaded to something like S3. I can imagine there are a lot of industries where it's important to have this type of information archived forever. The other thing I want to touch on, which we hadn't mentioned, is that our architecture also allows you to do things like auditing very easily, because you could create an auditor that sits on top, has access to all of these events, and creates audit records. So functionality like that is really important. But you're right, Brian, something at internet scale is gonna be producing a lot of events; still, there are ways to manage this storage. Yeah, so I'll just piggyback on John again. I'm glad he mentioned auditing. John and I both have a pretty deep background in medical informatics, and auditing in those types of situations is extremely important. Because of our architecture, it would not be difficult for us to provide auditing as a very nice spin-off, something we haven't done yet. Okay, so I guess that answers your question, Brian. So yeah, I guess we can wrap up the session. That was really an awesome session from both of you guys, yeah? So yeah, so, oh, okay, another question before we wrap up.
So let's answer the question. The question is: do you guys envision the KubeFox events changing how we write logs within the application code? Yeah, so I think that's a deep question, Shane. For one, we would want to make it really easy, and it would be very easy for us, to package span information together with your own log messages. We would want to make it very, very simple for you to migrate an application over to KubeFox and automatically get the benefit of what we're doing. So, for instance, we're spinning off span-based telemetry, and we would also be spinning off span-based logs, right? So we would tack on the information associated with those log messages, without you having to worry about it, so that you could associate them with a span and see: okay, here's what transpired when we were running component X, and here are the log messages that were produced during the same period. John may have some comments on that as well. It's a really interesting question, actually. It's something I hadn't considered, but I'm gonna make some assumptions here about what you're thinking, Shane: it's almost like the auditor, right? It's somehow looking at these events and starting to standardize logs to give the user insight into what's going on, and that's something that would definitely be possible. The other thing to remember is that all of these events pass through our brokers, and the broker runs as a DaemonSet. So we have these centralized points to help implement features like this as we go down the road. So yeah, that's an interesting question, Shane. Thanks for asking. Yeah. So yeah, awesome, I guess that is it. I guess we can wrap up our session now. Thank you so much, John and Stephen, for your awesome session; it was very insightful. So let me then add you to the backstage and we'll end the session.
So thank you so much, John and Stephen; see you soon. Thanks, everyone. Yeah, thanks, Ben. Okay, so we're just going to end our session. Thanks, everyone, for joining the latest episode of Cloud Native Live. We enjoyed the interaction and the questions from you guys, thanks for joining us today, and we hope to see you soon. So bye-bye from...