and that way we will hopefully have a little time at the end. Hi everyone, thanks for joining us today for CNCF's live webinar, Running Distributed Load Tests with the Grafana k6 Operator. I'm Libby Schultz and I'll be moderating today's webinar. I'm going to read our code of conduct and then hand over to Paul Balogh, developer advocate with Grafana Labs. A few housekeeping items before we get started. During the webinar, you are not able to talk as an attendee. There is a chat box on the right-hand side of your screen; please feel free to drop your questions there and we'll get to as many as we can at the end. This is an official webinar of the CNCF and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct, and please be respectful of all of your fellow participants and presenters. Please also note that recordings and slides will be posted later today to the online programs page at community.cncf.io. They're also available via your registration link, and the recording will be available on our online programs YouTube playlist. With that, I will hand things over to Paul to kick off today's presentation. All right, well, thank you very much and welcome, everyone. For me it's lunchtime, so hopefully everyone's already had their lunch and doesn't have a growling stomach to distract them from all the information you're going to be getting here today. So again, thank you, Libby. My name's Paul Balogh. I am a developer advocate on the k6 open source project with Grafana Labs. You can contact me at javaducky on Twitter, and I'm now also under the same moniker on Mastodon. So let's go ahead and get right into this. Now, my screen is hugely enlarged, so I'm hoping that everybody will be able to see all the aspects of the demonstration fine.
So anyway, let's start with: what is k6? Now, I was at KubeCon two weeks ago in Detroit working the Grafana booth, and one of the issues was that people really don't know what exactly k6 is, or that we're actually part of Grafana Labs. We're part of the CNCF landscape; if you look here, that's us in the upper right-hand corner. We were formerly known as Load Impact. We've been open source since 2016, and we're coming up on 19,000 GitHub stars, so we've got a couple of likes out there, and we really promote the whole shift-left testing movement. Here is the k6 repository; again, we're fully open source, and by the way, we're always looking for additional contributors to help out. Now, with Grafana, we were acquired in June of 2021, and we are now under that umbrella. When we were acquired, Raj Dutt, the CEO of Grafana Labs, said this was a perfect match, a match made in heaven. And really, if you figure, with k6 we're on the front end, trying to prevent and detect problems before your software goes out to production, while the other items in the Grafana space give you the observability to view what's going on in your system. So we're going to create the problems; Grafana will let you see the problems. Now, this right here is the most important slide in this entire deck. If you get nothing else out of this discussion, hopefully you get this: k6 is a reliability testing tool. Our main forte, what we've really been known for, is load testing, but we're more than that. Think of reliability testing as an umbrella term; there are several different types of testing included in it, of which load testing is one.
So you can utilize k6 for contract testing, and you can do chaos testing, which is always fun, right? Who doesn't want to try to break things and see what happens? We even have a product where you can do browser testing, actual end-to-end testing, controlling a Chrome browser, like watching a player piano run by itself. So, just a little bit more about us, patting ourselves on the back: ThoughtWorks had this nice thing to say about us. We pride ourselves on being the easiest tool for developers, testers, and SREs to utilize. That's big in our DNA: we really want to make sure that we have a low barrier to entry. Now, I will just add, it's not on this slide, but in ThoughtWorks' latest radar, just this October, we actually moved into the adopt ring, so kudos to the k6 developer team. So again, reliability testing. Our big thing is that we are open source and we are very much into the whole open source concept. We want to promote that, we want to bring more people in, all that good stuff, which makes us fit in perfectly with the CNCF. We wanted to make sure that our tools are scriptable, because we want to be usable in automation. We want you to be able to use your testing in your CI/CD platform so that you can pass or fail builds based on certain thresholds that you set. We are performant: our application is written in Go, and we interpret JavaScript test cases. And we're very extensible: we have an extensions framework, which is actually what I work with primarily, and we invite Go developers to join us and work on expanding the capabilities of k6 with extensions that are compiled in.
So as new protocols are thought up or created, we can include those, and then you can do load testing, or I should say reliability testing, with them. That, again, is the key. So I kind of alluded to it: if we look at this stack, the actual tests that you create are written in JavaScript. That goes through an interpreter written in Go, another open source project called Goja. That in turn allows you to utilize k6 extensions, where you can incorporate different protocols and different products. And all of that runs in the Go runtime. So that was the quick version, since I want to make sure we have time here. Now let's go into: what is load testing? When you go out there and start Googling, the internet will tell you that load testing is all about putting demand on a system and measuring its response. Again, this is where we find ourselves in that whole shift-left area. k6 is at the developer side, kind of at the front end, and you're going to create some scenarios with high demand; then, with the Grafana stack, you can measure those responses and make sure that your systems are going to run smoothly after that Super Bowl ad. Now, a quick mention of some of the myths about load testing. Typically when you hear about load testing, you think it's for large companies, that a mom-and-pop shop isn't going to be able to do anything with it. But no, that is not the case. Likewise, "expensive to do": it doesn't have to be. Obviously you can recreate your entire production environment, pay a lot of money, and test against that, but you don't have to. And you don't have to test just in production; test beforehand. Again, we want to incorporate things into your CI/CD pipeline.
So you can run a limited test, just with samples, and make sure things are running as expected and that they'll run at higher load. And here we have the regular little hockey-stick chart. What we want to do with load testing is find the portion where you're going to start scaling up. You want to find the breaking point where all of a sudden your user experience gets worse and your response times increase, so that you know how to handle that when it does happen. And then there's defining SLOs. There are SLOs, SLAs, SLIs; these are all your service levels, in this case objectives, with the O. You want to define those, and then you can run your tests and make sure that your systems are operating within the expected thresholds. You can use different types of load tests; there are many. In this case, I have four small examples, very briefly. Your typical one is to just apply an average load and see how things behave over a few minutes. A spike test is a very traditional test; that's your Black Friday scenario, where your usage goes way up and you want to make sure those spikes are handled. The soak test, and you may not be able to see this, is a test that you would actually run for over an eight-hour period, or even longer. This is the thing that finds your memory leaks, or maybe you're handling resources incorrectly and things aren't being returned to a pool; a soak test, over a long period of time, is where you're really going to see that. And then with the breakpoint test, you gradually increase things to find that elbow in the hockey stick where things just fall off a cliff, essentially. So these are the different types of tests that you can do with k6.
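To make those shapes concrete, here's a minimal sketch of how a spike and a soak profile are typically expressed through k6's `stages` option. The durations and targets are illustrative, not taken from the talk:

```javascript
// Illustrative stage shapes for two of the test types above. In a real
// k6 script, one of these objects would be declared as `export const options`.
const spikeTest = {
  stages: [
    { duration: '1m', target: 100 },    // normal load
    { duration: '30s', target: 2000 },  // the Black Friday spike
    { duration: '1m', target: 100 },    // recover
  ],
};

const soakTest = {
  stages: [
    { duration: '5m', target: 400 },  // ramp up
    { duration: '8h', target: 400 },  // hold for hours to surface leaks
    { duration: '5m', target: 0 },    // ramp down
  ],
};
```

Each stage ramps the number of virtual users toward `target` over `duration`, which is how all four test types reduce to different stage lists.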
Now, in the implementation, I won't go into the details on each of these, but you can basically pattern or shape your activity using what we call executors, and we have various types with different behaviors. This is actually a little snippet there on the right side showing some of the configuration for tests. You can have layers of testing going on: you can add activity noise in the background of another test that you're running in the foreground, and mix and match. There are loads of options; the sky's the limit, essentially. Now, all of that is the background for what k6 is on its own. Normally we run it as a single binary. It could be on a quality engineer's desktop, where they're doing things directly, or it could be in your CI/CD pipeline as part of a build. But with the k6 Operator, we can actually distribute load across multiple instances of k6. Just one thing to note: we have had some users simulate 40,000 virtual users on a single machine. That's 40,000 different users running through something similar, maybe an authentication flow, all at a single time. With the k6 Operator you can expand that even further and distribute that load across multiple machines; with four pods you could have maybe 160,000 virtual users simulated. Now, one of the things we really like to promote is that this is a write-once, run-anywhere kind of thing. Your test scripts, again, are written in JavaScript, which we figured is kind of a lowest common denominator, so that your developers can create test scripts easily and your QE engineers can write things easily. That's what we're going for.
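As a rough sketch of that layering idea, two k6 scenarios can run side by side: one generating steady background noise, one shaping the foreground test. The executor names below are real k6 executors, but the numbers are made up for illustration:

```javascript
// Sketch: two scenarios layered in one test. In a real k6 script this
// object would be `export const options`; values are purely illustrative.
const options = {
  scenarios: {
    background_noise: {
      executor: 'constant-vus',  // steady hum underneath the real test
      vus: 5,
      duration: '2m',
    },
    foreground_ramp: {
      executor: 'ramping-vus',   // the behavior actually under study
      startVUs: 0,
      stages: [
        { duration: '1m', target: 50 },
        { duration: '1m', target: 0 },
      ],
    },
  },
};
```

Both scenarios start together by default, so the ramp is measured against a constant hum of traffic rather than an idle system.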
But the key here is that whether you're running the binary directly as a single instance, or you're running in Kubernetes with multiple instances running the script at the same time, or you're using our SaaS cloud product, you don't make any changes to the scripts. The same script will run in each of those environments without modification. And here, just to show this a little bit more in Kubernetes, I have an example with a Kubernetes cluster that has four worker nodes. In this case, we have two pods on each worker, and all of those are running one single script. They each take a fair portion of that script, they all run at the same time, and all the activity is aggregated so that you can see it on a single pane of glass. This next slide I could probably have taken out, but it demonstrates what each of the pods will be doing: they pull in their configuration, they apply the script options that are in the JavaScript test script, they apply the environment variables, and then it's just as if you had four different instances of the same binary. Again, the k6 Operator splits up the actual number of virtual users and how many iterations will happen; it splits that across however many parallel instances you asked for in your request. Let me check the time here. Okay, perfect. Hopefully I didn't just fly through that too awfully fast, but like I said, I wanted to make sure that I had enough time to really demonstrate this and collect any questions that folks are having. Actually, let me take a look at the chat; I've been bad. I asked Libby to interrupt me in case any questions did come up. We're all good. All right, sweet. Well then, let's pop into the demo.
Now, I burned some incense earlier this morning, so hopefully the demo gods will be appeased and everything will just work; that would be the ideal situation. So let me switch over here. By the way, I have the demo in a GitHub repository, and I'll be sharing the URLs at the end so that you can do the exact same process that I'll be going through here. Let me pull up my IDE. In this case, I'm kind of in the Go world, so I have learned to love GoLand, and I have my demo in it for now. Some of the tooling I use is listed in here as well, so you'd be able to go through this line by line and actually recreate it. And just for a little bit of housekeeping: what I'm going to do is run the k6 Operator in a Kubernetes cluster on my machine, which will just be in Docker. I'm using k3d, with multiple workers. I'm going to use a customized k6 binary built with those extensions I talked about earlier, and I'm going to output my metrics live, during the testing, up to Grafana Cloud. I'm using the free-forever Grafana Cloud tier, so there's no cost associated with running this test. Then I'll show you some of k6 Cloud as well, very briefly, just the free features. And when I say free features, that's at the cost of an email address. All right, so we'll pop in here, and I do have those prerequisites listed as well. Now, I've already downloaded the actual source code for the operator. And as I mentioned, I'm going to be using this Prometheus output, and that is not in the k6 binary by default; normally we just emit a summary to the console. Like I said, this is typically run as a binary on a single machine, and the output would be displayed directly there.
So now, because we are outputting to Prometheus, and right now we have that as an extension, it does require a custom compilation step. Again, that's totally Dockerized in this example, so as long as you have Docker, you should be good; you can build and run all of this. Here in this step is where I create a custom image of k6 that includes that extra extension, and then I push it up to my own personal Docker Hub repository. That's one key point: k6 has to be embedded into an image, and that image has to be somewhere publicly accessible. So I guess I can go ahead and run those. I have them already locally, but just for the demo I will run through these steps again, and hopefully it doesn't take up too much time. My apologies if it does, but it gives me time for a quick drink. I'll describe what's happening here: with this Dockerfile, I'm using a separate build stage. I'm pulling down the source code for the extension, and I'm utilizing a utility that k6 has called xk6. For anyone familiar with the Caddy server, there's something very similar there; in fact, we originally started with a fork of xcaddy. It's just a way to build a new Go binary including these modules. Let's see. Okay, this is going to be a little worrisome here, but let's see. The gods were appeased, I'm telling you, for sure they were. It should hopefully be just a moment; if not, I can skip through this and kill it. I do have the image pushed up to my Docker Hub already, so we may have to just skip this. Normally it doesn't take a minute and a half to compile the Go binary; it's much better than that. All right, I'm going to kill this. There we go. And now we'll go ahead and... I have this local image, as long as I didn't just destroy it. Let me switch over here. All right.
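The build stage he describes boils down to a couple of commands. This is a sketch of the idea, assuming the extension lives at `grafana/xk6-output-prometheus-remote` and with a placeholder standing in for his Docker Hub account:

```shell
# Build a custom k6 binary that bundles the Prometheus remote-write output
# extension (done inside a Docker build stage in the demo repo).
xk6 build --with github.com/grafana/xk6-output-prometheus-remote

# Bake that binary into an image and push it somewhere publicly accessible,
# since the operator's runner pods must be able to pull it.
docker build -t <your-dockerhub-user>/k6-extended:latest .
docker push <your-dockerhub-user>/k6-extended:latest
```

The same `xk6 build --with ...` flag can be repeated to compile several extensions into one binary.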
All right, so let me list my Docker images, and hopefully this is here. Yes, okay, so this is the image that I just built; we'll just say that for purposes of the demonstration. I have this locally now, and in this project I have these test scripts. If we look here at simple.js, this is a very simple test case, and hopefully folks can read it well enough. Here's where I'm defining our options. This test case is going to have 10 virtual users, so I'm simulating 10 people doing this over a duration of 10 seconds. And here I'm defining a threshold: basically, return a non-zero exit code from my build, failing the test, if the rate does not exceed 10 requests per second. Now, we're going to exceed that easily, so we won't have to worry. The test is just going to hit this test URL that we happen to have on the k6 website, at test.k6.io. So I have this script here, which I'm going to run directly from my Docker image, from this console here. This is how things are normally run on the local machine. This will execute for 10 seconds, and then we will see the actual output from the test. Okay; this is better here with it wide. We'll see that in those 10 seconds, we made 1494 actual HTTP requests against that website. We see here that, again, we had 10 virtual users for the duration of the test, and everything was successful; everything returned a 200, so we were fine with that. And we reached a rate of 148 requests per second, which far exceeds the 10 in the threshold. So that's the kind of normal experience someone would have on their desktop. Now I'm going to go ahead and create my Kubernetes cluster.
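For reference, a script in the spirit of the `simple.js` he describes might look like the following. This is a sketch reconstructed from the narration, not the demo repo's exact file, and it needs the k6 runtime (not Node) to run:

```javascript
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  vus: 10,          // simulate 10 concurrent users
  duration: '10s',  // for 10 seconds
  thresholds: {
    // fail the run (non-zero exit code) if we don't sustain > 10 req/s
    http_reqs: ['rate>10'],
  },
};

export default function () {
  const res = http.get('https://test.k6.io');
  check(res, { 'status is 200': (r) => r.status === 200 });
}
```

The same file runs unchanged under the plain binary, the operator, or k6 Cloud, which is the point he makes next.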
So again, I'm using k3d, which is an awesome project; it will create a Kubernetes cluster with, in this case, three different K3s nodes inside of it. This will just take a moment, and then it'll be fully started. Once it has started, I will use another awesome project called k9s to actually look at our cluster. You'll see here that we have some containers being created just for the overall system. I'll pop into namespaces now, and we'll see these are the namespaces available. Now, for our testing... actually, let's install the operator first, how about that? Let me show you a preview: I'll check, and we'll see that there is no K6 resource just yet. All right, so I'm going to pop in here and actually install the operator. Pop into there, and here are all these resources that were just created. Underneath the covers we utilize, oh boy, Kustomize, sorry, my mind went for a moment. We use Kustomize to create all the resources, and then we push those in. Now if I check for the K6 resource, I see that it's actually showing there, but there are no instances of that resource just yet. Think of that resource, in this case, as being a trigger for a load test. So now I'm going to create my demo namespace, and I'll just leave k9s up here running. It created my k6-demo namespace. I'm now going to create a config map which contains all the test scripts. Again, these are the scripts for the different load tests that I've created and have available in the project; when you download this, you'll get all of them, and I'm going to bundle them up into a single config map. That config map will basically just be a repository.
Now, ideally, in normal use, you would apply good GitOps practices: as scripts are altered, have them committed into a Git repository, and then have GitHub Actions recreate this config map in your Kubernetes cluster on any change to the main branch. So we have that in there now. We'll see that there's the namespace, and if I go into config maps, we'll see there are my test scripts; again, these are just all those different scripts. I could edit them from here as well, but do the GitOps thing, do it in Git; that would be better. All right, now that we have those set up, let me go back to my cheat sheet here. Where's my README? All right, we'll just go that way. Now, as I mentioned, I set up everything in Grafana Cloud. I'm not going to go through and describe how to create a free account, but on the Grafana website you can create one directly. Similarly with k6: you can go to app.k6.io, or really anywhere on k6.io, and you should be able to find the URL to create your free account. We'll go through those in a moment. When you do that, you'll want to create some API keys, and since you do not want to commit those in any kind of resources that are in GitHub, of course, I have them here locally. I have my script, which will actually create some secrets so that these environment variables are set up as config maps and secrets. Let me run that. Let's see, let's run, okay. So now I have my secrets in there for my accounts. Here's the Prometheus config, which shows that my secrets are available, and we can pop into this and see that they were created. All right, now that we have those set up, we can go ahead and actually trigger a distributed load test. Again, as I mentioned, I ran that earlier example directly with the image as a single instance.
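The namespace, config map, and secrets setup he's running can be sketched as plain kubectl commands. The names here (`k6-demo`, `test-scripts`, the secret keys and environment variables) follow the narration but are otherwise illustrative:

```shell
# Namespace for the demo runs
kubectl create namespace k6-demo

# Bundle every test script into one ConfigMap (the "repository" of scripts)
kubectl create configmap test-scripts -n k6-demo --from-file=./scripts/

# Keep Grafana Cloud / Prometheus credentials out of Git by loading them
# from the environment into a Secret
kubectl create secret generic prometheus-config -n k6-demo \
  --from-literal=K6_PROMETHEUS_USER="$PROM_USER" \
  --from-literal=K6_PROMETHEUS_PASSWORD="$PROM_API_KEY"
```

In the GitOps flow he recommends, the `create configmap` step is what a GitHub Actions job would re-run on every merge to main.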
Again, that shows that I didn't modify these particular scripts: the k6 Operator will run the exact same test scripts that I ran with the direct single instance. There are no modifications. The only difference is that we use these resources to determine what our parallelism is. In this case, I'm going to use this one here to output to Grafana Cloud, and I'm setting it so that four pods will be created to distribute the load in that load test. Here it points to the config map containing all the scripts, and I'm telling it to run simple.js; again, that's all strictly from the config map I loaded a few minutes ago. Then there are arguments to the actual k6 binary itself. Here I'm just adding something to distinguish between script executions: I'm using a tag to create a custom label in Prometheus saying this is my test ID. Everything is going to have this name of k6-output-grafana-cloud; that just makes it easier to tie things back. I'm giving the name of my custom image; obviously you could use this one, but you wouldn't be able to modify it, not unless you did some underhanded things and got into my account, so please don't. And this determines that the binary should output to Prometheus remote write. Then, coming from the environment variables, we'll be pulling in the config secrets, the URL, and our secrets for our API and accounts. All right, let me get back to my cheat sheet here. Okay, well, here, we'll just run this. I'm going to use your normal kubectl command to apply that resource: the k6-output-grafana-cloud into the k6-demo namespace.
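Put together, the K6 custom resource he applies looks roughly like this. The field layout follows the k6-operator CRD; the image placeholder and the exact output flag name are assumptions from this demo's description rather than verbatim file contents:

```yaml
apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: k6-output-grafana-cloud
  namespace: k6-demo
spec:
  parallelism: 4            # split the load across four pods
  script:
    configMap:
      name: test-scripts    # the ConfigMap holding all the test scripts
      file: simple.js       # which script this run executes
  arguments: --tag testid=k6-output-grafana-cloud -o output-prometheus-remote
  runner:
    image: <your-dockerhub-user>/k6-extended:latest  # custom image with the extension
    envFrom:
      - secretRef:
          name: prometheus-config   # remote-write URL and credentials
```

Applying this resource is the "trigger" he mentions: the operator watches for K6 objects and launches the whole run from one manifest.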
So I'm going to display the pods here. I'm in the k6-demo namespace right now, and there are no pods in it. I'm going to use kubectl to trigger the resource, and now we'll start seeing the whole lifecycle of what the operator is doing. With the creation of the resource, it's going to create the initializer, which inspects your script, and then it determines how many pods to create. It will actually spawn up another one, which we should see... yeah, here we go: the starter. It looks at the script, in this case the simple one, and says, okay, we wanted to run 10 virtual users, so it divvies the 10 users up amongst the pods. One pod will get two users, another pod will get two users, and the other two will get three each, so the full 10 virtual users are accounted for. Each will then run for the duration of 10 seconds. We can see here that it's already completed, so if we pop into this, we can look at the logs for any one of these. We'll see that this particular pod was one of the two that got three users, and it was responsible for 460 requests, all of which were successful. Similarly, we could go to any of the other pods; this was one that got two. So all the virtual users were accounted for, and everything ran successfully. Now, you can also do more: with the free Grafana Cloud, if I had, what is it, the Grafana Agent, or Promtail is the other project, I could have had these logs also going up to Loki, which is our log aggregation service, and then we could see all that output directly in Grafana as well. But I've only done the Prometheus metrics output. So let's pop over to Grafana.
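The divvying-up he describes (10 VUs over 4 pods giving two pods 2 and two pods 3) is just an even split with the remainder spread around. Here's a toy sketch of that logic; this is an illustration, not the operator's actual code:

```javascript
// Distribute `totalVUs` across `parallelism` pods as evenly as possible;
// the first `totalVUs % parallelism` pods each take one extra VU.
function splitVUs(totalVUs, parallelism) {
  const base = Math.floor(totalVUs / parallelism);
  const remainder = totalVUs % parallelism;
  return Array.from({ length: parallelism }, (_, i) =>
    i < remainder ? base + 1 : base
  );
}

console.log(splitVUs(10, 4)); // → [3, 3, 2, 2]
```

However the counts are ordered, they always sum back to the requested VU total, which is why every virtual user is "accounted for" across the pod logs.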
All right, I've created a couple of dashboards. This is not actually in the project source code repository, but if we look here, we'll see that test ID that I noted, which is actually being put on the command line of each k6 instance. That way we can pull it in here. Now, I'll be the first to admit my Grafana skills are lacking; Grafana-fu is not my best. Uh-oh, are we having audio issues? Nope, I can hear you just fine. Okay, whew, you had me worried there; that would be a bad thing. I do have a question if you want. Sure, yeah, read it. So, just to recap: k6 could be triggered from my CI/CD pipeline and trigger the load testing, which requires a custom image previously built. Is that right? That is correct, yes. What I just did here with creating the resource: if you were running in Kubernetes, obviously, you could have your CI/CD pipeline apply that resource to trigger the test. And then you could have checks against your observability stack, let's say, in case you're not using Grafana; you could be using Datadog, you could be using really anything else. Grafana is a big tent, so we want to play nicely with everyone. You could have things checking the state, and then you can fail a test. And if you're using the k6 binary directly in your CI/CD, you get that non-zero return code, and that way you could fail a build directly. So there are definitely options. Now, coming back here to my test results, we'll see that overall, between the four pod instances, we have 1521 requests, and the P95 for those requests was 71 milliseconds. I can drill into this, and again, like I said, my Grafana-fu is lacking, so please don't laugh at some of my dashboards; I'm learning. But here are the results.
This is digging into the actual test run here, and I think we need this piece. We can look at this and see the request rate as it was running through; it had a little bit of a ramp-up, in a way. The gray line here is the total number of VUs; again, we had the 10 virtual users simulated. And then this line is the response time; it looks like around 60 milliseconds, according to those points. Yeah, 60.3 milliseconds going across there. So that was a very simple example: hitting one website as much as possible for 10 seconds from 10 different users. Let's see... oh, a question: is testing with the k6 Operator limited to applications within the same Kubernetes environment? No; in this case, my target system, the system the load is being generated against, is external to what I'm running the operator in. It does not have to be in the same cluster, unless there are certain restrictions, like an ingress that isn't publicly accessible, or you need a special service account or whatever. But no, there are no real restrictions. Let's see, another one through here: it could allow you to load test private endpoints? Right, yes. And you could output to cloud, yep. And k6 Cloud, yeah, let's go ahead and show that now. Oops, let's see. Yes, so a lot of the things going on there, about Git commands in place of API calls, are some of the things that you can do with extensions. So now, the extensions ecosystem, which again is what I work with more...
I'm from a developer background, primarily in Java for like 25 years, wow, I'm old, but I've been in the Go area for roughly the last four years. Anyway, I work primarily with Go developers to extend or enhance the integrations that we have in our extensions. You can do things like embedding the Git API, so that from your actual test scripts you could run Git commands. If you wanted to actually test Git itself, you could do that; we do actually have a repository out there called xk6-git, if I'm recalling it correctly, that will let you do that. I see a mention there about xk6-chaos. Yes, that's very similar: these custom extensions allow you to use the custom tooling you create within your JavaScript. We even have one, xk6-kubernetes, where you can do things from your test script: you can say, create a unique, random namespace to run things in, install something, maybe update a config map, things like that. And you can spawn chaos by killing pods randomly and see what happens. So, a lot of stuff. Let's see: can k6 test against applications running outside of a Kubernetes cluster, in VMs? Yeah, again, as long as it can hit the URLs, that's all up for grabs. Let's see. Yes, on chaos, there are multiple things: k6-chaos is actually a JavaScript library with different chaos experiments that utilize the xk6-kubernetes extension to do things directly with Kubernetes. All right, I think I've got things there. Oh yeah, gRPC endpoints: yes, the k6 binary natively supports gRPC. It also supports other protocols, like WebSockets. Let's see, what else?
There are even extensions where you can load test an SMTP server if you really want to. It's a large ecosystem; there are lots of options.

So let me take a look back here. I think the questions have slowed down, so let me go ahead and run another example. This is going to be a little bit more involved. It's not a massive example, it's still pretty simple, but it will describe a spike. Now, I mentioned these executors; we have executors that can define different traffic shapes. In this case I'm using ramping-arrival-rate, and what that does is allow you to do things in stages. Because it's based on arrival rate, I'm looking specifically at requests per second. I'm going to start off with 10 requests per second and maintain that rate over 10 seconds. Then I'm going to bump up to 150 requests per second, and I'm going to do that within five seconds, so it goes from 10 to 150 in a matter of five seconds. Once it reaches that level, it remains there for 10 seconds, and then I have it start coming down a little, but not as sharply as the initial entry spike.

So we can go ahead and run this script as well. Now, on this particular resource, I was outputting to Grafana Cloud; in this case I'm going to change this to my "doorbuster sale". Let's make the parallelism six, and I'm going to change the name so that it reflects correctly in my dashboard, say, "k6-cloud-doorbuster". Everything else is the same; I'm just having it run a different script. So again, I have to recreate that resource.
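The spike shape just described, written out as a k6 scenario definition. The preAllocatedVUs count and the ramp-down stage are my ballpark numbers, not exact values from the demo:

```javascript
// Spike profile: hold 10 req/s, jump to 150 req/s in five seconds,
// hold the peak, then come back down less sharply than the entry spike.
export const options = {
  scenarios: {
    doorbuster: {
      executor: 'ramping-arrival-rate',
      startRate: 10,          // begin at 10 requests per second
      timeUnit: '1s',
      preAllocatedVUs: 200,   // ballpark; enough VUs to sustain 150 req/s
      stages: [
        { target: 10,  duration: '10s' }, // hold 10 req/s for 10 seconds
        { target: 150, duration: '5s'  }, // spike to 150 req/s within 5 seconds
        { target: 150, duration: '10s' }, // hold the peak for 10 seconds
        { target: 0,   duration: '20s' }, // gentler ramp back down
      ],
    },
  },
};
```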
Now, this is one of the things: it's kind of early days with the operator, so if there are any issues, feel free to write them up, and any contributions are welcome; we're always looking for additional hands on these things. With this, I'm going to have to delete this particular instance of the K6 resource, because a simple change won't be detected and it wouldn't relaunch. So if I come back here (sorry if I'm popping around too much), I'm going to apply the same resource again, and that will recreate it. Okay, I'll switch over to pods, and there we go: we're running six different pods now. Those are all going, so I can come back here and keep refreshing, because we should start seeing this coming in. There we go, the doorbuster is now showing up. Looking at the details, with some work I could probably make this a prettier graph, but this is all going through Prometheus now.

All right, next I wanted to show you the Grafana Cloud real quick. I've already set this up, and I'm using the free version, no subscription, as we call them in k6. So I'm going to do this k6 cloud run. I'll point out that parallelism is one here: with the free tier you can only have one single instance running, otherwise it won't quite work. So it's not as fun, unfortunately. If I go to my README... I thought I had my little copy-paste helper, but alas, I have to do it manually. Oh, the humanity.
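The K6 custom resource being applied for the doorbuster run looks roughly like this. The resource and ConfigMap names are placeholders, not the exact ones on screen; parallelism six matches the demo:

```yaml
# Sketch of the K6 custom resource that the k6-operator watches for.
apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: doorbuster
spec:
  parallelism: 6            # split the test across six runner pods
  script:
    configMap:
      name: doorbuster-test # ConfigMap holding the test script
      file: test.js
```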
So now if I come in here (this is an old run here), we should see, once that container starts running, that I'm simply outputting results to my k6 cloud account, and here they are. Again, this is the free one, and it gives you much nicer dashboards and results than I could build with my Grafana skills.

Let's see, while that's running: does it deploy as a Deployment or a Job? Actually, neither. Because it's an operator, all you have to do is create the resource, and the operator is listening for the creation of those resources. It creates pods directly, so it's not based on any Deployment resource or a specific Job, and it's not cron-based or anything like that; the scheduling is strictly by creation of the resource, pushing that resource up. And yes, the pods do die, but they don't go away. If you noticed, they linger on, so I would have to go and delete the K6 resource, which then cleans up everything. From my use of it so far, I actually threw a script together to make this a little nicer: it deletes the resource if it previously existed, so you don't have to worry about what I mentioned earlier, the job not re-triggering if the resource already existed or was just changed. That could probably be fixed.

All right, so we've got that. Now I do want to show this too, because this is one of the fun things. We're looking at two different SaaS solutions: k6 cloud and Grafana Cloud, right? Well, since we're all now under the same umbrella, in your Grafana you can actually add the k6 app.
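That little helper script I mentioned is essentially just a delete-then-apply. The resource name and manifest path here are placeholders:

```shell
#!/bin/sh
# Work around the operator not re-triggering on changes to an existing
# K6 resource: remove any previous run, then apply the manifest fresh.
kubectl delete k6 doorbuster --ignore-not-found
kubectl apply -f k6-resource.yaml
```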
What that does is recreate the k6 experience inside of Grafana, so you can have everything co-located in your observability platform. If we look here, you'll see my simple test (that's the old one I had) and then the new one we just ran. We can drill in, and now we have even nicer graphs than what I was able to create, so kudos to our k6 developers who created this app for Grafana. You can plug it in and use it for free. Now, as a data source it's actually using the k6 cloud, so it's pulling metrics directly from k6 cloud into Grafana; this is just the visualization.

All right, let's see. I'm trying to keep an eye on the time, and it looks like we've got about six minutes left. I did notice I missed a question from Manuel: in terms of usability, is the reason we must build the image, rather than load the JS test code dynamically, that it must be compiled with the Go language? Yes, it does have to be compiled, and that's because we're using the Prometheus output, which is an extension. Now, don't tell anybody, just between you, me, and the rest of the internet: now that k6 is under the Grafana Labs umbrella, we are working toward Prometheus becoming a default output for k6. So if you're not just doing console output, it'll be embedded in there: on the command line you'll be able to say the output is Prometheus, provide the remote write endpoint to target, and it'll just happen. You won't have this extra compile step that we have right now. We're still working on that, because one of the problems with the Prometheus output right now is histogram support; that is currently being worked on and is very close to complete. Right now, we just released version 0.41 of k6.
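In the meantime, the compile step in question, bundling the Prometheus remote-write output into a custom k6 binary, is one Docker command with the xk6 builder image, so no local Go toolchain is needed. The resulting binary lands in your working directory:

```shell
# Build a custom k6 binary that includes the Prometheus remote-write output,
# using the grafana/xk6 builder image so only Docker is required.
docker run --rm -u "$(id -u):$(id -g)" -v "${PWD}:/xk6" \
  grafana/xk6 build \
  --with github.com/grafana/xk6-output-prometheus-remote
```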
I'm not going to say what version it lands in, but it should be soon. That's being worked on, so you won't need this compilation step. We try to make it as easy as possible with xk6, but it can still be a little hands-on, and some people do run into issues at times. That's why I'm trying to put together some Docker documentation that may make it a little easier, so that as long as you have Docker, you don't have to worry about having a Go runtime set up. Again, trying to reduce some of that friction and make it easy to adopt.

Let's see, I think that pretty much covers everything for the demo. Now I did want to finalize some things with these slides here. All right, so with all that, where do we go from here? Again, hooked in with your observability, you want to bring in and measure those four golden signals. Whether you're using Grafana or something like New Relic or Datadog, make sure you're watching these elements: check the latency, check the amount of traffic, watch how your application behaves when it's saturated, and of course track the number of errors. By doing this testing upfront, you're going to get a lot more bang for your buck. Just a... whoops, there we go.
I didn't realize I had my animations on there. So, include the checks to make sure the network quality is good, that you're accommodating your Black Friday scenario, that you're not having all kinds of 500s and other errors going through, and also check that your infrastructure is not under-provisioned, that your setups are enough to accommodate your needs. And again, that's pre-production. Include that shift-left, bring it up front, make sure everything's running, use your observability platform to watch and confirm everything is going well while you're putting everything together, and that continues post-production.

And with that, oh, look at that, one minute to spare. I really do want to thank everyone for their participation. My contact info is there; you can reach me on Twitter, again on Mastodon, and on LinkedIn, and the GitHub repository for the full demonstration is listed there. And thank you all.

Thank you so much, Paul. Thank you everyone for joining us and for all your questions. You know exactly where to find Paul, and this presentation and slides will be available later today online; you can use your registration link again or go to our online programs YouTube playlist. Thanks again for joining us. Thank you, Paul, and we'll see y'all at another CNCF Live webinar soon. Alrighty, thank you.