Hello everyone, good morning. Welcome to the session "Record and Replay: API Test Cases and Data Mocks Without Writing Code" by Neha Gupta and Shubham Jain. So without any further delay, over to you, Neha and Shubham.

Thank you, Nidhi, and thank you everyone for showing up. Good morning and good evening, because we have people all across the world. I'm Neha and this is Shubham. Today we are going to talk about how you can easily use the record-and-replay methodology to generate API test cases. I'm just quickly going to share my screen. I'll tell you a little bit about ourselves, and then we'll share our journey of automating the testing process in our previous teams at different organizations: how we landed on a single approach, and the pros and cons of the different approaches we tried along the way. At the end, Shubham is going to show a demo of how you can record, replay, and create API test cases without writing any code. He will also show that if you are already using Selenium UI tests, you can use a plugin, an extension to Selenium, that can mock your back end, including the infrastructure calls.

As I mentioned, we are both maintainers of Keploy, an open-source API testing platform that virtualizes the infrastructure. Before this, we were working in the office of the CTO at Indian SaaS companies like Lenskart and FarEye, in e-commerce and logistics. Our role was to run experiments every couple of weeks, and we were releasing very fast, often iterating on the code within a single day. The team was very small because it was meant to experiment with new technology and new features, whatever people were expecting from the company. And one thing that was very limited was time. We had very strict timelines, and because of that we were able to do very limited testing, maybe a couple of happy flows. We wanted to focus on functional testing because these were experimental features; if the functional test cases were okay, scalability could be worked on later. And with that limited testing, regressions were obviously introduced.

So, just to keep the releases sane, we wanted three things from an automated testing process. One, functional test cases: a tool or approach that could bring out the real-world scenarios from production, so we could functionally test whatever we were releasing and see whether any regression had been introduced. Two, something with which we could easily write and update the test cases; it shouldn't be hard, because time was limited, like I mentioned. And three, something that could mock the infrastructure without my input, so all the infrastructure orchestration could be done automatically. That's all we wanted, so we could move fast. And we explored a couple of solutions. Initially, we started with a simple automation test suite: we write the functional test cases, and on every release we add a sane amount of test cases, and that should be fine.
But the reality was different, because whatever test cases we were writing needed to be updated as soon as we developed a new feature or even changed an existing one. So they were very brittle: I couldn't really rely on the test suite I had already prepared, given the time it consumed. That was quite painful. And not only that, when a single microservice talked to multiple other microservices, it became even more painful because of shared test infrastructure. By that I mean: as a QA, I've written an automation suite and set up a test database and the other test microservices my application talks to; now somebody else, another QA from my team or a developer working on the same application, comes along and uses that same database, tests against the same thing, and changes a few records. That shared database kept breaking in different scenarios when multiple people were working on the same thing. So again, that was not really working out for us. It was time-consuming, and it was brittle as well.

Then somebody told us, "Hey, why don't you test in production?" And we were like, "Are you crazy? Why would we test in production? Why would you even say that?" It was a very scary thought to sit with. But when we actually deep-dived and explored testing in production, it made sense. When we talk about automating testing, we want to create the best possible simulation of the production environment whenever we build a QA environment. So if we can test in production, that's the best-case scenario. So we said, okay, let's explore how people are testing in production today. We discovered that people are doing record and replay, shadow testing, and methodologies like tap compare. We explored each one of them, and I'm going to talk about them now.

First was shadow testing: you have a stable application running in production, serving traffic, requests and responses. For the same application, you create a new deployment version; you deploy it, but you do not release it in production. Then you mirror the traffic, that is, you duplicate the traffic to the new version of the application in production itself, and compare the responses. If the responses match, it's good to go. This kind of approach works well if your application is stateless, like audio streaming or something similar. But it doesn't work when your application has dependencies. We were confused: with the different microservices my application talks to internally, or third-party services, what is the new version going to talk to? I cannot just connect it to the production databases, right? All the POST, PUT, and DELETE calls, all the mutations, would fail, right? So we were really curious how people were doing this, and we discovered that some people actually connect the new version of their application to the production database itself. We were blown away. How is that possible? So we dug in more.
Now, the limitation here is that your API requests need to be idempotent. If you can guarantee idempotency in your APIs, then you can use this approach. By idempotency, I mean that if you make the same request twice, the state of the dependency should not change further. For example, an API that says "update the balance to 200 rupees" is an idempotent request. But if the existing balance is 100 and the API says "add 100 to the balance," that is a non-idempotent request, because if it runs twice, the balance goes from 100 to 200 and then to 300. Our APIs were non-idempotent, so we went on to explore more approaches.

Then we discovered that what some people do is split reads from writes: the GET APIs read from the production database and you compare those results, and for the write APIs you do the testing locally or continue with automation scripts. But we were not satisfied, because while the GET APIs were fine to test, the POST APIs were even more important to us. So we went on exploring.

Next we thought of creating a shadow database: if we have a production database, why not create a replica of it, kept in sync in real time, so that when a request is served in production and a database call is made, all those changes are reflected in the replica? You might have figured out the limitation already. It sounded like a good idea initially, but when you replay the same request against the new version of the application, application v2, there is a replication lag between the database and the shadow database. Let's say a mutation is still replicating while the duplicated request tries to make the same change; you might get a different result. Sometimes, depending on the parallelism, you get the same response and can compare, but many times you can't. So it was not really a sound approach. It was also expensive to keep a replica of the databases and keep it in sync in real time. Not a very sound approach to go ahead with.

So we thought: if replication lag in real time is the major problem, how about we create something like a shadow database but test later in time instead of in real time? What we did was take a database snapshot into a non-production environment, so we were replicating the production setup, the simulation, into a non-production environment with a shadow database there. We recorded the user traffic, the API calls served to application v1, and we replayed them against application v2. We were expecting it to work, and it did work well. But the problem was that eventually this shadow database drifts out of date from the production database, and then you have to redo all those operations. It was a very operationally heavy approach: you have to record, you have to replay, you have to keep the shadow or snapshot DB maintained, and if you don't, it starts breaking. So it was operationally heavy and expensive, and we didn't go ahead with it.
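Going back to the idempotency point above, here is a minimal, self-contained Go sketch of the difference between the two kinds of balance update. The handlers and routes are hypothetical, purely for illustration; they are not from any codebase mentioned in the talk.

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"sync"
)

// A toy in-memory "account" to illustrate idempotent vs. non-idempotent writes.
var (
	mu      sync.Mutex
	balance = 100
)

// Idempotent: "set the balance to X". Replaying the same request twice
// leaves the dependency in the same state as replaying it once.
func setBalance(w http.ResponseWriter, r *http.Request) {
	amount, _ := strconv.Atoi(r.URL.Query().Get("amount"))
	mu.Lock()
	balance = amount
	mu.Unlock()
	fmt.Fprintf(w, "balance=%d\n", balance)
}

// Non-idempotent: "add X to the balance". Replaying the same request twice
// moves the balance from 100 to 200 and then to 300.
func addToBalance(w http.ResponseWriter, r *http.Request) {
	amount, _ := strconv.Atoi(r.URL.Query().Get("amount"))
	mu.Lock()
	balance += amount
	mu.Unlock()
	fmt.Fprintf(w, "balance=%d\n", balance)
}

func main() {
	http.HandleFunc("/balance/set", setBalance)
	http.HandleFunc("/balance/add", addToBalance)
	http.ListenAndServe(":8080", nil)
}
```

Shadow testing by duplicating traffic is only safe against handlers of the first kind; the second kind is exactly what broke for us.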
And that's when we thought of creating a virtual database. But before that, let me summarize what we understood so far: testing in production is good. The pros of the record-and-replay methodology are that it is a low-effort approach: you can easily replay real-world traffic against the new version of your application and see how it behaves. The edge cases are not fully covered, but it gives good coverage, because you discover new scenarios and go through all the API test cases, the different use cases your users actually perform on the application in production. So you get a good amount of coverage easily. The cons: one, the dependency state. The infrastructure is not that easy to mock, and if you're creating a shadow database or any mock, you need to maintain it very regularly, and that was causing flakiness in the test cases. Two, this kind of approach is well suited to load testing, where you record and replay a certain amount of traffic and see how your application behaves under high load, but for functional test cases it was not that good, because if any API starts failing, then all the recorded traffic on that API starts failing and you have to debug each and every call, which is very hard because there are so many. If the failures could be reduced to a small number, it would be a good functional testing approach as well. And write mutations are very tricky to handle. So that's where we were: the shadow database, or rather the snapshot DB approach, was kind of fine, but not long-lasting.

So we moved ahead with the virtual database I was talking about. With a virtual database, when you record the API calls, you also record the stubs of the database, or of any other dependency your application talks to in production. Let me give an example; this virtual database approach is what we finally went ahead with. Say I'm recording a production request: "get games for user Thompson." The application talks to a Mongo database, reads two tables, and returns the games that user Thompson plays. All of this is recorded from production. Now, in the test environment, you replay the same request, "get games for user Thompson," against the new version of the application. Your test database is there in the test environment, but user Thompson doesn't exist in it, so the application response is going to be different. You cannot just directly compare and say, okay, my test suite or my record-and-replay is automated. So how do we really do that? By virtual database I mean that when you record that production call, you also record the stubs: Thompson, cricket, volleyball, carrom, boxing. And when you replay it, you just provide those recorded entries in place of the database, instead of setting up a complete database or hand-writing a data mock like a JSON file. You don't need to do that. That's what Keploy does: it mocks the database, or any other test service or microservice your application might be talking to, and returns the same thing to the application, just like the production dependency would. Then you see how your application behaves when it gets that same response: is there any regression introduced?
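As a rough illustration of this idea (not Keploy's actual internals or storage format), the recorded stub for the example above can be thought of as the dependency query plus the rows it returned, and replay just means answering the same query from that recording instead of from a live database:

```go
package main

import "fmt"

// recordedCall pairs a dependency query captured in production with the
// rows it returned, so the same answer can be served during replay.
type recordedCall struct {
	Query string
	Rows  []string
}

// virtualDB answers queries from recorded stubs instead of a live database.
type virtualDB struct {
	stubs map[string]recordedCall
}

func (v *virtualDB) Find(query string) ([]string, bool) {
	call, ok := v.stubs[query]
	if !ok {
		return nil, false // nothing was recorded for this query
	}
	return call.Rows, true
}

func main() {
	// Stub captured while recording "get games for user Thompson" in production.
	db := &virtualDB{stubs: map[string]recordedCall{
		`games.find({user: "Thompson"})`: {
			Query: `games.find({user: "Thompson"})`,
			Rows:  []string{"cricket", "volleyball", "carrom", "boxing"},
		},
	}}

	// During replay, the application asks the same question and gets the same
	// answer it got in production, even though no real database is running.
	if rows, ok := db.Find(`games.find({user: "Thompson"})`); ok {
		fmt.Println(rows)
	}
}
```

In practice Keploy records and serves these stubs for you; the point here is only that replay never needs the real Mongo instance.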
So if your application responds the same way it was responding in production, we are good to go for that test case. This is how a virtual database, or virtual test infrastructure, can be set up easily while recording and replaying, without really needing to set up a test environment. Now, Shubham is going to talk about the architecture and how Keploy does this, and he'll also show a demo. Over to you, Shubham.

Thank you, Neha. Yeah, coming to the architecture. As Neha mentioned, what we settled on is: once you install the Keploy SDK into your application, every incoming and outgoing call gets recorded. So when you make API requests, those get recorded along with all the infrastructure calls, which could be databases, caches, or external or internal microservices. And since we're capturing all the infrastructure calls, they can be replayed anywhere: in your CI environment, locally, or in any other test environment if that's needed. We can also pick and choose what we want to mock and what we don't, which is one of the interesting things about this platform. You don't need a whole test environment: if you want to mock the entire environment and just test the application under test, you can do that. If you want to mock only certain things that are very hard to replicate in your test environments, you can do that too. That way your tests and the system under test stay really solid.

Now I'll quickly walk you all through the demo. To install Keploy, you can just go to GitHub; Keploy is open source, so feel free to clone the repository and run docker compose up. We also have a binary, but to use the binary you will also have to install Mongo, so if you're using Docker, that's the easier way to spin up a Keploy instance locally. If you want to run it in, say, a large Kubernetes cluster, we also have a Helm chart you can use to install it in Kubernetes. I already have Keploy running locally. Basically, once you clone it, all you have to do is docker compose up, and that should spin up Keploy for you. If you're one of the contributors, like me, you can use the developer compose file, which lets you make local code changes and have them reflected in your Keploy installation. Once that's done, you can open localhost:8081, and that's where Keploy will be running. We have two sections, test cases and test runs. Under test cases, you can see on the side that we have different test suites. In your case this might be empty when you first install Keploy, so you'll first have to create a test case, and then you'll have test cases as well as test runs. To create a test case, you'll have to install the SDK. Currently we have full support for Go, and we now also support a lot of functionality in Java as well as JavaScript. In fact, one thing I'm also going to show is a Selenium extension, which injects the JavaScript SDK into the front end at runtime. I'll show that, but first I'll quickly show how we can generate API test cases. For that, we'll be using a sample Go application. You don't have to write any code.
It's just a one-time integration effort that anybody in the team can do along with you in the code base. The sample I'm going to show is available in the samples-go repository, which you can see here. Once you clone it, it looks something like this. To use it, we can run docker compose up to quickly bring up a Postgres instance, or you could just install Postgres locally and use that. Once Postgres is up: the sample application is a URL shortener. It has four methods. When you do a POST, you can submit a URL, for example selenium.com, and you get back the shortened URL. When you open the shortened URL, it redirects back to selenium.com. That's the basic functionality of a URL shortener. You can also update that URL, that is, change what the shortened URL points to, or you can delete the URL. So let's quickly see how that works.

Here I'll be using Hoppscotch to quickly make all these API requests; you could also use curl or Postman, whatever you're comfortable with. First, let me also start the application. Perfect. The application is started, Keploy is running. Now, whatever requests I perform will be captured, along with the dependency calls. So we're going to hit our shortener. First, we'll do a POST. We could post github.com, but in fact, let's just post selenium.com. We did that, and as you can see, we get back a timestamp as well as a shortened URL. If I open the shortened URL, it should redirect to Selenium; this is basically the HTML page, and if I put it in the browser, you can see it goes to Selenium. Now maybe I want to change the URL, so I can take that same entry and change it to, I don't know, maybe selenium.dev, a different URL, and post that. Yep, that's done too: rows affected, one. And I could delete the URL altogether. So I've performed a bunch of writes. And what happens if I try to get something that doesn't exist, say an invalid shortened URL? That's another case.

Now, if I go back to Keploy, I can see I have the selenium demo suite, and I have a bunch of these calls captured: the POST we did, the GET, the URL update, the URL delete. We also tried to get a URL that didn't exist and got a 404. I can go deeper into these test cases. For example, I can open the POST and see the request, the response, all of those details. What's interesting is the dependencies: here I can see all the database queries that happened. And this is not just a visualization tool for looking at your database queries or traces; Keploy is actually going to mock all these database queries at runtime, so I don't need to have the database running when I run this test suite against my application. Another thing to note is an interesting parameter called noise. The timestamp is a time-based value, so as you can see in the response, it's always going to be different. It's automatically flagged as a noisy parameter that will keep changing, so in future test runs it automatically gets ignored. You can obviously change this, or you can manually annotate fields that you don't want to compare; all of that is possible.
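If you'd rather script the same requests than click through Hoppscotch, a rough Go sketch could look like the following. The port, route, and JSON field names here are assumptions for illustration only; the actual samples-go URL shortener may use different ones, so check its README.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// Assumed base URL and routes for the sample URL shortener; adjust to match
// the real sample application.
const base = "http://localhost:8080"

func shorten(original string) (string, error) {
	body, _ := json.Marshal(map[string]string{"url": original})
	resp, err := http.Post(base+"/url", "application/json", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	var out map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	// Expecting something like {"ts": ..., "url": "http://localhost:8080/abc123"}.
	short, _ := out["url"].(string)
	return short, nil
}

func main() {
	// Each request made while Keploy is recording becomes a test case,
	// captured together with the Postgres queries it triggered.
	short, err := shorten("https://selenium.com")
	if err != nil {
		panic(err)
	}
	fmt.Println("shortened:", short)

	// Hitting a short code that was never created exercises the 404 path.
	resp, err := http.Get(base + "/does-not-exist")
	if err != nil {
		panic(err)
	}
	fmt.Println("missing code status:", resp.StatusCode)
	resp.Body.Close()
}
```

Whether the traffic comes from Hoppscotch, curl, or a script like this makes no difference to what gets recorded.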
Now, once this is done, I can quickly run this test suite. To do that, I stop the application, and there are two ways to run the tests: either we set an environment variable, or we add a test file. In this case we have a native integration with go test; if you're using a Java application, there's a native integration with JUnit. The benefit is that if the developer has already written any test cases, these recorded test cases run along with them, and you also get all the code coverage and other details you normally get with JUnit tests or, in this case, go test. So I'm going to run this with coverage, and I can also shut down my database, because I don't need it anymore. You can see there's a short configurable delay just to ensure the application has started. There are seven test cases and all of them pass. I can also see there's 70% coverage, and because it's a native integration with go test, I can go file by file and see line-by-line coverage in my IDE: IntelliJ and the other JetBrains IDEs, and here I'm in GoLand, natively support go test and JUnit. Green means the line is covered, red means it's not covered, yellow means it's partially covered. So all of that works too.

Now, let's make a code change, something that will fail a test case. One way to do that: maybe somebody made a mistake, or an intentional change, and changed the field name in the response from url to urls. As soon as I do this, I can run the test suite again, and you can see one of the test cases failing. To dig into it, I can go to the UI to see what really happened. I'll have to refresh this. Yeah, perfect. The second entry is the run where there was no error; the latest one is on top, and here we can see there's one failed test case. If I open the failed test case, I can see a quick diff: in the response, instead of url I'm getting urls, so the difference is clearly flagged. And note that during this test run I did not need a database at all; all of that was automatically mocked by Keploy. Also, like we talked about with the noise parameter, body.ts is a noisy field, so even though the timestamp is different, it is not flagged, because the timestamp is always going to be different, always going to be noisy.

Now, this could either be a correct change, something we were expecting, or it could actually be a bug. Let's say it's something we were expecting. In that case, I can just click to normalize it. I didn't have to go and update any test suite; I just click and normalize. Once I normalize it, all future test runs consider urls the de facto expected response. So if I go back and run the tests again, as you can see, all the tests pass. I can also check in the UI: the particular test that was failing now shows urls, and that's the expected, normalized response. Everything is working fine, everything matches. So that's the API testing side.
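For reference, the go test integration mentioned above is roughly a small test file that starts your app in Keploy's test mode and lets the recorded cases run as ordinary Go tests. The sketch below is based on my recollection of the early Keploy go-sdk; the import path and helper names are assumptions and may differ in your SDK version, so check the SDK README before using it.

```go
package main

// A rough sketch of the go test integration described above. The import path
// and helper names (SetTestMode, AssertTests) are assumptions based on an
// early version of the Keploy go-sdk and may not match your version.
//
// Run with coverage, e.g.:
//   go test -v -coverpkg=./... -covermode=atomic ./...

import (
	"testing"
	"time"

	"github.com/keploy/go-sdk/keploy"
)

func TestKeploy(t *testing.T) {
	// Switch the SDK into test mode so recorded dependency calls are mocked
	// instead of hitting the real Postgres instance.
	keploy.SetTestMode()

	// Start the application under test in the background.
	go main()
	time.Sleep(5 * time.Second) // give the server a moment to come up

	// Replay the recorded test cases and report pass/fail through go test.
	keploy.AssertTests(t)
}
```

Because the recorded cases surface through go test, coverage tooling and IDE test runners treat them like any other test.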
So Keploy, if you notice, is not just testing the APIs. What makes it special is the infrastructure side: we record infrastructure calls like databases and external APIs, and we replay them. So we thought, let's take this one step further, and we started working on a Selenium extension. We feel Selenium IDE is a really great record-and-replay tool, so we made an extension for it. We're still in an early release, and if you would like to try it, you can go to the Git repository. If you'd like to get notified when it's released stably or in public beta, you can go to the Keploy landing page and submit your email there. Meanwhile, for the open-source version, you can clone Keploy and then go to the Keploy browser extension; the code is available there. Once you clone it, you can load it into Chrome, and I'll show you how. Currently it only supports Chrome. You also need to install Selenium IDE. Let's say you've installed Selenium IDE version 3; I know version 4 is also coming out, and we're working with the developers of Selenium IDE 4 on a version 4 plugin, so that's going to come out along with Selenium IDE v4. Here we'll be using Selenium IDE v3.

Once you have the IDE installed, you go to Chrome; here's a Chrome window. Go to Chrome extensions and load an unpacked extension, then go to the Keploy plugin directory, inside dist. Once you're there, you can easily load the plugin: all you have to do is select the directory, and the Keploy plugin is loaded. That's the basic installation; you don't have to change anything in your code. One of the benefits here is that while on the backend side you have to insert the SDK, this extension does it automatically. In fact, it doesn't even have to be your own application; I can use any application. Right now we support XHR, and once we also support fetch requests, you can use literally any application and capture the backend traffic along with the Selenium IDE recording. So let me show you that.

Once this is done, let me just close this and open Selenium IDE. Let me create another test, or in fact, let me create a new project. Okay, let's give it a name. When I add this test case, one thing to note is that by default you get an untitled test case. Please don't use that, because we need the test ID to map the infrastructure calls to the test cases, so always create a named test. Then we hit record. Just to show what's happening, I'll quickly go to the network tab, and as you can see, when I type a query, there's a new XHR request for each keystroke. I open this, I can type, say, "OSS is awesome," you can see a bunch of API calls happening, and I can go to the next page. Now, once all of this is done, I can stop recording. One thing to note: you should not just close the browser at this point; always stop the recording first. If you close it without stopping, we won't be able to save the recording to the Keploy database. And once this is done, we can quickly replay the current test.
As you can see, all those API calls that were happening earlier are not happening anymore, because they are automatically mocked. My hands are off the keyboard; it's all Selenium IDE doing this. I'll just play it once again. The IDE is running the test: it's opening google.com and doing all the key presses it just recorded. The API calls are handled automatically by Keploy, so right now it's not talking to Google at all. The use case here is testing the front end in isolation. These calls here are basically analytics calls, so you can ignore those; all the other calls are automatically mocked. So let's say you have your back end and you want to test your front end in isolation from it: you can easily do that. You want to mock certain APIs: you can easily do that too, all on the fly, without writing any of it by hand. I hope this gives a good overview of what Keploy can do. Keploy is basically a virtual infrastructure platform, so that you don't have to deal with all the uncertainties and complexities of test environments. I hope the demo was helpful.

I'll quickly go back to the slides to conclude. Right now, like I said, we have experimental support for Java and TypeScript along with Go: full support for Go, and we're adding support for Java and JavaScript as well as front ends. We have a UI where you can edit and visualize the test reports. We have integrations with native test tooling. We can also automatically detect and ignore time-sensitive data, which you can obviously edit. For future work, we're working on contract testing; I think that's going to be interesting, and it can help surface a lot of problems. If you think about it, we already support it in a sense; we're just going to make it more robust and obvious. Recording from live environments, like I just showed you: we want to make that even easier to use. It could be a very high-throughput environment; say you have millions of requests every day, then recording from that environment requires a lot of performance tuning, so we want to make it scalable enough to handle those situations. We want to make the Java and TypeScript SDKs stable, along with the Chrome extension. So if the Chrome extension, the Selenium extension, looks interesting, you can stay updated about it by signing up for the newsletter; go to keploy.io, it's right on the front page. Or you can try it locally and star our Chrome extension repository, and you'll keep getting updates there. Also, an agent implementation: like Neha talked about, there are some huge advantages to an agent implementation, so we're looking at how we could work closely with the OpenTelemetry ecosystem to have a great agent implementation that makes installation even easier. Then you wouldn't have to make any of the code changes I talked about to install the SDK; recording and writing tests is already no-code, but the integration still requires some code changes. And finally, generating more tests from existing tests: if I have a bunch of tests, can we infer from them and generate more test cases, so we can increase coverage? That's another use case. Thank you. I hope the presentation and demo were useful. It was really fun presenting and showing what we've been building at Keploy.
We are happy to take any questions now. Thanks, Shubham. Thanks, Neha. We have a few questions in the Q&A section, so we can take those up. The first question is: was this replica also allowing you to access the PII data of customers? If not, how were you making sure PII data did not get exposed while testing?

That's a great question. Security and privacy are cornerstones of today's products. Currently, in the open-source version, and Keploy is fully open source, yes, everything is captured, so that will include PII data, that's for sure. However, we are adding filters where you can annotate data to ensure that access is minimized, so you have that kind of access control where the data is obfuscated from people you don't want to have access to it. That's something we're definitely working on. Thanks for asking; it's a great question, and something we've been keenly working on.

Another anonymous question: how can we edit a test case to use data from one test in another? I think the idea is: if you have test data, how can you create new tests from it? That's similar to what we talked about, generating more test cases from existing tests. Essentially, once you have test data and some dependencies, you can go and edit any test case and create a new test case from it. We already support that: similar to how we normalize, there's an edit button which you can use to edit a test case.

Another question: would Keploy work with .NET applications where the underlying APIs are not true REST APIs and have heavy objects passed in the API? On the backend side, the Keploy SDK for testing a .NET application is not supported yet; we don't have a .NET SDK. However, what you could do is use the browser extension: if you want to mock a .NET backend and test the front end in isolation, that is currently possible. Your back end could be written in .NET, and for your front end, which is JavaScript, the extension automatically inserts the SDK, and there it is language agnostic. Even if the payloads are heavy, I don't see a major problem in supporting that. So if you want to use Keploy to mock your back end from your front end, using Selenium IDE, you can do that even if your application is written in .NET.

Another question, also on the Selenium integration: would Keploy work for mocking when executing Selenium scripts using TestNG and triggered by Maven commands? That's also a great question. We have honestly not tested that outside of Selenium IDE. But if you're writing your own Selenium scripts, we can definitely provide an integration where, instead of recording through the IDE like we did here, you record the infrastructure calls in isolation and use an identifier in your Selenium code. That way, whenever you run that actual Selenium code, the back end is handled by Keploy automatically, so you can run it with your Maven commands and your existing CI pipelines. Thank you, that's a great question as well.
Another cool question we have: is it possible to apply load using these test cases? I think the question is around load testing. We don't support load testing right now, but one potential path is that we could export these test cases to k6 or JMeter, where you could use all of these requests for load testing. That is something we are working on. However, we're not planning to support load testing directly, because to do large-scale load testing there are many things that dedicated load-testing tools support, like multiple runners spread across your Kubernetes clusters that can generate a truly high amount of load from different IPs and with different user-agent strings. Those kinds of capabilities are where tools like k6 shine. What we would do is let you export the test cases, so you can choose any load-testing platform of your choice and use that. Yep, thanks for the questions.

Thanks, Shubham and Neha, for the wonderful session and for sharing your experience with us.