Hello everyone. Introductions, I believe, have already been made. I'm Abhishek, and presenting with me today is my colleague Mark. We are going to talk about the testing challenges we face with microservices and continuous deployments. We'll give a few minutes for people to settle in, but to set some context, why are we here? Mark and I are part of a team at Red Hat that is focused on building, deploying, and managing a cloud-based external service. If you attended the keynote session earlier today, we talked about the SD org, the service delivery organization, building public-facing services as part of cloud.redhat.com, and we are one of the teams building one of the services behind that. In building that service using microservices and trying to get to continuous deployments, we faced several challenges and came across certain hurdles, and we are here to share some of those challenges and some of our experience with you.

In terms of the agenda, we'll start by looking at some of the software development models, the testing challenges that are inherent to them, and how microservices and continuous deployments play a role and exacerbate those situations in some cases. We'll take a step back and take a broader look at the goals around quality assurance that we had for our teams and for our service in particular, and then look at the dev-QE collaboration and the process changes we had to make in order to achieve our goals around quality. Finally, we'll take a look at the test automation and the tooling that we had to develop to achieve these goals, and we'll wrap up with a demo.

If you look at the traditional software development models, in one of the common ones, waterfall, the phases are sequential: we move from one phase to another in sequence, and the goal is to make sure the requirements are clearly defined and the design is fully nailed down before you move on to your development and testing phases and so on. With prototyping, we use the development of a prototype, and evolve that prototype, to help clarify, define, and evolve the requirements. Spiral is similar to prototyping with some additional phases around risk assessment, evaluation, and so on, but the general idea is the same. In all of these models, the goal is to get the software being developed, the functionality being produced, more aligned with the requirements; the focus is not so much on getting development and QE more aligned.

Agile, by its nature, is designed to bring the development and testing phases a little closer to each other. In the best-case scenario, you can have software being released every sprint. In the worst case, you can expect anything from one to three or four "stabilization" sprints before software is released. In practice, what we saw was that even in the best case, with software being released at the end of the sprint, we had three-week sprints, and for the first seven or so working days of the sprint you would have design and development, followed by the remaining portion of the sprint taken up by testing, bug fixes, stabilization, and hopefully a release at the end.
The focus of agile is typically around iterative development and deployment, where your requirements cannot be nailed down for the next six to eight months or a year, and you want to allow your development teams to pivot in case requirements change. So given this out-of-phase nature of development and testing, where development goes first, followed by testing, what are some of the challenges that result?

One, there is increased turnaround time for testing and verification, precisely because they are out of phase. This is especially true, or rather exacerbated, in the case of globally spread-out teams; in our case specifically, we have development as well as QE teams spread all across the world, in North America, EMEA, APAC, all over. Two, we have issues around effort duplication in test automation. In our case, development teams did their own test automation around unit tests and some integration testing, while QE teams built their own test cases and their own automation around integration tests, system tests, E2E tests, UI testing, and so on, but the two existed in their own separate little worlds and completely duplicated work. All of this results in higher turnaround time for software updates and releases if we want to wait for full automation and full quality assurance testing. And when we are faced with tighter deadlines and pressure to release early, we run the risk of releasing without proper testing and of introducing bugs as a result.

So how do microservices play a role in all of this? Microservices can help your development teams be more productive, because individuals or smaller teams are focused on smaller chunks of functionality. Each microservice encompasses a smaller chunk of functionality, and it becomes easier to get development done, to introduce new features, to make changes, and even to onboard new members onto your team, because the surface area anyone coming onto the team has to absorb before making changes is small. But with each microservice, you also get more moving parts. Now you don't have one big monolith; you have 10, 15, any number of microservices, and all of these are moving parts, each changing on its own schedule, creating more testing overhead.

So if there's more testing overhead, what about continuous deployments? The whole premise of microservices was faster development, faster deployments, and eventually continuous deployments. Each service in a microservices architecture needs to be tested independently, but it also needs to be tested in an integrated fashion. You can't get away with testing just a single microservice, because it has touchpoints with every other microservice, or many other microservices, within your overall application or service, and all of those need to be tested in conjunction, as a unit, in your pre-prod environments, integration, staging, whatnot, before you can actually release that particular microservice. So consequently, what we see is that some of the benefits microservices promised are not fully realized.

So what are some of the possible options? Can we aim for full automation testing? Well, that's a good idea, but it takes time and doesn't happen on day one.
Especially when you're building a new service based on microservices from scratch, the general idea is to build first and then focus on quality. When you're just getting started, your first target is to have some functionality in place, go for alpha testing, go for beta testing with friendlies, with internal users. Quality is not the focus from day one. And even if you specifically make quality and full automation a focus on day one, that will just get you delayed releases, missed deadlines, risking your go-to-market, and so on.

Well, should developers take over testing? Developers are already implementing unit tests and other forms of integration or automation testing. Can they pick up the full slack, so we just combine QE and developers into one single common team? Well, there's a reason why developers don't review their own code, and there's a reason why we're not good at finding our own bugs. No matter how good the code you write, no matter how many unit tests and automated tests you write, you put the code in front of QE and they will find bugs. That is certain. And, as a shout-out to our own QE teams, these folks are awesome: you put something in front of them, they will find bugs. So the general idea here is that you shouldn't throw away the years and years of QE expertise that various teams have built; we should still try to leverage that. So how do we leverage the QE expertise but still get them more in phase with the development process?

When faced with this question at our end, we took a step back and looked at the broader goals we had around quality assurance for our own service, and we came up with four high-level goals. Nothing groundbreaking, just common stuff. One, we wanted to improve the quality of the code being pushed. Two, we wanted to improve the reliability of the service deployed in production at any given point in time. Three, we wanted to enable frequent and faster deployments. And lastly, we wanted to reduce the effort duplication between our development and QE teams.

So we now have these goals; how do we achieve them? We identified four areas, or aspects of quality assurance as we call them, where we wanted to focus and spend our energy in order to achieve them. First, defining code quality guidelines and best practices. Second, automation testing, focusing on both pre-merge and post-merge automation testing. Third, production testing: testing the service that we have in production and monitoring it. And lastly, bug triage. We'll go over each of these areas in detail next.

So what do we mean when we talk about coding standards and best practices? Here the focus is not so much on the functionality and accuracy of the code being pushed, but more on the quality of the code being pushed: things like code format and code structure. In our case, our microservices are based on Golang, and some of the things that can help us in this particular area are tools like gofmt or golint, some of the linting tools. Beyond that, we can look at language-specific guidelines, the idioms of various languages, Go-isms for example, or team and project standards and best practices for writing code, and so on.
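As a toy illustration (not something shown in the talk) of the kind of thing gofmt, golint, and Go-ism-style review guidelines typically enforce, consider a small, idiomatic helper:

```go
// Package config: a toy example of the kind of code-quality points linters
// and review guidelines check; the function and its purpose are hypothetical.
package config

import (
	"fmt"
	"os"
)

// LoadConfig reads the service configuration from path.
// Typical points a linter or reviewer would look at here:
//   - gofmt-clean formatting,
//   - a doc comment on every exported identifier (golint),
//   - lower-case error strings that wrap the underlying error with %w,
//   - errors handled immediately with an early return.
func LoadConfig(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("reading config %s: %w", path, err)
	}
	return data, nil
}
```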
Next up is code reviews and test execution. We wanted to make sure we have the tooling, and we built the tooling required, to conduct reviews: to make sure reviews are happening, that reviews are being done by the people who have the domain expertise or the functional knowledge in that particular area, and that tests are being executed via CI and are passing before a pull request can be merged, and that this always happens and is never violated. Lastly, we wanted to focus on code coverage, specifically around, A, measuring code coverage, and B, making sure that with each pull request we increase or at least maintain our level of code coverage and don't regress.

Next up was automation testing, divided into two categories, pre-merge and post-merge. Pre-merge testing is executed against each pull request and targets things like unit tests, functional and component testing, API contract testing, and integration testing, possibly with some of the dependencies mocked. Post-merge covers testing done in a pre-prod environment like integration, QE, or staging, whatever you might have, and targets integration testing, end-to-end testing, system testing, performance testing, UI testing, and so forth (a quick sketch of one way to express this split in Go appears below).

The third focus area we identified was production testing and monitoring. What we wanted to do was make sure we conduct regular testing against the service we have deployed in production, via smoke testing. It doesn't need to be full regression testing, full-blown thorough testing, but rather a subset of your functionality exercised via smoke tests. This helps ensure the stability and reliability of the service deployed in production and, as you can imagine, can help identify and even potentially fix some bugs before users ever see them. In some rare cases it also helps catch changes to functionality in external services you are integrated with but do not control, where you may not have a corresponding staging environment for those external, vendor-managed services. With regards to monitoring, we identified the need to monitor for errors that any end user interacting with your service, via APIs, CLI, UI, whichever way, runs into while accessing your functionality; those errors need to be logged and be accessible to you for analysis, to help identify and fix bugs. There are a number of error monitoring tools you can use here, like New Relic and Sentry, and we use something along those lines as well, defining thresholds and alerts so that the development team or SRE, whoever it may be, can be alerted to errors and issues that actual users are hitting, and we can debug and fix them before end users even report those issues on their own.
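Circling back to the pre-merge versus post-merge split: one common way to express it in Go, purely as an illustration and not something prescribed by the talk, is build tags, so that plain `go test ./...` against a pull request skips tests that need a real environment, while the post-merge pipeline runs them with `-tags=integration`. The URL below is a placeholder.

```go
//go:build integration

// A sketch of a post-merge test: it only compiles into the test binary when
// the "integration" build tag is set, so pre-merge CI never runs it.
package smoke

import (
	"net/http"
	"testing"
)

// TestServiceHealthAgainstStage talks to a deployed pre-prod environment
// rather than mocks, so it belongs in the post-merge bucket.
func TestServiceHealthAgainstStage(t *testing.T) {
	resp, err := http.Get("https://stage.example.com/api/healthz") // placeholder URL
	if err != nil {
		t.Fatalf("health check failed: %v", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		t.Fatalf("unexpected status: %d", resp.StatusCode)
	}
}
```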
The last focus area was bug triage. Bugs can be identified by end users, by monitoring tools; no matter how they are identified, one thing is fairly common: we've all come across the random bug filed by an end user saying "this is not working," with no details, lacking essential key details. The objective of bug triage is to make sure that the bug is valid, that we reproduce it, and that we add the key details that aid with debugging and fixing it, all of which helps ensure that bugs can be debugged and fixed as efficiently and as quickly as possible.

So given these goals, what were some of the changes required at our end? First, let's look at the process changes. Dev-QE collaboration was one of the main focus areas for identifying and executing process changes, putting changes in place to help overcome some of that out-of-phase nature of development and testing. One: previously, QE used to get involved in the planning phase of a particular sprint. With the new process, we get them involved in the grooming phase, so that before a story or some piece of work is picked up in a sprint, QE gets the heads-up and plans accordingly, both around understanding what the functionality is and around being ready to build the test automation for it. Two: previously, QE would create the test cases for stories that were added to a sprint. That remains the same, but additionally, in the new process, they break those test cases up into pre-merge and post-merge buckets, and even within post-merge, into buckets for integration testing, system testing, UI testing, performance testing, so that they can be automated appropriately. Three: developers previously would implement unit tests, some component tests, some integration tests, but based on test cases they identified themselves, with no input taken from QE. In the new process, QE is given the responsibility of defining the test cases, the development team collaborates with the QE team, and they jointly work on automating the same set, the same repository of test cases, creating a common repository of test automation against them. Four: previously, QE would start their test automation activities, whether end-to-end or system tests, only once the feature was developed and deployed in a pre-prod environment. Now, given that some of that work is taken off their plate because the development team is helping implement some of the automation as well, the QE team is relatively freed up to pick up test automation around system testing and end-to-end testing alongside feature development. This is especially true if your functionality is primarily API-driven: if your console is just making API calls underneath, then once your APIs are defined it becomes easier to start building automation against them.

Next up, we took a look at backwards compatibility. Now, this does not necessarily tie into dev-QE collaboration, but it plays a major role in achieving continuous, faster deployments. API changes need to be backwards compatible; that's obvious. This allows not only your UI, your CLI, and your clients, but also the other services within the microservices framework that consume your APIs, to catch up to the changes you're pushing, so that you can deploy your own changes more effectively and independently without causing regressions and issues in the other services, and essentially in your application.
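As a rough sketch of what that means in practice, here is a hypothetical Go request type (the type and field names are invented, not our actual API) where a new field is added in a backwards-compatible way:

```go
package api

// ClusterRequest sketches a backwards-compatible API change: the new field
// is optional and omitted from JSON when unset, so existing UIs, CLIs, and
// other services that never send or read it keep working. Only once every
// consumer has caught up would you consider making it required.
type ClusterRequest struct {
	Name   string `json:"name"`
	Region string `json:"region"`

	// Added later: a pointer with omitempty keeps the field optional.
	BillingModel *string `json:"billing_model,omitempty"`
}
```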
Also, this avoids getting into the whole coordinated-releases situation, where you effectively negate the benefits of microservices and get back to more of a monolith by having to release all your microservices at the same time.

Second, we also looked at our database changes, schema changes and so forth, and we identified the need, in some cases, to adopt a two-phase migration for our database changes. For example, when adding a new column to a database table, in phase one you can introduce that column as nullable, optional, so that a consuming service can continue to not pass that particular field and not fail; then, once the code has caught up, you make that field required in a phase-two migration. What this ensures, especially in our case, is that even in the face of DB migrations we can continue to do rolling upgrades, rolling releases of our service in a scaled-up environment, without taking downtime.

Lastly, we reviewed our background jobs. In our case, we have quite a few background jobs scheduled at different points of the day, and it's not really feasible for us to make sure our software releases and updates happen at a time when no background jobs are running. So we need to make sure that background jobs can handle the underlying software and service changing and updating underneath them, and either fail fast or continue to work as expected.

Next up, we looked at automation around promotion of the service from int to stage, and from stage to prod. Now, even if we don't go all the way there, and we certainly aren't there yet with regards to automated promotion, thinking about this particular aspect helps ensure that any changes you're making to your product, your process, your tooling are in the right direction, and it definitely helps continuous deployments happen more efficiently. In our case, integration tests are created as part of our feature development, so we're already making steps in the right direction, and passing these integration tests can be used to promote from int to stage. From stage to prod, you can throw additional testing into the mix, like performance testing or UI testing, if you want higher levels of reliability and quality.

The next thing we realized was that when you need to build automation testing and your functionality is primarily API-driven, tasking the development team with building it is easy. But when you're dealing with the UI, our development team definitely does not have the expertise to build UI test automation; they don't have the expertise around Selenium or other such tools. So how do we achieve continuous deployments and automated promotion when dealing with UI changes? One of the things we considered was feature gates: making use of tooling to gate features, gating access to these UI components until they've had more time to be thoroughly tested and are ready for prime time.
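As a minimal sketch of that idea, assuming a hypothetical flag lookup (this is not our actual implementation), gating a not-yet-soaked UI endpoint might look something like this:

```go
package handlers

import (
	"fmt"
	"net/http"
)

// featureEnabled is a hypothetical stand-in for however flags are really
// looked up: an environment variable, a config service, a flag provider.
func featureEnabled(name string) bool {
	return false // the new dashboard stays dark until it has soaked
}

// NewDashboardHandler serves a UI component whose test automation hasn't
// caught up yet; the gate lets the code ship while staying hidden.
func NewDashboardHandler(w http.ResponseWriter, r *http.Request) {
	if !featureEnabled("new-dashboard") {
		http.NotFound(w, r)
		return
	}
	fmt.Fprintln(w, "new dashboard")
}
```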
In this regard, we can use either feature flags or, in some simple cases, we have also leveraged authorization, if you're set up for that, to gate access to certain functionality. This can be done both for UI changes and for backend changes, allowing certain features more time to soak if either the automation has not caught up or they are simply still evolving and being developed.

So, taking a look back at the aspects of quality assurance we discussed earlier, given these process changes: the code quality guidelines and best practices are something the development team, in our case, helps drive. For automation testing, pre-merge testing and its automation are driven by the development team, whereas post-merge is something the development team and QE share responsibility for. What this drives is that you have a common set of test cases, a common set of test automation, and a common mechanism to execute them and monitor the results, success and failure, to gauge the reliability and stability of the code being pushed, and essentially to make sure you are pushing and deploying high-quality code. Thirdly, with regards to production testing and monitoring, error monitoring is driven by the development team, whereas smoke testing is owned by the QE team. Now, in a previous service that we developed, managed, and maintained, the operations team used to write scripts to test some key functionality and run those scripts in production. While that works, it was yet another team with yet another independently arrived-at set of scripts and test automation that you have to manage and maintain. If functionality is added or updated, you have one more surface area to go coordinate with; whereas if you are already doing automation testing in int and stage, there is no reason the same testing cannot be conducted in production, which, A, saves on effort duplication and, B, makes sure the test automation running in production is managed, maintained, and updated alongside feature development as well. Lastly, bug triage is owned by the QE teams, because, A, they are already set up to create those test cases, and, B, it adds a valuable feedback loop from bugs back into the test cases, so that we prevent regressions in the future.

So, if we now revisit the testing challenges we identified earlier: with this process, even though the dev and QE teams are still spread across the world and we are not getting them any physically closer, having them work on a common set of test cases and test automation, on the same backlog, gets them more aligned with the development teams, and we can leverage the same processes we already have in place for working with a remote team; they are now in phase rather than out of phase. The effort duplication in test automation is reduced because they are working on the same set. Additionally, turnaround time is reduced, software updates can be released faster and more efficiently, and quality is not sacrificed, because test automation is a priority and is not ignored.

So, given all of these goals and the process changes we've discussed, how do we leverage tools to help us achieve the changes we just talked about? To discuss that and go over it, I will hand it over to Mark.

Thank you, Abhishek. Clicker. Great. Is this working? Okay.
What were some of the requirements we gathered for this test automation tool we would create? Number one, we should not reinvent the wheel. Every language, every platform has a testing library or framework built into it or around it, with high-quality expectations and matchers and so on, and we should not recreate any of that. Additionally, developers are used to using these tools; we should keep things as familiar as possible to reduce friction and increase adoption. But we do have to add some new features. We wanted the ability to label your tests arbitrarily, because you'll be using your tests in many ways: as monitoring, as smoke testing, as integration testing, and so on. We wanted to be able to run our tests as performance tests at scale, which means running them massively in parallel. We wanted all of our errors, tracing back to those test failures, reported to the same tool our site reliability engineers use; we'll get to that in a moment. All the results from all the tests should be captured and stored so we can analyze them over time and analyze trends. And lastly, we can use those trends to possibly promote, and otherwise build smart automation around how we're doing. Are we getting better? Are we getting worse? The data can tell us that.

So how do we group with labels, and why? Instead of having a single test like we're used to today, we'd like to have that same test run many times in many different environments. So we want the ability to label tests arbitrarily, so we can slice and dice them in many different ways. Because we want to use them in more than one place, we have to be able to tag them with more than one label. And then, of course, we need the ability to filter and execute those tests by those labels. Labels are arbitrary and can be used for anything. In our case here, this is an actual test case, and we're flagging it as a performance test; it's a monitoring test; this particular one is read-only, so we know it's safe for production; and the component here is our telemetry component that we're testing. In this particular case, all of those apply, and I'm sure we can think of more labels; again, they're arbitrary. The test function here, which is currently ellipsed out, is very familiar Go: if you're familiar with Go-isms and Go testing, this is the same test definition you'd use anywhere else. And here's the body of that test; if you're used to Go testing, here's an expectation, for example, and so on. It's just an example of how we can use a single test in a familiar way (a rough sketch of such a labeled test appears below).

So how do we get to use these same tests for performance? If you can run a test once, you can run it many, many times in parallel; that's the width we're talking about. And then, of course, there's also depth: if you have ten actors, so to speak, running the tests in parallel, each actor can run them many times apiece. Hence we added width and depth to our test cases. All tests are, again, measured, with elapsed times and success/failure stored. Success or failure, of course, is what ultimately matters for testing, but we likewise need to capture the elapsed times, because we want that trend analysis to see how we're doing over time. And all of our errors are reported to the same error monitoring tool our site reliability engineers use; in our case, that's Sentry.
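Pulling those pieces together, here is a rough sketch of what such a labeled test could look like; the label struct, the registry, and the test body are invented for illustration and are not the actual framework's API:

```go
package tests

import "testing"

// labels is an invented stand-in for the arbitrary label metadata attached
// to a test; the real tool's representation may differ.
type labels struct {
	Performance bool
	Monitoring  bool
	ReadOnly    bool   // safe to run against production
	Component   string // e.g. "telemetry"
}

// labelRegistry imagines how a wrapper might map test names to labels so it
// can filter and execute tests by label. Purely illustrative.
var labelRegistry = map[string]labels{
	"TestTelemetryQuery": {Performance: true, Monitoring: true, ReadOnly: true, Component: "telemetry"},
}

// TestTelemetryQuery is ordinary Go testing underneath: the same definition
// and expectations a developer would run locally with `go test`.
func TestTelemetryQuery(t *testing.T) {
	got := queryTelemetry() // stand-in for a real read-only call against the telemetry API
	if got == 0 {
		t.Fatalf("expected a non-zero telemetry value, got %d", got)
	}
}

// queryTelemetry is a placeholder so the example is self-contained.
func queryTelemetry() int { return 42 }
```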
Results for all these tests should be stored in a consumable fashion, so that our smart automation tools can go ahead and use them and so they're easy to parse; ours is a big JSON payload. And all of that data needs to be stored historically so we can analyze it over time and see our trends.

So how do we do it? First, we leverage the Go testing library, to make it both familiar and, of course, to use what it already has built in for us. However, we need to invoke the tests differently. Consider that go test is just the command-line interface to the testing library in Go, just as RSpec or JUnit are for theirs. We needed to wrap the tests ourselves so we could invoke them ourselves. Once we can invoke them ourselves, we can build a container that wraps them and run those tests in a container. Once we have a container for our tests, we can run them on an OpenShift cluster, and because it's containerized, we're using the Kubernetes operator framework, more or less, as the control plane to run these tests. And I think we're ready for some videos now: how does it actually work in practice?

OK, so in the upper left is the definition of what one of our tests looks like. In this particular case we're using a config map only because of a technical implementation detail: at the time, we couldn't use custom resource definitions with the Kubernetes operator framework. If you're familiar with those, we're using config maps, which are just a grab bag of stuff, but it works just as well. In the bottom left, we'll show you the pods that are running; we run these on an OpenShift cluster in containers, and therefore in pods, and we can run many, many pods in parallel. We'll watch as we create them in the upper right, and we can watch some logs in the lower right. In this particular case, we're going to post that upper-left definition on the right side, and we'll see that the containers are being created and running. On the lower right side, we'll tail the logs in the pod and watch as these tests just go ahead and do their thing. And likewise, we can watch the testing pods tear themselves down when they're done. And that's already done; it's a pretty quick one.

In this particular case, we ran things with a width and depth of one, so we're running just a single pod, running all the tests once. That's probably your normal integration test, your normal unit test, your normal whatever, however you want to run it. And we see that just one pod was created, one pod ran and was torn down, and there are the logs, et cetera. In the upper right, all the results themselves are stored on the test; the config map, I should say, holds both the spec and the status for the test. If you're used to Kubernetes and OpenShift, you're used to spec and status being on the objects themselves, so the results themselves are stored on the objects. If you look at the data in the upper right, you'll see our pods element of the config map now shows all the pods that ran, in this case just one, and it contains all the tests that ran. We have access review, access token, cluster registration; these are actual pieces of our API that we use. And you'll see a list of elapsed times for each of the tests. Because the label here was "all", all of our tests are in there.
And because width and depth were one, they only ran once, so you only see one elapsed time for each. In our next video, we'll show you that we can run them many times. It's the same setup. In the upper left, you'll see that we have a width and depth set: we want to run four pods wide, and each pod will run the tests five times apiece, just as an example. We can see the pods being created, and a single pod being tailed on the lower right, but all the pods are running the same tests and have similar logs. If we look at the upper right, we'll see that, well, there they go, there are the results already; they were empty, but now they've got all their results in them. In this particular case, our label was telemetry; we're testing one specific component, so we're only going to run those tests that are labeled with that component label. And you'll see that in our pod results, all the pods are in there, and likewise the elapsed times: we have a depth of five, so each pod ran them five times, and we have every pod, every test result, every run stored in that JSON blob.

And lastly, what do errors look like? In our particular case, currently, we could have a more elaborate data format, but right now we just have elapsed times stored as the results of these tests. If the result is not a parsable duration, it means it's a failure, and an error message sits in its place. In this particular example, our access token test has a missing-token error. It's just an example; we can drill into that and go see why this token is missing and so on. But in the end, it's either a parsable duration, for success and how long it took, or it's an error message.

Sure, the question is: what framework are we using to solve and handle some of the challenges that Abhishek first presented? That's what we're making with this tool. So if we want to have some things that are integration tests pre-merge, and some monitoring tests that are post-merge, we create this suite of tests and label them accordingly: some of the tests can be used for integration pre-merge, and likewise some can be used for monitoring afterwards and smoke testing in production. That way we use the single test base we create in many ways, and that's the slicing and dicing by label. Also, something not touched on during the demo or the design of it: the tests themselves live in the code base where the code is. So in our case, our subscription and account management services have their tests in their code base, and likewise our cluster services team and so on, and QE can make their own repo with their tests. The framework, this tool that we're creating, pulls those in as dependencies, probes each of them for their tests, and then they're all just in there, right? So we'll have one image, the pod that's running, containing all the various tests from all those components pulled into one place, and we can then run them all by label, based on what we're trying to do in which environment.

So, just to add a little more detail there: as Mark mentioned, we are not reinventing the wheel. The test cases themselves are the same ones that will be run by the native test runners in any particular language. What we are adding with this particular tool is the ability to slice and dice them, to run them, to get the results, to get the errors reported, and to do interesting things with regards to automated promotions.
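Here is a rough sketch, in Go, of the kind of promotion decision that becomes possible once results are stored this way; the result structure and the thresholds are invented for illustration and are not the tool's actual format:

```go
package promote

import "time"

// RunResult is an invented, simplified stand-in for the stored JSON: whether
// a run passed and its average elapsed time across all tests.
type RunResult struct {
	Passed  bool
	Elapsed time.Duration
}

// shouldPromote sketches the decision described next: the last three runs
// all passed, and average elapsed time has not regressed by more than 10%
// against a baseline. Thresholds are illustrative only.
func shouldPromote(history []RunResult, baseline time.Duration) bool {
	if len(history) < 3 {
		return false
	}
	last := history[len(history)-3:]
	var total time.Duration
	for _, r := range last {
		if !r.Passed {
			return false
		}
		total += r.Elapsed
	}
	avg := total / time.Duration(len(last))
	return avg <= baseline+baseline/10 // within 10% of the baseline
}
```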
For example: what was the result of the last three runs? If all three passed, promote something from stage to production. What was the average elapsed time across all the width-and-depth combined, 15 different executions, over the last 10 runs? If we're not regressing by more than 10% in terms of our scale tests, load tests, performance tests, promote something. Those are some of the things you can do that are missing if you simply run your regular tests through something like Jenkins or CI. The ability to execute them, then leverage the results, harness those results, analyze them, and take some interesting decisions based on that is what was missing, and so is the ability to run the same test in multiple environments, either as a single test or as many tests for performance and scale. So those are the things our tool helps with, and this tool is not replacing any of the existing test runners out there; it's just adding some additional metadata and the ability to perform the actions we've talked about. Any other questions?

So the question is: if you don't throw away the QE expertise, what if you throw away the QE? And what exactly do you mean by the management side of it? Right, so this is more about process and reporting hierarchies causing rifts around priorities, and thereby adding additional overhead by way of differences in priorities. In our case, yes, at Red Hat, development teams and QE teams report into their own hierarchies, but at the end of the day they all report into somebody on the PnT side, the products and technologies side, the engineering side, because everybody, as such, is an engineer. With regards specifically to the challenge of different priorities, two things can happen. Either your QE team is a pool that is focused on testing more than one product or service, and hence is a single pool of QE engineers helping out different products and services, different development teams, with competing priorities; or you have your own dedicated QE. In our case, over the past years, we've experimented with both: I've worked on development teams that share a pool of QE engineers and on development teams that have their own dedicated QE. In the end, nothing that we've discussed here is going to solve that particular problem by itself; at the end of the day, you just have to work through it offline. But what does help is that, at the very least, if these process changes can get you to a point where you have a single set of test cases you're automating and a single set of test automation you're building, then QE in some cycles, some releases, some updates can take up 70% of the load, and in other cycles 30%, and if you're working on a common set of automation, you'll have better luck with these problems than if both sides were doing their own test automation and working completely separately.

I'm sorry, can you repeat that? The door is really distracting. Sure, that's a great question: how do we fail fast and not containerize a test, not run them on this big cluster? How do we avoid all that if the test can fail quickly? Yeah, it's a great question.
Because we're reusing the Go testing library and just repackaging how that Go test function is packaged, and because we take over its execution at run time, we can run them as unit tests right in my local IDE, on my local box. So I can run a test as a unit test like I did before, and I can repackage it into a container to run it many times in parallel. We want to use the same test many times for many purposes. So yes, we can fail fast locally during development as a unit test, and we can run much more in parallel for performance and so on. Okay, thank you. All right.