Well, howdy and welcome everyone to yet another wonderful cloud native webinar. I'm here today with Jeff Kwan, Principal Software Engineer at Cimpress, and Nate Lee, VP of Sales and Co-Founder of Speedscale, and I'm Taylor Dolezal, Head of Ecosystem here at the CNCF. Today we're going to be talking about load testing, both the technical issues and the cultural issues around it. We're going to talk about the importance and challenges of load testing, some strategies and best practices, and then get into some innovations and what the future of load testing might look like. So with that, I'd like to welcome both of you to the virtual stage. Nate, Jeff, is there anything you want to kick things off with? Any good mantras, quotes, or pieces of advice you've come across recently? Jeff, why don't you start? Mantras — I said this once to an engineer: "I foresee increasing complexity," and that always seems to hold true. What about you, Nate? Yeah, I think a lot of these problems are quite complex to begin with, right? And I've talked with engineering teams about this before. It's like, how do you eat an elephant? One bite at a time. So I think it's about breaking things down into manageable sizes. I had a really good mentor who always reminded folks that "the time is always now," which was also a very tongue-in-cheek thing to say. But awesome, awesome. Well, great to have you here today. Really excited to dig in and talk about all these complexities and what the load testing landscape looks like. As folks look at this wider landscape, more and more people are starting to see that this is really important, right? This isn't something that makes sense to bolt on at the end of the process. It makes sense to think about it from day one, as you build out your APIs, construct your applications, et cetera.
Some folks have shared challenges around complexity and cost. Do you have any thoughts on that front, Jeff, as far as beginning to think about this, or starting to rethink how you approach load testing? Yeah, I think load testing is one of those things where, like security, you should always be thinking about it a little bit. A lot of times we wait too long — oh, it's a couple of months before the holiday, we have to do it now. I think load testing needs to be brought to the forefront a little bit more, and one thing that makes it more difficult is that the landscape of our services has changed. At Cimpress, we went from a monolithic system to a microservices-based system, and how we load tested the monolith is not how we load test a set of microservices. That's a great point. In terms of load tests, are you able to share how often you run them, or what that process looks like today for you and your teams? Ideally, we would like to do load testing as part of our CI/CD pipeline, and we have started to introduce that into our process. Most of our systems are still manually load tested, but we're moving in the direction of having it fully automated, so that we can know when a change was made that suddenly made a particular endpoint a lot slower. That brings up an interesting point, if you don't mind me jumping in, Taylor. I'm real curious — wanting to automate load tests is quite a mature thing. Just curious about Cimpress' journey: it seems like a lot of organizations mark load testing as a nice-to-have. At what point did it become a must-have for Cimpress? So last year, we were at the end of a longish journey to microservices. And I think one of the critical things was...
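The automated gate Jeff describes — fail the build when a change makes an endpoint slower — can be sketched in a few lines. This is a minimal illustration, not Cimpress' actual pipeline: the "endpoint" is a stand-in function, and in a real CI job you would issue HTTP requests against a staging URL and pick a latency budget that matches your SLO.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# A toy "endpoint" standing in for a real HTTP call; in a real pipeline
# you would hit a staging URL instead of sleeping.
def call_endpoint():
    start = time.perf_counter()
    time.sleep(0.01)  # simulate ~10ms of service work
    return time.perf_counter() - start

def run_load_check(requests=50, workers=10, p95_budget_s=0.5):
    """Fire concurrent requests, compute p95 latency, and report
    whether the build should pass (p95 within budget)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = list(pool.map(lambda _: call_endpoint(), range(requests)))
    latencies.sort()
    p95 = latencies[int(len(latencies) * 0.95) - 1]
    print(f"p95={p95 * 1000:.1f}ms, budget={p95_budget_s * 1000:.0f}ms")
    return p95 <= p95_budget_s
```

Wired into CI, a `False` return becomes a non-zero exit code, so a regression on the endpoint blocks the merge instead of surfacing in production.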
So within Cimpress — VistaPrint is a company within Cimpress — there was a brand new microservices-based architecture, and we wanted to have confidence going into its first holiday season. So it was sort of, hey, we've got to do it now. We need to know where we stand. A couple of months before, we wanted to know the state of our microservices landscape in terms of load — how much load it could actually take. That's where the push came from. And it was actually very nice, because what happened was all squads had to submit load testing plans and then share them. It was agnostic in terms of what tooling you used, but it was like, hey, please provide some evidence that you've done this and that you have confidence in your own services. Yeah, I think that's what we're seeing a lot in the landscape as well: as these application architectures become more distributed, every API team needs to do a little bit of due diligence. What's my TPS — the max requests per second I can handle? With a monolith, it's kind of hard to point fingers — usually the database team got blamed; oh, it's probably the database. But now every API team can cause cascading outages, or latency, or threads getting held open. So it's incumbent on those teams to be able to say, yeah, I know I can do X. And it's interesting — from a Cimpress point of view, you folks are definitely ahead of the curve when engineering is already volunteering, hey, we should probably figure out what our breaking point is before something bad happens. Yeah, in the monolith culture we definitely had a load testing perspective, but because of the monolith, the tests also came from a central perspective, and the testing styles were a little different in terms of how we accomplished load tests. Gotcha, gotcha, gotcha.
It's really interesting to hear that, and I did like the initial thought about how you eat the whole elephant — one bite at a time. I'd love to hear from you, Nate, and then go to you, Jeff, before moving on to our next topic. As far as teams starting to tackle these things, have you seen more success with people saying, okay, load test everything, we're going to go really deep — or have you seen more advantage in folks identifying problem areas or brand new projects, starting there, starting a little smaller, one piece at a time? How do people attack the elephant on that front? Yeah, I think that approach makes sense. As with many technical things, engineering leaders are already accustomed to the idea of start small, prove it out, and then go bigger. But when it comes to load testing, that's easier said than done. Just the other day I was talking to — I think it was a retail company — who were saying, oh, we do simple math to figure out whether we're going to scale. We test on a tenth of the infrastructure, and then we assume that whatever throughput we hit, we can do 10X of it, because it's a tenth of the infrastructure. And I kind of joked, well, in production, theory tends to break down, and nothing ever goes as expected in production. What I'm trying to say is that chunking things up when it comes to load testing is difficult. How do I run 10X load without production-scale infrastructure in front of me? And how do I run a comprehensive load test earlier in the lifecycle, before everyone is finished with their individual pieces of code? So load testing gets delayed and pushed to the end of the SDLC, and you run into untenable timelines — it's like, oh, we've got to get it out. And in a full end-to-end environment, your mean time to figuring out, well, this is the cause of this delay, goes way up.
You find delay point one, delay point two, and then maybe that's all the time you have, right? Because you have to push it out, even if there are more severe delays still hiding in there. And so that's where I think mocking plays a big piece: being able to isolate components and do what I think of as a component-level load test. That can really help shift the load test left, and I think that's what a lot of retailers might be doing this holiday season — making sure at least they can start with their most critical APIs and isolate those. But the complexity nowadays — we were talking to, I think it was a venture capital firm doing a tools landscape analysis, and he said the tools are necessary nowadays because the number of services people are building is too many for engineers to reason about. It's just logically too much for them to handle. We can't expect engineers to know everything — all the connection points and all the schemas. It's just impossible now. It's counterintuitive, but mocking is really polite to do when it comes to APIs. Great point on that front. I haven't heard it called polite before. I think that may appeal to people's ethical side — it's like, come on, don't be rude. Yeah. Don't be rude, mock me please. OpenAPI spec, probably. Jeff, what about you in terms of dicing things up or factoring in those concerns with your teams? How have you seen success? Has it been trying to tackle it within specific boundaries or scopes? What's your secret to success at Cimpress? I do feel like the mocking, the isolation, really helps, because we did see instances last year where everyone was running lots of load tests and generating load throughout the entire ecosystem of services, and that drives up the cost of just running the services, because everything scales up.
And it gives you more confidence when you're able to do it in isolation and go, oh hey, this service can take 5X load — I can give it 5X and it still performs the way it should. And I can go, okay, that's good, next service. And I can repeat that and get coverage, at least a sense of coverage, across the services I have. And then you can start to do things like, okay, let's not mock this piece, apply the load, and verify. So it becomes this slow, iterative expanding of the chunks and making sure it's all working. Gotcha. Thank you, really insightful. And I know there's no one-size-fits-all for every team, so it's really interesting to see what folks do — whether it's using something like Speedscale to load test with data that's as real as possible, being able to record and then simulate, things like that are really helpful — and then also having an understanding of where your bottlenecks are, whether that's your database, your cache, or just how your application is architected. So cool, thank you both so much. I had a quick question, if I may interject — I'm real curious. Taylor, you've got an engineering background, and Jeff, you're hands-on-keyboard engineering every day. When you hear the words mocking or stubbing, what sort of reputation do they have? I'm just curious, as an aside. I think the thing is that classically it's thought of in the unit test world, right? When you're writing unit tests. Yeah. But if we're expanding it up to the service level, you're thinking about the classic black-box situation, where the inputs and outputs to the service — the black box — are simulated, right? So you have the input and output simulations. That's how I view those words. Got it. Yeah.
When it comes to mocking, I think when I've implemented it or used it in the various teams and roles I've had, it's been mostly successful, but I do always notice a little ambivalence when starting with it, because you're creating this perfect situation. A lot of folks are like, well, yeah, that's what's going to happen — but in a perfect state, in a laboratory setting with no dust. Once you get to the real world, we have no idea what might happen. The DNS records might be wrong. There might be a CPU pegged at 100%. Those kinds of concerns come into the mix. Not that that's enough reason to say never test — I absolutely disagree with that. But it's good to be aware of it, and to see mocking not as a fail-safe insurance policy, but definitely as a great place to get started. Even things like mocking out different cloud APIs, just to make sure everything binds and works well together. Then you can start to build better tests around that too. So I see it as a necessary step on the path to a more sustainable service. Yeah, that's kind of what I was getting at — it's funny, because you used that word mock, and I thought it might be good to spend a couple of minutes on it, because it means so many different things to so many people. If you're using it for unit testing — and I've even gotten into some religious wars, like, well, that's not true unit testing, unit testing shouldn't require any dependencies, so now you're functional testing — I didn't even want to touch that. But then for some people, mocking, like you said, Taylor, is, well, it's not going to be real.
And so then I run into situations where people are throwing out the baby with the bathwater, so to speak: it won't be real, so we might as well not do it. It's like, well, what if it covers 80% of the scenarios you're running into? It's still better than nothing, right? So I don't think you should totally discount it. And then there are some prevailing bad reputations from the past, where most mocking tools require a lot of scripting and are very brittle — they return five every time, and when you need them to return six, you have to go in there and hard-code six. So it's got a bad rap in some ways as well. Anyway, I was just curious what you guys think — it echoes a lot of what I've heard across the board from people. But yeah, I think modernizing mocks in a way that works for microservices could be a game changer. Yeah, agreed, agreed. I think knowing where those strengths and weaknesses lie, in terms of how you layer your stack and its testability, and working with your SRE or platform teams, is really important. On that topic, I want to focus a little more on the holiday season — things like Black Friday or Christmas, those kinds of sales — and look at what that means for what you see at Speedscale as far as folks testing on that front, and what Cimpress goes through in dealing with it. I'm really keen to hear some of the big mistakes, the things you think people should be doing that most folks aren't taking into account, and how to better set up load testing strategy as we get into the holiday season. I'd love to kick things off with you, Jeff, and hear some of your learnings and the things you think more folks could be working on.
I think one of the things, again, goes back to thinking about load testing as a constant concern. Even after the holiday is over, you basically have one year to prepare again. So don't think of it as a season — the whole year is the season to prepare. We typically bear down closer to July and August; that's when we ramp up and think about load testing again. But honestly, we're moving towards having that load testing mindset constantly. One of the things to think about with load tests, especially for the holiday season, is to be very mindful of what the data might be, because your traffic today might not look like your traffic during the holiday. That's some of what we think about. Another thing we think about is getting a baseline for critical services. Understanding, say, this application here made it through the holiday, and we load tested it — it can handle 5K sessions. Next holiday might be double, so we need to make sure it handles 10K sessions. Then planning starts in January or February. Not all of these things can be done overnight, so think about and plan early for the architectural changes — oh hey, we need to add a cache into this one, add a cache into that one. Those are the sorts of things that help you plan ahead, so that when you really start to hit the load tests in September, you're not scrambling with last-minute changes.
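Jeff's baseline-to-target math — 5K verified sessions this season, double that next season — can be written down as a tiny planning helper. This is an illustrative sketch, not a Cimpress formula; the growth factor and the extra headroom for traffic-shape surprises are assumptions you'd set from your own history.

```python
import math

def target_capacity(baseline_sessions, growth_factor=2.0, headroom=0.25):
    """Sessions a service should be load tested to handle next season:
    last season's verified baseline, times expected growth, plus a
    safety margin. All three numbers are illustrative assumptions."""
    return math.ceil(baseline_sessions * growth_factor * (1 + headroom))
```

With no headroom this reproduces Jeff's 5K-to-10K example: `target_capacity(5000, 2.0, 0.0)` gives 10000, and adding a 25% margin pushes the September test target to 12500.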
I've heard from certain end users within the CNCF — some that may or may not be involved with tax season, and everybody files way ahead of time, right? So I've heard some really good stories about hitting an incredible amount of load, and that's a good problem to have, but you don't want to go down, right? You don't want those experiences. Nate, my question to you is about modern scalability — and I like what Jeff said about maintaining a baseline and getting a good understanding of what traffic you might see, setting the correct expectation. When you have things like Fargate and other technologies that allow for this maximum scale, what are your thoughts? Does that mean people are indefinitely safe and shouldn't need to load test? It's a loaded question, of course. Yeah, that was definitely the promise of cloud: we're ephemeral and self-healing and auto-scaling and all the buzzwords. Even the basic premise of cloud — only on when you need it, it's going to save you money — was supposed to be such a big departure from VMs. But I already see people doing the same thing they did with VMs, which is, hey, who's using this instance? I don't know, let's turn it off and see who complains. That's how you can tell — if nobody complains, nobody's using it. And that still happens in cloud, right? Taking that a step further, there are circuit breaker patterns and load balancers, and people think they're the be-all and end-all, and they can help mask some problems. But, for example, in Kubernetes, I believe if you're CPU-bound, it can cascade and cause cluster-wide issues, right? So the underlying problem isn't actually solved.
I think I read an Airbnb case study — it's public — where they upgraded Java and it caused some memory issues. But because they were auto-scaling, they just kept adding more and more pods, and they didn't realize it until they got a five- or six-figure AWS bill. Then they realized, oh, we made some code changes, we upgraded Java, and memory wasn't getting allocated like we thought it was — instead, we were just spinning up more pods. All that to say, I think people can have a false sense of comfort, and there's still a lot of optimization and cloud savings that can take place, which I know is popular with everyone trying to count their pennies nowadays. And my takeaway from what Jeff said about looking at traffic is: listen, you don't have to be the best performance testing, performance engineering organization on day one. You can take the 80/20 view — what's the most likely stuff to get hammered, what are the most common use cases or user journeys? Start load testing those. Hey, we may not be able to get to everything, but every major shopping flow we can take off the table and be safer about is that much less risk we're exposed to. It's hard to do, and I don't think anybody ever really feels like the job is done, but it's progress. Yeah. I think most engineers can tell you — if I ask, what is the service that keeps you up at night? Those are the ones you want to do first, right? Like, oh, that one, I don't know... exactly — target those. You'll have a good sense of what those are.
Is that how you folks got started? Like, where do we get started with load testing — which service should we start with? Yeah, that's how I start, in terms of prioritization. I'll do this one first — nobody can check out if this one doesn't work, right? That one's simple. And then you go down the list: oh yeah, it's going to be a degraded experience if you don't get to this one, but it's not going to keep me up super, super late at night. When it comes to developing that practice and those overall behaviors, Jeff, was there a formative or pivotal moment that made you really focus on being able to load test and scale appropriately and get the right tooling and culture set up? Was it an outage that got you convinced on this front? Or something you just noticed over time? I'd love to hear your story there, and then go to you, Nate, in a different capacity. I think one of the practices we had at VistaPrint was that people would become problem managers — they'd take a rotation of being on call. When you're on call, you get a really good sense of what happens when systems go down. And so you get this urgency factor in your head: hey, I have to worry about this particular thing a little bit more, because that's my service, and if my service goes down, bad things happen — people get woken up, right? And you don't want to get woken up. I think that's how I built this up over time: okay, will this wake me up? Will someone page me when this goes down? You get that sense of what to do to protect yourself from the 3 a.m. call.
The pain of being woken up is a very good motivator. I'm very happy to be in a role now where I don't have to monitor PagerDuty, so I know that pain very well. Yeah, I'd much rather read something like Goodnight Moon or listen to an Audible book, between you and me. Nate, being on the other side of this — being VP of Sales and co-founder of Speedscale — I feel like that's a really important distinction, because you saw this pain point that many folks had. What inspired you to go create a company that focuses on these problems? And are there any interesting things you're seeing right now on that front, in terms of adoption or more of a focus on this kind of testability? Yeah, great question. Real briefly: I was in sales engineering for DevOps tools, and then ended up becoming a product manager, so I had my own engineering backlog and uptime SLAs as well. The most recent role I had was doing digital transformation consulting for a lot of Fortune 500s — banks, credit card companies, e-commerce companies. What my co-founders and I realized — they're from observability backgrounds, New Relic and Observe — is that a lot of these companies don't know what a velocity unlocker mocking can be: enabling parallel development, letting you get around environment restrictions. And it's predominantly because most of the mocking tools out there are open-source, script-based mocking tools. They have their time and place — I'm not knocking them at all — but they can be a little brittle and slow to develop with. So when we started Speedscale, we asked, how can we make this easier? That's when we came up with the concept of traffic-based mocks: what if we model the behavior of the mock from traffic? There's an abundance of it, and the mock can be much more realistic and generated in minutes.
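The traffic-based idea — derive the mock's behavior from observed request/response pairs rather than hand-written scripts — can be sketched in miniature. This is an illustrative toy, not Speedscale's implementation: real tools capture traffic on the wire, whereas here "recording" is just a pass-through function call, and the pricing service is invented for the example.

```python
# A minimal record/replay sketch of a traffic-based mock.
class TrafficMock:
    def __init__(self):
        self.recorded = {}

    def record(self, request_key, live_backend):
        """Pass a request through to the live backend once and
        remember the answer it gave."""
        response = live_backend(request_key)
        self.recorded[request_key] = response
        return response

    def replay(self, request_key):
        """Answer from the recording; unseen requests are an
        explicit miss rather than a hard-coded guess."""
        if request_key not in self.recorded:
            raise KeyError(f"no recorded traffic for {request_key!r}")
        return self.recorded[request_key]

# A stand-in "live" dependency we only hit while recording (hypothetical).
def live_pricing_service(sku):
    return {"sku": sku, "price_cents": 1999}
```

The contrast with the brittle script-based mock Nate describes is that nobody hard-codes "return five" — the recording is regenerated from fresh traffic whenever the real backend's behavior changes.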
And of course the other side effect was, yeah, we were feeling the pain like Jeff was. My co-founder Ken has a story of seeing all these alerts go off during a big streaming event and pulling his hair out. The monitoring alerts are going off, and — anybody who's been an SRE, Taylor, you're seeing those dashboards — it's like, let's try to figure this out beforehand next time, instead of during the day of the sale or the streaming event or the reservation day. That can really drive the point home, but it's easier said than done when you're saying, let's make sure we know our throughput capacities. So that's where we said, let's build the tooling to let you simulate what's going to happen. Because at the end of the day, everything engineering and dev teams do is to prepare for the live game day, right? And — I hate using the term because it's so overused — shift left: if you can bring those conditions sooner, then I can run through those scenarios and get a better sense of, am I ready or not? Should we over-provision a little here? Should we optimize the code there? Cool. Thank you so much for that. It's really insightful to see how life has transformed with those concerns, and the fact that we don't have to do it the hard way anymore. As an additional note, for anyone looking to join the SRE career path: if you really like the color green, and like the color red even more, it's a great, great role for you. It has nothing to do with Christmas, you'll find out. It's funny. I'm curious to move into our next topic and really dig into the future of load testing at scale and some of your personal, candid insights on that front.
Jeff, I'd love to start with you and talk about your experience and how you've changed your approach to testing over time. Any insights there? Let's see. My approach to testing has changed a bit. If I dial back five or six years, we were using open-source tools — some of us in the company even built homegrown tools — and trying to do load testing was tough. It was tough because the expertise in using the tools sat with only a very few people, because it wasn't something we always did; it wasn't top of mind. I was the one wrangling the tools together, and I was the one trying to get access to boxes to load the tools on or run the tests. The journey now is: what is the easiest tool to pick so that I can get my entire team to do the work — to do the load testing, to do it on their local machines? How can we move from a scripted, manual process that only a few people know how to do, to a documented, automated process that's done all the time? That's where we are: we're moving towards automation. We're not completely automated yet, but I think we're getting really close. I'm excited to see us move in that direction too. And I've seen similar things in my career as well, starting with those initial scripts — I think there was Apache Bench, I can't remember if it was Gatling — a lot of those types of things too.
There were the bee-themed tools, which would spin up Lambda scripts and really, really go after something. I've been really interested in tools like k6 and Speedscale on those fronts as well. So I agree, it's been a difficult path in terms of stringing these things together. We have a lot more support for orchestrators and observability and telemetry, but when it comes to actually delivering load, in a lot of cases you'd have to go work at an Airbnb or an Intuit or a Mercedes or an Apple to experience this. There wasn't really that use case otherwise, unfortunately — you had to be sitting in the seat being woken up at 3 a.m., in many cases, which is really unfortunate. So I like seeing that there are more comprehensive test suites and this ability to simulate that amount of load, especially with real traffic. As time goes on, I'm curious to see whether we'll have more data sets available that we could use to simulate similar types of load, or to generate load tests for our specific applications and pair that with our API interfaces, things of that nature. Nate, I'd love to hear from you on that front — what your overall journey has been and what you think the future holds for load testing. Yeah, I think it mirrors a lot of what Jeff was saying, but I've typically been on the vendor side of things, for better or for worse. It did afford me the opportunity to go into a lot of these different companies and see how they do — or, more importantly, don't — test. And it's not for lack of trying. Really, I think a lot of the folks in what's traditionally known as the QA space have been gobbled up by these larger enterprises and then kind of driven into the ground. Apologies for being frank.
So a lot of these open-source tools have risen to the task, but really what we were looking at was: how do we automate the automation? I know it sounds strange, but when we were doing user interviews building Speedscale, what engineering leaders were really concerned with was: we just can't keep up, we're getting run ragged, and the business is always going to want to push features out the door rather than address technical debt. But it's a serious problem that keeps us up at night. And in automating the automation, we knew a key piece that had to come with it was not only the test drivers, the load drivers, but also the environment management — which is why we took the traffic-based approach, because the traffic lets you develop the tests and lets you develop the mocks, which addresses the environment constraint. Because without an environment to run the tests in — and this is something I saw in my consulting background — you're still stuck at the same point. But I think it's a journey for everyone. There's nothing wrong with starting with open-source tooling and figuring out, here are the test drivers and here are the environment or mocking tools. But I would strongly urge whoever's starting out on this journey to think about the test driver and the environment piece in tandem. Now, the hard part is the glue, right? Putting it all together within whatever GitHub Actions or Jenkins or CircleCI orchestration you're going to do. And once you struggle through that, I think you have a good picture, like Taylor was saying, at an Uber or a Disney or wherever. That's when you figure out, okay, this is what I want in a perfect world — and then 2.0 of your testing framework becomes that much better.
Yeah. One other thing I'd like to add is, for me, the greatest confidence always came from using real data. There are only a couple of tools that will help you capture real data so that you can replay it, and whenever I had that situation, I felt much more confident in the results and in where the application's at, the state of it. I agree. I feel like it's great to see folks start with mocks and some open source tooling, get a feel for things, and understand the why behind what they do. But that mock is that perfect, dust-free environment kind of scenario, so I'm really optimistic about seeing real data, de-identified, anonymized, and safely processed and handled, used to simulate a lot of that load for folks. I completely agree that that brings the confidence up. It's gritty, just like the real world. And then we can actually see things that we never expected to see in a lot of cases as we go and test and work through these things. I want to add a couple of amusing anecdotes. I almost said antidotes; my kid just went to the doctor, so I'm in the wrong mindset. The real data is always surprising, and we're always kind of amused by what happens. For example, systems that are in erroneous conditions but are still sending an "I'm healthy" health check back to the load balancer; that can actually happen. Or people expecting certain shopping patterns: if there's a sale, you expect everybody to start perusing the product indexes, then the product detail pages, and then checking out, right?
That's how you'd expect the flow to go, but maybe everybody does their shopping the night before. And then when the sale happens, and actually I do this myself, I've already got my address and my credit card information entered, and when the sale goes live, I just click checkout. So now people aren't hammering the product index page like you expected; everybody's just hammering the checkout API, and the load pattern doesn't ramp up gradually, it goes straight up and then flat across, right? So looking at real traffic patterns to understand how you should test realistically, I think that's going to be more and more of an important factor, which is why we're indexing so hard on using real, de-identified traffic. Awesome, awesome. Yeah, that's a great point. And what is it? Life imitates art, not the other way around. Same thing with checkout pages; never what you'd expect. Well, thank you both so much for joining today. I sincerely could keep this conversation going for hours. I really loved your insights and stories, so I'll be in touch; we'll have some popcorn chats and things like that after the fact. Thank you for your insights on load testing and all of those strategies and stories. As we conclude, we'd love to urge everybody to continue thinking, reflecting, and learning on those lessons, and sharing with others, because we want to hear your stories. We want to hear how you're going about thinking about these things. Thank you so much, everybody. With that, I know I had you open things up, but I'd definitely like to invite you to close too. Jeff, we'd love to hear any closing thoughts from you, and then Nate, if you wouldn't mind rounding us out before I say some final words. Yeah, final words. I think getting load testing into the culture is very important.
And I think it definitely pays dividends for the squads and the teams that think about it, because then it's not a fire drill. And it's very easily explained to other people, to management, to other engineers: hey, here's what we're doing, and I have confidence. That definitely helps build confidence within the squad, within the teams. But yeah, getting this top of mind for a lot of engineers helps everybody. Yeah, great. And for me, just to piggyback on what Jeff said: performance is becoming more and more critical. But you don't have to think of it as a discipline you have to jump into immediately and be the master of. Like Jeff was saying, start with the most critical, most centerpiece services that you have, and think about driving load into one such service and then isolating the environment for it, or building a mini environment for it. That's a much more manageable problem than, how do I become, you know, the master of load testing? Agreed, agreed. I think it'll be an interesting time seeing this get automated and getting more data on that front too. When not only do we have to fill out performance reviews, but we see our services go through them too, it's going to be very interesting times. Awesome. Well, thank you both so much. For folks interested in checking out Cimpress's checkout page, please check out cimpress.com, that's C-I-M-P-R-E-S-S dot com. And if you want to drive some traffic to your applications and get things tested, please check out SpeedScale, which is speedscale.com, all one word. I'm sure both folks would love to have you introspect their services and make sure everything's looking good as far as their product pages and everything else. Until our next webinar, stay curious and keep exploring possibilities. Thank you and take care, everyone. Have a good one.