Thanks, everyone. Thanks, everyone, for joining me today. As Marcus mentioned, what I'm going to be talking about today is really all about massive parallelization of testing. If you speak with any of the cloud testing vendors out there, you'll hear people talk about the need to scale, to run in parallel, to run multiple browser and OS combinations, multiple tests, and do everything all at once. What I'm going to talk about today is what to consider when you actually want to go do that.

Before we start to talk about testing, let's talk about something everybody likes: money. If I were to give everybody in this room 100 rupees, is everyone going to be happy? No, not much. How about 1,000 rupees? 10,000 rupees? Anything? All right, what if we had the choice? People are going to choose 10,000 over 1,000, right? More is better, isn't it? Most of the time, isn't more better? The same can be true about housing. If money wasn't an object, you could live in either house A or house B. Exactly. It's human nature. We want more if we can get it, right? I would like more money. I'd like a bigger house. I'd like a newer house, a bigger car.

Let's switch over and talk a little bit about software now, even outside of testing. APIs. If I give you access to an API that has 10 methods, pretty good. An API with 1,000 methods? Oh, now I can do more stuff. That's really cool. But what if it's the wrong stuff? What if I give you APIs that do nothing, that aren't useful to you? So like most things in the world, it kind of depends. Money's black or white. Housing is typically black or white. Software testing? Not necessarily so black and white anymore.

Let me give a quick introduction. My name's Dan Rabinowitz. I live in the US, in Connecticut, a little state that people usually forget about near New York City. I've been with Sauce Labs for about a year. Prior to Sauce, I worked for CA, Rally, Pega, IBM. I've spent a lot of my career as a sales engineer working with some of the largest financial services and insurance customers in the world: the banks that are based in New York City, and a lot of insurance companies as well. This is what I've been doing for the past 12 or 13 years.

All right, on to testing. That's why everyone's here. We're at a Selenium conference; it's all about testing. But before we talk about testing, let's talk about building. A little story from back when I worked for Rally Software. Rally, for those of you who don't know it, is an agile project management company. Think JIRA; JIRA was a big competitor, VersionOne as well. You track your user stories, track your tasks, track all that information, and it helps you build software. One of the things we always coached our customers on was building the right thing, the thing that actually matters to customers. Today most people are doing agile, building and testing and building and testing, and some of the testing may or may not have kept up. But if you're not starting by building the right thing, you're starting in the wrong place.

All right, let's switch to testing. How many of you got a T-shirt yesterday from Sauce in the back? Okay, good. If you didn't get one, we're all out, sorry. But if you saw the T-shirt logo from the people back there at the Sauce Labs booth, it's test.allthethings.
So that's our logo. We want you to test everything, and we want you to test everything in parallel. We want you to test across multiple browsers. We have a mobile device solution so you can test across mobile devices, across emulators and simulators. But who in here actually tests all those things? People who are writing tests, do you write tests for everything? I see some heads shaking. That's probably the right answer, because you probably don't need to test all the things. What you really want to do when you think about testing: you have all the software you've built, you're using a Selenium grid, or Sauce, or devices plugged into your laptop, you're executing those tests, you're trying to find those defects. And the corollary to building the right thing is testing the right thing. Rather than just test everything, what I would argue is that our shirt should say test all the right things. Be very strategic about your testing, and make sure you're testing what's necessary for your application to be delivered in a quality manner. That can mean writing tests that cover a certain percentage of code. That can mean testing on the appropriate platforms as well.

All right, how do we know what to test? We talked about testing the right things; how do we figure that out? I'll use analytics. What we're looking at here is a snapshot from the saucelabs.com webpage. This happens to be data from New Relic; Google Analytics will give you the same data. This was from a few weeks ago, but week over week it probably isn't going to change too much. So in a seven-day snapshot, we have about 120,000 visits from Windows and 70,000 visits from Mac. We can see browsers: 175,000 visits from Chrome, 20,000 from Firefox, and it kind of goes down from there. I wasn't able to get the operating system version, so Windows probably includes Windows 7, Windows 10, Windows 8, lots of flavors of Windows that are out there these days. We can also look at browsers and browser versions. I just chose Chrome and Firefox because they were the two most popular. What I think is very interesting here, just from an interest standpoint: if I look at Chrome, I see that Chrome 66, which was a new version when I pulled this data, is heavily in use, but the next most used version wasn't version 65, it was version 51. So most likely there's an enterprise out there going to Sauce Labs, logging into our services, that's locked on an older browser version. Firefox is much more like what you'd expect: Firefox 60, Firefox 59, Firefox 58, et cetera. This is where you start to figure out what platforms you're going to test on. Not necessarily what tests you're actually going to write, and we'll talk about that in a second, but you have to know what platforms you're testing for.
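(To make that concrete: once analytics has told you which platforms matter, you can encode them as a data-driven matrix and let the framework fan tests out across it. A minimal sketch in Java with TestNG, the framework used in the demo later in this talk; the class and method names are illustrative, and the combinations are the ones from the analytics data above.)

```java
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

// Sketch: an analytics-driven platform matrix as a TestNG data provider.
public class PlatformMatrix {

    @DataProvider(name = "topPlatforms", parallel = true)
    public static Object[][] topPlatforms() {
        return new Object[][] {
            // { OS, browser, version }, straight from the analytics data
            { "Windows 10",  "chrome",  "66" },
            { "Windows 10",  "chrome",  "51" },  // the locked-down enterprise
            { "Windows 8.1", "firefox", "60" },
            { "macOS 10.13", "safari",  "11" },
        };
    }

    @Test(dataProvider = "topPlatforms")
    public void worksOnPlatform(String os, String browser, String version) {
        // The real body would create a RemoteWebDriver with these
        // capabilities; TestNG can schedule each combination in parallel.
    }
}
```

(When the analytics shift, the matrix is one edit in one place.)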
All right, so a little quiz. Based on this data, and we'll go back and take a look, remember: we have Windows and Mac high up there, Chrome and Firefox are the two most popular browsers, and the top versions were Chrome 66 and 51, Firefox 60 and 59. So if I said Windows 10, Chrome 66, do I write tests for that? Yes. Windows 8.1 and Firefox 60? Yes. Windows 10, Chrome 64? No. I see a couple of heads shaking, that's good. Windows 10, Chrome 51? I see some... oh, this is tricky. Some yeses, some nos. The answer is yes, because remember, Chrome 51 was my second most utilized version of Chrome, so I want to make sure that works for this particular application. All right, Windows 8, IE 10? Anyone? I don't see anything. People have to pay attention here. No, because Internet Explorer 10 isn't really being used all that widely; Internet Explorer in general isn't being used all that widely. All right, OS X Safari? Maybe. So why maybe? If I go back to my previous slides, and we'll zip back past them, we can see that Safari is actually our third most utilized browser. So we have to think about that. Mac is heavily utilized, right? We see that as well. But since Safari is further down the list, this becomes one of those it-depends scenarios: maybe I want to test on Safari, maybe I don't. It depends on what we're testing for and how much time I have to write the tests. There are a lot of factors that go into this. Let's zip back through this. Last one: Mac and Safari 11? Yes, because Safari 11 was up there as well. Firefox 50 on Mac? Everyone's tired of this? All right, we'll skip on. No.

All right, so now we know what to test on. Let's try and figure out how we structure our tests. Imagine this big red bar with a gradient is 100% of our tests, all the tests we could ever possibly write. It represents our entire code base; now we're talking about code coverage. How do we know what we're actually going to test? Let's say we can write tests that cover 55% of that code, 55% of those functions, and those tests run in one hour. I could then go and write tests that get an extra 20% of coverage, if I spend an extra three hours executing those tests. And then we have these guys at the bottom, the misfits of our code that we don't even care about. I don't know why we wouldn't test them, but we have that bucket at the bottom that's a big question mark. By doing this, what we've actually started to say is: these tests that run in an hour cover code that's absolutely critical to our business. This 20% of our functions that I can test in another three hours, those are kind of important. And then there's the I-don't-care bucket: the functions that might be deprecated, that you're going to be removing from your application, things you don't necessarily have to write tests for. So we have these different levels of importance, and we have names for these in the testing community. We tend to call the first group smoke or unit tests, and the second group regression tests. And that bottom piece I mentioned before, those are functions that are kind of deprecated, that we might not care about.

I work with a lot of different customers who separate that out, and they say: I can write a smoke suite that covers 50% of my code in 30 minutes, and I'm going to run it in parallel across all my platforms. And oh, by the way, I have this massive regression suite that I run every night that takes eight hours, but I'm only getting another 10 or 15% of code coverage. Well, why? That's the question I would ask: why do we have these artificial buckets? When I think about testing, and when I work with my customers, every test I write should be important. There should be a reason to write that test, and a reason to execute it. We shouldn't just write tests that run once a day as part of a regression suite, because then they're kind of in that not-important bucket.
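(Those buckets typically show up in code as test groups. A minimal sketch with TestNG, since that's the framework used later in this talk; the group and method names are illustrative assumptions.)

```java
import org.testng.annotations.Test;

// Sketch: expressing the smoke/regression buckets as TestNG groups.
public class BucketedTests {

    @Test(groups = { "smoke" })       // critical-path code: run everywhere, every build
    public void loginSucceeds() { }

    @Test(groups = { "regression" })  // kind-of-important code: run nightly
    public void legacyCouponCodeStillApplies() { }
}
```

(Maven Surefire can then select a bucket at run time, for example `mvn test -Dgroups=smoke`, which is exactly the mechanism that makes it so easy to park tests in a nightly bucket and forget why they exist.)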
And I don't know about you, but if I'm actually working, writing tests, doing something for my employer, I want my work to matter. I want that work to be important. It gets back to testing the right thing. Why would I test something that's only going to be used 10% of the time? Why would I test something that's never going to be used, if it's in that bottom question-mark, I-don't-care bucket? All these things I've been talking about, testing the right thing, are key to remember when we talk about testing at scale. So even though I'm not saying go test everything, test all the things, you have to keep these in mind when you start to write your tests.

All right, let's talk about parallelization. This is what we're here for; it's what I hinted at at the beginning of my talk. We do want to run in parallel, and we do want to run at scale. Let's talk a little about test execution. For anybody in the room who's seen a Sauce presentation before, we have this slide in many of our decks. Historically, people would execute tests in serial. They'd say: I need to execute my test on these three different Chrome versions, three different Firefox versions, three different Internet Explorer versions. Maybe I'm running them locally, maybe I'm running them in a grid, but I'm running them one by one. I haven't put a lot of thought into parallelizing them. Maybe they're longer tests that go through a checkout process, or a purchase-to-checkout process on a website, something like that. There's a big benefit when you parallelize these. So now, instead of running serial, we actually have numbers on this: if all these tests took one minute, the serial side would take nine minutes, and the parallel side would take one minute, because we're running them in parallel across all those browsers and versions. When you start to talk about testing at scale, if I'm going to run 100,000 tests a day, 200,000 tests a day, even a million tests a day, which is where a couple of customers I work with want to get, I can't do that serially. I can't execute a million tests serially because there aren't enough hours in the day. Even if my tests are very, very short, a million times very short is still a big number: a million five-second tests is almost two months of serial wall-clock time, whereas at a thousand at a time it's under an hour and a half. A million times something short, if I do it all at once, is not too bad.

So let's say you have these 10 tests that run, and we want to scale that up. We're going to talk about parallelizing. We can scale this up; I have, I don't know, maybe 80 or 90 there. We're going from 10 to 80 or 90, and as I mentioned before, there are customers I've worked with going from 10 to hundreds, if not thousands, of tests per day. How do we actually do that? When you think about executing all of these tests in parallel, you have to have the systems behind those tests to support it. Oftentimes you're talking about pre-production systems, whether it's a QA environment, a dev environment, or a staging environment; these tests are going to be executed somewhere. It could be a production environment as well, though typically that's a little bit later in the cycle.
What often happens, and this has happened at a couple of customers I've worked with: you start to execute all these tests, and the systems just melt down. This is probably the least obvious failure mode I've seen, but also the most common. People think: oh, I can write these tests using my best practices. Then they push the button on the executor and immediately get a call from the network guy, or the server guy, whoever is maintaining the infrastructure in the data center, because the systems can't handle it. This is a real story. There's a customer in the financial services space, a Sauce customer, who didn't believe we could scale on the Sauce side. They said: we don't believe you can scale, we don't believe you can handle our need to run 1,500 or 2,000 tests in parallel. So what they did was throw as many tests at Sauce as they could in as short a period of time as possible, and they got up to about 1,800 or 1,900 concurrent sessions; a concurrent session, in Sauce speak, is just a test executing in parallel. Within minutes, we got notified on our side that all these tests had spun up, but we didn't go down; we handled the load. What happened on their side? The person who pushed the button got a call within about 30 seconds from their infrastructure guys saying: stop, our pre-production servers can't handle this. So it's something to consider, especially as you start to move toward scale. The same can be said for Selenium grids: if you're running on an internal Selenium grid today and you want to start executing all these tests in parallel, you'd better make sure you have enough nodes in that grid to handle it.

Let's talk a little about test structure. At Sauce, and in general, we talk about atomic and autonomous tests, and they're critical to being able to test at scale as well. Atomic tests: we talk about testing a single feature, about making tests short and succinct. Think of a purchase process on a website. I don't know how big Amazon is in India; Amazon's huge in the US, I think they're taking over the world. I wouldn't go and write one test script that logs in, searches for a product, adds it to a cart, hits checkout, and goes through the purchase process. That's a lengthy test, and we're testing multiple features: we're testing search, we're testing add to cart, we're testing login, we're testing address input, we're testing payment systems, and we might even be testing APIs on the back end that lead to shipping. That's a huge test. There are a lot of tests out there in the world today that do exactly that; I see it with people who use UFT, tools like that, who haven't migrated to Selenium yet. They're very, very common. That's kind of the opposite of an atomic test. As atomic tests, I would test login. I would test search. I would test my payment API. And you can stub out the rest, all the other stuff you don't care about. You can either mimic it or create stubs on the back end, and there are plenty of products out there that will help with test data and with setting up and configuring test environments; I know when I worked for CA, we helped position some of those in the market as well.

Autonomous tests go along with those atomic tests. You want a test that can be independent. What does it mean to be independent? I don't want to rely on another test.
If I have two tests that rely on each other, we're really testing two different things: if test A fails, it's going to cause test B to fail. We have a dependency between the two, and that's bad. We don't want that to happen; we don't want a search failure to make the login test fail, or something like that. Think about why this matters for testing at scale: if I'm going to scale up and run 100,000 tests at once, whether across multiple browser/OS combinations or across multiple parts of my application, and I start to have dependencies between those tests, things are going to come crashing down pretty quickly.

Let's take a look at a couple of examples. Here's a simple test: I'm going to test email search functionality. Log into an email account: I send an email to account one with subject 123, body XYZ. I wait until that email appears in the inbox, I search, and then I should see an email with subject 123 in the search results. Who thinks this is atomic? Who thinks this is not atomic? Okay, I got one. Who doesn't think anything at all? All right, good, person in the front being honest. I like that. So this is not atomic, right? This is definitely not atomic, because I'm relying on email delivery and then I'm relying on search; I'm doing multiple things in this test. You could also argue about whether it's autonomous, but since this is a single test, you'd need a pair of tests for it to be non-autonomous. It is, though, definitely not an atomic test.

Let's take a look at how we can change this to make it atomic. Second scenario: I still want to test email search. I have email account one, and I populate it; here I populate, I'm not sending an email. Subject 123, body XYZ. I log into account one, and then I search for that. Is this better? Atomic, not atomic? How many people think it's atomic now? How many still think it's not? It's atomic: because I'm not sending an email, I'm only testing search functionality. I'm using some type of API command to populate that account. I see some questions in the front. I had a discussion with Titus, who's here, about whether or not this is atomic, and we probably had the same thoughts you did; this was a stock example that I grabbed. We can debate it, but it's certainly more atomic than the first test, because we're not relying on email delivery. You could argue it's not atomic because I'm using an API command, and that API command may or may not work, but it's definitely better than the first test. Is that better, Marcus? All right.

All right, let's look at autonomous tests. Autonomous tests remind me of a pair of tests. In the first case, I want to test email search by subject. Again, we take our atomic example from before, and we can debate whether it's truly atomic: I populate email account one with subject 123 using my API command, I log in, and I search for that subject. Second test: I want to delete all the emails. I log into account one, I delete all the emails, and I should see that I have no emails left. These two tests: are they autonomous? Are they not autonomous? I see two hands and about a hundred people out here. Okay. Right, these are not autonomous tests, because we're relying on an action from one test to drive the action of the second.
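(In code, the dependent pair might look like the sketch below. MailClient is a hypothetical stand-in for the mail service's API, defined inline only so the example is self-contained.)

```java
import java.util.ArrayList;
import java.util.List;
import org.testng.Assert;
import org.testng.annotations.Test;

// Sketch of the NON-autonomous pair: both tests share account1's state.
public class NonAutonomousEmailTests {

    private final MailClient account1 = new MailClient();

    @Test
    public void searchBySubject() {
        account1.post("123", "XYZ");  // populate via API command
        Assert.assertEquals(account1.search("123").size(), 1);
    }

    // dependsOnMethods makes the coupling explicit: this test only makes
    // sense after searchBySubject has populated the shared account, and
    // serializing them like this is exactly what you can't afford at scale.
    @Test(dependsOnMethods = "searchBySubject")
    public void deleteAllEmails() {
        account1.deleteAll();
        Assert.assertEquals(account1.inboxCount(), 0);
    }

    // Hypothetical in-memory mail client, not a real library.
    static final class MailClient {
        private final List<String> inbox = new ArrayList<>();
        void post(String subject, String body) { inbox.add(subject); }
        List<String> search(String subject) {
            List<String> hits = new ArrayList<>();
            for (String s : inbox) if (s.equals(subject)) hits.add(s);
            return hits;
        }
        void deleteAll() { inbox.clear(); }
        int inboxCount() { return inbox.size(); }
    }
}
```

(Run the pair in parallel instead, and deleteAllEmails can wipe the inbox between the post and the search, failing a test that has nothing wrong with it.)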
Let's flip that around a little bit. Now we have the same test, or a similar test: I populate email account one with an email, subject 123, body XYZ, using a POST command via my API, and I search for that subject. So that's testing email search. The second test, where I want to delete all my emails: I populate account two with a hundred random emails via an API command, log into account two, and I delete all those emails. How about this one? Autonomous? Not autonomous? This one is. But why is this autonomous? In the first test, I'm testing email account one, which is a separate email account. I'm populating it and testing the search; you can argue whether or not it's an atomic test, but I'm populating all of this in account one. The second test isn't touching account one, it's testing account two. Account two already has emails in it; I don't care about search there, it doesn't matter at all. The first test does a search, the second one tests the delete function. So these are two independent tests. Autonomous is close to being synonymous with independent, and that's the key thing to think about when you're creating these tests.

Let's talk a little bit about testing frameworks. Selenium is the underlying protocol; it uses WebDriver. We heard Simon talk about the future of Selenium and WebDriver yesterday, and there are a lot of good things coming. Testing frameworks are key to utilizing Selenium, and they're key to parallelizing tests as well. This first point is about using in-house testing frameworks. A lot of the people I've spoken with this week, visiting customers earlier in the week and talking to people here today, are new to Selenium, newer to automation; you might not have a testing framework yet. But there are also a lot of customers out there who are a little more mature and already use frameworks in-house. It could be TestNG, it could be JUnit, it could be Protractor, it could be their own custom framework; there are lots and lots of frameworks out there. If you work for an organization that already has an existing framework, please use that framework. Even if you think something else is better, there's going to be a big knowledge base built up behind it to help you write your tests.

Parallelism is a must-have, obviously, or else I wouldn't be talking to you today. Many frameworks support parallelism natively. If you look at our examples on the web, TestNG supports parallelism natively, JUnit supports parallelism natively, many frameworks do; many others can be parallelized, but it's more difficult. So if you're in the position to choose a framework, take a look at this.

Synchronicity. I'm just using big words today: parallelism, synchronicity. Synchronous versus asynchronous testing. This comes down a lot to language. I see it frequently with customers who use JavaScript for testing. JavaScript is inherently asynchronous: when I send a command, I'll get a response, I'll send another command and get a response, but I'm not guaranteed the order. This actually came up last week with a financial customer I'm working with. They said: my tests are failing, my screenshots are out of order in Sauce, this doesn't look right, why? We looked at their code, and it had to do with exactly this notion.
So you can absolutely use a language like JavaScript, but just be aware of this when you're writing your tests. Java and other similar languages typically don't have this problem, because you're sending a request and waiting for the response.

Onboarding and documentation. This is something that's often overlooked. I tend to work with a lot of customers that just go out and grab the latest, hottest framework they can find in the open source community, where the documentation might be quite slim, and then they come to me and say: hey, why doesn't my test work? The first thing I typically do is go look at the documentation. Some are better than others. Regular maintenance should be something most people understand: you want a framework that's actively developed. You don't want to adopt something and then get stuck, because then it's on you to make sure it works with newer versions of Selenium or meets your needs. The last point here is about the needs of dev and QA, and this gets into how you write your tests. I have TDD and BDD up here. There are certainly frameworks and languages that are more suited to BDD or TDD than others. For BDD, anything based on the Gherkin language is certainly a lot easier for people to understand. If I'm a developer writing tests, a framework that utilizes a language like Java or C# might be a little better to use. These are things to think about when you consider a framework. Are all of these necessary when we talk about parallelizing and testing at scale? Not really, but they're key to being able to create a maintainable test library that you can then execute repeatedly at scale.

All right, let me take you through an example. I have to call this the go-big-or-go-home slide, or section. Here's an example that I've created, in Java; this is a screenshot from Eclipse. Because I'm using TestNG, it has built-in support for parallelization. I'm utilizing the page object model here; I think my colleague Titus gave a talk yesterday about some of the advanced topics in the page object model, I know he's pretty keen on that. It basically means that my pages are abstracted, separated from my tests. We can see that here: I have my page object, GuineaPigPage.java, which has the outline of the elements on the page. It's a web page reached through a particular URL. I have my test setup abstracted as well: TestBase.java, which I'll show you in a few minutes, is where I set up my desired capabilities, the browser/OS combinations I want to test on. And I have two different tests here, even though there are three Java files. The follow link test goes to a particular page, clicks on a link, and makes sure it lands on the right page. The text input test goes to a page, puts some input in a text box, and makes sure that input is correct.

Here's what that code looks like behind the scenes. This follow link test extends that TestBase class. You can see I create a driver; this is a RemoteWebDriver. And all this does is click on a link. So who in here thinks this is a good test? Good test? Bad test? I think this is a great test. We use this in demos all the time, but if you think about the way tests should be written, atomic and autonomous, all this does is test one thing.
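(The slide itself isn't reproduced here, so this is a minimal reconstruction of what a test like that might look like. The page URL, the locator, and the grid endpoint are illustrative assumptions, and the capabilities setup that the talk keeps in TestBase.java is inlined for brevity.)

```java
import java.net.URL;
import org.openqa.selenium.By;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.Assert;
import org.testng.annotations.Test;

// Sketch of the follow-link test: one action, one assertion.
public class FollowLinkTest {

    @Test
    public void followLink() throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("browserName", "chrome");
        // Point this at a Selenium grid, Sauce, or wherever you execute.
        RemoteWebDriver driver =
                new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), caps);
        try {
            driver.get("https://example.com/guinea-pig.html");  // page under test
            driver.findElement(By.id("link")).click();          // the one action
            Assert.assertTrue(driver.getCurrentUrl().endsWith("guinea-pig2.html"));
        } finally {
            driver.quit();
        }
    }
}
```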
And it might be very, very trivial, but so what? Maybe this test takes five seconds to execute. Okay, great. Now I can run it across all the different platforms I want to run it across, without having to worry about any dependencies, or it taking too long, or anything like that. Likewise, here's my second test. This test creates a driver, generates a random string for the comment input (that's what the string commentInputText is), goes to a page, submits that comment, and then asserts that the comment actually contains what we expected. Who in here thinks this is a good test? It's a pretty good test. Again, we're testing a single piece of functionality.

How do we actually scale? We talked about frameworks. Again, this is TestNG utilizing Maven, and we're specifying this in our POM file. This is just for demo purposes; there are many ways to do this, and the way you actually do it will depend on your framework and your language. For the example I'll show you in a second, we're parallelizing by method; each one of those tests is a method. I'm saying I can run up to 1,000 methods in parallel, if I have access to that many concurrent sessions, whether it's a Selenium grid, my local machine, or Sauce. This particular example runs about 100 tests concurrently, so nothing crazy; I could scale further, I just leveraged what I had access to.

So here's this test running. If I find the right button, we can take a look at this video: I kick this off via Maven, and it takes a second to build the project. What we see here is that it's going to run that test suite. I hope it runs; it's a video, so I'm not relying on anything live. And what we can see, pausing the video for a second, is this little meter on the left-hand side of Sauce: we're up to 156 concurrent sessions out of 100 possible sessions. We allow for some bursting on our side, but this test scales very well. When I was testing this out, I actually got it up to quite a bit more, but for the sake of making a video to show, I wanted to keep it brief. If we let this run a little bit, we'll see these tests all start. This is using Sauce, but again, you could use a Selenium grid, you could execute this wherever you want, provided you had the capacity. All these tests run pretty quickly because they're atomic and autonomous. We'll see from the little spinning icons that they start to finish; they're still executing, and they really go fast, even though it's taking a little while here. These tests are on Sauce, so Sauce is recording all the videos and all the screenshots; for those of you who talked to us at the booth, you heard a lot about what we do, but you can certainly do this on your own as well. This isn't meant to be a big advertisement for Sauce. When this completes, and we'll skip ahead a little bit, we see the test run complete. For those of you who can't read the text up here, I know it might be a little small: we actually ran around 95 tests in a minute and 19 seconds.
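(The POM-file setting described above, parallel by method with a thread-count ceiling, might look roughly like this using the Maven Surefire plugin's TestNG provider; a sketch, not the talk's actual configuration.)

```xml
<!-- In pom.xml: pass parallel settings through to TestNG. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <properties>
      <!-- treat each @Test method as the unit of parallelism -->
      <property>
        <name>parallel</name>
        <value>methods</value>
      </property>
      <!-- ceiling on concurrently running methods -->
      <property>
        <name>threadcount</name>
        <value>1000</value>
      </property>
    </properties>
  </configuration>
</plugin>
```

(The same knobs exist directly in a testng.xml suite as `parallel="methods"` and `thread-count`.)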
If we think way back to that parallelization slide: each one of these tests takes about 15 seconds or so to run. If I ran them serially for that same minute and 20 seconds, we're looking at, I don't know, maybe eight or nine tests. But by parallelizing, and by making sure these tests were atomic and autonomous, I was able to run just under 100 tests in a minute and 20 seconds. I utilized all those best practices, I kept the tests short, I thought about how I was going to create them, and I was able to do this.

To wrap this up and go back a little bit to the beginning: we've talked about testing, we've talked about our best-practice principles, we've talked about how to structure our tests, and we've talked about the need to have a backend that can support all of this. We execute that sample test a lot over the course of a day, and it actually points to a GitHub page, so we probably crush GitHub with lots and lots of requests. Fortunately they can handle it; they haven't yelled at us yet. If I were a company running a test like this over and over again, I would want to make sure my servers were adequate to handle the load: the API calls, the database calls, whatever it might be. And think back to testing the right thing: I don't just want to test arbitrarily, I don't want to test on platforms I don't care about, I want to make sure I test the right thing. Lastly, I want to make sure I write good tests. Because if I don't write good tests, if I don't follow those best-practice principles, then my tests are not going to run properly, things are going to come crashing down, or I'm not going to deliver quality code. With that, I know I'm a little bit early, but thank you very much for your time. We can do some Q&A if you'd like. Yep, the question right there.

Hi, so you spoke about test frameworks, and you mentioned TestNG and JUnit as test frameworks, but they're actually test runners, aren't they? I mean, if you talk about test frameworks, you would talk about a data-driven framework, or you would talk about Cypress or Robot; those are frameworks, right? So how do you differentiate between a framework and a test runner?

I think we're getting into semantics a little bit here; I've certainly heard that question before, test framework versus test runner. I think of a test framework as being comprehensive and inclusive of a test runner. A test framework allows me to parallelize, for one, and it allows me to execute tests, so I think a test framework encompasses test runners. I do agree with your point, though: there are distinctions between the two.

Okay, and I just have one last question. What do you mean by synchronous and asynchronous? How does that come into the picture when you execute a large number of test cases? Does it slow down the run time? Do the waits, the implicit waits and explicit waits that you have within the test case, have an impact in any way when you use JavaScript versus Java?

Yep, so where this comes up, synchronous versus asynchronous, I'm largely talking about JavaScript versus Java and the like, and the responses from the web pages. When you talk about parallelizing at scale and running tests at scale, I think it's more core to making sure your tests execute properly in the first place. The issues I see with this are customers building their tests, not necessarily running at scale, where all of a sudden things are executing out of order, and then we have to go figure out why. Utilizing waits is certainly helpful for making sure pages load before you act on them, but in my experience that's often overlooked.
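(An explicit wait is the usual way to pin that ordering down in Selenium; a minimal sketch in Java, with a hypothetical locator.)

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// Sketch: block for up to 10 seconds until the results are actually
// visible, instead of assuming the page got there first.
class WaitExample {
    static WebElement searchResults(WebDriver driver) {
        return new WebDriverWait(driver, 10)
                .until(ExpectedConditions.visibilityOfElementLocated(By.id("search-results")));
    }
}
```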
Any more questions? Yes.

So I have a couple of tests where each test makes some global changes, and I'm not able to run them in parallel. Is there any solution or something like that?

A solution to run tests that make global changes? What do you mean by a global change?

So for example, I need to test enabling something, and the other test... I mean, if I run these tests in parallel, the first test will enable it, and maybe the second test is trying to test what will happen if it is disabled. These kinds of tests cannot be run in parallel, right?

It would be difficult, because then you're talking about tests that are non-autonomous; you have one test that's dependent upon another. What I would do in that case for your second test is set a flag in the test and say: if this is disabled, what's going to happen? Because you're testing two things there. One, does it enable and disable the right thing? And two, if it's disabled, what happens? Those are two different things you're testing, and if you conflate them, you're going to run into problems.

Okay. One more thing: maybe our application will only allow one thing to run at a time, one script at a time. It will only allow one instance running at a time.

Your application runs one script at a time?

Maybe it cannot accept more than one running script request at a time.

So this would be one of those it-depends cases, right? If your application can only handle one script at a time, then yeah, it's going to be tough to run in parallel. But we can certainly talk and figure out ways around that. I've yet to find a test that can't be parallelized, or an application that can't be tested in parallel. It's just a matter of thinking about the structure of the test, what you're trying to test, and what you're trying to achieve with it.

Okay, one over here. This is regarding testing the right thing, so...

I'm sorry, can you open the front up?

This is regarding testing the right things. Currently we are using BrowserStack, and in the configuration we have to define what browser/OS combinations to test. Now, every time a new Firefox version or Chrome version comes out, we have to update it. So is there a possibility of AI in that, like having a top-10 latest combination? Instead of configuring it manually, we could take that data from Google Analytics, and it would automatically pick the combinations, so that we don't have to manually update the browser/OS configuration.

So, and this is putting on my Sauce hat now, right, I know you mentioned BrowserStack. What we do at Sauce to handle that: if I request the Chrome browser or the Firefox browser, I can use the word latest, or latest minus one, latest minus two, to get the most recent version that's supported. I can't speak to BrowserStack on the mechanics of how to do that there. But then you're just talking about your testing vendors at that point.
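(On Sauce, that request looks roughly like the capabilities sketch below; the exact strings for latest-minus-one vary by vendor, so treat these values as illustrative and check the vendor docs.)

```java
import org.openqa.selenium.remote.DesiredCapabilities;

// Sketch: ask the cloud for the newest available browser by keyword
// instead of pinning a version number.
class LatestCaps {
    static DesiredCapabilities latestChromeOnWindows() {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("browserName", "chrome");
        caps.setCapability("platform", "Windows 10");
        caps.setCapability("version", "latest");  // vendor-resolved at run time
        return caps;
    }
}
```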
Does Sauce base that evaluation of latest versus latest minus one purely on what is latest, or on what is most popular, driven by analytics?

It is the latest version available on the Sauce cloud. So it's not the most popular.

It's interesting, because I heard the question as: would there be a value-add from Sauce in helping us make that decision about popularity versus simply what is latest? Which is, I guess, just a feature request.

Yeah, it's certainly a feature request. You would have to understand your analytics, and we would have to be able to take in your analytics, to figure out what the most popular combinations are. It's an interesting thought. One of the directions Sauce is going, and I know BrowserStack and other providers are going there too, is moving more into AI, thinking about how to help customers figure out both what to test and how to resolve common errors in their tests. So it's certainly something we consider. I think we have time for about one more question.

So I have one question: is there any possibility of generating reports for the different OS/browser combinations when we execute the scripts? Suppose I executed 100 OS/browser combinations; from TestNG you will get only one report, the emailable report. So how can we generate reports based on those OS/browser combinations?

Are you talking about a test result report?

Yes, yes.

So, a test result report based on those browser/OS combinations. I know how Sauce does it; I can't speak to other vendors, and it's going to depend on your analytics and how you're capturing that data. If you look at Sauce, we have a nice dashboard where you can see the results filtered by OS or browser, however you like. I'm sure BrowserStack has a similar type of report. If you're doing this internally on a Selenium grid, it will be up to you to figure out how to generate a report from that data. It's possible; I know it's possible because I've seen customers take the raw data from Sauce, put it into Tableau, and generate a Tableau report. You can do it in any way, shape, or form. It just depends on how you're looking at the data and what particular platform you're running your tests on.

We used to put our data into Rally.

Ah, okay, there we go. You can put the data into Rally; Rally has a whole test management implementation as well.