Thank you for coming. I know this talk is recorded, so before I get into any of it, I'm going to thank my manager, Nate, for helping me create the topic and encouraging me along the way to apply to this conference. Thank you, Nate. Thank you also to all the engineers at ReviewTrackers for going through the presentation and giving me really valuable feedback. I really appreciate you all. But that is neither here nor there, so I'll get into the material.

You are here at "How to Unflake Flaky Tests: A New Hire's Toolkit," and the metaphor we will be using is a toolkit. We will not be using this other one, which was one of the alternate talk titles I had: "Keep the Flakiness for Your Croissant: How to Unflake Your Automated Tests." That is not the metaphor we're going for; it is not a talk where each concept gets a different kind of croissant. This is also not the title of the talk: we will not be talking about cake, and each concept will not be described with a different kind of cake, because we've already eaten lunch. Oh yeah, and it is also not a Taylor Swift themed talk. That was another title I was debating, but no Taylor Swift, and you don't want to hear me sing, because that would be really painful. So the talk is not going to be ten different Taylor Swift songs, one per concept. It will be tools. Tools are what we're talking about.

Before I get into the talk, I'll say a little bit about ReviewTrackers, the company where I work. What we do at ReviewTrackers: let's say you're a business like Pizza Hut, and you have thousands of locations. Each of those locations has a Facebook page, a Google page, a Zomato page, a TripAdvisor page, and each of those pages is a place where users can leave reviews. So let's say you work at Pizza Hut as a brand manager and you want to collect all those reviews. You use our platform, ReviewTrackers, and create an account, and we grab the reviews from across the entire internet for you. We give you a central place to look at all of them, give you sentiment analysis of the review text, and let you respond to those reviews in one centralized place so that you can have centralized brand messaging. It's a really great platform. Something else really great about ReviewTrackers engineering is that we have a microservice architecture, which means the database is separate from the API, which is separate from the asynchronous services, which are separate from the email services. That makes it really nice for me, a test engineer, to test those services, because everything is separated into its own little microservice. To actually run the ReviewTrackers application, it takes 14 microservices. I think that's pretty cool.

I'll give a real quick warning before I get into any of the talk. I am a Ruby developer and a Ruby test engineer, so I'm not going to be using many code examples, but I want you to know that I'm coming from a Ruby background and thinking about things in a Ruby kind of way. Even if you're a Java developer or a .NET developer, though, you'll still be able to get things out of this talk; nothing is really Ruby specific. I just thought I'd throw that out there. One other thing is that I am somewhat new to programming. So that's me, the little baby at the laptop.
So please correct me if I'm wrong about anything at the end of the presentation, and just know that I'm coming into this talk with kind of new eyes. One other thing I wanted to mention is that I'm coming at flakiness as a test engineer; I'm not the developer of the application. At some organizations, the developers writing the application also write the tests. This talk is from the perspective of: I am a test engineer, I maintain the Selenium tests, and I'm not building the application under test.

I'll go real briefly into our test stack. We use Ruby. We use Capybara, which is a DSL for driving Selenium from Ruby. It's really great: it has a kind of natural-language feel, so the methods you use read like English, and you can read the code that's running the test and understand it pretty easily, to the point that even a project manager could read through a test and understand what's going on. (I'll show a tiny made-up example of what I mean in a minute.) Then we use SitePrism for our page object model, Parallel RSpec for parallel test execution, and Jenkins as our CI server. We also run our tests on a Selenium Grid, ten at a time. And we use Docker: all our microservices are dockerized, and the tests themselves are dockerized too.

Before I get into it, I'll just talk about how I came up with this topic. The reality is that I've been a new hire three times in my life. My first job was as a manual QA engineer. I did that for two and a half years, and while I was there I developed some programming skills and was doing some of that exploratory unit testing that was talked about in the earlier keynotes. The other two jobs I've had were as a test engineer. One of the things I love about starting a new job is that you get these really fun new-hire checklists. I love new-hire checklists. I love the feeling of going through one and checking things off and thinking: I did that, I know that piece of knowledge, I'm ready to start the actual job. I love checklists. However, when it's time to actually start the job, that's when you get the file cabinet thrown at you. It's time to start, and instead of writing new tests, you're often handed broken tests or flaky tests and told: fix these. At each of the three jobs I've had, that's what happened after I completed the checklist. So that's why I came up with this topic. I thought, let me pass on the lessons I've learned and the things I think are valuable to know at that stage, so that you can fix your flaky tests and then get on to writing new tests and providing further value for the application. Get past the flakiness. Shake it off, if you will.

Some of the takeaways I want you all to leave here with: confidence to uncover the flakiness in any of your tests, and awareness of the breadth of issues that can happen in test systems. I won't cover everything that can happen, because there are just so many variables when it comes to testing, but I want you to leave with an awareness of some of that breadth. And I also want you all to have persistence. Don't let one flaky test get under your skin and be a thorn in your side. Be persistent and tackle it, and hopefully you'll have the tools to do that at the end of this talk.
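Here's that tiny made-up Capybara example I promised, just so you can see the kind of readability I mean. This is a minimal sketch; the page, fields, and credentials are invented, not our real app:

```ruby
# A made-up Capybara/RSpec step, just to show how close to English it reads.
visit '/login'
fill_in 'Email', with: 'brand.manager@example.com'
fill_in 'Password', with: 'not-a-real-password'
click_button 'Sign in'
expect(page).to have_content 'Dashboard'
```

Even someone who doesn't write Ruby can follow what that is doing, which is a big part of why we like it.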
And before I get into the tools, I'm going to describe really quickly what I think flakiness is. This is my view; everybody might have their own definition of flakiness, and it's totally all right for you to have your own. For me, the first thing I think of is non-determinism. What that means is: let's say I have the same application under test and the same test script, and I run it ten times. Nine times out of ten it passes, but one time out of ten it fails. To me that's non-deterministic, because nothing has changed in the ecosystem but a failure has been reported. Another thing we think about when talking about flakiness is a false negative. When that one-out-of-ten test fails in your CI pipeline and you look at it, you're wondering: is something actually wrong with the application, or is something wrong with the test code itself? Most of the time it's probably the test code, so that's a false negative. There's also the inverse situation, which isn't quite flakiness but is something you need to keep in mind as a test engineer: a false positive. That's when your test passes but isn't actually exercising the behavior you're trying to test. As you're debugging, know that that can happen. It's not quite flakiness, but it's important to think about. And another attribute I associate with flakiness is that it's hard to reproduce.

Here are some of the common causes of flakiness. I think one of the biggest is changed selectors, and there are a lot of talks at this conference about how to really nail down your selectors, so use those. Another common cause is changed application behavior. You as a test engineer need to be aware that if the application changes and your test is still written against the previous version of it, your test will fail. Waiting is a really common issue: if you were to Google "what is wrong with my test" plus a Selenium error, people would say, hey, are you waiting on the right thing? As a test engineer you've really got to master waiting and have your strategies in place, and I'm sure a lot of the talks at this conference cover waiting strategies. There was also a talk earlier today by Dan from Sauce Labs about test order dependency, where he talked about the idea of an atomic test and how tests that aren't atomic can cause flakiness. The same goes for test parallelization: you can run into unique issues when you try to parallelize your tests, if, say, your database isn't meant to handle that load or your login system isn't meant to handle that load. All sorts of things can happen when you parallelize. Test setup is in the same family: is there anything you're executing before your test, maybe factory data creation, that can cause flakiness? And then the last one, third-party code. Let's say you have a third-party mail service that you use in production to send all your email, and in your staging environments you use that same service, but maybe you pay for a less expensive tier of it.
So in production you pay for the largest tier and get all the bells and whistles that come with it, but in your testing environment, if that tier is less performant, it's possible that the third-party code is what's causing the flakiness. There are a number of things with third-party code that can cause flakiness, and you just need to be aware that it's something to think about.

I'll cover really quickly some of the common Selenium errors you can run into. Stale element reference is probably the most common one. Let's say you use Selenium to grab an element, and then JavaScript, unbeknownst to you, deleted that element and swapped in a new element with the exact same attributes and text. Selenium's reference still points to the prior element, so when you go to interact with that element again, Selenium is holding the old one and your reference is stale. There are a couple of other things that can cause stale element references too. (I'll show a quick sketch of one way to cope with this in a couple of minutes.) Then of course there's "element is not clickable at point (x, y)": you're trying to click something and maybe a modal is sitting on top of what you're trying to click, and you'll get this error. And "element not found"; as a test engineer, no doubt you have seen that one before.

And before I get into the tools, I just want to show this quick video of a little bit of Selenium silliness. My test script thinks it's interacting with the first row, but we actually put a flag on the third row and then did an action on the first row. That's because when the page loaded, we flagged what we thought was the first row, but it was actually the third row: as the rows loaded on this web page (you're not really seeing it in this video), the third one came in first in Selenium's mind. So just know that Selenium can be silly sometimes.

All right, I've thrown a lot of information at you about definitions of flakiness and so on. Now I want to get into the tools: how to debug that flakiness. For me, it's really about having a bunch of things in your pocket to get rid of that flakiness. When I think of a toolkit, I think, okay, let's say I'm trying to hang something on the wall. I need a nail, I need a hammer, and I need a stud finder, and what a stud finder does is find the part of the wall that has a stud behind the drywall, not a spot with nothing behind it. So first you use the stud finder to find the proper spot on the wall, then you put the nail there because you know that's the right spot, and then you use the hammer. It's the same thing with the techniques I'm about to go into: you might use one technique to narrow the problem down and then another technique to get even narrower. And, yeah, so let's get into the tools... except that I actually know nothing about tools. Those three things I just talked about are the only tools I know about. I'm not a handy person; I don't think I've ever actually hung anything on a wall. So tools are not the metaphor we're going to use. We're actually going to be talking about hats. This is the new presentation: "How to Unflake Flaky Tests: A New Hire's Hat Collection." I'll be going through you, as a test engineer, wearing ten different hats, and each of those hats will be a different technique for debugging flaky tests. This is our collection. As we go through, I'll keep coming back to this slide so that you know where you are in the presentation.
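First, though, that quick sketch I promised for stale element references. Capybara's finders and matchers retry a lot of this for you, but if you're holding onto an element reference yourself, a minimal re-find-and-retry helper might look like this. The selector and helper name are made up:

```ruby
# Hedged sketch: instead of caching a node and reusing it, look it up fresh on each
# attempt, and retry if JavaScript swapped the node out from under us.
def click_reliably(page, selector, attempts: 3)
  attempts.times do
    begin
      return page.find(selector).click
    rescue Selenium::WebDriver::Error::StaleElementReferenceError
      sleep 0.5 # give the DOM a moment to settle, then re-find
    end
  end
  raise "#{selector} kept going stale after #{attempts} attempts"
end

# usage inside a spec: click_reliably(page, '.export-csv-button')
```

A retry like this can paper over a real bug if you're not careful, but at least it makes the failure mode explicit instead of random.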
So the first hat is Selenium speed. As a test engineer, you need to know that Selenium is faster than you, and it can uncover actual bugs in your application. If your front-end JavaScript code isn't meant to be interacted with that quickly and you run a Selenium script that does a lot of things really fast, it's possible to actually break your application. It's really important to know that that can happen. I'm going to show a really quick video where Selenium speed exposed buggy behavior in our application. The way this works is that the initial screen is a review form an end user would fill out to review this place, McDermott Chester, which was created from our factory. Then I'm going to jump into the application really quickly and look at the review that got registered. Something else to keep in mind is that this is an NPS survey, so it's a zero-to-ten scale, but in our platform that gets translated into a zero-to-five scale; we normalize all the reviews that way. That means if someone selects a six in our application, it shows up as a three, and if someone selects a ten, it shows up as a five. All right, I'm going to play the video. Notice that Selenium selected six and then ten really quickly, and the review in our platform shows three stars, which means the six was registered, not the ten.

The idea behind this one is debouncing. Debouncing is a JavaScript technique for making sure actions are registered in the order they happen without breaking the system. For example, say you have a submit button and you click it twice: if that button is debounced, it will only honor the first click. In this case, we needed to remove some of our debouncing and honor the fact that it's possible for the end user to switch ratings really quickly. So debouncing is something you should think about as well. All right, we've gone over Selenium speed. Here are some photos of people in headbands. I'm a really big tennis fan, so I love Andre Agassi; he's the guy on the right there. He's pretty cool.

All right, next hat: the super user hat. By super user I don't mean someone with admin privileges; I mean someone who is an expert in the application under test, someone who has maybe done a lot of testing in that application and is familiar with its ins and outs from an end-user perspective. When you wear this hat, you need to really ask: for our Selenium test, is the workflow it's executing actually valid? Is the error you're seeing foreseeable? A good example: let's say you have an application with a form, and the form has logic in it so that there are divergent paths. You see the Selenium test going down one path, but you actually intended for it to go down another. You as a test engineer need to know that maybe Selenium selected the wrong radio button, and the reason you're failing now is that you're going down an alternate path that's perfectly possible in your application. The routing isn't broken; Selenium just selected the wrong radio button and went down the wrong path. And then something else to keep in mind with the super user hat is that we're new hires at our company, right? So it's possible you're not yet a super user.
What that means is you need to find a super user: find someone who's an expert in your application when it's time to debug, and get them to help you. For our application at ReviewTrackers, I consider myself a super user. This is a part of our application where you can add a note to a review, and a modal pops up if you enter the @ sign, kind of like Twitter or anything else where you can mention other people and they get notifications that you referred to them. As a super user, when I saw this Selenium test fail, I knew what went wrong. This test is playing at twenty times slower than actual speed: we entered a full name, and the modal collapses. The reason it collapses, and the reason I knew it would, is that the modal can't handle spaces. As soon as you enter a space, the modal breaks. But the engineer who wrote this test didn't know that. And here are some Sherlock Holmeses.

All right, the third hat: the triage hat, the nursing hat. Triage is really important as a test engineer. You need to be good at triage; you need to be the face of that error and handle it, handle the messaging around it. The failure on this screen is a failure we're seeing in Jenkins, and this is just the raw text of the failure. If you're lucky, you might work in an organization that has really nice reporting, so you can look at that failure in a more beautiful way. I actually implemented this reporting, so I'm giving myself a pat on the back by mentioning it. But it really is nice, when you're trying to triage something, to get syntax highlighting; it helps you triage that error a little bit quicker.

So in the endeavor of triaging, you're first going to look at the stack trace, and I'll go over this one really quickly. The first thing to notice is that it's in English: it says something like "expected has_export_csv to return true, got false." The fact that the error reads in English like that means we're using a test framework and we have an assertion failure, as opposed to a code failure like a zero-division error or a method-not-found error, something more technical. So when you look at a stack trace, the first thing to ascertain is: is this a code error or an assertion error? Then you can look at that stack trace and find precisely where in the tests it happened. Here's another stack trace, and I'm showing this one because it's an example of a code error, not an assertion error. It's a NoMethodError: undefined method user_link for nil:NilClass. The reason this error is happening is that this find operation returns the content if it finds what it's looking for, and returns nil if it doesn't, and then we're trying to call .user_link on that nil. That's why we get the error. But really the important thing to know here is that the test hasn't even gotten to the point where it's testing the thing under test; something changed before we could even get there, such that the script itself is failing.

So we've looked at the stack trace. The next thing you might do is look at a screenshot. I think screenshots are great. If you're a test engineer and you haven't turned on screenshots for your framework, I really encourage you to do it. It's a great place to start when you're debugging test failures.
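If you haven't wired that up before, here's roughly what a failure-screenshot hook can look like with RSpec and Capybara. This is a minimal sketch, assuming Capybara's DSL is mixed into your specs; the path and naming scheme are just an example, and some teams use the capybara-screenshot gem and get this for free:

```ruby
# Hedged sketch: save a screenshot whenever an example fails, named after the example.
require 'fileutils'

RSpec.configure do |config|
  config.after(:each) do |example|
    if example.exception
      FileUtils.mkdir_p('tmp/screenshots')
      path = "tmp/screenshots/#{example.full_description.gsub(/\W+/, '_')}.png"
      page.save_screenshot(path) # what the browser looked like at failure time
      puts "Saved failure screenshot to #{path}"
    end
  end
end
```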
If you're really lucky, your organization might provide you videos of the failures. If you use something like BrowserStack or Sauce Labs, you get those really nice videos, and I encourage you to look at them; it's one of the first things to do before you go into the code. The video I'm about to show is one of the test failures we encountered at ReviewTrackers, and I'll describe the situation a little. There's a text box on the right and a preview window on the left, and the text you enter into the text box is reflected in the preview window with some default text added to it. That's done through JavaScript. What's going to happen when I hit play is you'll see Selenium enter text into the text box, and you'll see the JavaScript in the preview window kind of freak out. I'm playing this at about twenty times slower than the test actually runs. So we enter the text "test message," wait for it, and then the JavaScript adds the default text it expects at the beginning. This was another one of those Selenium speed issues: the page wasn't actually fully ready, because the JavaScript that handles mirroring the text box into the preview window wasn't ready, so it overrode what we entered in the test. And really the only way I could triage that was by looking at the video or by running the test locally; there's no way to know what was going on just by reading the error. Here are some nurses.

All right, our fourth hat: the debugger hat. There are two pathways when I think about debugging. You can debug your Selenium code, your test, and you can also debug your application, to identify whether there's actually an issue with the application under test. The first thing I'm going to talk about is debugging Selenium scripts. One of the things I love to do when debugging a Selenium script is put breakpoints in the test. This is kind of hard to see, but do you see the red dots on the right border? Those are the breakpoints I put in the test, and you'll notice they're spaced out at roughly every ten to fifteen lines. The reason I do that is that, yes, with breakpoints you can step line by line or even step into methods and debug that way, but when I want to get a feel for a test script and understand what it's doing, I like to put breakpoints every ten to fifteen lines and then watch Selenium execute those ten to fifteen lines, so I can see what it's doing from a higher level. That's usually how I start debugging: I just ask, what is this test trying to achieve? Then from there I'll put a breakpoint in a specific place and do the more interactive debugging. But this is where I start. One other thing I wanted to mention: as a test engineer, it's important to have good debugging skills. It's something you always want to improve as an engineer, so keep expanding your debugging skills; it's really fun, and it's a good way to become a better developer.
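To make the breakpoint idea concrete, here's a hedged sketch of what that looks like in a Ruby test with the pry gem (byebug works the same way). The spec and the helper are made up; the point is just the spaced-out binding.pry calls:

```ruby
require 'pry'

it 'exports the review CSV' do
  log_in_as_brand_manager          # hypothetical helper from our suite
  binding.pry                      # pause here: poke at page.current_url, page.html, etc.
  visit '/reviews'
  click_button 'Export CSV'
  binding.pry                      # pause again 10-15 lines later and watch what Selenium just did
  expect(page).to have_content 'Your export is ready'
end
```

When the run hits a binding.pry, the browser just sits there, so you can inspect the real page in the middle of the test.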
Another thing you can do when you're debugging these test scripts is to add those spaced-out breakpoints, open up your local browser, and open the network traffic tab. The reason I like to do that is that if you run a test and you see 400s or 500s in the network tab, you know something's wrong. Not only that, if you have a microservice architecture like I do, it's pretty easy to understand the API, because it's a simple REST API, and looking at the payloads can be helpful. Let's say you're looking at a page that's supposed to show a list of 30 reviews, and you look at the payload and you only see 28 reviews: that indicates there may be something wrong in the API layer. If you have a simple CRUD application, looking at those payloads can be really helpful. I'm going to show a video of me doing that very thing. I'm sitting in a breakpoint, I hit continue, we reload a new page, here are all the network calls, and I'm looking at the payload and thinking, hmm, something doesn't look right. What happened here is that I called one of my fellow engineers over, someone who was a subject matter expert in this area, and he said, oh yeah, you're right, we're missing the main data of this payload, and that's what's wrong.

There are some other things I like to do in the Chrome debugger, little Chrome dev console tricks like $x for XPath, so you can test out your XPath selectors in the console instead of in your script. In this video I'm looking for a specific element; here's the XPath in question, and I just enter it really quickly in the console. One element was returned, so it looks like that selector works. Another thing you can do is use document.querySelectorAll, and there's also $$ (double dollar sign), which is Chrome's shortcut for running CSS selectors against the DOM. In my breakpoint I have 25 rows, and I execute that same CSS expression in the console, call .length on it, and okay, there are 25. There they are. I use this pretty often: I'll usually start in the console to narrow down what I want the selector to be, and then I'll put it in the script.

Another fun little tool you can use is drawing a red box around the element in question. Sometimes your selector returns an element and you're like, wait, is this actually the element I think it is on the DOM? There's a little JavaScript execute-script line you can run that puts a red border around the element in question, and you can use it during a debugging session; it's a cool thing to do. (There's a rough sketch of it at the end of this section.) Here's me doing it on the header, and now it has a red box around it.

So I've described some of the ways I like to debug Selenium code. Now I'm going to talk about debugging application code. The first place to start when you're debugging application code is the application logs. This is a screenshot of me looking at the database logs for one of our test runs, and it says something like "could not connect to DB after some time, investigate locally for the logs." Clearly something has gone wrong; our DB isn't even up, and I knew that instantly by looking at the application logs. If you don't have permission to look at your application logs, prod your developers to get you that permission, because it's really important.
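Here's that red-box trick as a rough sketch, the way we'd run it from Capybara's execute_script; the selector is made up, and newer Capybara versions let you pass the element straight through as an argument (with raw Selenium WebDriver the idea is the same):

```ruby
# Hedged sketch: outline whatever the selector returns so you can see it in the live browser.
element = page.find(:css, '.reviews-header')   # hypothetical selector under question
page.execute_script(
  "arguments[0].style.border = '3px solid red'", element
)
```

Run that from a breakpoint and you get instant visual confirmation of which node your selector actually grabbed.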
Another thing you can do, which is a more advanced technique but pretty cool, is that if you have access to your logs from the command line, you can run the tests while piping the logs to the console, so you can see the logs generated by your test session. Here's a video of me doing exactly that. We use Heroku, and I'm tailing the logs for our API server while the test runs on the left. Then you can do some really cool things, like grep for errors in those logs as your test is running. Here's me doing that, and look: there were a handful of errors as our test was running, so now we know there's something wrong with our API. If you do this, your developers will thank you, and they'll appreciate your technical skills and your ability to really hunt down bugs in the application. Here are some debuggers; this is one of my favorite photos in the entire presentation, it's pretty great.

All right, the fifth hat: the diffing-HTML hat. The actual text on the hat says "bird nerd," so this is your ornithology hat, the hat of someone who really appreciates birds, goes on bird-watching expeditions with binoculars, and hunts through the HTML to find what the changing element actually is. What am I supposed to be waiting for in this Selenium script, such that my application under test is ready to proceed with the test? Because you don't want to run your test while your application is still loading, or while the JavaScript your application depends on is still loading. You want to ensure the DOM is in the right, expected place before you run your test. One thing you can do to see the HTML changing as your test runs is open up the dev console and watch all the changing elements. I will say it's a bit overwhelming to build a mental map of what's changing just by looking at that; it kind of hurts my brain when I'm trying to investigate a failure, and I think that's because we end up grabbing the wrong thing. So I don't really recommend staring at that too much.

What I like to do instead is use a little script to capture HTML changes. The value of that is that you know exactly what is changing in the application under test, so you can write a waiter that waits on the proper element. I'll go into the details of the script so you all understand what's going on. The first thing it does is grab the entire page HTML as a string and write that HTML to a file. Then it loops for seven seconds, repeatedly grabbing a fresh copy of the entire page HTML as a string and comparing that new copy to the previous one it grabbed. If they're different, meaning the HTML has changed, it writes the new copy to a file. What that means is that if you run the script and nothing is changing in the DOM, you get one HTML file written, but if you run it while tons of things are changing in the DOM, you could get up to 20 files generated, with every little intermediate step of changes in your application. And then once you have all those files, you ask: how can they help me?
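Before I answer that, here's roughly what the capture script itself looks like. This is a hedged reconstruction in Ruby against Capybara's page object; the file naming and the seven-second window are just what I happen to use:

```ruby
# Hedged sketch: snapshot the full page HTML whenever it changes over a short window.
def capture_html_changes(page, seconds: 7, prefix: 'dom_snapshot')
  previous = page.html                          # grab the whole page as one string
  File.write("#{prefix}_0.html", previous)
  count = 0
  started = Time.now
  while Time.now - started < seconds
    current = page.html                         # grab a fresh copy
    if current != previous                      # the DOM changed since the last snapshot
      count += 1
      File.write("#{prefix}_#{count}.html", current)
      previous = current
    end
    sleep 0.1
  end
end

# usage from a breakpoint: capture_html_changes(page)
```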
So one of the things I like to do when analyzing those giant HTML pages is run xmllint on the HTML files, because that prettifies them and then lets you diff the HTML more effectively. The reason I use xmllint is that my text editor has a pretty-print HTML feature, but every time I try to run it on my giant HTML files it crashes, so you really have to use xmllint to format, or prettify, those files. Here's an example of the HTML before it's prettified and then after. Once you have, say, two or three of those prettified HTML files, you can use KDiff3 to see the actual changes. And once you've identified the actual changes in your application, you can write proper waiters that wait for that changing thing to be gone, and then your test will be a lot more stable. This is an example of me using KDiff3: we have an element called isLoading, which indicates that the page is in a loading state, and when it's gone the element is just called loading block. So we knew we had to wait until isLoading is gone. (I'll show a sketch of that kind of waiter in a minute.) And here are some bird nerds.

All right, the sixth hat: application internals. It's really important that you as a test engineer understand your system architecture, because you need to know where the bodies are buried: where in the system something commonly fails, so that you can think, oh, the database failed last week, that's probably the cause of this test failure right now. You need to know which parts of the application are legacy enough that they commonly fail and might be what's going wrong with your tests. It's also really good to know which flavor of JavaScript your application uses. Is it React? Is it Angular? It usually impresses the developers if you know you're using React and you know that IDs are dynamic, so you don't use them as locators. And when it comes to googling errors, the flavor of JavaScript under test is an important variable. It's also really important to know who to ping in engineering. You definitely want friends in the engineering department you can ask, hey, is that database issue still going on? It is? Okay, let's investigate and see if that's the cause of this test failure. Here are some groundskeepers.

All right, the seventh hat: your bookie hat. I don't know if y'all are familiar with bookies; those are the people who handle sports betting, collect your money, and are in charge of the odds. They know one team has better odds than another, they take your bet, and then they give you money; or in my case, they never give me money, they just take it. So why is it important to wear the bookie hat? It's important to know where your application has changed. One place you can look to see why a test is failing is the recent development logs: is an area that's currently under development the reason for this failure? Another place is the release notes of your application: is this a feature we released three months ago that we're now seeing failures crop up in? And another place is your bug database: is the issue we're seeing now possibly a regression, something that regressed in our application that we knew about and thought we fixed, but that isn't really fixed? So that is your bookie hat. Here are some bookies.
All right, the eighth hat: your chef hat, your application versions hat. It's really important as a test engineer to know that in the systems we live in, the versions of things need to be compatible. Much like a chef preparing a meal needs to know which flavors pair with which, and needs a good vintage of wine, you have a vintage of Selenium that you like to use, and it's possible that if you have a combination of versions that don't work together, you'll hit issues. This is a hat you want to wear if you've recently made a change to your infrastructure. It's also a hat I wore at ReviewTrackers when one of our engineers was trying to run our tests locally. In our documentation we tell people which version of ChromeDriver to use, because our tests work with a specific version, and he had downloaded a prior version of ChromeDriver. We noticed he wasn't able to log in when running the Selenium script locally, and the reason was that the version of ChromeDriver he was using had a bug where the number three wasn't going through in send_keys. That was a wacky issue to debug, so you really have to wear the chef hat sometimes. Here's the issue, and here are some chefs.

All right, the ninth hat: the headless hat, or the complications-of-CI hat. It's just important to know that your tests run differently on your CI server than they do locally. If you're not using something like Docker, it's possible your tests run against different versions of Java on your CI server versus locally, so you probably want some kind of version pinning to make sure that stuff is the same. There's also different network latency on your CI server versus your local connection, so know that that can make your tests flaky. Browser window size: if you're not explicitly setting your browser window size, it's possible that when your tests run headlessly on the CI server they use a smaller window than they would locally on your own computer, so make sure to set it (there's a rough sketch of that after this hat). We also ran into a really weird issue at one of the companies I worked for where, on our CI server, the application was loading but some of the styling wasn't there, and the reason was that there were security procedures on the CI server and one of the third-party JavaScript libraries was being blacklisted. So if you have security measures on your CI servers, make sure the resources your tests need aren't blacklisted, aren't prevented from being downloaded. And at every company I've worked for, time zones have been an issue, so be aware that your CI server may be in a different time zone. Here are some headless men.
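Here's roughly how we pin that window size down: a minimal sketch of a headless Chrome driver registration with an explicit size. Exact option names vary a bit across Capybara and Selenium versions, so treat this as a starting point:

```ruby
# Hedged sketch: register a headless Chrome driver with a fixed window size so
# CI runs and local runs see the same layout.
require 'capybara'
require 'selenium-webdriver'

Capybara.register_driver :headless_chrome do |app|
  options = Selenium::WebDriver::Chrome::Options.new
  options.add_argument('--headless')
  options.add_argument('--window-size=1920,1080') # don't rely on whatever default CI gives you
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end

Capybara.default_driver = :headless_chrome
```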
All right, the last hat, which I'll go through really quickly: the internet. The internet is your friend, and you should always be googling the issues you're seeing if none of the other techniques have helped. Keep in mind that it's possible you're hitting an issue other people have hit, and the internet is one of your biggest resources. Another place I like to look is the SeleniumHQ issue tracker; it's possible someone else in the community has hit the issue you're hitting. All right, here are some programmer hats. And that's the presentation. Thank you all for coming. Here are your hats; I hope you wear them in your day-to-day testing lives. Thank you very much.