Let's dig in. One of the most common inquiries I get is about headless testing — it's something everybody seems to be really interested in, and there are a couple of recommended ways to get started with it. Let's talk about the benefits real quick for those of you who aren't familiar. There can be speed benefits, because you're not dealing with the latency of connecting across the internet to a grid, or with potential spin-up times. There's potentially less maintenance, because you're not maintaining a robust, complex array of browsers and machines. And you still get the benefit of screenshotting, so you can grab screenshots when there are failures. It's a nice trade-off: easier to maintain, a little faster in some cases, and you still get screenshots. There are actually a lot of different headless browsers out there. I'm going to talk about two approaches, but this link here is a list of just about every available headless browser, with all kinds of rendering engines and so on. I'll make sure to post my slides online after the talk's over, because they're chock-full of links and code examples. The first approach is using a virtual framebuffer — Xvfb is the common one people use. Each tip has a link to where it is on the web. Unfortunately, the only ones available on the web right now are in Ruby, but all the code from these tips has been open sourced, so it's easy to get access to. Xvfb — apologies for the tiny font — is short for X virtual framebuffer, and it's an in-memory display. It really only works on Unix-like operating systems (Linux and Unix derivatives), and it enables you to run graphical applications without a display, while also preserving the ability to take screenshots.
You'd use it because it's ideal for running small test suites on a headless machine. If you've spun up a Linux machine running Jenkins and you want to stand up Firefox or Chrome, you can quickly use Xvfb to run the browser in a virtual framebuffer, take screenshots if there are failures, and then shut everything down. That's pretty much the use case for this. Option one is to start Xvfb on a specific display number, background that process, tell the terminal session which display to use, and then run your tests. Basically, you're creating the virtual framebuffer, connecting the session to it, and then running your tests, which magically find where it is. With this approach, Xvfb keeps running until you actually shut it down. Alternatively, there's a wrapper binary for Xvfb called xvfb-run that you prepend to the command you're going to launch. It'll start the session, set up the display for you, run the tests, and then close Xvfb when it's done — not a bad way to do it. A couple of things to think about: display number collisions. If you have multiple jobs that might run at the same time and they all use a hard-coded display that you specify — say, :99 for every job — things will run into each other. So you want unique values for your displays. A couple of ways to do that: you can leverage environment variables and just pull the CI job number and use that for the display number, or xvfb-run has a -a flag where it'll automatically find an available display, use it, and release it when it's done. That'll help remediate any potential display collisions.
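To make those two options concrete, here's a rough sketch of the commands involved — display :99, the BUILD_NUMBER variable (Jenkins's job-number variable), and `mvn test` are placeholders for whatever fits your setup:

```shell
# Option 1: start Xvfb on a display yourself, point the session at it, clean up after.
Xvfb :99 &
XVFB_PID=$!
export DISPLAY=:99
mvn test
kill $XVFB_PID

# Option 2: let xvfb-run manage the display lifecycle for you.
# The -a flag picks an available display automatically, which avoids collisions.
xvfb-run -a mvn test

# Or derive a unique display number per CI job from an environment variable:
export DISPLAY=:$((90 + BUILD_NUMBER % 100))
```

The -a approach is the least fuss if xvfb-run is available, since you never have to think about which displays are in use.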
The next one, which is very popular and a lot more prominent lately, is running your tests headlessly with GhostDriver. GhostDriver is effectively the WebDriver implementation for PhantomJS, and PhantomJS is a headless browser built on top of WebKit. It covers more ground than just using Xvfb — with Xvfb you're still running a full browser, just in an in-memory display, whereas PhantomJS is effectively a rendering engine without all the bloat of a full browser, so you hopefully get all the benefits without the browser's startup cost. GhostDriver is the official WebDriver binding that's built into it. You'd use this because it's faster and it's useful on a CI server, but the big benefit is that it's not limited to Linux — you can use it on any operating system and it'll work fine. The first option is to download PhantomJS, which is available on its download page, then start the PhantomJS binary with the WebDriver flag — it's just --webdriver and then the port you want to use — and connect your tests to PhantomJS using Selenium Remote. Another cool thing is you can actually connect PhantomJS as a node to your Selenium Grid. What it looks like in code is this: you create a DesiredCapabilities object, set the browser name, and pass in the capabilities. That's it. Option two is to download PhantomJS and tell Selenium where the binary is; then you just specify the browser name and launch Selenium, and it figures it out. One way to do that is to create a vendor directory within your test code and store the binary there, then set a property — the property name is phantomjs.binary.path — pointing at the binary relative to your directory. And that's pretty much it.
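Here's a rough sketch of both options in Java. It assumes the GhostDriver/phantomjsdriver bindings are on the classpath, that PhantomJS was started with --webdriver on port 8910 for option one, and that the binary lives in a vendor directory for option two — adjust the paths and port to your setup:

```java
import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.phantomjs.PhantomJSDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class HeadlessExamples {
    // Option 1: connect to a PhantomJS started with `phantomjs --webdriver=8910`
    public static WebDriver remotePhantom() throws Exception {
        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.setBrowserName("phantomjs");
        return new RemoteWebDriver(new URL("http://localhost:8910"), capabilities);
    }

    // Option 2: point Selenium at a vendored binary and let it manage the process
    public static WebDriver localPhantom() {
        System.setProperty("phantomjs.binary.path",
                System.getProperty("user.dir") + "/vendor/phantomjs");
        return new PhantomJSDriver();
    }
}
```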
Then you say new PhantomJSDriver, and it works. So that's headless testing. Visual testing is the next topic, and it's something I've been researching and writing a lot about over the last year or more. I think it's the one thing that, if you're not doing it, you should start paying attention to. A quick primer: visual testing enables you to check that an application's UI has rendered correctly, automatically — using automation to test something that used to be thought verifiable only manually, by humans, potentially late in the development lifecycle. It can also be used to verify content, and when I say content, I mean something more robust than just words on a page: complex user interfaces like charts, anything with animations — there are loads of very graphics-intensive things you can use this for. The biggest thing is that you effectively get hundreds of assertions with visual testing, and it's typically just a few lines of code. Because the available libraries are so robust and so widely available, it's fairly straightforward to get started nowadays. There are some inherent challenges, though. There's complexity: you already have all the browsers, operating systems, and mobile devices to deal with, but with visual testing you also need to think about viewport sizes and responsive web design, across all of those browser, OS, device, and form-factor combinations — so it gets a lot more crazy. And there's potential for false positives, depending on how you're doing your matches against your baseline images: content slightly shifting, dynamic content, typos on the page — there's loads of stuff that can drive false positives.
But a lot of these things are actually easy to mitigate if you just know what they are and where the pitfalls are with the common tools that are available. I'll step through one example using a combination of Sauce Labs and a platform called Applitools Eyes. The benefit of Eyes is that they have machine learning built into their setup, so you can basically train it on what to look for — you can say, "oh, that's actually a legitimate failure," and it adopts that for all of your runs, which is great. Sauce Labs is just there to get access to a browser we might not normally have easily accessible. In your pom.xml — assuming Maven, because all of my examples use Maven — you grab the eyes-selenium-java SDK. Actually, I think the version number is much higher now; it's probably 2.52 or something. [Audience: 3.0.] 3.0, there you go. The Eyes SDK also pulls in the latest Selenium binding, so they might be pulling in the beta pretty soon. I have a really simple test here — it's just a login test. It sets up a new Firefox instance, visits the page, fills in a login form, submits it, verifies that the success message appeared, and then tears down the browser. If we wanted to add a visual check to this page, we'd do a couple of things. We'd pull in the SDK and create a field variable to store the Eyes instance. Then we make a couple of changes to the setup: we store the browser instance in a local variable, which we pass to Applitools Eyes after we create an instance and set the API key. Basically, when we call eyes.open, we pass a browser session with some metadata, and it returns a slightly modified driver object.
So then Eyes — the Applitools Eyes platform — can take screenshots when we tell it to. In the test, we add a couple of checkpoints: we call eyes.checkWindow and pass in some helpful metadata saying what it is we're checking. So eyes.checkWindow for the login page, and then, after we've submitted the form, one saying we've logged in. Then at the end of the test we call eyes.close, and that's what actually does the comparison of those checkpoints against the baseline. To get a baseline image, the first time you run the test it'll say it failed, because you need to review the images to approve them. If you just run the test again, it'll automatically make that the baseline, and any future test run will compare what it sees right now to what was originally captured. That's effectively how most any visual testing tool works. At the end, if eyes.close wasn't successful, we want to make sure we close the session, so we call abortIfNotClosed before we quit the browser. Then we run it, and a URL gets dumped into the test runner output. Over in the jobs dashboard, we can see each step — each checkWindow that occurred within that job — with a quick little thumbnail. If we hover over it, we get a lot of information: the viewport size, the browser, et cetera. If you click into it, it gives you a lot more detail about why there was a failure. We have diff matching here, which we can toggle off to compare what we saw against the baseline — and we can see that the logout button has actually disappeared. This test was only checking for the success message before, so it would have missed this until we added the visual check. So we want to approve or reject it.
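Pulling those pieces together, the test ends up looking roughly like this sketch — it assumes the classic eyes-selenium-java API, an API key in an environment variable, and placeholder app/test names and element locators:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import com.applitools.eyes.Eyes;

public class LoginVisualTest {
    private WebDriver driver;
    private Eyes eyes;

    public void setUp() {
        WebDriver browser = new FirefoxDriver();
        eyes = new Eyes();
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));
        // open() returns a wrapped driver that Eyes can screenshot through
        driver = eyes.open(browser, "the-internet", "Login");
    }

    public void testLogin() {
        driver.get("http://the-internet.herokuapp.com/login");
        eyes.checkWindow("Login Page");
        // ... fill in and submit the form, assert on the success message ...
        eyes.checkWindow("Logged In");
        eyes.close();  // runs the comparisons against the baseline
    }

    public void tearDown() {
        eyes.abortIfNotClosed();  // clean up the session if close() never ran
        driver.quit();
    }
}
```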
And this is something we don't want, so we reject it and save, and any future test run will know that this is a legitimate failure and flag it for us. If we wanted to add in Sauce Labs support, we'd just specify desired capabilities with RemoteWebDriver: the browser version, the platform, the name of our test, and the Sauce Labs endpoint. Once we do that and run the test, in addition to what we got from Applitools Eyes, we now have the Sauce Labs job dashboard: a screenshot for every step, a video of the test as it ran, and all the logs and metadata that come along with that. Hopefully more information than we could ever need — but that's the one-two punch, and it's quick and easy to set up. I've written a lot about visual testing; here are the first five write-ups I've done, and the example I just walked through is from number four. The first one is worth a look if you're interested in what's available in terms of open-source tooling — at last check it was something like 17 different open-source libraries across four different programming languages, and probably half a dozen of those work with Selenium. So regardless of what tool you end up using, check out that first post just to get a sense of what's available for visual testing and how you could incorporate it into your testing practice. Up next is fun with a proxy server. For those of you who aren't aware, you can take Selenium and connect it to a browser, but you can also have that browser hop through a proxy server before it connects to the application you're testing. And with that proxy server, you can do some fun things to either monitor the traffic or manipulate it.
A very common way to do this is to use BrowserMob Proxy, an open-source proxy server that works with Selenium. One of the big things people want that isn't readily available in Selenium anymore is the ability to access HTTP status codes, and the recommended approach for getting that information is to use a proxy server. The configuration is: use the proxy to capture the traffic from your Selenium test, find the status code for the action you're interested in — like visiting a URL — and assert that the status code is what you expect. An example of how to do this with BrowserMob Proxy in Java: you first create a field variable, then start the proxy server, specifying a port of zero so it picks any available port. Then you create a Selenium proxy object from that server, configure Selenium to use it, and pass it in when you create the driver instance. After that — you don't have to do this, but you can — you enable additional detail like the request content and response content. If you want that extra information, you want to make sure you enable those HAR capture types. And at the end, make sure you call proxy.stop before you call driver.quit. So that takes care of the setup: you have a proxy server running, you have a handle on it with the data you want, and a browser instance loads up using that proxy. Then in your test — you don't have to do it this way, but this is the most rudimentary way, and it's how I do it — you create a new HAR, a new HTTP Archive, do your Selenium action, then grab the HAR, pull out the status code, and assert that it's what you expect.
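That whole flow, sketched in Java — assuming the BrowserMob Proxy 2.x API (net.lightbody.bmp) and the status-codes page on the-internet; an assertion library would normally replace the bare assert:

```java
import net.lightbody.bmp.BrowserMobProxy;
import net.lightbody.bmp.BrowserMobProxyServer;
import net.lightbody.bmp.client.ClientUtil;
import net.lightbody.bmp.proxy.CaptureType;
import org.openqa.selenium.Proxy;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxOptions;
import org.openqa.selenium.remote.CapabilityType;

public class StatusCodeExample {
    public static void main(String[] args) {
        // Start the proxy on any available port
        BrowserMobProxy proxy = new BrowserMobProxyServer();
        proxy.start(0);
        proxy.enableHarCaptureTypes(CaptureType.REQUEST_CONTENT,
                                    CaptureType.RESPONSE_CONTENT);

        // Wire the proxy into the browser
        Proxy seleniumProxy = ClientUtil.createSeleniumProxy(proxy);
        FirefoxOptions options = new FirefoxOptions();
        options.setCapability(CapabilityType.PROXY, seleniumProxy);
        WebDriver driver = new FirefoxDriver(options);

        try {
            proxy.newHar();  // start a fresh HTTP archive
            driver.get("http://the-internet.herokuapp.com/status_codes/404");
            int status = proxy.getHar().getLog().getEntries()
                    .get(0).getResponse().getStatus();
            assert status == 404 : "expected a 404, got " + status;
        } finally {
            proxy.stop();    // stop the proxy before quitting the browser
            driver.quit();
        }
    }
}
```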
The status codes page on the-internet returns whatever number you append as the status code — so /404 returns a 404. And that's basically the way to do status codes. Status codes are actually kind of a rat's nest of edge cases, though. This is the simplest one: a page loads with no other resources on it, so it returns a 404. But what if there are a bunch of other resources — which one are you actually checking the status code for? So it's not a picnic, but this is the first step for doing status codes. Another thing you can do is blacklist content — you can manipulate the traffic with a proxy server. If, for example, you want to identify third-party resources that take a long time to load and could impact your tests, you can blacklist them, so they just don't load at all. In addition to all the setup I showed, it's one additional command: you call proxy.blacklistRequests, use a regular expression to match the resource, and specify the status code you want it set to. If you set it to 404 or some obscure status code, that's just what it will return, and it won't load the resource. And if you wanted to write a test to verify that the proxy was actually configured — ironically, you can't just go through the HAR entries and find them. In this example, I would have pulled out all the entries, looped through them, and found the one containing the thing I just blacklisted, but that's not actually available in the HAR entries in the Java bindings to BrowserMob Proxy. So if you were curious, you'd want to use an HTTP client library, point it at the proxy connection, and run a request against the blacklisted resource to make sure it really returns a 404. So you actually want to test the testing machine.
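As a fragment continuing the proxy setup from before — the example.com pattern and ad.js URL are hypothetical, and this assumes BrowserMob Proxy's blacklistRequests(pattern, statusCode) signature — the blacklist plus the verification through a plain HTTP connection could look like:

```java
// Continuing from the BrowserMob Proxy setup shown earlier:
proxy.blacklistRequests("http://.*\\.example\\.com/.*", 404);

// Verify through an HTTP client pointed at the proxy, since the
// blacklisted request never shows up in the HAR entries:
java.net.Proxy viaProxy = new java.net.Proxy(java.net.Proxy.Type.HTTP,
        new java.net.InetSocketAddress("localhost", proxy.getPort()));
java.net.HttpURLConnection connection = (java.net.HttpURLConnection)
        new java.net.URL("http://ads.example.com/ad.js").openConnection(viaProxy);
assert connection.getResponseCode() == 404;
```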
That's how you do that. Load testing — this is a less common one, but still an interesting use case. You can use a proxy server to capture traffic from your Selenium tests and then convert it from an HTTP Archive into a JMeter .jmx file, so you basically create a baseline template for load testing. Then you run the new .jmx file with JMeter and modify it as needed so it can enact load on your application. It's fairly similar to what we just did, except we're doing one thing differently: we create a new file and output the HAR to a local file, and that's it. There are a couple of ways to do the conversion, but the simplest is Flood.io's HAR-to-JMX converter — you just paste in the HAR and it downloads a JMX file; that's at flood.io, under HAR to JMX. From there you can open up the JMX, and if you're interested in tips and tricks about this conversion, Flood.io has a nice write-up — it's a great place to start, and there are some things to think about, like how to parameterize the JMeter test run to make it more robust. So that's a good way to get started if you're thinking, "we have all these great Selenium tests and we want to do load testing with JMeter — do we have to recreate a lot of things?" The answer is no: you can do something like this and create an initial base set of JMeter tests. The next thing is broken image checking. If you're not using visual testing, there are three approaches you can use to verify whether there are broken images on your pages: the first option is a proxy server, the second is an HTTP library, and the third is doing it in JavaScript. With the proxy server, you do your setup and teardown like we did before, and then we do a little bit of tap dancing.
So we create a new HAR, visit the page — this example page has broken images on it — find all of the images on the page, and create a new collection where we'll store the broken ones. Then we grab the HAR entries and loop through them, checking each one to see whether it's an image and whether it has a status code of 404. So we're using the HAR entries to ask: are there any images here that 404'd? If so, we add them to the collection, and then we assert the collection of potentially broken images against an empty collection. When the assertion runs, it points out which ones in that collection are broken, in a single assertion statement. With an HTTP library, it's a very simple setup — nothing special in setup and teardown. In the test method, in addition to finding the images as before, we pull in an HTTP library, loop through the images, and find all the ones with a non-200 status code. Then we do the same kind of assertion — empty collection versus broken-image collection — and the assertion lists the broken images in a single blow. So if there are 50, it lists them all, which is great. With JavaScript, after creating an instance of Selenium for your browser, you cast it as a JavascriptExecutor. Then you find the images, and for each one you execute JavaScript to look at its naturalWidth — if that's undefined, or if the image is also not complete, it's broken. We store those and do the same assertion. So we end up with the same assertion pattern over and over, but the technique for getting there is different. That's broken image checking. Of the three options, JavaScript gets my vote as the best, because there's no additional setup and no additional network calls.
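That JavaScript option, sketched in Java — the broken-images page on the-internet is assumed as the example app, and a real test would use an assertion library rather than a bare assert:

```java
import java.util.ArrayList;
import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class BrokenImagesJs {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        JavascriptExecutor js = (JavascriptExecutor) driver;
        try {
            driver.get("http://the-internet.herokuapp.com/broken_images");
            List<String> brokenImages = new ArrayList<>();
            for (WebElement image : driver.findElements(By.tagName("img"))) {
                // Broken if the image never finished loading or rendered 0px wide
                Boolean loaded = (Boolean) js.executeScript(
                        "return arguments[0].complete && " +
                        "typeof arguments[0].naturalWidth !== 'undefined' && " +
                        "arguments[0].naturalWidth > 0;", image);
                if (!Boolean.TRUE.equals(loaded)) {
                    brokenImages.add(image.getAttribute("src"));
                }
            }
            // A single assertion lists every broken image at once
            assert brokenImages.isEmpty() : "Broken images: " + brokenImages;
        } finally {
            driver.quit();
        }
    }
}
```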
It's just using the JavascriptExecutor, which comes out of the box with Selenium. Forgot password. It used to be that you could just do this with Gmail, but it's actually gotten a little trickier lately. Basically, you can use Selenium to trigger a forgot-password workflow: visit a page, fill in a form, submit it, and have the browser sit there, keeping the session active. Then you use an API call to some third-party service — the place you just sent the email to — to retrieve that email and the information you want out of it, and you grab from the email the bits you need to complete the forgot-password workflow. Gmail used to be kind of the way to do it, but with their switch to OAuth 2, it became harder. And it was never a super reliable thing anyway — it's a potentially brittle approach, because you're doing email retrieval against a system not built for it, probably in violation of the terms of service. But there are a couple of newer entrants that have become more prominent in the space, and the two I looked at are Mailosaur and Mailinator. So, cross Gmail off the list. After looking at both Mailosaur and Mailinator — well, they don't really offer great free accounts, but the paid accounts are pretty good — Mailosaur seemed to win out for me, so I crossed Mailinator off the list. Mailosaur was the one to go with, in my experience. After I came to this conclusion, I looked around and found that Alister Scott, of WatirMelon fame, had written about this and come to the same conclusion: between Mailosaur and Mailinator, Mailosaur was the one he went with. So here's an example of using Mailosaur for a forgot-password workflow. We have to include their SDK — they have an SDK for every language.
Then in our test, we create a connection to Mailosaur and start the forgot-password workflow. Using the Mailosaur SDK, we can generate a generic, dynamic email address, put that into the form, and submit it. Then we have to sleep, obviously — just to allow for the email provider's delivery delay and make sure the email ends up in the right place. After that, we can grab the email. I created some output here just to demonstrate that we can pull the information out, and when we run it, it looks like this: a simple email — "Forgot Password," from the-internet — with a message saying a forgot-password retrieval was initiated. If this were a real forgot-password email, you'd have a link or some action to take. The cool thing is that all of the email that comes into Mailosaur is immediately parsed into JSON, so from the SDK we can easily go through and grab the links. If there were a specific link we wanted, it's easy to get — you don't have to do any crazy regexes to pull information out. So it's straightforward, very simple to set up, and that's pretty much it. It was far easier than my previous attempts with Gmail, and it's the way I'm planning to test going forward. The next one is A/B testing. How many people are familiar with split testing, or A/B testing? I'll just read this, because it might be too hard to see: split testing is a simple way to experiment with application features to see which changes lead to higher user engagement — something a business owner would want to do. A simple example would be testing variations of an email landing page to see if more people sign up. In this kind of split, there's the control — the thing that's there right now, how the application looks and behaves normally — and then a couple of variants.
Variants are changes you want to test — changing text on the page, colors, buttons, placement of elements. Once those variants are configured, they get pushed into rotation and the experiment starts. During the experiment, each user gets a different version of that feature, and engagement is tracked. Split tests live for the length of an experiment, or until a winner is found, and the tracking indicates which variant converted higher. If no winner is found, more variants are tried until there is one, and then that winner becomes the new control. This is kind of a nightmare for automated testing, right? You write a test against a thing expecting it to stay that way, you're not aware it's going to change, and all of a sudden your tests break and you're wondering what's happening — then you go to the page and you're not even seeing the same thing. It might just make you go mad. A very common A/B testing platform out there is Optimizely, so this example shows how to opt out of its A/B tests. On this example page on the-internet, there are three different states the page can be in, and you can identify which state you're in by the header text. In the control, it says "A/B Test Control"; in the first variation, "A/B Test Variation 1"; and when you're not in a test at all, "No A/B Test." Pretty straightforward. The configuration is that you can easily opt out by forging a cookie or appending a query parameter to the URL, and that way you get a known state of the page, which is far less likely to change without your knowledge. So in this example, we visit the page, grab the header text, and assert that we're in either the control or the variant. And then what we want to do is forge a cookie.
With driver.manage().addCookie, you can throw in a new cookie for Optimizely's opt-out feature, set to true. Then we refresh the page and assert that the header text changed to "No A/B Test." Alternatively, we could have visited the homepage of the-internet, added the cookie, and then visited the page — that would work fine too. The other option is just appending the opt-out query parameter, optimizely_opt_out=true, to the URL. When that happens, though, it triggers a JavaScript alert, so you need to switch to it and dismiss it; but once you do, you'll end up in the no-A/B-test state. Every company is a little different — some companies build their own A/B testing platform — but ideally every platform has some escape hatch. Find the escape hatch, make it so you can opt out within your tests, and you'll be a lot happier for it. File management: uploading a file. This was actually tip number one, because it's the most commonly asked-for thing that comes up. It's actually getting harder to do, but for most cases I think this is still valid. You have a form with a "choose a file" upload. When you click "choose a file," the browser pops up a system dialog box, and Selenium's like, "I can't reach that — it's too tall." One alternative is to use a GUI manipulation library, like AutoIt or a robot class or something, but that's kind of limiting, because it's typically tied to a specific operating system. The better alternative is to input the file path directly into the form field, sidestepping the system dialog entirely. What that looks like is something like this: we specify the file, visit the page, use sendKeys to inject the file path into the form, and then submit it. And that's pretty much it.
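A sketch of that sendKeys approach — the upload page on the-internet is assumed, along with its file-upload/file-submit element IDs and a placeholder local file:

```java
import java.io.File;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class FileUploadExample {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            File file = new File("some-file.txt");  // placeholder file to upload
            driver.get("http://the-internet.herokuapp.com/upload");
            // Type the path into the file input instead of opening the OS dialog
            driver.findElement(By.id("file-upload")).sendKeys(file.getAbsolutePath());
            driver.findElement(By.id("file-submit")).click();
        } finally {
            driver.quit();
        }
    }
}
```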
But if you're dealing with Selenium Grid, then you need to look at the file detector, because the file has to be sent across the wire to the grid node for it to actually be available for upload. That's built into Selenium and available in every language binding. So if you end up dealing with file uploads — not the fancy JavaScript-widget kind, which this doesn't cover, but the standard kind, which covers most use cases, I think — then this is the way to go. Downloading a file: the recommended advice from a lot of people is just don't do it. But if you have to, here's how. There are two approaches. One: configure Selenium to download to local disk automatically when you click a button, then inspect the file and delete it when you're done. Two: use an HTTP library and perform a HEAD request, checking the headers for the correct content type and content length — so you don't actually have to download the file at all. Option two is better. Why? Because it's an order of magnitude faster and there's no need to download the file, so it's immediately better all around. The bummer is that if you're dealing with in-memory file renders for download — where there's no flat URL for you to just hit with a HEAD request — you have to use option one. So it's mostly context-driven, but when you can, try to use option two. Option one with Selenium looks like this, for Firefox — mind you, each browser is different, and not every browser supports this. First, in this example, I create a unique ID and make a directory for the file to live in. Then we create a Firefox profile, and we have to set a few preferences that are specific to Firefox.
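That setup ends up looking something like this sketch (the individual preferences get explained next; the target/downloads path and MIME type list are placeholders):

```java
import java.io.File;
import java.util.UUID;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxOptions;
import org.openqa.selenium.firefox.FirefoxProfile;

public class DownloadSetup {
    public static WebDriver downloadReadyFirefox() {
        // Unique directory per run, so parallel runs don't collide
        File downloadDir = new File("target/downloads/" + UUID.randomUUID());
        downloadDir.mkdirs();

        FirefoxProfile profile = new FirefoxProfile();
        profile.setPreference("browser.download.dir", downloadDir.getAbsolutePath());
        profile.setPreference("browser.download.folderList", 2); // 2 = use browser.download.dir
        profile.setPreference("browser.helperApps.neverAsk.saveToDisk",
                "application/pdf, image/jpeg");                  // MIME types to auto-save
        profile.setPreference("pdfjs.disabled", true);           // skip the PDF preview pane

        FirefoxOptions options = new FirefoxOptions();
        options.setProfile(profile);
        return new FirefoxDriver(options);
    }
}
```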
So browser.download.dir — we pass in the absolute path — and then we have to specify browser.download.folderList. There are three possible values there: zero, one, or two, and two means "use the directory I just set in browser.download.dir." Next, we have to specify when to never be asked about saving a file to disk, and list each of the MIME types we're going to download. So if we're going to download a PDF, we have to make sure application/pdf is listed there; otherwise it will prompt us. And if you're doing a PDF, you also need to set pdfjs.disabled to true; otherwise Firefox will try to load the PDF in its little built-in preview pane. A lot of work, but that's how you do it. After that, you pass the profile object to Firefox so it uses it, and when you're done, of course, you want to delete all the files in the folder — clean up after yourself. For Chrome, you'd set up a ChromeOptions object with a couple of very similar values — the default download directory and when not to prompt — then create a capabilities object and pass it to Chrome. Similar, but different. I'm not really sure how to do this in IE, so good luck there. With an HTTP library, it's basically: go to the page, grab the download link, create a connection with the HTTP library, perform a HEAD request, and check the content type and content length — asserting that the content type is correct and the file is not empty. That's effectively the same as saying: the file was downloaded, here's its extension, and here's its size. This is obviously the better option, when you can use it. So, switching gears to additional output.
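Before we do, here's a sketch of that HEAD-request check using nothing but the JDK's built-in HttpURLConnection — no Selenium, no third-party HTTP library (the file URL would come from the download link you grabbed off the page):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class HeadCheck {
    // HEAD request: fetch only the response headers, never the file body
    public static String[] headInfo(String fileUrl) throws Exception {
        HttpURLConnection connection =
                (HttpURLConnection) new URL(fileUrl).openConnection();
        connection.setRequestMethod("HEAD");
        String contentType = connection.getContentType();
        long contentLength = connection.getContentLengthLong();
        connection.disconnect();
        return new String[] { contentType, Long.toString(contentLength) };
    }
}
```

In a test you'd then assert the content type matches the expected MIME type and the length is greater than zero.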
So there are some additional ways you can get more debugging information out of your tests than just looking at the logs. One thing you can do is use Selenium to highlight elements on the page, and one way to do that is with JavaScript. What you do is grab the original style of an element and then change the style of that element. In this example, I change it to a dashed red border. Then I create a little mechanism to keep that element highlighted for a certain amount of time and revert the styling back to its original state when it's done. Used in a test, it would just look like this: we say highlight element, pass in the element and the amount of time we want it to stay highlighted. And then it looks like this. So in this case of a large and deep DOM, I've taken this element and highlighted it. So I could, in theory, add this debugging to parts of my test to highlight elements as it steps through, so you can see what's actually being selected. Alternatively, you can use something kind of cool like jQuery Growl, and again, more JavaScript. In this case it's a little more work, because you have to make sure jQuery is on the page, and if not, add it to the page. Then once it's added, add jQuery Growl and the stylings for jQuery Growl. But once you've done that, you can just send execute-script messages to pipe out whatever you want. And the way to take this and make it crazy cool is to hook up an abstract event listener and fire it before and after each call. So you can actually get access to the actions being taken and what the locator was, and basically output all this debugging information, and it looks like this when you do it. So it says, oh, I'm navigating to this page, I found this element, found this element, and then one more when the URL changed and it went to this page.
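The element-highlighting trick described above can be sketched by building the JavaScript strings you'd hand to Selenium's `execute_script`. This is a hedged Python sketch: the border styling, the function name, and the commented driver usage are all illustrative, not the talk's exact code.

```python
# Build the two JavaScript snippets for the highlight trick: one applies a
# dashed border and returns the element's original inline style, the other
# restores that original style.
def highlight_script(color="red", width_px=2):
    """Return (apply_js, revert_js) snippets for execute_script."""
    apply_js = (
        "var el = arguments[0];"
        "var original = el.getAttribute('style');"
        f"el.setAttribute('style', (original || '') + "
        f"'; border: {width_px}px dashed {color};');"
        "return original;"  # hand the old style back so we can restore it
    )
    revert_js = "arguments[0].setAttribute('style', arguments[1] || '');"
    return apply_js, revert_js

# In a real test this would be used roughly like:
#   apply_js, revert_js = highlight_script()
#   original = driver.execute_script(apply_js, element)
#   time.sleep(seconds)                       # keep it highlighted
#   driver.execute_script(revert_js, element, original)

apply_js, revert_js = highlight_script()
print("dashed red" in apply_js)  # True
```

Saving and restoring the original `style` attribute is what keeps the highlight from permanently polluting the page under test.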
So what you could do is turn on this debugging mode, throw in an abstract event listener, make no changes to your actual tests as written, and get debugging output like this when you run them. And if you're doing recordings and there's an issue, you could just look at a glance without having to correlate between this screenshot and this action, or this video and this action. You just watch the video and see each event as it fires, as it's happening, and what it said. So it's a nice upgrade. So there are some more Selenium resources. If you're interested in more getting-started stuff, I have a Selenium bootcamp, and then there are my weekly tips, and my weekly tips by email are available in Java, it's just not on the website. And all of the code, so I covered 14 or so tips, but all of them, all 70 or so, all the code has been open sourced and I'm slowly porting it to every language. So right now there are about 25 or so available in Java and about 25 or so in C#, and all the other ones are in Ruby, and pretty soon JavaScript and Python will be there as well. And then I have a book on how to use Selenium successfully, and the tips. So if you want to find the tips I just talked about, correlate the number to the folder on the GitHub repo. And then I do want to close with a quick shout-out: the fall Selenium Conf is coming up, and the call for speakers is open. So if anyone's interested, definitely consider throwing your hat in the ring, because if your talk gets accepted, your travel will be covered. So definitely consider it. I mean, just submit a talk, if you have an idea just submit a talk, and it's open till the end of July. I would love to have you in London, that'd be great. Or attend, either way. But thank you, thank you everybody, and I'll be around to take questions. So, any questions?
Okay, is there a microphone floating around that I can use? Question, question, lots of questions. Sound guys, one second. Actually, just say it to me and I'll repeat it back. I'll start up here and then work back. Okay, I see. Okay, so your point is that in the case you're describing, your test application already creates a cookie, and then you can correlate that cookie to the elements that should be displayed. So yeah, if you know the state, like the next step is, if you know what's actually supposed to be in a split test, you can verify those things. But if you don't have that access, then the shortest path is to just opt out. That was my point. If you're not tightly coupled with the changes that are happening and you just want to make sure that the production site as it lives today, with the control, is working, that's one way to do it. But if you have access and you know what's in there, then it's a great way to do it. Yeah, yeah. Mine is, like, how to manage dynamic content when you do visual layout testing. Yeah, so visual testing with dynamic content, there are a couple of ways to handle that. The common approach is to use ignore regions: you can specify elements or areas of a page that should be ignored. That solves the problem in the short term, but it potentially obfuscates or hides errors, so you could be minimizing your coverage. The way Applitools has solved this is they've built an additional option they call a layout option. So normally very strict matching happens between images, not pixel by pixel, a little fuzzier than that, but that's where the ignore region comes in. But with Applitools, they created this thing called Layout 2, which is not a very helpful name, but it actually accounts for dynamic content.
So it will actually check all the rendering, but if content is dynamic, it knows it's dynamic and still checks that it meets the qualifications of fitting into the proper spacing. So it can find rendering issues, like if something runs off to the side. You get the benefits of what you'd get with an ignore region, but it's actually seeing what's there. For example, the classic example is a newspaper website: there's an image, there's headline text, then the copy, then a link to the full story, and those always change. With something like this Layout 2 mode, it knows that's dynamic content but can still find rendering issues within it, so that's one way to handle it. The alternative would be to adjust the match level, which is a real slippery slope, because then you create gaps in your coverage pretty quickly. Have you tried Galen? I haven't tried Galen, but I've looked at it and it looks really promising. I think there are more and more tools coming, and ideally more and more open source solutions will start to offer better options than just ignore regions. So yeah, questions. Yeah, so let's start, there's one back here actually. Yeah, for the visual testing, do you have any support for Selenium C#, or is it only for Selenium Java? In terms of the examples I have available, or what's available generally? In terms of what's available in the open source libraries, I don't know if there's a C# one, unless Galen's available in C#, no, okay. And then Applitools has a C# SDK. So I think there is some love out there, I just don't know. I try to balance, with Applitools being a commercial offering, I'm always asking, well, what's the open source equivalent, and I don't know what's available for that. But I think it's definitely something that could be figured out. Okay, and is the same the case with the forgot-password stuff you showed?
Forgot password, yes. Yeah, Mailosaur has C# bindings, yeah. Thank you. Yeah, you're welcome. Question up front. Let's go back, then up here, and then we'll bounce to the side of the room. Can we handle an online PDF also? Can you say that one more time? Can we handle writing in an online PDF when we're doing our test execution? Can you hold the microphone? We'll take it off, sorry. Yeah, you showed downloading a file as well as uploading a file, right? So is there a way we can handle writing into a PDF file online? Like uploading a PDF file? No, no, no. To do what with a PDF file? To write across a couple of edit boxes or anything like that. Oh, like actually do checks against the PDF file? Yeah. After you've downloaded it? No, writing online. Oh, online? Yeah, so, some PDFs. Oh, like typing into a PDF within the browser, like the ones where you fill it in and you can print or save? I've never actually had to test that, and I'm willing to bet that wouldn't work. That's a great question, though. I don't know the answer to that. Because I tried to find a couple of solutions for that and I don't think I could find any. Yeah, so my interpretation of what's happening, and there are people in the room that could probably correct me here, is that it's not loading a traditional DOM, and as a result it's not something Selenium can interact with. It's the same thing as the settings windows that pop up in the browser, like the Chrome extensions window; that stuff is just harder to get at, and so out of the box I don't think that's something that's readily available. That's a really interesting question. I'll have to think about that some more. Most of the applications, if you look at the banking side or the... Oh yeah, they just fill in the forms. They fill out the forms online. I see. I think that's the most common right now. Okay. That's a great question.
I'll definitely give some more thought to that. Okay. Yeah, yeah. Thank you. Yeah, cheers. Let's bounce over here, just in the back, almost there. I just want to see how you handle notifications on the different browsers using this API. Yeah. Like, I want to test some web push notifications. I can handle Chrome, for example, but a notification is not part of the web page; it comes as a browser object, it comes from the browser, so we can't handle it from Selenium as such. So you're saying push notifications that are coming outside the browser, like a Growl notification at the system level? Yes, yes. That's not something Selenium can handle, I mean, out of the box. There are probably ways to get at it; you'd probably have to think creatively, I guess, to solve that problem. If you care about the notification, it could also just be that it triggers a POST call that you can maybe catch and then assert on. There's probably some way to get at that. But doing something at the operating system level, that gets a bit trickier. So, sorry, I don't have a great answer, other than to think creatively about that. Yeah, yeah. We've got to work our way up here. My question is regarding the load testing part. When we use JMeter to record and then prepare the script, there's a lot of correlation: you need to extract the regexes and do the correlation, there's a lot of management work. So when we use Selenium to convert the file to JMX, is that still required? The file, as it is written, is just rudimentary. So if you wanted to make it more robust, you'd have to make changes to it. It just takes the single action and basically replicates all the GET requests. So you'd have to make changes to it, so you could parameterize it and use different data if you needed to.
So basically the converting portion is just a replication of the recording that JMeter does. Yeah, it's the same thing as if you actually used the record function within JMeter and then stepped through by hand, doing the same things that are already in your Selenium test. So it saves you the step of stepping through manually. Yeah. Exactly. Yeah. But you still have to... Similar to switching on the recording in JMeter and then running a script. Running a Selenium script. I don't know if that would work; maybe it would, but this works too. I mean, this is just capturing the traffic and then doing the conversion that way. But then to make the script work in JMeter, you still need to... Oh no, to make it work, you just get a JMX file, you open it, and it's there. The output for that is the same as if you had turned on the recorder and stepped through it manually. So you still need to do the correlation part, you still need to work on it? No, I mean, it'll work. My point is that if you open it up, you can run it, and it will work fine. But there will be improvements you'll probably want to make to get it actually ready for primetime load testing. Yeah. Thank you. Yeah, you're welcome. Can we move up front there? Thank you, thank you. Hi. I had a couple of questions. One was, when we're running multiple tests in parallel on a grid, we run into a lot of issues, like the browser has died, or strange errors in the logs. That's the first question: basically, why do we run into such issues? Is there a solution for this? We see a lot of issues like that when we run multiple tests in parallel using grid. Well, there are so many reasons that could happen. And I'm sure Luke's here somewhere.
And he might be a better person to answer this question, since he's one of the maintainers of Grid, or used to be, maybe. He knows enough about it to do a workshop on it, anyway. So there are reasons why it would just die on a machine; the browser could die for a number of reasons. A lot of people basically start by killing rogue browser processes, rebooting machines, stuff you can deal with from an ops perspective. And then there are potential errors and issues that come from timing within your Selenium tests, maybe. If it took too long to get a connection as a result of something like the browser dying, then you get... there are so many different possibilities when you unpack it. And then you have to figure out where to start and kind of chip away at it. But it happens, and everybody deals with it. So you have to find ways; the way most people figure it out is they basically do what I just described. They do it at the ops level: make sure the machines are healthy and the browsers are still running. And then when there is an issue, they deal with that machine by bouncing it, and then maybe rerun the tests against a different node, which isn't great, but that's what you end up having to do in a lot of cases. And I'm assuming that behind the scenes, for something like BrowserStack or Sauce, they just do a lot of really crafty stuff like that and nobody gets to see it. So, yeah. The second question I had was, basically, the same stuff works fine in one browser. There's no change in the object, no change in the name, no XPath change, nothing changed. The same stuff works in one browser, and then in another it doesn't work. Yeah. It's very strange. Yeah, and it happens all the time. Yeah. It's magic. So, here's the typical approach that I recommend, because it's common to have this happen.
So, most people start by writing their tests in the most readily available browser, like Firefox. And they get it all working. It's great. High fives. And then you run it in Chrome, and you're like, oh God, it's breaking, what just happened? And then you go through and you find there are potentially timing issues, or subtle differences. There used to be a difference in how locators were looked up, where in Firefox you could just say, I want this element, and in Chrome you needed to be able to click the center of the element. Now that's the same in both Firefox and Chrome, so those issues are probably less common, but when that change was released it probably broke your Firefox tests all of a sudden. So there are things like that, and then there are timing issues, where something may execute slightly faster or slightly slower and your timing strategies might be a little wonky, so you have to tweak your explicit waits. Basically, you have to find these issues when you jump to a new browser, get those working, make sure they work in both. Then you hop to a third browser and repeat: you have the same problem again and you address those issues. And then when you get into older browsers, if you have to deal with IE 6 or 8 or something horrible, there are limitations on which locators can be used. You can't use CSS3, so you can't use pseudo-classes or pseudo-selectors. You can't do anything cool, like nth-of-type. So what works over here in these cool new browsers doesn't work in these older browsers. Then there's weird JavaScript for hovers, stuff that just breaks for no reason. Then you have to get fancy and make your tests, your page objects, context-aware of which browser is running and use a different locator depending on that.
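The context-aware page object idea just described can be sketched in a few lines of Python. This is an illustrative sketch, not the talk's code: the class name, locator strings, and browser keys are all made up for the example.

```python
# Sketch of a browser-aware locator lookup for page objects: use a modern
# CSS3 locator by default, with overrides for browsers that can't handle it.
class LocatorMap:
    def __init__(self, default, **per_browser):
        self.default = default
        self.per_browser = per_browser

    def for_browser(self, browser_name):
        # Fall back to the default locator when there's no override.
        return self.per_browser.get(browser_name, self.default)

submit_button = LocatorMap(
    "css=button[type='submit']:nth-of-type(2)",  # fine in modern browsers
    ie8="xpath=(//button[@type='submit'])[2]",   # old IE: no CSS3 pseudo-selectors
)

print(submit_button.for_browser("chrome"))  # the CSS default
print(submit_button.for_browser("ie8"))     # the XPath override
```

The page object then asks the map for a locator using whatever browser name the test run was configured with, so the tests themselves never change.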
It's just horrible, but that's the name of the game, I guess. You're welcome. So. Right behind you. Oh, next. Hi. So you're asking, what's the difference between Eyes and which platform? Eyes and Sikuli? Okay. Is Adam Carmi here? Because, yeah, yeah, that's fine. It's just, whenever Sikuli is mentioned in Adam's presence, he gets visibly upset. So Sikuli is not really a visual testing tool. It's meant as a means to automate things by using visual images: you can say, I want to find this thing on the screen, and it matches it and you can use it. It's not really designed for, or very good at, any sort of robust visual checking. So it's not great. It might be good for getting started, but it definitely falls off the rails very quickly. And so either Eyes or other open source visual testing platforms are much better. So that's Sikuli; it's meant for a very specific use case. And then the other question was about resolution. Someone can jump in here if I get this wrong, but typically you care about viewport size in your browser tests. So you specify the different viewport sizes, and then when you get into different resolutions, you have to deal with anti-aliasing and other kinds of things. But any visual testing platform or tool that's worthwhile, even the open source ones, accounts for those kinds of differences. Really it just cares about the viewport size. So ideally, can we trust that if it works on one machine, it should work on another, using this visual approach? It should: if it works on one machine, it should work on another. The thing that's important is comparing the viewport size. So if it works on this machine at 1024 by 768, it should work on that machine running 1024 by 768. But if the resolution changes, are the chances of getting failures higher, you know?
There shouldn't be failures if you switch, if the tool is worthwhile, because it should account for that. If you're doing true pixel-by-pixel comparison, that will always fail. But if you're doing proper baseline image comparison with layout checking and that kind of stuff, it'll account for that and it'll be fine. Thanks. Okay. And I'm at time. So thank you, everybody. Come find me after. And thanks, everybody. Okay. So I guess this will turn into me answering questions for the rest of the time. So we could do that. So let's do that. Hi, everybody. So where were we? We were talking about uploading files not working in IE but working in other browsers. And my next question to you was, how are you trying to upload files? What are you doing, just using send keys? Which version of IE? IE 11, IE 10. Normally it works. So the train of thought for me is, maybe there's something different happening on the page in IE that's causing some sort of odd behavior. So the way I would start would be to try to do the same thing against a different app. In IE, make sure that your local setup with that browser works, and then go back to your app and ask, okay, what's happening here that's different, that's causing this mismatch in behavior? That's the other reason why I built the-internet, this open source app. It has examples like uploading files and login. So if I ever run into an issue for a client, and they have a custom browser setup and do all these kinds of things with a modified profile, and they have an issue with something that should normally work, I'll run it against a different app. I run it against the-internet and ask, okay, does file upload work with this browser, all customized, against something that's stripped down and bare bones?
And if it doesn't work, then it's probably in the setup, or it's a bug in Selenium or something. But if it does work, then it makes me suspect there's something different about the application. And so the assumption that the application is working as intended isn't always true, especially when you branch out into other browsers, because there might be something specific going on, like specific JavaScript firing or something. So without actually seeing the markup on the page, it's hard to diagnose, but that's how I would start. I would find a way to whittle it down and target it: well, where is the problem? Is it in the application, or the setup, or Selenium, or something else? Because there are so many things it could be if it's not just working. And unfortunately, IE is tricky, right? It's one of the few drivers left that's still reverse engineered and not actually built by Microsoft, the browser vendor. So if there are things that aren't working properly, it's like, well, it's just another issue we have to open and hopefully get addressed at some point. So, yeah, okay. Next question. Oh, another question. Okay, hi. Like, using a remote WebDriver or... okay. So you're saying that screenshotting is not consistent across browsers, specifically IE. So you're having some bad times with IE. Got it. But the tests are running, so that's a start. Screenshotting is also tricky, I know, because it's an inconsistent implementation across browsers. So if it's not working, I'm not sure of the best answer there. I mean, IE is just tough in a lot of ways. And I'm just hoping that soon the world will adopt Edge as the new Microsoft browser and then all of our problems will go away.
So my recommendation, when I don't have a good answer, would be two things. The first one is, see if there's an open issue for it on the Selenium project: go into the issue tracker and search through it and see. And if there is, that's good, because it means one of two things is going to happen: it'll either get fixed, or there'll be a workaround posted about something you can do. For example, there used to be an issue, and now it's just a noted workaround, where at one point Microsoft changed something in IE, and in order to get IE Driver to work you had to go into your registry and add a key. And that was just something that was an issue and tracked forever, until eventually someone posted this great workaround: add this registry value to this registry node and then it will work. And if there's not an issue and there's not a workaround, then you might have found a bug, especially if it's reproducible, and you could open a new issue for it. And if that happens, then potentially someone posts a workaround or there'll be a fix. But there's one other option, which is to go to the Selenium IRC chat channel. All the committers on the project hang out on IRC. It's Internet Relay Chat, an old-school chat technology. There's this thing called Freenode where a lot of open source projects have little chat rooms. It's like pre-Slack: you get a client, you connect to it, and then you hop on the Selenium chat channel and you can ask questions of the committers and get answers. And also other practitioners; it's just a room full of people that work with Selenium all the time. So they'd be able to point you in the right direction, saying, oh, that is a bug, here's a thing to help you fix it. It's one thing you could do before submitting an issue, if it's legitimate.
And then, if you do end up submitting an issue, the general advice is to make sure you have something truly reproducible: you can actually post markup from the page, with an example of a failing test, so that a developer could take it and reproduce it on their local machine. Without that, it's actually kind of hard; as much detail as possible, basically. Because if you just describe it as "screenshots don't work in IE 11," you're not going to get much attention for that. But if you have something where you clearly put thought into it and you've exhausted your options, that's a better approach. So that's a very long answer to your question, but that's what it ends up having to be when there's not a clear answer. So, yeah. Yeah, how do you automate things like Flex and Silverlight? And I'll just throw Flash on top of that. As far as I know, there's not really good support for those. The only thing I've really seen that works well is to expose hooks in JavaScript through those widgets, so that if you wanted to exercise functionality within them, you could execute JavaScript from Selenium to interact with parts of them. Otherwise, and the same goes for things like canvas, anything where you can't really inspect it as a DOM object and interact with things within it. With frames you can always switch into a frame, but with Flex and Silverlight and those kinds of things, there's just not great support. It falls into this other category that's not really well covered, unfortunately. So if you have a page that has nested frames, how do you interact with the innermost frame? You have to switch into the nested frames one at a time.
So you go from the topmost frame to the next frame, and then one more frame, and once you've done that, you can interact with the innermost piece. And once you're done, you don't have to walk back up: you can go straight to the main content window. There's a default content method, and that'll jump you back to the top. So that's what you have to do: you walk in one frame at a time, and then you can interact with it. Yeah, I think that's ultimately where the project's heading, to a certain degree, right? Once you get to Selenium 4, it's the W3C specification implementation. So once Selenium 4 comes, I feel like the emulation is kind of the bridge across, right? We're moving a thing, and we had WebDriver-backed Selenium to get people over that hump, and now it's, well, let's just do the emulation, and then eventually it's just not going to be there. So I think the hope would be that we can move as quickly as we can into Selenium 4, and hopefully it won't take nearly as long as it took to go from Selenium 2 to Selenium 3. Yeah, well, I mean, there's no reason it should be that hard. I know, though, that there are a lot of API changes, so it's not a trivial change, especially if you have a large number of tests. Unless you were using something like a base page object, a base utility class that your page objects reference, in which case it's really a much simpler process to drop in. But yeah, it'd be a real disservice, I think, to not have backwards compatibility for what's clearly a large percentage of users. I mean, clearly not a majority, but still a large enough percentage to pay attention to. And so it's definitely always been the plan to have a slow deprecation process for it. And it's not like it's gone; right now it's also a separate download.
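The nested-frame walk described a moment ago can be sketched like this. It's a hedged Python sketch: `FakeDriver` stands in for a real Selenium WebDriver just so the example runs on its own, and the frame names are illustrative; with the real bindings you'd pass the actual driver.

```python
# Walking into nested frames one level at a time, then jumping back to the
# top-level document with default_content (no need to walk back up).
def enter_nested_frames(driver, *frame_names):
    # Selenium only switches one level at a time, so walk down the chain.
    for name in frame_names:
        driver.switch_to.frame(name)

def back_to_top(driver):
    # Jump straight back to the main content window.
    driver.switch_to.default_content()

class _SwitchTo:  # minimal stand-in for driver.switch_to
    def __init__(self, log):
        self._log = log
    def frame(self, name):
        self._log.append(("frame", name))
    def default_content(self):
        self._log.append(("default_content",))

class FakeDriver:  # stand-in for a real WebDriver, for demo purposes only
    def __init__(self):
        self.log = []
        self.switch_to = _SwitchTo(self.log)

driver = FakeDriver()
enter_nested_frames(driver, "frame-top", "frame-middle", "frame-bottom")
back_to_top(driver)
print(driver.log[-1])  # ('default_content',)
```

With a real driver, after `enter_nested_frames` you'd interact with elements in the innermost frame, then call `back_to_top` before touching anything outside it.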
If people still want to run the RC, that's why it's called legacy RC. It's the old implementation. Yeah, so Selenium IDE is just an interesting tool in general. I mean, it was originally built by someone in Japan, and only for Firefox, and it hasn't really received much love since. I know there have been people who have supported it and made some enhancements, but for the most part, if you talk to other committers on the project, most of them would say record-and-playback should die in a fire. And that's just, yeah. Well, I know there have also been attempts to kind of rebuild it. I know Sauce had Sauce Builder, and they contributed that as open source and it kind of became Selenium Builder. And I honestly don't know the plan for that as far as the project goes. But I know that people do use Selenium IDE, and it's still meant as a stepping stone: either create throwaway tests to solve a short-term problem, or export them and rewrite them in real code. But that's never something I think gets talked about in the keynote. That's a good question, though. I think there's an opportunity to make it better, even something as simple as making the export function better, right? When you export code out of Selenium IDE, that's also pretty horrendous, so making that better, and also making it approachable for people who want to contribute fixes; that's not something that's readily accessible for folks right now. Or maybe offering a port of it for Chrome, those kinds of things. But the locator lookups are really suspect. Unless there's an obvious ID, they end up with really brittle XPath, and it doesn't really educate people on how to write good tests.
So if you could solve that problem, which is hard to solve in an automated way; I can see why, from Simon's perspective, it's not worth it, because anything that dynamically determines locators he thinks is just a mistake. Yeah, and it's just so interesting that there's a big market share of Selenium IDE usage in Japan, of all places, because there's really robust language support in Selenium IDE. So yeah. Yeah, and that's why I'm not sure the new versions have caught on, because I don't think they really focus on language support, and I think that's the big drawback. So it's kind of weird. I didn't realize all these things until recently; I was like, oh, Selenium IDE is actually a hard problem, for a lot of reasons. Yeah, so, yes. But good, this could be your first lightning talk. A live fix, maybe. Not quite, but it could be. So apparently you're having an issue executing in, is it Chrome now, not Firefox? Okay, so issues launching in one browser, but working fine in another browser. So ChromeDriver is launching and, oh, it runs in the background now. If you were to run it, if you just launch the browser, take out all your test commands, and just add a sleep for like 10 seconds, I think you'd be able to see it running in the background.
So certain browsers, when you run them through their driver, like ChromeDriver, are meant to be backgrounded; this isn't the default behavior across all of them, unfortunately, but there's no focus given to the window, so for something like Chrome it'll run in the background. So unless you get an actual exception, a stack trace naming a specific issue, you may think it didn't launch. If it looks like it ran and it says the tests passed, but you didn't see it open, then it's probably running in the background, so you'd want to zoom out on all your windows and ask, is it running right now? It can really throw you off, because if you're running Firefox it'll launch right in front of your face. I think the hopeful intention was to make all of them background nicely, but it depends on whether the developer implemented that for the specific browser. The ChromeDriver team did it because most developers want to kick off their tests, go do something else, wait to see the results, and capture screenshots if there are issues. So Chrome does that, so it's probably running, unless you actually have a stack trace that says there was an exception. Do you have an exception, or not? Well, that's what's happening. Any other questions? Question? Yeah, so there's just some information I want to share. How many of you have done headless automation using Selenium? Yeah, did you use the HtmlUnit driver, or something apart from the HtmlUnit driver? So what I want to share is that many people here may not be into JavaScript, so if you use the PhantomJS driver you can bridge that gap: you can use all the same APIs that Selenium provides, with the PhantomJS driver in the back end to run your tests headless.
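The point being made is that the tests stay identical and only the driver construction changes. A hedged sketch of that idea as a driver factory; the real Selenium constructors are shown as comments because they need the library and browser binaries, and the string values here are placeholders for illustration:

```python
# Sketch: keep the test code identical and swap the browser in one place.
# The selenium calls are shown as comments because they require the real
# library and binaries; the factory pattern itself is what matters.
import os

def make_driver(name):
    """Map a config value to a driver constructor (names are illustrative)."""
    drivers = {
        # "firefox":   lambda: webdriver.Firefox(),
        # "phantomjs": lambda: webdriver.PhantomJS(),  # headless back end
        "firefox":   lambda: "FirefoxDriver",
        "phantomjs": lambda: "PhantomJSDriver",
    }
    return drivers[name]()

# CI can flip the whole suite to headless with an environment variable.
name = os.environ.get("BROWSER", "phantomjs")
driver = make_driver(name if name in ("firefox", "phantomjs") else "phantomjs")
```

With a real grid or CI setup, the environment variable name and the set of supported browsers would be whatever your project defines; those are assumptions here.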
I did this some time ago, and it really helped me accomplish the client requirement of running headless while using my own choice of language. I preferred Java, so it really helped me bridge the gap between PhantomJS and Selenium and gave me independence in how I code. Besides that, it also becomes easier to integrate with Jenkins, the continuous integration tool, so you can run your tests without having to kick them off yourself every day. So that's the few minutes I wanted to share with you guys. Thank you. Great, thank you. Do you have a question for him? Okay, here's the mic. Are you using the flaky test plugin for Jenkins, for rerunning the failed test cases? Yeah, that part I've got. We're also using Jenkins, but I wanted to know: if we launch from Jenkins, sometimes test cases fail because of issues like a connection being refused, so does anyone know how to rerun the failed test cases from Jenkins? No, I want to automate it. There are some plugins I found on Google, like the flaky test plugin and Naginator, but for our Jenkins version the flaky test plugin is not available, so how can we do it? Do we need to give it a time, or is no plugin required? In the post-build actions, what exactly do we need to select? Any idea? Okay, thanks. Can I give you a mic real quick? There are people on the live stream who wouldn't be able to hear otherwise. So, while we were using PhantomJS, we were using XPath as our locator strategy to find elements, and we had a lot of issues where some of them, not all of them, would just not get recognized. Did you see something like that?
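One of the usual causes, as the answer that follows notes, is timing: a headless run is fast enough that the element may not be in the DOM yet. The fix is to poll for it, and the machinery behind Selenium's explicit waits boils down to a loop like this stdlib-only sketch (in a real suite you would use `WebDriverWait` rather than rolling your own):

```python
# Sketch of the polling loop behind Selenium's explicit waits
# (WebDriverWait): retry a condition until it returns truthy or a
# timeout elapses. Stdlib-only so the idea stands on its own.
import time

def wait_until(condition, timeout=10.0, interval=0.5):
    """Poll `condition` until truthy; raise TimeoutError otherwise."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Simulated "element appears after a moment". In a real test the
# condition would be something like: lambda: driver.find_elements(...)
appeared_at = time.monotonic() + 0.2
element = wait_until(lambda: time.monotonic() > appeared_at and "element",
                     timeout=2.0, interval=0.05)
assert element == "element"
```

The timeout and interval values here are arbitrary; the point is that the wait is tied to a specific condition rather than a global implicit setting.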
Yeah, I initially faced this issue too. Sometimes it happens due to the version of the PhantomJS binary you're pointing at, not the driver itself; the PhantomJS version you download should be compatible with Selenium. When I searched the Selenium Google user group, they specifically mentioned you should be using the 2.7-something version, so you can use that. Another thing you can do is add a wait. Since it's headless, it works really fast, so it may be that before the DOM element even loads, control has already moved on, and because of that it's not able to find the DOM element or the XPath you're referencing. In that case you can put in an implicit wait, or wait for a resource or a locator; there are specific conditions you can wait on, and then execute your test after that. And we also have a lightning talk; can you guys introduce yourselves? You have three minutes; take the floor. Okay, hello everyone, this is Jitendra here, and I have Ulaz with me; we have been working together for one year. One of the problems we keep hearing about in the various talks we've gone to is: how do I run my 5,000 or 10,000 test cases against the UI? We had a similar kind of challenge in the previous project we worked on, so what did we do to solve it? First of all, we felt that having a smaller number of UI test cases is better. So the next question becomes: if not UI or functional test cases, where should I write my test cases? We came up with the idea that we should be writing functional automation test cases at the API level rather than at the UI level. Why would I want to do that? If you follow the test pyramid, what happens is, in our case, what we have is a mobile app,
then we have a back-end API, then we have some panel which controls things. Normally, in product-based companies, you want to write test cases at each particular level, and the test pyramid suggests exactly that: unit-level test cases, API-level test cases, UI-level test cases, and then application-level test cases at the mobile app or web level. So that's why we divided our test cases, and what we have done is build a single framework which caters to all levels of test cases: the API level, and the front end for the mobile app and for the web. So whenever we talk about API-level test cases, I see most people get a little scared: how do I write API-level test cases? Well, we have all been writing Selenium test cases using page objects for many years now, so we brought the same concept to the API as well. What does it look like? Just like we have page objects in Selenium, we can have a page object for our API too. For example, if I want to test this feature, I have written these test cases in Cucumber, so where do I write the page object for it? I have pages, and under pages I go to API, and this is how it looks. The same concept of page objects in UI test cases, we borrowed in our API test cases as well. This helped us a lot in maintaining our test cases, and we were still doing regression for our entire application: one, it was not brittle, because it was not dependent on the UI, and two, it was fast. All our regression testing used to complete in 15 to 20 minutes, with all our API tests run and all the functional testing done. What do you think of these test cases? Can this be plugged into any CI server, like Jenkins? So within this single framework we created
separate jobs for each layer of test cases. Whenever the back-end files and programs change, or anybody changes the panel or the website, then the corresponding test cases will run; there's a separate job for each level of test cases, and we have integration test cases as well. The good point is that there are a number of ways to organize the test cases: here we are managing them folder-wise, but there are a number of ways of tagging them, and you can manage them per product. The good thing is that it's the same single framework which caters to running all the test cases. Here we have app-level test cases as well, and again they'll be different for iOS and for Android, and we use the Rake utility to have separate Rake tasks which run the different test suites, whether at the top level or per product. So yeah, that was it. Do you want to share? Great, thank you. Yes, these are independent pieces, so whenever I write an API test, it is indirectly helpful for my UI tests as well. For example, let's consider an e-commerce website: I need to log in, I need to add a product to the cart, and then I need to check out. Now, if I know that my add-to-cart functionality is already working fine, what I would do is log in and add items to the cart via the API rather than doing it through the UI test itself. This makes my test really fast, and in the UI I am focusing only on the things I need to focus on. So when I write an API test, it implicitly also helps my UI tests, and it's up to me whether I want to trim from my UI tests the things I have already covered at the API level. So that's the thinking behind this API approach. So are you using a REST client library for this? So the best part is that every language has its own HTTP client library, right? We can just make GET or POST requests using any HTTP client library in any language, so we can come up with a similar
architecture in any language we want. We basically use Ruby and Ruby's HTTP client library to run our tests, but we had also tried this before with a library called REST Assured in Java, which helps you do a similar kind of thing. Great, thanks guys. Anyone else want to get up and jam out for three minutes? No? I'll take any questions. Also, I can talk for three minutes on something: wait strategies for Selenium. How many people here use implicit waits? An overwhelming, terrifying number of you. Keep your hands up, everyone that still uses implicit waits: how many of you also use explicit waits? Okay, a similar number of people. So it's not that implicit waits are bad, but they're bad when you mix them with explicit waits. It's interesting, because you can set an implicit wait and then effectively, technically, override it with an explicit wait, or adjust the implicit wait. So it's like, okay, your default wait time is here, and then you say, well, I don't want to wait that long, so you adjust the dial up or down. Doing that by adjusting the implicit wait is kind of bad, and using an implicit wait and then adjusting it with an explicit wait is also bad, but for a different reason. Globally changing the wait time for just one action is kind of a horrible idea, and when you mix implicit and explicit waits, you can end up introducing transient test failures, because of the way the implementation works, depending on the browser, when executing over a remote connection. So just because it works fine on your machine, when you run it in a cloud or on a grid, it's potentially going to cause weird test instability. And so the recommended approach from the committers, and from practitioners, is to only use explicit waits. Even though it's a little more work to be explicit, because they're explicit waits, that's the recommended approach, because it gives you consistent behavior. And I'll go one step further
saying that there are some cases where an implicit wait will not actually wait for certain things; it will trigger a false positive, and you'll get an invalid test result, whereas an explicit wait typically covers every use case for waiting that I've seen. So that's my spiel, my short lightning talk: use either implicit waits or explicit waits, but don't mix them, and really the recommendation is to use explicit waits. So, performance-wise, will an implicit wait be better, or an explicit wait? The way I think about it isn't which would be more performant than the other. I think about writing a test like trying to run through a maze: you don't want to take the maximum amount of time to get through the maze; you want to go as quickly as you can. So, writing a test to work through a specific workflow for a user on an application, ideally you want it to take only as long as it takes to complete that action, and then to determine whether it's accurate: it did this thing, it got to the end, and it worked, or not. Trying to put in a safeguard by setting a default wait time for every action, where if something can't happen it keeps retrying until some blanket timeout, isn't really specific to the context. And there are plenty of times you don't even need a wait, loads of times actually. Typically it's only when you're dealing with slower browser timing, which is usually easy enough to pinpoint and apply a wait to, or asynchronous JavaScript. Every other time, you load a page and you don't need to wait, and the idea of waiting for the page to be loaded is something that doesn't really exist anymore; you want to wait just long enough for the thing you want to interact with, then interact with it and move on to the next thing. An implicit wait is technically a crutch to get started, whereas you could remove it, see where your tests falter, add explicit waits, and then when you jump
from one browser to the next, you have these pressure valves you can adjust, whereas if you just rely on an implicit wait, you've already set off across a kind of rickety bridge. So in terms of which is more performant, I think it's really about how you write tests that are more efficient at getting through the workflow, and that's why I think explicit waits are better. Welcome back to the code. Oh, it said a no-such-element error? Yep. Any other questions? Yes, we'll start here since you're closer. So you're asking if we can set a page-load wait time, to say wait this long for the page to load. I guess I'd ask: why would you want to wait until after the page loads? Okay, so that's the idea, but I'd still challenge it. Just because you think it can happen in one browser and maybe not another, why would you need to wait for the page to finish loading? Typically the approach is to wait for the thing you need to interact with to load. The only time you care whether everything is loaded is if you're going to take a screenshot of the rendering, but if you're interacting with it, you only need just enough to accomplish the act of stepping to the next part of your test. For the idea of waiting for everything to load, I would do something like look at the Ajax counter and see if it's at zero; that's the simplest approach. As far as anything else asynchronous goes, the question of whether a page is ever finished loading is basically impossible to answer now, other than by checking the Ajax counter: all the assets are pulled in, but then JavaScript starts firing and pulling in more assets, or doing other things to the page, and then something triggers and more JavaScript fires. When does it end? You're stuck with either checking the Ajax counter, because nothing else is firing, or setting a static amount of time. Typically, a driver.get or a navigate-to is just waiting for something to return from the browser
but it's nothing other than, okay, the page has kind of rendered; that's literally just saying there's markup in the DOM. It has nothing to do with the JavaScript that fires once that happens, and there's nothing you can really do to detect that other than an execute-script call to check the Ajax counter, which is easy to do if you're using jQuery. So yeah, exactly, that's the recommended approach. Question back here. If I have a responsive website to test, I have tested it on desktop and my Selenium script runs perfectly fine, and I'm running it on mobile, which is also perfectly fine, but when I manually check the website on the mobile, the UI is completely distorted; it is not how it should be, but my Selenium script still passes. In such cases, how should we go about writing a test to make sure the look and feel is correct on both platforms? So, responsive web testing is a somewhat deep topic, but I'll do my best to answer it in a brief couple of sentences. You're describing stepping through a responsive site in Selenium and asserting that an element is displayed or that text is there, and then when you actually look at it, it's malformed. That's because all Selenium cares about is: is this element in the DOM, can it be scrolled into view if it's out of view, is it visible to the human eye, is it not marked as hidden, and is there no overlay covering it? It can meet all of that criteria while the look and feel is completely busted. That's where visual testing becomes really important. Use some kind of library that does a baseline image comparison; there are some open-source ones, and then there are platforms like Applitools Eyes. You basically set the viewport size, depending on whether you're setting the viewport size to test the breakpoints of the responsive layout, or using an actual device, either a simulator, an emulator, or a real device. And the caveat there is that the easiest
place to get started, if you're doing mobile, is to use a simulator or emulator, but the real rendering issues only show up on the physical device, which gets harder to test. If you start thinking about it from the desktop perspective and then downsize to a smaller window, you can get a good amount of mileage, but you'd need to use something that takes a snapshot, stores it as a baseline, then takes another snapshot later, compares the two, and says, okay, are there visual differences here? The more you get into it, the more you find the false positives and the challenges and workarounds and better options, but basically, if you start doing that, you can catch things rather quickly. You can also incorporate it into your Selenium test: have it take a snapshot at different stages of the test, compare at each step of the way, and tell you, okay, something's broken here, and when you go look at it you can actually see the comparison. So that's how I would do it. And with responsive layouts, you can do it for each different viewport size. I would look at the CSS for the responsive layout and figure out the different resolution breakpoints, like if it's greater than 1024 by 768 or whatever, then the middle one that triggers the smaller layout, and then the smallest one; it's typically three in a responsive layout. Find what the breakpoints are and also test just around those breakpoints, because being off by one or two pixels is sometimes enough to create a weird rendering bug. So I would do all of those things. So yeah, right here, and then we'll go back. Can you ask one more question? Yeah, sure. So, for distributed testing we use Grid, but we have one more alternative, using a Jenkins master/slave combination, so which one is more
advantageous? So the thing is, you kind of need both, right? Selenium Grid can reasonably take whatever connections you throw at it, so it enables you to run things in parallel, but you need to be able to execute the tests in parallel to begin with. Selenium Grid is just a receiver of however many connections you can throw at it; you have to create the means to generate all those connections, and a single Jenkins server could be overwhelmed very quickly running tests in parallel. So then you'd need a master/slave model where you can scale your execution across machines. If you're trying to run a thousand or more tests, it depends how much you want to scale out, but you kind of need both in order to do that. It's not one versus the other: if you're doing everything you just described, good for you. So yeah, you're welcome. Question? So you download a file, you submit a form, it gives you... so, validating data in a PDF. Step one is to get the PDF onto local disk, so you effectively download it. Step two is to find a library where you can crack open the PDF and inspect the things within it. I know there's one in Java, I can't remember the name of the library, and there's basically one for every language that enables you to traverse the PDF through a simple set of APIs, so you can inspect elements within the PDF, and if you do that, you can verify it against what you expect. But no, you don't want that. Right now, as far as I know, and this question came up earlier, I don't have a good answer; I'm pretty sure that's not something you can do with Selenium. You can't interact with the DOM of the viewer: it opens in a frame, or it opens in a new tab, and it's a JavaScript-based PDF viewer that's built into Firefox; it's not necessarily consistent across all browsers. So, anything can be automated, let's just
be clear about that; software can do anything. But to handle PDFs in a way that's browser agnostic, my recommendation would be to find a way to download the file; that's probably an easier problem to solve. Download it to local disk, crack it open, inspect the elements within it, and clean up when you're done. If you do that, you're solving the problem of downloading across browsers, as opposed to figuring out how to navigate this semi-impossible thing rendering in the browser, which looks different depending on which browser you're in. So it can be done, yes; anything can be done, software is malleable, but that would be my recommendation. Is there any way to migrate from QTP to Selenium? I've heard of some options for it, but I don't know; I don't know if any of them are open source either. If anyone knows a way to migrate without having to rewrite everything from scratch... And Quality Center, is there any way to connect Quality Center to Selenium? I don't know, I've never used Quality Center, so I'm not sure; I never had to solve that problem. But I'm sure one of the committers here might be able to answer it; I would ask one of the committers if they have any ideas, and some of the people hanging out in the bug bash might be able to help you. I think we're just about on time, so thanks, everybody, and we'll turn things over to the next speaker here in a minute. Thanks.