Right, okay. This stuff is typically what your test automation code looks like. We've got a few lines of code, and somewhere over here, the other side of the world, there's a few lines of code that it tests. This example comes from some website I found this morning, with a nice page object, partly to wind up the people who are going to talk about page objects and other things later on today. In here is a bunch of connectivity code. This is typically millions of lines of code: it's your web browser, it's Selenium WebDriver, it's the networking code. We've got the GUI code, which has all the glorious JavaScript and CSS and the rest of it, but ultimately what we're testing is this. And typically there's some sort of database running in the background. So we've got a small amount of code testing a small amount of code, going through a vast plethora of code and a bunch of connectivity. Again, apologies that some graphics are missing here.

The other perspective I want to give you is: your tests are here, and we have some sort of conduit code, which essentially connects your code to the outside world. It deals with this graphical user interface. This is the thing that, as users, we know has aesthetics to it, but ultimately we're treating it as an interface, albeit that most GUI interfaces are implicit interfaces. By that I mean we don't describe the interfaces in code; we find them and try to fathom out how to get our code to work with the GUI interface. That goes through a bunch of support code. So, how many of you have written software? I'm not talking about test automation software, I'm talking about application software or system software. Some of you, a few of you. Essentially there's normally lots and lots of code that runs before we finally get to the behaviours we want to test, whether it's a login, whether it's putting something in a shopping basket, whatever it is we're trying to achieve. All our code has to go through this, and of course if there are problems with the interface, or problems with the behaviour of the code, our tests are likely to be less reliable. Again, I apologise for the formatting.

I used to work for eBay, and this was a fairly typical conversation I would have with pretty much any of the test automators I was working with in the company. The eBay home page, and this is a simplified version of it because we're not signed in, has in the order of 500 main elements. So I'd go to someone on the team and say, what are we testing, say, on the home page? Okay, fine, show me the code. And what we'd actually find is that's all we're testing. We go to the search field, we type something in, we press the search button, and we pretty much check that something happens. The title's updated, or some other wizardry, or if we're really, really good at writing our tests, we check that the keyword we searched for is found in the destination page. But we don't tend to check much else. Again, this morning I was searching for examples of code, and there's no end of code on the internet that checks for nothing. So these are the sorts of things that our automated tests may miss, and I suspect that virtually all of your automated tests miss some or all of these types of things.
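To make that concrete, the shallow "type, click and hope" test I'm describing boils down to something like this. It's a minimal sketch in Java with Selenium WebDriver; the URL and the element IDs are made up for illustration, they're not from eBay's real pages or anyone's real suite.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class ShallowSearchTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Open the home page (URL and locators here are illustrative, not from a real suite).
            driver.get("https://www.example-shop.test/");

            // Type a keyword into the search box and press the search button.
            driver.findElement(By.id("search-box")).sendKeys("gloves");
            driver.findElement(By.id("search-button")).click();

            // The only "check": did the keyword show up in the results page title?
            if (!driver.getTitle().toLowerCase().contains("gloves")) {
                throw new AssertionError("Keyword not found in results page title");
            }
            // Nothing else on the page - navigation, promotions, layout, ads - is ever looked at.
        } finally {
            driver.quit();
        }
    }
}
```

And that really is the entire check.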
Again, using eBay as an example, we've got navigation. This is a fairly old screenshot, so it's about three or four years old; the user interface has changed a little bit. You could drill down through the user interface and say, I want to look for cars or clothes or whatever it is. We all have browser histories. Now on eBay it's a browsing history, sorry, and it's rich: it'll actually show me images of what I looked at before, the updated bid prices and so on. A lot of money is made out of promotions, whether it's eBay promoting its own stuff or the ads that we see and know and love and sometimes click on. Our test misses just about any sort of error with JavaScript, CSS and HTML. And it misses everything to do with layout, rendering and formatting, unless we're lucky and it happens to be that little result we're looking for. And one of the questions for us to ask in terms of test automation is: when does it matter to us that these things change, and when doesn't it matter in our tests? Because we're not checking for them. Clearly we're missing them, but perhaps for this test it doesn't matter, because we've got another test that checks it for us.

The next thing is that, essentially, stuff changes all the time. The time of day changes, the network conditions change, the device people are using changes, the operating system and the browsers change, developers change the software. Again, large sites like eBay and Google are running hundreds or thousands of what are known as experiments, and an experiment will split you into groups: you on this side of me, my left, versus you on my right. One group gets the control, in other words the same as we had before, and the other gets the condition we're trialling, and it may be that we've changed the colour slightly. Google experimented with, I think, forty-some different versions of blue to see which ones people clicked on more in search. So you may be getting the slightly lighter blue or the slightly darker blue. You're not aware of this, but when I revisit the page, I may be served by a different server and get a different response. But again, do our tests need to look for this or not?

The next thing is, let's assume there's some sort of change in functionality or libraries. When that happens, some of our tests will now fail, we hope, and we need to think about these tests. Have they failed correctly? In other words, have they detected something that's meaningful to us, so that we now say, aha, this is a bug, this should not have happened? Make sense? Sadly, for most of our automated tests, I suspect they normally fail because our tests themselves are at fault. We scurry away, we write a bug report and we go to the developer and say, hey, look, it's broken, and they go, of course it's broken, that's what it's supposed to do. Go away, numpty, or whatever polite words you have in your team. Sometimes we need to modify the tests to adapt to the new functionality. Perhaps you're lucky and your teams communicate well and you know ahead of time what changes to expect, but quite often we discover unanticipated, unspoken changes. So it's just fading away now; I think battery is another challenge for these little computers. So we sometimes need to modify tests, and finally, we need to actively look at pruning and deleting tests.
One of the big problems, and I won't name the companies because I might get something slightly wrong, is that some of these big companies have test suites in the order of hundreds of thousands, even millions, of automated tests, and they preserve them because they like having millions of tests, not because the millions of tests are useful. And one of the biggest problems you'll find is fairly large teams, I'm talking about ten-plus staff, who are spending most of their time fixing tests that may no longer have much value to the company.

So the second part of this is what I'll call preventive maintenance and retirement. How many of you have flown in aircraft? Commercial airlines, most of us? Okay, I came here in a big Boeing, a 777 I think it was. Now, I used to be in the military, I used to work on aeroplanes, and one thing you learn with aeroplanes is that you don't want them to break while they're being used. Does that sound fairly obvious? Okay, so what do you do about it? Well, you do two things, essentially. The first is the same as with your vehicle: if you've got a car, you get a service schedule. It says at 6,000 miles, or six months, or whatever it is, check the brake fluid and so on. And most of us ignore these with impunity, because it's tiresome and it takes time to go to the dealer and get things fixed and the rest of it. But if we leave it long enough, eventually things will break. My daughter and I rode around Europe, we just came back the day before yesterday, me on an old bike, she on a fairly new bike; hers is eight years old, mine's thirty-something years old. And one of the parts of her back brake broke: she had no back brake any more. Now, we had checked it, but not often enough. So we got to a crisis because she no longer had a back brake and we had no pads. Thankfully we found someone in France who fixed it for us, which was a wonderful miracle, because she wasn't very confident riding without a back brake. So we need to make sure we have preventive maintenance for our tests.

The other thing is to actively retire tests. I'm one of the older people here, I'm over fifty, so there are a few old grey hairs like me here, but eventually we'll get retired from this industry. You'll say, thank you very much, Julian, nice conference, go home and rest in England now, whatever it is. We need to do the same with our tests. And think, when we're writing a test, how long do we expect this test to live for? In some cases the best automated tests last for ten minutes. They do something useful, we've learned from them, and we don't necessarily want to commit them to our code base. We want to just say, thank you, we've learned what we needed to know, let's move on. Thankfully there's source control. I hope all of you are using source control; I can't help you if you're not. Anyway, put the stuff in source control and let it take care of it.

Next challenge: I've been writing software since the 80s. That doesn't mean much. It means I predate most of you, writing in C and other boring languages. And to be honest, I've lost touch with some of these languages. My daughters are now adults and they're both learning to program. So they come back to me with C and I suddenly realise I don't actually understand the methods and the functions and the rest of it. So I'm relearning how to write C. It's actually been quite useful for me. I'm doing Android development, I'm learning how Gradle works. So I'm teaching myself, and I know my weaknesses.
And when I know my weaknesses, then I know not to over-trust my own competence at writing software, and my software includes my software tests. The tests will be unreliable if I don't know how to write good software. The next thing is to practise. So when you're on the bug bash with Selenium 3, make sure you're practising different API calls. Don't just do the hello-world type things, but experiment, and help us make sure we've got confidence in the new software by exercising it, by practising with it. And I'll encourage you to test what you're using for testing. A lot of what we need to do is understand how the browser works, how the system works, how the tools work, how the APIs work, but we typically rush into just writing automation; that's the best we think we can do. So rather than rushing in, let's practise and write code that learns the behaviours. Once we understand the behaviour of the tools, then we can write better tests. So my challenge is to do what I call testing inside the box. And then the final point to make, and this is more for things like performance testing and load testing, is to calibrate: to make sure that when it says a number, 37 milliseconds, it's actually accurate, rather than reporting inaccurate numbers. I've seen tools vary by three to one in the times they report, running a comparison across a suite of ten different tools.

The next thing, in terms of making our tests more reliable, is to improve the interfaces. This is the bridge between the system we test, typically through the GUI, and our test automation code. You'll probably hear many times at this conference about some of the challenges of using things like XPaths rather than adding IDs. And there are specific talks here on ways to improve the interfaces, for instance Andy Palmer's talk, which I think is tomorrow. So go to Andy's talk for his ideas and suggestions on ways to improve this. This is an academic paper; if you use your favourite search engine you'll probably find it. What they actually looked at was an industrial project out in Italy. They compared the maintainability of the code base and found that IDs were better, and they explained why they found them better.

And then we need to work out where our tests want to be flexible. Do we mind if the login and password section drops a couple of pixels on the screen? Is that important to us? Well, for some companies it is, and for some projects it isn't. So we might write tests that are flexible; they don't mind that. But we may want to say stop if the branding changes, because we realise we've just merged two companies, or split two companies apart, or the company's got new corporate branding, or whatever the heck it is that's important to them. And so we may want to make sure our tests fail if we detect problems in these areas. So design the tests explicitly for this, rather than implicitly letting them continue and do whatever they want to do.

The next thing, in terms of improving your code rather than your practices, is to try to make your code defensive. By that I mean it protects itself from giving you stupid results, because otherwise we'll just let the tests run and sometimes they'll make a fool of us. So here are a couple of concepts for you. One is this concept of trip wires. This comes from, I guess, military days: you would put out a very thin wire, and if someone tripped over it, it would set an alarm ringing. There's a company called Tripwire who do security monitoring of software; they monitor file changes. So something like a tripwire means that when we detect that something has changed in the system we want to test, we say stop: let's go and examine our automated tests before we let them just continue running. Similarly, we can set preconditions that say, I expect there to be a one-column layout. If I detect something else, then just stop, wake a human being up and say, hey, human being, these tests may no longer be trustworthy, please decide, and then continue. And similarly we can set expectations in the code of how we expect the system to behave, and when it doesn't behave like that, again stop and involve a human.
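Here's a minimal sketch of what such a precondition, or tripwire, could look like in Java with Selenium WebDriver. The locator and the "one-column layout" check are illustrative assumptions; the point is simply to halt the run loudly rather than let the suite grind on.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class Preconditions {
    /**
     * Tripwire-style precondition: if the page is not in the state we designed the
     * tests for, stop the whole run and involve a human rather than letting the
     * suite grind on and produce untrustworthy results.
     */
    public static void requireOneColumnLayout(WebDriver driver) {
        // The locator is illustrative - use whatever marks "the layout we designed for".
        int columns = driver.findElements(By.cssSelector("main .column")).size();
        if (columns != 1) {
            throw new IllegalStateException(
                "Expected a one-column layout but found " + columns
                + " columns; halting the run so a human can decide whether these tests are still trustworthy.");
        }
    }
}
```

You'd call this once at the start of the run, before any of the ordinary tests execute.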
In terms of designing automation, you've got, I think, four or five talks here on design patterns, things like the page object pattern and component patterns. So those talks will be useful to look at for ways to improve the design of our tests. Then there's the wisdom of others. Bret Pettichord, I think, is doing the keynote tomorrow morning. I happened to be reading one of his papers from over a decade ago on the flight over, where he talks about, I think, seven steps to success for test automation. So there's a lot of wisdom out there that can help us to improve things. Ken Pugh wrote a book back in 2006 called Prefactoring, where he was trying to encourage us to think about what may change in the code and to design our code so that it continues to work well as the system we're testing changes. Brian Marick we'll come on to in a couple of slides.

The next thing is to look at the reliability of tests, and here I'm looking at three different measures, and they come from the ISO standards. The old ISO standard is the one called 9126; if you've done the testing certification you may have come across it. The newer series of standards is the 25000 series, which supersedes it. They look at three key facets of software. The first is internal quality, which is essentially the quality we see in the source code. So if I were to ask some of you who program to look at the source code, you'd go, oh my goodness, you called the variable what, dog, or erg, that looks horrible. Or you run static analysis and it reports a series of problems with our code. That's the first type of quality we need to consider. And if our code is poorly written internally, the chances are it's less likely to perform well externally. External quality is measured at runtime: things like memory consumption, network utilisation and so on. And finally there's quality in use: when we step back from all of this and the people who pay us look at the work we're doing, how do they assess the quality of the tests we're running? Do they say, thank goodness we've got those wonderful guys at the Selenium conference, because we really value their tests, or do they not even know that you've gone? I'll let you measure that yourselves when you go back to work. It's important to think about quality in use beyond all the other, more technical, measures.

The next challenge for us is that we typically spend a lot of time working quite hard, some of us very hard, on writing software, but sometimes we're not actually spending our time working on the right things. And we're always in this perennial struggle of working out what to automate, et cetera, et cetera. So, I mentioned Brian Marick. I apologise for showing long quotes, but I didn't want to chop them down too much.
It's a very, very well-written paper from 1998 about test automation, and essentially he says: think about what you want to automate, and only automate stuff where you see value, and he explains how he looks for value. A key point in this summary is, first of all, that it costs time, energy and money to write automated tests. When we're writing automated tests we're not typically doing other forms of testing. We're not interacting with the software, and we may actually not be doing four, five, ten, twenty other tests while we're diligently writing automation. So which has more value to the organisation, to the project? Sometimes the automation isn't worth the extra effort. The next point is that the tests typically don't give much value in finding the bugs they were designed to find, like whether the login works correctly or whether I can add to the shopping cart. What they actually find are bugs in things like support code, where someone's changed an interface or an API and should have changed all the calls to it but missed one out, and our test stumbles across it and fails, and we go and look and say, oh my goodness, why has this test failed? Ah, because someone didn't realise they had to change the plumbing in this code base. So I quite like his perspective, and I found it unusual in the industry compared to the other papers, so I wanted to highlight it for you.

The next point I wanted to make is that we tend to over-trust automation, and the best analogy I can give you is a dog, if you have a dog: a dog is a great servant but a terrible master. I meet friends with big Alsatians, and I know when an animal is well trained because the dog is under control. I can go near them; I'm not a fan of dogs, and big dogs with big teeth are very scary for me, but I know when these dogs behave that the owners are taking care of them and have trained them correctly. The same holds true with our software. I'll mention automation bias in the next slide, but the other thing is that we tend to over-trust the results. How many of you have got continuous integration, continuous builds? And you're looking for a nice... what colour is it? Green. So you're looking for green and you say, well, we've got 57,000 tests running, we've got a green bar, we're good to push to production. All the tests have passed, just ship it. The trouble is that some of those tests are testing the wrong things. Again, I won't name the big company, but I inherited a series of 600 API tests, and we found that the only thing the tests checked for was whether an exception happened when the call returned. They didn't check any of the parameters or any of the return codes or anything else; they just checked whether the test crashed or not. And that was all we did. We eventually deleted that test base, by the way. And as I mentioned earlier on, the tests miss more than they find.

I'll move on now to a little bit of the reporting side of it: we need to make sure we're capturing information in a test to make it easier to report problems and provide feedback. I work in testing mobile apps, and one of the things we're finding very powerful is something called heat maps, where essentially it takes snapshots of the screen, say ten times a second. It builds something a bit like a video, and it allows us to go back and look at what happened just before the crash occurred.
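The mobile heat-map tooling I'm describing does this natively, but the underlying idea, snapshot the screen on an interval so you can replay the run, is easy to sketch. Here's a rough version for a browser using Selenium's TakesScreenshot; the 100-millisecond interval and the file naming are just illustrative choices.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

public class ScreenRecorder {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    /** Capture a screenshot every 100 ms so we can replay what happened just before a failure. */
    public void start(WebDriver driver, Path outputDir) {
        scheduler.scheduleAtFixedRate(() -> {
            try {
                byte[] png = ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
                Files.write(outputDir.resolve(System.currentTimeMillis() + ".png"), png);
            } catch (Exception e) {
                // A failed frame shouldn't kill the recording (e.g. browser mid-navigation).
            }
        }, 0, 100, TimeUnit.MILLISECONDS);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```

Start it before the test, stop it in a finally block, and prune old frames if disk space matters.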
We need to make sure we're paying attention to things like the privacy of individuals, of course, because what's great for us to look at, hey, look at what the user was just doing, might not be so comfortable for you out there, realising that the software is tracking what you're doing. So with power comes responsibility, but it's really useful to be able to use this sort of information.

On to Cummings's work. This work uses a scale from zero to ten. Zero means no automation is involved; ten means the system is fully automated. Ten is things like anti-missile missile systems. You may be familiar with these: the US military, say, has missiles that are designed to shoot down incoming missiles. Now, an incoming missile may be coming at Mach 3 or Mach 4, which is about 4,000 kilometres an hour. That means that by the time it's been detected, it's probably hit us. So we can't afford to faff around and wait for Anthony to go, yes. The computer just makes decisions and fires, and occasionally it goes wrong and people die. Level four is roughly what this work was looking at. I assume that none of you fly planes. Does anyone fly? Okay, a couple of you fly. One of the things pilots have to do is come up with a flight plan: I'm going from airfield A to airfield B, I'm going to go this route, so you agree the heights and all the rest of it and roughly where you're going. And what they did in this study is they deliberately seeded the autopilot with visibly wrong information. It's a bit like saying I want to fly from Bangalore to Chennai via Sri Lanka. Now, I realise airlines do weird things, but I think even for them that's a weird route. And what they found is that even though the information was in front of the pilot, even though the pilot knew better because they'd done a manual flight plan, 40% of them just pressed the OK button and accepted the autopilot's decisions. So what do we do when we see automation? We tend to press the button and say, okay, accept, keep going. Well worth reading.

Some readings for you, then. Bret Pettichord's paper; go on, talk to him, he's even better than reading his paper. Cem Kaner, if you don't know who Cem Kaner is, he's been in the industry a long, long time, a professor in both computer science and law, and very good at sharing testing information. Brian Marick's paper I've already mentioned, and Cummings's paper. So that's it. Thank you very much for bearing with the technology challenges and the slides. I'm happy to take any questions now, and you can of course email me later. I'll try to repeat your question as well for the mic.

Hello, yeah. What we do is automate a stable product, a stable module, because we don't want to put much effort into fixing the tests again and again. So we create automation on a stable product, and it will hardly find five or six bugs in every release. So the thing is, would it be useful to start the automation in parallel with development, where it can find more bugs? The rule of thumb people quote is that we need to automate on a stable product or a stable build. So which one would be better? Because at the end of the day we should add value to the company, and finding five bugs can be done by putting in some manual effort as well. But if we start automating in parallel with development, we can find more bugs, so it adds value. So which one would be better, in your view?
Okay, so let me paraphrase your question. The question, I think, is: when should we look at automating, earlier in the release cycle, or once it's becoming a more stable system? Is that roughly the question? Yes, yes, exactly. All right. So, several things. It depends on what code is still changing as to when we want to do the automation. Given that we're talking here about GUI testing in particular, if the user interface is changing frequently, and on some projects it is, then we're going to write a lot of automation that will probably have to be modified often and/or thrown away. So sometimes the effort of doing that is very high. It can be lower if you're the person writing both pieces of the code and you've got good design abstractions. If, on the other hand, the user interface is fairly stable and what's changing is things like the support code, then it may be worth having the automation written early rather than late. And what I'll do is refer you to Brian's paper, because I think he explains this very well. Cool. Thank you.

Just wait for the mic. Hello. Regarding UI automation, lately we've been having some discussions about what kind of testing, or what kind of tool, we should use to check that the UI is okay on the website. What do you have to say about this, or what would you recommend we use?

So this is: what would I recommend as tools for checking the user interface on mobile applications? Web applications. On web applications, then. It depends on what we want to check for. Sometimes the layout and the aesthetics are really, really important; for most companies they are. If you go to Gmail or Hotmail or whatever your favourite website is, your travel booking site, they really care what it looks like. So we want something that checks the aesthetics of it. Now, it could be that you can check thumbnails. One of the things we did back in my days at Google is we used to take thumbnails of all the Google mobile sites; in those days that was still possible. And essentially the computer would then put them into three buckets: yes, we think it's okay, it still looks like yesterday and that's good; yes, we think it's bad; and don't know. Don't know means it's detected changes but it doesn't know whether they're good or bad, and so a human being goes and checks those. So we can write a lot of code that will check aesthetics. We can check things like layouts, where things are on the screen. And I'll give you a specific example. I was working on a mobile application and we found that our reviews dropped from a 4.4 average to a 4.3 average, a difference of 0.1, which sounds so trivial that who cares? It's like, what's the temperature today? Oh, it's 22 instead of 21; we don't even notice it as humans. But we did notice it, because we found that our revenues dropped between five and ten per cent overall. And what caused the problem was a dialogue box dropping onto the screen. The humans who were using the app in their thousands weren't really consciously noticing this; there was just an unease that something had gone wrong. It was down to a complicated race condition and translation, but ultimately we had to find and fix the problem to restore the feedback from the users, and that restored the revenues. So sometimes the layout can really matter. Other times, of course, we're checking functionality and the behaviour of the software. And there, if we're looking at the web, then ultimately the world seems to have stabilised now on Selenium, which is good.
And you'll find even the commercial tools are now adding Selenium support, so that's a very strong vote of confidence. But I assume you already knew Selenium was the answer; I was trying to give you a broader answer. Thank you.

Hi, I have a question on that 100% automation that you mentioned. In today's world, all of us are pushing for continuous integration. So if we can't trust a high level of automation, 95% to 100%, what other approach can we take?

So I don't think I mentioned 100% automation, did I? Is that what you heard? Yeah, almost. Okay, I don't think I said 100% automation. On a scale of zero to ten, right? So that's the automation level, yes. No, thank you for clarifying that. So the challenge is to work out what the appropriate level of automation is. Typically a four is a recommendation, and recommendation engines or expert systems are used throughout industry now. What one essentially says is: I think the following action might be useful to you. So would you like to redo something? Would you like to change what you do? To give you an example, I don't know if you use something like Google Maps on your mobile device to navigate. I was riding around Europe on my motorbike and I had a GPS built onto the motorbike, an old thing, about five years old, small but waterproof. And then I had an iPhone in my pocket with Google Maps. Typically what would happen with Google Maps is, provided there's a network connection, it would go and check more or less real-time traffic updates. And it would say, by the way, you may want to change your route because there's a jam in Luxembourg. Luxembourg may be a small country, but a jam is still a jam, and I don't want to sit in it. So that's a recommendation engine, and that would be something like a four: it gives us advice. When we're looking at automation, then, as we go higher up the automation levels, we're typically moving towards push-on-green, or however it's described in different organisations, which says that because the automation has passed, we'll promote the code to the next level. And sometimes we may want a human being involved in that decision to complement the automation. I've seen lots of arguments for yes, we should push on green, we should trust our tests, we should fully automate everything. And I've seen lots of counter-arguments to that. So I wouldn't say yes or no to what one should do.

You have to put your hand up so they can see you. Hi, do you have any advice on how to deal with long end-to-end tests that have several different steps in between? Long duration? Yes, in my experience they tend to be pretty brittle, especially over time. So I'm not sure if you have any advice on how we can deal with those.

So the question is, do I have any advice on these long-running, complex tests that might do something like log in, add something to the shopping basket, modify something, put it in the basket, come back, search for something else, and finally check out? Exactly, that sort of mess. But it's important for us to have these tests. Personally I would try to do them without doing it all through the GUI, because essentially we're testing a bunch of logic that lives behind the GUI; the GUI is a presentation of it. So try not to do everything through the user interface because, as you point out, it tends to be (a) brittle and (b) slow. And unless we're very good at writing software, and many of us aren't, the code we write isn't very good anyway.
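To sketch what "going behind the GUI" can look like: if the application exposes an HTTP API, or you can call the same services the GUI calls, most of the steps in that long journey can be exercised there, leaving only a thin slice for the browser. This is a minimal, hypothetical example in Java; the URL, endpoint and JSON payload are invented for illustration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CheckoutFlowApiTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Exercise the "add to basket" behaviour directly against the service behind the GUI.
        // The URL and payload are hypothetical - substitute whatever API your GUI actually calls.
        HttpRequest addItem = HttpRequest.newBuilder(URI.create("https://shop.example.test/api/basket/items"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"sku\":\"ABC-123\",\"quantity\":1}"))
                .build();

        HttpResponse<String> response = client.send(addItem, HttpResponse.BodyHandlers.ofString());

        // Check the behaviour, not just "did it crash".
        if (response.statusCode() != 201 || !response.body().contains("\"quantity\":1")) {
            throw new AssertionError("Add-to-basket failed: " + response.statusCode() + " " + response.body());
        }
    }
}
```

Then a single, short Selenium test can confirm the GUI presents that state correctly, rather than one giant journey doing everything end to end.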
So what tends to happen when everything does go through the GUI, and I've just seen this on a project a few months ago, is that because we're not sure how long things take, we add a 30-second wait: wait until this appears, or fail. Now, what actually happened is it failed after two seconds. But the test waits the full time just in case, because it's polling for an object that will never appear, because there's a big red error message saying, I'm broken, turn me off, that sort of thing. But the test doesn't look for that. And the trouble is, we put this into an automated build and every single test waits this interminably long time, and we're not quite sure what's happening because we can't see what's happening, and we just waste hours and days of our lives. So rather than testing that way, find ways to interact more positively with the system behind the GUI, and then still do some of the GUI tests, but don't focus on the end-to-end side. But we have one more here at least.

Okay, good afternoon. My question is related to the tripwires, the preconditions and the expectations. Can you throw more light on the tripwires, with a classic example that you might have used?

So, a tripwire: the concept is that when anything is detected by the tripwire, stop. It's a very brutal halt to things. And it would be saying, check for something on the system, and if that doesn't hold true, then simply stop everything. So in this case it may be, and I don't want to use the word smoke test, because I think it's a precursor to a smoke test, but making sure that the expected first page is there, and if it isn't there, don't run any more tests. Simply stop at that point and say, the system doesn't seem to be in a fit state for me to continue running these tests.

It could be applicable for every test that we... You need to talk into the mic. This could be applicable to every test that we automate as well, right? So it may not be specific to a suite; it could be specific to an individual test that we automate as well.

Yes, it depends on what's important to us. What I'm trying to encourage people to do is to consider what checks might be appropriate before rushing in and doing the rest of the tests. And by rushing, sometimes, like the chap was describing just now, it's a ten-minute rush, rushing like molasses, trying to check a bunch of stuff while meanwhile it's going further and further off course. So we sometimes want to make these preconditions, or whatever we want to call them, at the beginning, before we then move on and run the rest of the tests. And sometimes we try to do the opposite and have adaptive tests, where the tests adapt to the conditions. So rather than... Let's pretend we're testing an e-commerce application. Rather than asserting there are three items in the shopping cart, we check how many are in it now, say seven, we add an item, and then we make sure that the answer is seven plus one, which is eight. That's an adaptive test. And similarly, we can adapt to the user interface. We can say things like, if the user's allowed to move sections around, check that a section has moved relative to where it was before, rather than checking an absolute position. So I don't think there's one right answer; it's a case of considering it for each test.
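A minimal sketch of that adaptive shopping-cart check, again in Java with Selenium; the element IDs are invented, and the wait polls for the expected new count rather than sleeping for a fixed 30 seconds.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class AdaptiveCartCheck {
    /** Assert relative to whatever state we found, not a hard-coded "there must be 3 items". */
    public static void addOneItemAndCheck(WebDriver driver) {
        // Locators are illustrative, not from a real application.
        int before = Integer.parseInt(driver.findElement(By.id("cart-count")).getText().trim());

        driver.findElement(By.id("add-to-cart")).click();

        // Fail fast if the big red error message is showing, instead of polling for
        // something that will never appear.
        if (!driver.findElements(By.cssSelector(".error-banner")).isEmpty()) {
            throw new AssertionError("Error banner shown after adding to cart");
        }

        // Wait (up to 10 seconds) for the count to become before + 1, whatever "before" was.
        // Selenium 3 style constructor taking seconds.
        new WebDriverWait(driver, 10).until(
                ExpectedConditions.textToBe(By.id("cart-count"), String.valueOf(before + 1)));
    }
}
```

If the count never reaches before + 1, or the error banner shows up, the test fails quickly with a meaningful message instead of timing out in silence.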
Oh, sorry, you'll be next, hopefully. I do have a very basic question: do the Selenium locators play any role in making the tests reliable? And if yes, which ones would you recommend?

So I think the answer to the first question is yes, let's hope they make some difference, because if not, try writing the opposite: try picking the most perverse selector and see just how often it breaks for us. So, reversing the argument, of course the choice of selector will make a difference. In terms of the reliability of the test, typically you want a selector that is, sort of, welded to the object: it identifies the object uniquely, and the test interacts with the object using that. But again, sometimes it's important for the test to fail because, even though the object's there, there's another problem, and we sometimes want the test to check for those as well. So don't rely on that as your only guide to reliability. And there's a specific paper I referenced that's worth having a look at. And again, I think there's another talk here at the conference related to the effectiveness of selectors. We've got a question at the back. Sorry, excuse me, is it this lady here? I'm not sure.

Yeah, this question is regarding better object selection criteria. When I attended one of the code retreats, I had the thought that we could have a dedicated test attribute for the object, so that even if the ID or XPath gets changed, your automated tests remain reliable. So what's your suggestion about this? It would add additional effort for the developer, so should we suggest that our developers do it, and would it add some value to our automated tests?

So the question is, if we add a specific locator to... Yeah, a dedicated test attribute for the objects. A custom test attribute, you mean? Correct, something like that, yeah. So Andy and I were having this argument in the car from the airport this morning. So what I recommend you do is go to Andy's talk and then argue with both of us, because we disagree. I think they can be a burden, an overhead, and Andy thinks it's absolutely the right thing to do.

Why do you disagree on this point? Several reasons, I think, from my experience. And of course our experiences differ, but we do know each other from years gone by. Typically, when we're asking someone to add something specifically for testing, and here I'm not talking about the annotations we'd have in a unit test runner where it says @Test or whatever, I'm talking specifically about the user interface, it's yet another thing to remember to update. And with the perfect development team, they will update these, and everyone will work in confluence, and by confluence I mean they'll work concurrently with each other, and the world is wonderful. But life isn't always like that. So the chances are that these could be not added at all, they could be added late, someone could forget to change them when they change some other aspect of the user interface. The user interface could be changed by something seemingly innocuous, like the design team changing the CSS, and the attribute's no longer valid. So I'm always cautious about something that's there specifically for testing. If we look at mobile apps, the test frameworks on Android and on iOS both use the accessibility labels. I quite like the idea of using accessibility labels, because they're there for a specific purpose, to help improve accessibility for a human using, say, a screen reader, and they're therefore serving a useful purpose for end users as well as for testing. So that's my preferred approach. And I think Anthony wants to comment on this. Thank you. OK, even better.
I just wanted to ask you, because you said you'd disagree with it. That sounds like: because in some places it won't work, no one should do it. I don't think that's what you mean. So can you just elaborate on the context element of this? If it could work in your context, it could work for you, go for it? Or do you say, no, it doesn't matter what your context is, I completely disagree, you should never do anything just for this? I think there's a best practice here. No, I'm teasing you. Thank goodness for that. You can elaborate. Seriously, for a moment.

So thank you for asking me to clarify. I would prefer to be using a standard ID element and/or an accessibility element, because those have been around for a while, they serve a useful purpose, and I think they can help with the test automation. I wouldn't create a tag specifically for test automation of a GUI.

Ever? No matter what the context, you'd never do it, no matter what the context? Well, I think if we weren't looking at web testing, the web has got a fairly well... No, I mean with the team, because you said people, developers and testers. What if the developers and the testers are the same people, and they're all working together, and the testers can update the code and the developers can update the tests and everyone's happy? In that context, do you still disagree and say, no, you still shouldn't do anything just for testing in the code? I don't know, is the short answer. So I would lean towards using IDs or an existing element identifier. But you can prove me wrong, and we may write code together that uses tags before I die.

I suggest we call a halt, because I think this is the next session. I don't know, I'm not the timekeeper here. So if you want to ask your question, that's fine, this can be the last question. And before this, we have an announcement to make. Please be seated. And look at that, they're very responsive.

Yeah, sometimes we need to automate graphs and charts, something like pie charts and so on. What would be your recommendation? There are some options, but how would you recommend automating charts and graphs?

So, how would I want to automate graphs and charts, the checking of graphs and charts? Okay, what is it that you want to get from the automation? What answer do you want it to tell you?

Yeah, let's say you take some pie chart. There might be three or four sections: one section has, say, 30%, another has 40%. We need to check which portion has that 40%, which portion has 50%. If we want to go by the elements, we can't get the value, because only when you mouse over that particular place do we get the value. So if we want to get it from the elements, we may not be able to.

So, I'm not sure, is my answer. I don't know enough about how you draw your pie charts or your graphs. If the information is encoded in the elements, if the slices are drawn based on data that's provided to the user interface, it's easy to query that in the DOM and say, what's the value of this slice? Oh, it's 50, where 50 represents 50%. Ah, okay, my test can check that. But I believe they're not stored in the DOM, actually; maybe they're generated dynamically using jQuery or some other library. But if they're dynamically generated in the user interface, they're inherently in the DOM, unless they've disappeared. I guess you could generate an image and then delete the JavaScript that generated the graph. But if you're going to that level of sophistication, then you're way beyond me. So I think what would happen is the code would execute, and the values would still remain in the DOM. You can still query them if that's what you need to do.
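As a very rough sketch of that DOM-querying idea: this assumes the chart is rendered as SVG and that the charting library exposes each slice's label and value as data attributes. Both of those are assumptions about your particular chart, not a general rule; you'd need to inspect the generated markup to find the real hooks.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class PieChartCheck {
    /** Read a slice's value from the DOM, assuming the charting library exposes it as data attributes. */
    public static int sliceValue(WebDriver driver, String label) {
        // Hypothetical markup: <path class="pie-slice" data-label="Electronics" data-value="40"/>
        for (WebElement slice : driver.findElements(By.cssSelector("svg .pie-slice"))) {
            if (label.equals(slice.getAttribute("data-label"))) {
                return Integer.parseInt(slice.getAttribute("data-value"));
            }
        }
        throw new AssertionError("No slice labelled " + label);
    }
}
```

If the values genuinely aren't exposed anywhere in the markup, you're left with the visual approaches.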
Now, you can also check the visual representation of it. So you can actually capture the image of the slices and assess those; you can compare them against screenshots, use image detection. I mean, I think it can all get a little bit over-sophisticated. So I don't know enough about your situation to say exactly what I'd do. Okay, thank you. So thank you for the question. I think I'm done, by the way. Thanks. Thank you.