Live from Houston, Texas. Extracting the signal from the noise. It's theCUBE covering Grace Hopper Celebration of Women in Computing. Now your host, Jeff Frick. Hi, welcome back everybody. Jeff Frick here with theCUBE. We are live on the ground at the Grace Hopper Celebration of Women in Computing in Houston, Texas. We just found out we'll be back in Houston next year, so we're excited to be here. And our next guest is Dot Graham, software test consultant. Welcome, Dot. Thank you very much, Jeff. So you said you tried to retire, but everybody keeps inviting you back to speak. You've written four books on software testing, so the folks here at Anita Borg invited you out. So what was your talk on? Well, first of all, I'd like to say it was a great privilege to be invited to speak at this conference. In fact, I didn't know anything about it until I got the invitation. But as soon as I found out about it, I thought, yes, I've got to come here. Well, let's jump into that before we get into your talk. Having never been to Grace Hopper before, you know, wow, once you come here, there are a lot of people, and once you're in the network, you're like, wow, this is terrific. But a lot of people still don't know about it. So what's your impression, being here at your first Grace Hopper? Well, I'm just bowled over by the size of it and the energy levels that are here. I think it's an absolutely wonderful initiative, and it's really astonishing to see how it's grown. Not living in the States is probably one reason why I hadn't heard of it. But I'm certainly impressed with it, that's for sure. Yeah, I think when we stopped by last year it was 8,000, and it's 12,000 this year, and my name badge says 13,000 on it. So it's pretty amazing. And the energy and the vibe is like nothing else I've ever been to. I've never been to a conference this big before. Yeah, we've been to a few of them.
They can have 20,000, they get crazy big. So anyway, let's talk about software testing. We're in a software-defined world now. Everything is about software. All these companies here, many of which are not what you would think of as software companies, all have huge software development shops, which some people don't know. But test and dev obviously is a critical component. We want the stuff to work. Back in the old days, you used to do an MRD, a market requirements document, then a product requirements document, and spec the thing out, build it for a year, test it for six months, and roll it out. Today, it's an agile world. People are trying to push out fresh code many times a day if they can. So how does that impact test and dev? Well, the testing has to change, because the testing certainly isn't the same as it was back in the old waterfall days, but it's still critically important that testing is done along with development in the agile world. And people work together; sometimes agile teams will include people who have a particular talent for testing, and getting a tester and a developer to pair together during both development and testing can be very useful. But certainly testing skills are needed just as much as ever, no matter how you're developing things. So let's dig into your talk a little bit. You were telling me offline about intelligent mistakes. That's right. What does that mean? Well, I was talking specifically about software test automation. Test automation is the use of software tools to help run the execution of the tests. And I'm focused particularly at system level, although in agile teams we should be doing testing at unit level as well. But a lot of people struggle a lot with test automation at system level. So my talk was on intelligent mistakes in test automation. And I actually had five of them.
And it's a bit of an oxymoron, an intelligent mistake. It's something that seemed a good idea at the time, but which has some problems with it. And the first one was the fact that you might expect automated tests to find lots of bugs, but they don't. Because what we're automating is usually regression tests, and regression tests are tests that have been run before. We want to make sure that things haven't slipped backwards, and that a new change in one place hasn't broken something somewhere else. So they're very unlikely to find new bugs. One of the stories I told in my talk was... Because it's measuring the delta. So if there was a bug in the first run and there's a bug in the second, they're not going to catch it? If you make a change here, you might end up affecting something over there, and you want to run the tests for that other area to make sure it's still working. That's regression testing. So those tests are not very likely to find bugs. At one of the conferences I went to, I sat next to a guy who turned out to be a very high-level manager. And he was telling me that he had a team doing automated regression testing. They were running tests every night and every weekend, and it sounded to me like he was doing quite well. But he said to me, I'm thinking of knocking it on the head. Why, I thought, it sounds like you're getting benefits from that. Well, it's not finding many bugs, he said. But that's not the point of automation. Automation can help you run more tests, run the tests more quickly, cover more of the system. But an automated test is not the same as a manual test. Another misconception that's common out there is that if you get one of these tools to help you automate the testing, then you can get rid of the testers. But you can't, because these automated tools don't do testing. Testing is an intellectual activity.
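The regression idea Dot describes can be sketched in a few lines of Python; the function under test and the cases are hypothetical illustrations, not from the talk:

```python
# A regression suite in Dot's sense: previously passing checks, re-run
# after every change to confirm nothing has slipped backwards. The
# function under test and the cases are hypothetical.

def apply_discount(price, percent):
    """Business logic under test (hypothetical)."""
    return round(price * (1 - percent / 100), 2)

def test_discount_regression():
    # These cases all passed before; re-running them checks that a
    # change elsewhere hasn't broken this behaviour. By construction,
    # they rarely find *new* bugs, which is Dot's first point.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(19.99, 0) == 19.99
    assert apply_discount(50.0, 100) == 0.0

test_discount_regression()
print("regression suite: green")
```

The suite is valuable for confirming that nothing slipped backwards, but none of these assertions will ever surprise you with a brand-new bug.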
When you're testing manually, you might be going along and thinking, I wonder what happens if this, or I wonder if anyone's thought about that, or I wonder this, I wonder that. No testing tool has ever said to itself, I wonder. Right, right. The testing tools don't do testing, they just run stuff. That's very useful, because they can run the stuff that's boring and repetitive for people to do, but it doesn't replace testers, it supports testers. So how do testers use the automated tools to add scale to their process, as the amount of software is growing and growing? Yes, and I mean the tests that are mundane and repetitive, for example the regression tests, if you automate those, you then free up the testers from having to do those manually and give them the scope to do much more imaginative testing, more exploratory testing and so on. Okay, so I think we got through three; what are the other two? Oh, let's see, the first one was that automated testing doesn't find many bugs; the second one was that automated testing doesn't come out of a box, or nowadays a download of course, but what you get is just an engine, it's not a whole car. So you need to build around that; you need a well-defined test automation architecture in order to have a long life for the automation and have it widely used. My third point was that you shouldn't just try to automate all of your manual tests. Some tests should not be automated; they should stay as manual ones. For example, do these colors look nice? Or tests that take too long to automate. Or even something like, you know, when you go on a website and you have the wavy lines and you have to type in what it is, to prove that you're not a machine. The CAPTCHA, everybody's favorite. Sometimes they're too smart, right? You can't read the thing, it's crazy. Well, that's right, yeah.
But the point is, if you could get your tool to do that, it shouldn't be possible, because the whole point of it is to prove it's not a computer. So that shouldn't be automated. It shouldn't be possible to automate, or your software's broken. Now, I know there are ways to get around it. It's interesting, the case where it takes too long or it's too difficult to build the automation. So that's really a value assessment in terms of your resources. That's right. I mean, some tests are fairly easy to automate, and if they're going to give you value, then they should be automated. But you shouldn't just automate tests that are easy, and on the other hand, you shouldn't just automate tests that are difficult. It's about getting the balance right. All right, so that was four, or did we get to five? The fourth one was that return on investment for automation can be dangerous. I mean, sometimes... Return on investment for automation can be dangerous? That's right, proving that you've got return on investment for the automation. Oh, proving you can get a return, okay. Yeah, I mean, if people are looking to invest in automation for the first time, sometimes they have to make a business case. And in some cases it's, okay, what's your return on investment going to be? The reason why it can be dangerous is that the easiest way to show return on investment is to base it only on people's time. But that implies that that's the only thing that's important, and it implies that the tools replace the people. So that's why it can be dangerous. But I would imagine better testing always has ROI, right? The question is whether automated testing has... That's right, I'm talking about automated testing. Yeah, and of course everything you do should give you value for what you're doing. But whether you actually have a specific business case for it and quantify that, that's what I'm saying can be dangerous. Okay. Automation is an enabler for success.
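To see why a time-only business case is seductive, here is the naive arithmetic in a few lines of Python. Every number is a hypothetical illustration, not from the interview:

```python
# Naive ROI calculation based only on people's time, the kind Dot warns
# can be dangerous. All figures are hypothetical illustrations.

manual_hours_per_run = 40              # tester time to run the suite by hand
runs_per_year = 50
automation_build_hours = 600           # one-off cost to automate the suite
automation_maintain_hours_per_year = 200

manual_cost = manual_hours_per_run * runs_per_year       # 2000 hours/year
automated_cost = automation_build_hours + automation_maintain_hours_per_year

saved_first_year = manual_cost - automated_cost          # 1200 hours
print(f"first-year saving: {saved_first_year} hours")

# The danger: this sum treats tester time as the only benefit, which
# quietly implies the tool replaces the testers. It ignores the real
# benefits (more runs, faster feedback, wider coverage) and real costs
# (building the testware architecture, debugging flaky tests).
```

The arithmetic looks persuasive precisely because it reduces everything to headcount hours, which is the framing Dot says is dangerous.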
And then what's number five? Number five was, and this one I'm a little bit swimming against the tide with. Okay. Because... That's what we like on theCUBE. We like people pushing against the tide a little bit. Yeah, my fifth point was that it's not necessarily the case that the testing tools are tools for testers. In other words, it shouldn't be the testers who are automatically assumed to become the test automators. Testing skills are different to automation skills. If you're going to be working directly with these tools, they use scripting languages, which are programming languages, and those require programming skills. Not all testers, especially if they've come from a business background, have those skills or would want to acquire them, or if they did, would be very good at it. Right. Hans Buwalda says you might lose a good tester and gain a poor programmer. So, I mean, nowadays the fashion is that if you're going to be a tester, you have to have programming skills. And based on the keynote I heard yesterday about getting everyone to do coding in school at a very young age, maybe in 15 or 20 years this won't be an issue anymore. Right. But at the moment, I think it's really unfair that testers who've been in the industry for a while and have a really good understanding of the business are being told, you're not worth anything anymore because you can't write code. Right. And they don't want to write code. They wouldn't be good at writing code, but they're great testers. Right. They know the weaknesses of the system. They know exactly where to probe it. We're throwing out something of great value if we imply that all testers have to become programmers. I thought you were going to say you shouldn't have them do it as kind of a separation of church and state type of issue, where you want a different mechanism, a different skill set, a different methodology, to counteract what they're doing. Thanks for that, actually. It's different roles.
And I have no objection to people who want to do both roles. I think if people want to do both testing and development, that's fine. What I object to is pushing all testers into being developers, even if they don't want to be. And then what about the whole DevOps movement, and how is that impacting this? It's no longer the developers making the software, getting it tested, and then kind of throwing it over the wall to the ops people. Now it's really more of an integrated process. So if it's more of an integrated process, with these faster code pushes in this agile world, how does that impact the tester role? Yeah, I mean, I think it's great the way software is being developed now. It's so responsive to users' needs and so on. And this old over-the-wall idea that we had 20, 30 years ago really didn't work very well. I mean, the idea of independence was having a different set of eyes, because you can see things that other people don't. But you can get independence just by having two people working together, doing pair work or pair programming. So I think the move towards agile has been really good. In fact, my very first published paper, back in the 90s, was about incremental development, based on some of the work of Tom Gilb, who's known to some people as the grandfather of agile. And he was pushing this back in the days when it was really unfashionable. Right, right. So you do see the role of testers being just as important, regardless of the amount of automation. You know, it's a conversation we have often, about the role of technology and what people are going to do when machine learning takes over. But really, having the context, having the knowledge, having the history with the application, that defines a lot of the subtle things that the automation is just not going to get. That's right. Automation is extremely useful and very beneficial, but it doesn't replace manual testing, and you shouldn't ever completely trust the automation either.
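One way automation earns that distrust can be sketched as a "zombie test", a check that stays green even when its setup fails. The names and the fake connection below are hypothetical illustrations:

```python
# Sketch of a "zombie test": it stays green even when its setup fails,
# so it verifies nothing. The fake connection is a hypothetical stand-in.

def connect(network_up):
    """Pretend connection attempt; returns None on failure."""
    return {"status": "ok"} if network_up else None

def zombie_test(network_up):
    # BAD: only checks the response when one arrives; a failed
    # connection is silently skipped and the test still "passes".
    response = connect(network_up)
    if response is not None:
        assert response["status"] == "ok"
    return "green"

def honest_test(network_up):
    # BETTER: a failed connection is itself a test failure.
    response = connect(network_up)
    assert response is not None, "no connection: failing loudly"
    assert response["status"] == "ok"
    return "green"

print(zombie_test(network_up=False))   # stays green despite no connection
```

The honest version turns a failed connection into a loud failure instead of a silent pass.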
There are a number of examples in the most recent book that I've written, called Experiences of Test Automation. There are several stories in there of people who thought the automation was doing something and then found out it wasn't. One of the contributors calls them zombie tests. For example, in one of them, the tests were supposed to connect to the network, but if the network connection wasn't working, instead of the test saying something's gone wrong here, I'd better fail, it just stayed green, because it couldn't see that there was a false result. Because it never connected in the first place. Yeah. So looking forward in testing, what's coming up next? What do you see as the evolution of that science? I think building testability in is very important, to make testing easier, particularly automated testing. I certainly see a lot more test automation in the future, but I also hope that we're going to see a lot more creative and imaginative testers, and recognition of what testers bring to the table. Awesome. Well, Dot, thanks for stopping by. I'm glad to hear you're enjoying your first ever Grace Hopper. Hopefully we'll see you again in 2016. Thank you very much. Well, thank you very much for the interview. Pleasure. So I'm Jeff Frick. We are live in Houston, Texas at the Grace Hopper Celebration of Women in Computing. You're watching theCUBE and we'll be back after this short break.