All right, last session of the day and of the conference — I'm so waiting for it to get over. This is an impromptu session. As I was saying earlier, we lost a speaker, so thanks to TV for jumping in and helping us run a design thinking session; let's give him a round of applause for that. I'm going to touch upon agile testing. I don't know what it means, but I'm going to try and talk about it. The reason I say I don't know what it means is that I think there is no novelty to it. There are some new elements we've been discovering of late, and we're trying to somehow put all of that under the name of agile, so we'll see how much justice it does.

One thing I've often found people are familiar with is what we call the software testing ice cream cone problem. How many people are familiar with this diagram? Any challenges with this approach? Looks good, right? Let me tell a quick story about another industry and see if there is something we could learn from it. How many people recognize the picture on the slide here? It's a power loom. Bangalore used to be one of the biggest power loom centres. I don't know how many people grew up in Bangalore — I did — and where I used to live, from morning till evening you had this tuk, tuk, tuk, tuk sound going on throughout the day. Those were the power looms. And even though I say morning to evening, there were a lot of gaps in between because of the lack of power in India.

India is actually the biggest consumer of silk. But do you know where the most silk is actually manufactured? China. China is the number one silk-manufacturing country, and most of it gets imported into India. This was not the case 20 years ago. Why has it changed so much? Let's talk about how power looms used to run. There used to be these buildings in which people would set up the looms in the morning. They would set up the threads and let them run through the day. Then in the evening, they would get a bunch of people to sit facing each other and check the cloth for defects. What kind of defects? A thread missing, an insect getting woven into the cloth as a new design, or what they used to call petas — a lump of silk stuck together — and that's not good. They would check for things like this, find the defective pieces, and remove them, and the rest would get shipped to a silk retailer or manufacturer, whoever.

Now, what did they find very quickly when they were doing this? Through the day the looms run, and at the end of the day a couple of people come in, check the cloth, segregate the defective pieces, and ship the good ones out. It's late in the day to find the defective pieces, it's time consuming, and there's a lot of wastage. What they found was about 30% defects in the cloth, while the profit margins were around 22 to 25%. That business is not going to scale or sustain, right? Which is why we now have software companies — software factories — in the same places where we used to have looms; history repeats itself. But these were smart business people, right?
Now, do you see any parallel between part one of the story and software? Developers build stuff, then you bring testers in at the end, and they test it. They find defects, and at least in software we're lucky — you don't have to throw it away. You can go fix it, whatever fixing means, and then deploy it. That's the parallel I'm trying to draw. So hang on with me for a couple more minutes.

What was part two of the story? What did these guys do next? The problem was that the defects ate into almost all their profits and they didn't have a viable business. So how do you increase quality? Instead of waiting till the end of the day, have someone at each loom watching as things go by. If a thread breaks, stop the loom, fix the thread, let it run. If an insect is somewhere near, move it away. If you see something getting introduced, catch it there — basically quality control. Have one person at each loom and make sure things are not slipping out, so you don't have to wait till the end of the day. Makes sense?

How did that go? That second model is agile as most teams practise it, and I think it is fundamentally flawed — it's kind of ironic that I make that statement, but it is. It's basically bringing testers in and making them part of your scrum teams, so they're there watching things as they go along. And surprise, surprise, I've not seen any company successful with that model, and the power looms also shut down for the very same reason. History repeats itself. So what was the problem with that model? Let's try to understand that before we jump to conclusions. Why is it not profitable? Operating cost went up — now you don't pay these guys a two-hour salary, you pay them an eight-hour salary. That's one part of the story. What else? It's manual labour, not automation. But more importantly, what the loom could produce in a day actually went down. If a loom could produce 100 metres of silk sari in a day, that went down to 80 metres, or even 75 metres in some cases. So now I'm producing only 75 and I'm paying these guys more money — net-net, what profit did I get at the end of all this? Plus I have to manage labour, which is not easy in India, as we all know. Someone doesn't show up — what do you do? Shut down the loom. Someone has to go for lunch — what do you do? Shut down the loom. So productivity went down massively, and they had the overhead of paying these people. And if someone wants to take a break, they're in a dilemma: let the loom run, or stop it? If I let it run, I still need checkers at the end of the day. If I stop it, I take a big productivity hit.

Now compare this to agile testing. What do we do? We bring testers into the scrum teams. They're there as developers produce stuff, they keep testing it — and can we ship at the end of it? Unfortunately we cannot, because we still have this pile of regression that we need to do, right?
So have we really benefited from this model? We have not. Neither did the looms, and that's how they shut down. But what were China and Japan doing differently, that they are the world leaders in manufacturing today? Anyone familiar with this company called Toyota? Do you know that the precursor to Toyota was a company called Toyoda? They were into loom manufacturing, and that's what funded Toyota as a car company. So pretty much everything you hear about Toyota and lean manufacturing actually comes from Toyoda, the loom-manufacturing company. What did they do? They looked at looms — this was around the Second World War, or in fact the First World War, when they had a shortage of people. They had to produce a lot of cloth, and they said, if we cannot deploy people, we need to solve this problem differently. So they analysed the biggest reasons for defects in the cloth, and the first reason they found was threads breaking because of the tension in the threads. That was the biggest reason defects were being introduced into the cloth.

So what did they do to solve the problem? Instead of throwing more people at it, they asked: can we mistake-proof the process? Can we mistake-proof the manufacturing so that defects are not put in in the first place, and we don't need inspectors hanging around trying to check things? That's where the whole statement comes from: inspection is considered wasteful. The simple thing they did was this: the threads come in at the bottom and go up into the loom, so they put small levers on each thread. If a thread breaks, the lever drops and stops the machine. It was a very trivial idea — it was patented by Toyoda — and that's how they stopped defects from getting in in the first place. They went on to innovate all kinds of interesting things, like dual threads, so that even if one breaks, the other quickly picks up and keeps running. They kept on innovating, and now, nonstop, without human intervention, you can produce almost 99.999% precision cloth at a very high pace.

Fast forward to software development today. How does that approach compare to what we are trying to do? One thing we can say: adding more people is not going to help. That's not the answer, and that's not how you want to approach testing. How do you approach it, then? By mistake-proofing the process — by building quality into the process rather than thinking of it after the fact. What kind of practices can help with that? Test-driven development. What else? Automating regression tests. Continuous integration is a way of ensuring that, at any point in time, nothing goes forward without being tested for; it doesn't just let something slip by. So there is this bunch of techniques people have been talking about for how you can achieve this. What I want to do in this session is go a little deeper into some of these topics, very quickly. I talk about inverting the test pyramid, and the challenge we have today is that the pyramid is basically unit tests, integration tests, end-to-end tests, and then a whole bunch of manual checking. The problems with that are the same as before: always playing catch-up, throwing more people at it, all of that.
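To make "mistake-proofing" concrete in code, here is a minimal sketch, assuming a TypeScript code base with Jasmine specs wired into a CI server (the function and its behaviour are invented purely for illustration): the spec is written before the implementation, and because CI refuses to promote a build with a red spec, a broken behaviour jams the pipeline the way Toyoda's lever jammed the loom.

```typescript
// Hypothetical example: the spec is written first and run on every check-in
// by CI, so a broken "thread" (behaviour) stops the pipeline instead of
// reaching a tester at the end of the release.

// Production code under test (written after the spec first went red).
export function splitBill(total: number, people: number): number {
  if (people <= 0) throw new Error("people must be positive");
  return Math.round((total / people) * 100) / 100; // round to 2 decimals
}

// Jasmine spec acting as the "lever": if this breaks, the build stops.
describe("splitBill", () => {
  it("splits a bill evenly to two decimal places", () => {
    expect(splitBill(100, 3)).toBe(33.33);
  });

  it("refuses a non-positive number of people", () => {
    expect(() => splitBill(100, 0)).toThrowError("people must be positive");
  });
});
```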
Now we're trying to turn that pyramid upside down and focus more on the unit tests. And by the way, the entire pyramid we're talking about is an automated pyramid — none of these are things someone is going to do manually. Just to keep building on the car analogy from earlier: if Toyota is building a car, let's take a small part of the car — a piston. To build a piston, you need a bunch of screws, a bunch of bolts, a bunch of other parts. What would they do before they even assemble it? They would take each of these and make sure it's according to specification, and that checking is automated — automated in many senses: by weight, by checking the thread, by checking the sound and vibration for cracks. So they do a whole bunch of checks at a very minute level. Unless each unit is working correctly, there is no way the whole assembly will work correctly together. Do we all believe that? The previous approach feels like Toyota just happened to assemble a car and then called someone saying, hey, can you test-drive it and make sure it works? No one's going to sit in that car. Instead, they want to make sure every little bolt that goes into the car is validated, is checked.

Then we talk about domain logic acceptance tests. What is a domain logic acceptance test? We talked about unit tests — in the software world, what would a unit test be? You look at every class, at its public methods or public API, and you ensure that that unit, in isolation, without any other dependency, does what it's supposed to do. The screw works exactly how it's supposed to work, no matter where you're going to fit it. When we move up, we're talking about domain logic acceptance tests. That screw is going to go into, say, a piston, and you want to make sure the piston functions correctly. So it's more from the user's perspective: what is a unit? It's not a feature — it's a much smaller part of a feature. A piston is not a feature of a car; they don't sell you a car by saying, I have so many pistons. You don't even care about the pistons. But a piston does a specific function for the car. It's an atomic unit which is more than a class — it's an interaction between a set of classes which gives you a basic unit of functionality. Some people call it a component, but I don't want to use that term because it almost takes you down a vertically sliced way of thinking; you'd think of a UI component. What we're talking about is an end-to-end piece, something that cuts across all of these. For example, take a simple piece of functionality like a service which checks if my server is up. Can I take that alone and put it somewhere? No, but that individual piece is still something whose correct functioning I want to validate.

Then we move up. The next level is the integration test. What does an integration test do? This is one of the things I find quite fascinating: we all seem to use the same terms and mean very different things. I'm going to take a very simple example and try to demonstrate the point. Let's say I have a calculator — everyone's seen a calculator before? — with some kind of a server component that talks to some kind of a database. And I'm going to press one plus two.
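As a rough illustration of those two lowest layers — a hedged sketch in TypeScript with Jasmine, where every class name is invented — the first spec checks a single class in complete isolation, and the second checks a small cluster of classes that together deliver one atomic piece of behaviour, like the "is my server up" check above, with the real network call stubbed out.

```typescript
// --- Unit test: one class, no dependencies, like checking a single screw. ---
class RetryPolicy {
  constructor(private maxAttempts: number) {}
  shouldRetry(attempt: number): boolean {
    return attempt < this.maxAttempts;
  }
}

describe("RetryPolicy (unit)", () => {
  it("allows retries only below the configured maximum", () => {
    const policy = new RetryPolicy(3);
    expect(policy.shouldRetry(2)).toBeTrue();
    expect(policy.shouldRetry(3)).toBeFalse();
  });
});

// --- Domain logic acceptance test: a few classes collaborating to deliver ---
// --- one unit of behaviour ("is the server up?"); real I/O is stubbed.    ---
interface Ping { ping(url: string): boolean; }

class HealthCheck {
  constructor(private pinger: Ping, private retries: RetryPolicy) {}
  isUp(url: string): boolean {
    for (let attempt = 0; this.retries.shouldRetry(attempt); attempt++) {
      if (this.pinger.ping(url)) return true;
    }
    return false;
  }
}

describe("HealthCheck (domain logic acceptance)", () => {
  it("reports the server as up if any attempt succeeds", () => {
    let attempts = 0;
    const flakyPing: Ping = { ping: () => ++attempts >= 2 }; // fails once, then succeeds
    const check = new HealthCheck(flakyPing, new RetryPolicy(3));
    expect(check.isUp("http://example.internal/health")).toBeTrue();
  });
});
```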
So you will see one plus two, and then I will hit equals. At that point, it's going to send a request to the server — something like calculatorService.add(1, 2) — and whatever response comes back, it's going to show it here. Trivial as that.

What do we mean by an integration test in this example? "Click one plus two, check if three comes back"? That's not an integration test. This is one module, and this is another module. Pressing one plus two and seeing three come back — I don't think that's an integration test. When I press one plus two and hit equals, whether this service is getting called with (1, 2) is a unit test. I can stub this thing out, check what request is getting sent out when I press things over here, and verify that it's happening — I don't need the server sitting around for that. So that's a unit test: I punch a bunch of numbers here and I make sure the appropriate request is being triggered. I don't even care if it actually went anywhere; I stub it right here, before it's even sent.

Now I can come over to this side and say, okay, I have this service with different methods, and it might need to go access the database and do some other things. Here I'm going to do what I call the domain logic acceptance test. The domain logic acceptance test makes sure my calculator service functions the way it should — it does the addition and subtraction — and that might go to, say, the database and come back. I can granularise that further: the portion doing the addition, subtraction, multiplication is a method in a class, and that I can unit test. So I've unit tested that here.

Now, why do I need an integration test? Is there any purpose it's serving? I want to make sure that this little piece over here — some kind of controller, something which actually talks to the server — works. I'm going to poke that guy, give it some input, and check if it gets back a valid response. I'll send something like add(1, 2) and see if it sends me back some number. I don't care if it's three. All I'm interested in is: can this guy talk to that guy? Are they configured correctly so they can actually talk to each other? I don't even care if it sends back three or five or seven, because that I could unit test over here.

So that's what I mean by an integration test. One rule of thumb when we write tests is that every test should have a single reason to fail — the single responsibility principle for tests. If I were to write a test here which went all the way through, came back, and validated that I got three back, there would now be two reasons for that test to fail: one, the server is down and I can't even talk to it; the other, the server sent me five back when I expected three. We don't want to mix things up, because then a single failure can mean multiple things. So an integration test, to me, tests the conversation between two boundaries. The same applies if this were calling out to some external service — say something at Amazon — and getting a response back.
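Here is one way the calculator example might look as code — a sketch under assumptions, in TypeScript with Jasmine; CalculatorUI, CalculatorClient, the HTTP endpoint and the localhost URL are all made up for illustration. The first spec stubs the client and only verifies that pressing 1 + 2 = triggers add(1, 2); the second only verifies that the client can talk to a running test server at all, deliberately not asserting that the answer is three.

```typescript
// Hypothetical wiring for the calculator example; all names are illustrative.
interface CalculatorClient {
  add(a: number, b: number): Promise<number>;
}

class CalculatorUI {
  display = "";
  constructor(private client: CalculatorClient) {}
  async pressEquals(a: number, b: number): Promise<void> {
    this.display = String(await this.client.add(a, b));
  }
}

class HttpCalculatorClient implements CalculatorClient {
  constructor(private baseUrl: string) {}
  async add(a: number, b: number): Promise<number> {
    const res = await fetch(`${this.baseUrl}/add?a=${a}&b=${b}`); // endpoint is assumed
    return Number(await res.text());
  }
}

// Unit test: stub the client; the only thing verified is that the UI sends
// the right request. The server does not need to exist.
describe("CalculatorUI (unit)", () => {
  it("sends add(1, 2) when the user presses 1 + 2 =", async () => {
    const client = jasmine.createSpyObj<CalculatorClient>("client", ["add"]);
    client.add.and.resolveTo(3);
    const ui = new CalculatorUI(client);

    await ui.pressEquals(1, 2);

    expect(client.add).toHaveBeenCalledOnceWith(1, 2);
  });
});

// Integration test: one reason to fail — "can this side talk to that side?"
// The arithmetic is not asserted here; it is already unit tested on the server.
describe("CalculatorClient against a running test server (integration)", () => {
  it("gets a numeric response for a well-formed add request", async () => {
    const client = new HttpCalculatorClient("http://localhost:8080"); // assumed test server
    const result = await client.add(1, 2);
    expect(typeof result).toBe("number");
  });
});
```

Keeping the integration spec down to "did I get some number back" is what preserves the single reason to fail: if it goes red, the only question to ask is whether the two sides can still talk.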
Or think of a payment gateway. I'm making a payment; I want to make sure I'm sending the right attributes to the payment gateway and that the gateway redirects me to the right page. That's all I care about. That's what I mean by integration testing. Clear? We move up.

What is a workflow test? There are two more layers on top of that. What I want to make sure here is that a sequence of steps works. Let's say I did one plus two equals and got the answer; then I pressed plus again and got some answer; then I did a subtraction. I'm doing a sequence of steps to achieve an objective the user wants, and a task from the user's point of view might require multiple steps. Think of the Flipkart or Amazon shopping cart: I go in, pick up a product, add it to my cart, maybe pick up another product, then say check out, review the details, punch in my credit card details, and it sends me a confirmation email, reduces the inventory in the system, and sends the shipping details to FedEx or whoever needs to ship it. Let's say that's a flow. Now there are multiple places where you're interacting with external systems: the payment system, the inventory system, the shipping system. I'm going to stub all of those out. I'm not going to talk to any of them; I'll assume they work correctly, because I've already done integration testing to make sure the requests I'm sending go across correctly — if I send this kind of request, do I get some response back? If I send a jumbled-up request, do I get the appropriate error message? Those things are already covered by my integration tests. So I stub them out, and I just make sure that if I added four products, I actually get billed for four products. It's the workflow I want to test, by stubbing out the external APIs. And I'm not going to do it through the UI, because this is a workflow I can exercise just at my API level. That's what we mean by workflow tests. Makes sense?

Then we move one layer up. The difference between those two layers is that here you're not going to stub things out; you actually go to the payment gateway, you actually go to the subsystems you deal with. And when you move to the last layer, the UI layer — it could be a graphical UI, a text UI, some other kind of UI, it doesn't matter — what you want to make sure of here is that the navigation from one thing to another is smooth. Everything else you've already unit tested, integration tested, and so on. Here you're only interested in whether the flow, from the user's perspective, actually works. And as you keep going up the layers, you see it's a pyramid: the numbers keep reducing. If I had a total of 100 tests in my code base, a good 70% of them would be unit tests. Moving up, 10% would be domain logic acceptance tests, then 9%, 6%, 4%, 1% as you go further up. Most of those top-of-the-pyramid checks can easily be unit tested instead. Why do I need piles of UI tests? I have not seen a situation that genuinely needs them till date — if you can show me one, I'd be happy. So forget that ancient definition of unit testing, right?
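A hedged sketch of such a workflow test, in the same assumed TypeScript/Jasmine style with invented names: the payment, inventory and shipping systems are all stubbed with spies, and the only thing the test asserts is that adding four products results in being billed for exactly those four products — no UI, no real external systems.

```typescript
// Hypothetical shopping-cart workflow; every external system is stubbed,
// because "can we talk to the real gateway?" was already answered by the
// integration tests.
interface PaymentGateway { charge(amountInPaise: number): void; }
interface Inventory { reserve(sku: string, qty: number): void; }
interface Shipping { dispatch(skus: string[]): void; }

class CheckoutService {
  private cart: { sku: string; priceInPaise: number }[] = [];
  constructor(
    private payments: PaymentGateway,
    private inventory: Inventory,
    private shipping: Shipping,
  ) {}

  addToCart(sku: string, priceInPaise: number): void {
    this.cart.push({ sku, priceInPaise });
  }

  checkout(): number {
    const total = this.cart.reduce((sum, item) => sum + item.priceInPaise, 0);
    this.payments.charge(total);
    this.cart.forEach((item) => this.inventory.reserve(item.sku, 1));
    this.shipping.dispatch(this.cart.map((item) => item.sku));
    return total;
  }
}

describe("checkout workflow (external systems stubbed)", () => {
  it("bills the user for exactly the products added to the cart", () => {
    const payments = jasmine.createSpyObj<PaymentGateway>("payments", ["charge"]);
    const inventory = jasmine.createSpyObj<Inventory>("inventory", ["reserve"]);
    const shipping = jasmine.createSpyObj<Shipping>("shipping", ["dispatch"]);
    const checkout = new CheckoutService(payments, inventory, shipping);

    checkout.addToCart("book-1", 29900);
    checkout.addToCart("book-2", 19900);
    checkout.addToCart("pen-1", 5000);
    checkout.addToCart("pen-2", 5000);

    expect(checkout.checkout()).toBe(59800);
    expect(payments.charge).toHaveBeenCalledOnceWith(59800);
    expect(shipping.dispatch).toHaveBeenCalledOnceWith(["book-1", "book-2", "pen-1", "pen-2"]);
  });
});
```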
Leave that behind. It's like those guys at the looms — they're gone, those factories have shut down. All of these layers are about ensuring you're building quality into the process; you're mistake-proofing things. It's not about coverage — we don't even care about coverage. If you do this, you automatically get very high coverage, but coverage by itself is just a nonsensical number; it means nothing. We're not doing this for coverage, we're doing it for fast feedback. I want to make sure that at every level we're getting the required feedback.

Pretty much 90% of your UI can be unit tested. I think it's a myth that you need things like Selenium to do UI testing; I don't think that's really true. In fact, we ran the Selenium conference last year, and I did a presentation on how, at one of the companies, we actually moved the majority of our tests away from Selenium, all the way down the pyramid. What used to take 16 hours, if I'm not wrong, is now down to 13 minutes — the exact same feedback, much, much faster. The navigational tests at the top do just that: make sure it navigates. But if there is specific JavaScript that you want to make sure executes correctly on all the different kinds of browsers, I can easily unit test that. I could use something like Sauce Labs or BrowserStack, where I could run a whole bunch of Jasmine or JsUnit tests — which are just like JUnit tests for JavaScript — across 20 different browsers, on different operating systems and versions. That's what the different layers of tests can do for you: you decide what feedback you want. Is it the JavaScript function? Is it the look and feel? You're producing a report and you want to verify its internationalisation? Each of those fits into one of these layers.

And the top layer is not about clicking on something and checking every bit of output. You want to make sure the flow, from the user's perspective, is happening. I'm not interested in checking whether this label is correct or that value is correct, because I've already tested that at lower layers — it's too late to get that feedback at this level. And typically things like that don't break: once I've put the label there, how can it magically disappear? The problems typically come when you have logic that you keep mutating, and as you mutate it you want to make sure it continues to satisfy all the conditions it satisfied before, plus the new one. Labels are not going to suddenly change or disappear. And if that is a concern, you'd write more unit tests around it.

I know people are calling time out, but this is essentially what I think the different layers are. If you start building these as you build your software — and all of this is automated — you are really bringing testing into the process. You're not leaving it till the end. You're putting those levers wherever they need to be, so that you're mistake-proofing your process. Now your testers don't have to waste their lives clicking buttons here and there.
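For the cross-browser point, a sketch of what such a JavaScript-level unit test could look like (the function and values are invented; the Karma, BrowserStack or Sauce Labs wiring that would fan this spec out to many browsers is assumed and not shown):

```typescript
// A pure front-end function (hypothetical) whose behaviour we care about
// across browsers — formatting an amount for display in the UI.
export function formatRupees(paise: number): string {
  return `₹${(paise / 100).toFixed(2)}`;
}

// A plain Jasmine spec. A runner pointed at a browser grid can execute
// exactly this spec on 20 different browser/OS combinations — no Selenium
// script driving the whole application is needed for this feedback.
describe("formatRupees", () => {
  it("formats whole rupees with two decimal places", () => {
    expect(formatRupees(5000)).toBe("₹50.00");
  });

  it("keeps paise in the formatted amount", () => {
    expect(formatRupees(12345)).toBe("₹123.45");
  });
});
```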
They can focus on more interesting things like exploratory testing — the stuff we might not think of up front. Once you look at something, you start poking around it and asking: what happens if this happens? What happens if someone logs in, selects a bunch of products, shuts down the browser by accident, then brings the browser back up and somehow goes back to the URL? What should happen? Those are the kinds of things testers can now actually start asking and poking at. And if you find it's a valid scenario, you capture it at one of these levels and make sure it's in the system.

So one of the things we're trying to do is change the identity, the role, of the tester. The role of the tester is not to find defects — that's gone, that's dead. The role of the tester is to stop defects from getting in in the first place. That, if you will, is what I would call agile testing. Developers and testers sit down together and flesh out the different scenarios. They'd probably start with the workflow tests and keep drilling down, and when it gets to the unit tests, the developers go off and start working on them while the testers cover other scenarios. And it's all automated before the developer checks in code. All of this is checked into version control; you run everything, and it should all be green. Only then can the developers say "we are good" — we are good, not I am good. Correct? You have to change that structure. That's when you really see the benefit. Otherwise you're doing waterfall in the name of agile — remember the loom story; that's what you'll keep doing. And then remind yourself of what the Japanese and the Chinese are doing.

So it was a short session, but I think you get the point, and I'm going to wrap up now because we need to open up the hall for the fishbowl. I'll be around if you have any more questions. Thank you very much.