your presence here. You know what I mean, right? For those who love cricket, it's a lot of sacrifice to be here. So, my name is Sachin. I work for a company called IDeaS, and IDeaS basically provides solutions for the hospitality and car park industries. We help them decide the right prices for their products in order to maximize revenue. What you see on the screen is the overall distribution of our clients. They are spread all across the world, and more than 7,000 clients are using our products. We have been around for more than 25 years, and we are considered a leader in this domain. More than 100 big hotel chains are using our products right now, and these are some of the prominent names among those chains.

Coming back to today's topic, I have Naresh Jain with me; he is going to help me with this presentation. He has been consulting with our organization for more than a year now, and he has been a big catalyst in transforming our development and testing practices.

So, the agenda for today's discussion. The title of the session is "Death of Inspection". Sounds very scary, right? First we will spend some time understanding what inspection is; I am sure many of you will relate when we describe that term and when we describe the death of inspection. We will see what kind of challenges inspection brings to our development and to the faster deliveries which are the need of the hour. Then, once we realized we needed to change, what we tried in order to come out of these challenges: initially we took a somewhat wrong path, but then we course-corrected once we realized this was not the right way to solve the inspection problem. So we will talk a little about what we tried, and of course we will share our experiences across this journey. The journey started a couple of years back. I would not claim that we have reached the destination; we are still in transition. But during this journey we have had a lot of learnings, and we thought it is a good time to share what we have learnt so far with a larger audience, so that many more people who might be in similar situations can benefit from it.

So, let us first start with what inspection is. Naresh, would you like to take it up?

Cool. All right, thanks Sachin. I am sure people are familiar with the term inspection, especially people who come from a lean background; you might have heard "cease inspection" or phrases like that. So we want to quickly jump in and talk about inspection. One of the big challenges we have seen in the software industry is that people use the term inspection interchangeably with testing. Is that a fair statement to make? Right. So, what we want to clarify today is that inspection is not testing; there is a big distinction between the two. Then what is testing, actually? If we look at the actual definition, testing is not about having a set of things in your mind and going and validating whether those things work. Testing is an art of exploring, of understanding, of poking at things and seeing how they react. It is exploratory in nature. It is unlike checking.
Checking is where you have a set of predefined expectations and you verify whether your software meets those requirements, or checks, that you have put together. So checking is quite different from testing. If you want to read more on this topic, Michael Bolton is one of the leading gurus in this space; he has written a very interesting article, and a bunch of other people have caught on to this distinction between checking and testing.

To take another example: everyone is using smartphones these days. We download an app on our smartphone, and what do we do? Do we go and read a manual? Rarely. Probably never. What we do is poke around. We try different things to see what the interactions are: what happens when I do this, when I slide, when I do that. There are some conventions that have come in, so we follow those; there are some non-conventional things we try too. What you are really doing is exploring the application. You can think of that as the thought process behind the testing mindset. Checking, on the other hand, is what we typically see in the software world: a set of requirements exists, and you are validating against those requirements whether your software does what is written there.

So, the first thing we want to say: you might have heard this a lot in the agile community. No manual testing. Automate everything. What do they actually mean? I think we just heard Jeff Patton talk about don't fool yourself, right? You can't fool yourself. You can't automate testing. What you can automate is checking. If I know what I expect, I can automate it. Now, I'll refine that statement: yes, you can automate checking 100%, but there are some portions of testing that you can also automate, using property-based testing, A/B testing, techniques like that (a small sketch of the property-based style follows below). But those are not what we generally refer to as testing, so I'm going to leave them aside. Mostly, when we say everything should be automated, we mean all checks should be automated, because checks are supposed to be automated. Using humans for checking, for inspection, is wasteful; it is a crime, in my opinion. Yeah.

So, I believe we were in that situation, where whatever we called regression testing was basically human checking. And there were thousands of checks? Oh, yeah. It is continuous checking, you're right. Is that the right term, or has it been confused with continuous inspection? No. Look, checking can be automated, right? We're not creating a new term; we're just saying that inspection, human checking, is wasteful. We're just trying to distinguish between testing and checking: let's not use the word testing when we actually mean checking. A lot of what happens in software is unfortunately only checking. Checking is not bad, but it is limited to only checking; we're not really doing testing testing. And testing testing is very important, which is what we're trying to highlight. If you are going to spend all your time as a tester doing manual checks, or even automated checks, where do you have time for actually doing exploratory tests, or the other kinds of interesting work that might help you understand usage patterns, usability aspects, and so on? That's where we're going with this. Okay.
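To make that distinction concrete, here is a minimal sketch of a property-based check, assuming the jqwik library for Java; `PriceRounder` is a hypothetical stand-in for real pricing logic, not code from the actual product. Rather than one fixed input and expected output, the tool generates many random inputs and verifies that an invariant holds for all of them:

```java
import net.jqwik.api.ForAll;
import net.jqwik.api.Property;
import net.jqwik.api.constraints.DoubleRange;

// Sketch of property-based checking with jqwik.
// PriceRounder is a hypothetical stand-in for real pricing logic.
class PriceRounderProperties {

    static class PriceRounder {
        // Round a rate to two decimal places, never below zero.
        static double round(double rate) {
            return Math.max(0.0, Math.round(rate * 100.0) / 100.0);
        }
    }

    @Property
    boolean roundedPriceIsNeverNegative(
            @ForAll @DoubleRange(min = -1000.0, max = 10000.0) double rate) {
        // Invariant: no generated rate may round to a negative price.
        return PriceRounder.round(rate) >= 0.0;
    }

    @Property
    boolean roundingIsIdempotent(
            @ForAll @DoubleRange(min = 0.0, max = 10000.0) double rate) {
        // Invariant: rounding an already-rounded price is a no-op.
        double once = PriceRounder.round(rate);
        return PriceRounder.round(once) == once;
    }
}
```

The point is that you state a property of the behaviour instead of a single expected value, which automates a slice of the exploration a human would otherwise do by trying many inputs by hand.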
So, basically, checking should be automated. That's one thing. And humans should not be deployed for checking, because they will either do a bit extra or do a bit less, and more often they do less. That's why it should be automated. And there are a lot of other attributes to inspection; we will look at those.

So, what do we mean by inspection? Typically, as we already discussed, inspection is prescribed: you have prescribed specifications, and people sit around the specifications and derive their test scenarios, positive scenarios, negative scenarios, et cetera. They want traceability for all the checks they run against a given requirement. But one aspect is typically lost here: those who write the checks derived from the specs are not worried about the architecture of the application, the internal wiring, the coding aspect, the way the application is designed. And the internal design might land you in many other issues; how are you going to catch those? That aspect is totally ignored in this kind of prescriptive check design. So that's one attribute of inspection.

Since it's done manually, there is a lot of repetition. Every time you find an issue or write a new feature, there is an impact on the existing features, so the same testing happens again and again. It's repetitive, and it's mistake-prone, because it is done manually.

Okay, many of us are familiar with this, I guess. Typically there are departments, right? You have a testing department, you have development departments, and the role of the tester is finding defects. That's how testers are groomed: if they don't find defects, they are not good testers. So all the time they are hunting for defects, which is good in a way, but when they find a defect is also important. It's not just finding the defect; it's when they find it. In the inspection mindset, it's considered a feather in the tester's cap when he finds a defect; he has won the battle. But you're not creating a harmonious culture, because when you find a defect, it's a reward for one person and a black mark for the person who developed it. So there is a lot of departmentalization happening in that inspection culture. Testers enjoy finding defects, and developers tend to believe it's not their responsibility to build quality software, because someone in another department is there whose role is to find the defects. That's another attribute of the inspection mindset.

And typically, in this mentality, quality is always somewhere in the future, right? I'm developing the code, someone else will test it afterwards, he will find the defects, and I will fix them then. So quality is in the future, which is not a good thing. Quality should be in the present. If quality is in the present, then the future will automatically have quality, right?
In fact, there are studies that show that at traffic lights where cops are deployed, more accidents happen than where cops are not deployed. If the cop is there, you think the cop is ensuring no one jumps the lights, and you delegate being watchful to someone else; you try to just go, and you get hit by surprise. You see something similar in software, where people say, well, testing or checking is not, quote unquote, my responsibility, someone else is doing that; so let me just drive as fast as I can, leaving that to someone else. That defers the whole quality-thinking thought process in the developer, and then you get the other kinds of problems that Sachin is talking about.

Yeah. So, this is where we were, maybe in 2010, and this was our typical release cycle at the time. We used to release every quarter. Out of the three months, two months were dedicated to developing the new features. There were weekly releases to QA, and QA would get a partially or fully developed feature every week, which they would start testing. During those two months they focused mainly on new features. After two months we would call a code freeze: no new code added unless a defect was found, because all new feature development was done. Then an entire month was dedicated just to regressing the application, finding the impact of the changes and defect fixes on the rest of it. Just to give an idea, our application had 100-plus data-centric screens, so we needed to do a lot of impact and regression testing around those screens.

What they're building is a very intense analytics product, which relies on a huge amount of data, crunching through it and generating decisions about when you should sell the hotel, at what price, and to whom. Pretty intense analytics going on there. And one month would actually be short, in my opinion, to do all that regression testing. So, does this sound familiar? Has anyone seen something like this at some point in their career? We're not talking about something alien here, right? By the way, this was 2010; we're sharing our journey, and it's taken maybe six years to go through it. We're going to move through it quickly.

Yeah. So obviously we were not in a very good position, and one month of regression was a real bottleneck: if you want to release frequently, that's the critical path you need to address. So a challenge was posed to the testers: how can you reduce this regression period? Now, testers have the checks, right? Manual checks, which they have written over the years to validate the requirements, and they mainly test through the UI. So when they start thinking about automation, where do they land? What we did was start evaluating the tools available in the market that help you automate tests through recording: you record the user actions on the UI, and then you start simulating your checks and running them. That's the first thing that comes to a tester's mind.
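For readers who have not seen one, here is a hedged sketch of what such a recorded UI check typically ends up looking like, written as Selenium-style Java; the URL, element IDs, and expected value are hypothetical. Notice how navigation, business logic, and data validation are all baked into one monolithic script:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Sketch of a typical record-and-playback UI check (hypothetical IDs/URL).
// Navigation, business logic, and data validation are all exercised
// through the screen, so a change at any layer can break the script.
public class RecordedRegressionCheck {
    public static void main(String[] args) throws Exception {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://app.example.com/login");
            driver.findElement(By.id("username")).sendKeys("analyst");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("loginButton")).click();

            // Navigate to the pricing screen and trigger a recalculation.
            driver.findElement(By.id("menu-pricing")).click();
            driver.findElement(By.id("recalculate")).click();

            // Crude fixed wait for slow backend processing,
            // typical of recorded scripts.
            Thread.sleep(5000);

            // Validate a business rule through the rendered screen text.
            String price = driver.findElement(By.id("suggestedRate")).getText();
            if (!price.equals("120.50")) {
                throw new AssertionError("Expected 120.50 but saw " + price);
            }
        } finally {
            driver.quit();
        }
    }
}
```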
So we identified the tools that suited our technology and our budget, and we started automating our test cases. As I said, we have 100-plus data-centric screens. We prioritized them by how critical they are to the client, and we started attacking the most critical areas first, listing the screens that mattered most and automating the regression tests around them. It took two years, with two full-time automation script developers: testers with automation skills who knew how to use those tools. They did this automation full time; we couldn't put the entire team on it, because in parallel we were testing the new features and new releases, right? After two years, what we had automated was around 40% of the entire regression suite. So it's pretty time-consuming to automate through UI automation.

And we'll see. How many of you believe this is the best automation approach? Nobody? Okay. How many people have seen this in their company? Let's ask that; then the hands go up. It's not that we believe this is the right approach, but we see it quite often, right? It's the standard approach that a lot of companies use.

So that's what we started doing, and we thought, yeah... you have a question? Okay. So that's what we started doing, and we thought we were on the right track. But soon we realized this is not the best way to automate your regression tests, because there were other problems. What were they? First, automation is always catching up. When you are developing a new feature, by the time the feature is ready for release your automation suite is not ready, because you can't record against screens that are still under development. So even though your feature is live in production, its regression is not automated; the automation testers are always catching up.

The other problem with this approach: it's heavy to maintain. We found that almost 20 to 30% of our scripts on already-released screens were always under maintenance. The reason is that when you automate through such tools, you're automating through the UI. You're validating UI behaviour, navigation, and all the UI components of your application; you are validating the business logic in the same place; you are validating the database algorithms in the same place. Everything is embedded in that single monolithic UI regression test. And your application changes at various layers, right? So any change anywhere in the application, whether from developing a new feature or fixing an existing issue, tends to break your UI tests. They're heavy to maintain.

And here is one example of a module where we had 80 end-to-end scenarios, automated using this UI approach. Each scenario took five minutes to execute, because there is a lot of backend processing going on in our application: executing the steps of the test, waiting for the backend process to complete, and then making validations on the screen. It was pretty long. Running all 80 scenarios took 400 minutes, close to seven hours. So they are pretty slow.
All regression tests driven through the UI are very slow; that was another issue we faced with this approach.

So, this is a 25-year-old company, and this particular product was started in 2002. Everything was manual before; we were building automation starting 2010, 2012 actually. So this was being built after the fact: we were taking our manual checks and converting them into automated ones. Now, could we run them in parallel? No, because we have certain dependencies on analytics components, and those can run only one at a time. If you were to buy another license or deploy another instance, maybe you could run in parallel, but that would be extremely expensive. So sometimes running in parallel is not an easy option. And even if you went down that path, I don't think it would help you much; that's the point we're trying to make. That's going down a wrong path completely. I was part of such a team; I spent two years working on that product, the Gmail product, so I can talk about it in a lot more detail. But let's continue and talk about the issues with this approach as we go.

Yeah, and the other aspect of this particular suite was that there is a lot of dependency between these end-to-end scenarios: the postcondition of scenario one is a precondition of scenario two. Again, there is a solution: create that many databases and run them in parallel. But that makes it very heavyweight; you need an array of databases just to execute the suite in parallel. So they are pretty slow. That was the problem.

And this is where we landed after two years. We had some test coverage at the unit level, which was minimal; some coverage at integration; 40% of our UI automated using end-to-end UI tests; and still a huge cloud of manual regression, more than 45%, because, as I mentioned, many of those end-to-end UI tests are always under maintenance.

So what was the result? This is how testers would look. And to some extent developers too, because testers keep finding issues late in the game, which developers then have to fix, and everyone has to wait until everything goes green. Testers were really getting burnt out in this scenario. And many issues skipped through to production. This graph is taken from a few months during that period: we were always finding some issues in production. Once you find an issue in production, you have to do something about it, so there is additional cost involved. We all know, those of us in development and testing, that the cost of fixing issues grows exponentially the later they are found. We were spending a lot of money delivering patches for the issues that were found.

So this is where we were after 2012. We had come out of the inspection mindset and started automating using UI automation tools. But even after spending two years on that, we were still not really happy. We started sensing that something was wrong: we couldn't continue with the same approach and reach a stage where we could deliver fast. Even with the UI automation, our regression time hadn't really shrunk drastically. It came down from one month to maybe three weeks, but that's not the big win we wanted from UI automation.
So we started thinking something was wrong in our approach, in our testing practices, in our development practices. We needed something so that quality becomes everybody's responsibility, not just one department's, so that we build a quality culture, number one. And number two, our testing should be much more efficient, much leaner, and faster. That's when we started the engagement with Naresh. We described the problem we were facing to him, and maybe I will hand over to Naresh now to describe how he analyzed it and started working with us.

So I remember when I was visiting their office, I sat down with Kirtesh; Kirtesh is one of their test leads. He was explaining the testing strategy they had used so far, the automation they had built, and describing some of the challenges they were finding, which Sachin just summarized. My question to Kirtesh was: having automated checks is fantastic, but let's try to understand, over the last one or two quarters, the defects your automation suite has caught. How would you categorize them? How many were high-priority issues? How many were low priority? How many were UI-related? How many were data-validation issues, and so on? When we actually analyzed that, what we found was this: about 95% of the failures the suite had caught were basically data failures. In their case, because it's an analytics product, a data failure means that instead of giving the number 20.21, it would give 20.22; small imprecisions like that. So my question was: if that's where the failures are, does it even make sense to catch them from the UI? Initially the reaction was, what else is the approach? That's how you do it; there's no other way. And I said, no, well, let's think about it. Your problem is not at the UI level, so let's ignore the UI for a minute; let's not even worry about it. Let's go closer to where the problem is.

So I drew, actually on the board, I remember, I drew this to highlight: this is how I see your current testing approach; this is what it looks like to me. And he said, yeah, that's more or less a good description of how we are approaching testing. I think that's when we pulled in Sachin. Sachin, I don't think he fully introduced himself: he leads the entire testing practice at IDeaS. We pulled him in to talk about where this testing approach was heading, and how some of the problems they talked about can probably be traced back to it.

And then I drew this pyramid for them, saying: what we really need is to invert that pyramid, to turn it upside down in some sense. We need to focus a lot on unit testing. I'm going to draw an analogy, a parallel. How many people here have used or sat in a car? I expect everyone to raise their hand, right? It's a good exercise. Let's take a company like Toyota, because it's very popular, or at least used to be very popular, for quality. Back then, Toyota was famous for quality.
One of the things Toyota would do, for example, is take every screw and bolt and every little thing that goes into the car and check that it meets its requirement: whether the threading on the screw is correct, because if it is slightly off, it could lead to an oil leak, or the screw could slip out, things like that. That's what we mean by a unit test: take every little bolt, every little thing that goes into your software, and make sure that, in isolation, it does what it's supposed to do.

Then you build on top of it. Say the little screw we were talking about fits into a piston. A piston is a unit from a domain perspective, right? It has some function, some requirement it meets; it does something for you. So can we make sure that when we put all these little units together at the domain level, that larger unit functions correctly? Then we say, okay, this piston has to fit into an engine. Does it have the right diameter? Does it meet those specifications? Can we make sure it functions correctly when integrated? Then: is the whole engine working correctly? Forget the brakes, forget the transmission; let's just focus on the engine and see if it works. Then we look at the path from someone turning the key to the engine starting: is the wiring and the engine all connected correctly? You want to ensure that. And finally you look at: does the key actually turn nicely? Do you have the aesthetics, the look and feel of the car? Does it feel great to be in the car?

So that's the analogy I'm trying to draw. You don't have to test only in that top-heavy fashion; you could be testing this way, with a bunch of checking built in at each level and a bunch of testing built in as well, and I'll talk about how we bring in those two elements. The point I'm trying to make is: instead of relying on the inverted pyramid we had, we should focus on building the right test at the right level, the right check at the right level, and ensuring quality is built in all along as you're building, rather than left to the end. And in this model you will still have a little cloud at the top, but it will be very small, and it will be more things like A/B testing and property-based testing, a lot of which can also be automated.

You had a question? Sorry, hold on to that question; that's what this talk is about. What did we do? All the layers are automated there. Yeah. What did we do, how did we go about it? To Srikant's question: did we automate everything? Did we start at the unit level? What strategy did we use? That's what this talk is all about. Part of the talk is about that, and part of it is about a different problem: how do you get people who are used to manual checking to move from that mindset to be able to do this?
And I think there are more takeaways for people in that second problem, because that's where I have seen us hit the maximum challenges, and there were certain things we did to be able to make the transition. So I'll let Sachin take over and talk through a specific example of how we went about it, and hopefully that will cover your questions. Let's make some progress; we have one hour, and I'm sure we'll have enough time for questions, but I want to make sure we at least tell the story.

So that inverted pyramid is gone, right? This is what we want to focus on now. How do we get there? And you know, this is a 180-degree turnaround in mindset, not only for testers but for developers too, because they contribute a lot to the unit-testing layer of this pyramid. We were having fun discussions about it, and we coined a term: what do we call people working in this style? Egyptians built pyramids, so we came up with the term Tegyptian. Tegyptians build test pyramids, okay? And they are both developers and testers, not only testers. That's how the title of this session was born: it's the death of inspection, and it's a reincarnation. So now let's talk about the reincarnation.

What do these Tegyptians typically do? Let's discuss the attributes of this approach, how they build the pyramid. First, they keep their tests pinpointed. When they write, I'll use the term checks here: all the checks are pinpointed. As we discussed, a UI test is monolithic, doing many things at once; in this style of check automation, one test does one thing, so it gives pinpointed feedback. Second, put the checks where the behaviour originates: you don't do all the work in the topmost layer of the application, you push a lot of the automation down to the lower layers. And third, keep them fast: as you go down the pyramid, the tests naturally get much faster; they can be integrated with your CI/CD and run on every build. That's how Tegyptians start thinking. (A small sketch of such a pinpointed check follows below.)

And the walls are broken. Tegyptians work together, they collaborate. Developers and testers are no longer enemies; they help each other build quality. From day one of working on a story, they sit together, understand the acceptance tests given by the product owners, break them down together into more detailed tests, and work out how to address the story, how to design and develop it. Testers start taking a deeper dive into the code: how it is designed, how the developer is trying to solve the problem. So the test cases now take care of the design and the internal wiring of the application as well; they are not driven purely from the outside. You see a picture of a frustrated tester there, a real-life photo from our team: he's trying to understand the code, okay? And they plan for no manual checks, right?
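As an illustration, a pinpointed business-logic check might look like this minimal JUnit 5 sketch; `OccupancyForecast` is a hypothetical stand-in for a real analytics component, and the `@Tag` shows one common way (an assumption on our part, not necessarily how the team did it) to let CI run the fast layer on every build while deferring slower layers:

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Sketch of a pinpointed business-logic check (hypothetical domain class).
// One behaviour per test, no UI, no database: a failure points directly
// at the rule that broke, and the layer stays fast enough for every build.
@Tag("business-logic")
class OccupancyForecastTest {

    // Hypothetical stand-in for the real analytics component.
    static class OccupancyForecast {
        double expectedOccupancy(int roomsSold, int roomsAvailable) {
            if (roomsAvailable <= 0) {
                throw new IllegalArgumentException("no rooms available");
            }
            return (double) roomsSold / roomsAvailable;
        }
    }

    @Test
    void occupancyIsRatioOfSoldToAvailableRooms() {
        OccupancyForecast forecast = new OccupancyForecast();
        assertEquals(0.75, forecast.expectedOccupancy(75, 100), 1e-9);
    }
}
```

A build server can then filter on the tag, running the business-logic group on every commit and the slower groups on a schedule.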
So when we started 2015, we wrote a communication to the entire team: let's plan all the new features this year so that no manual checks are needed as part of regression testing. We should not call a feature complete unless all the checks on that feature are automated and running on CI or CD, so that you don't need to worry about manually checking it when you want to deploy to production. That was the mandate, and many people, many groups, started responding to it and took it very positively. A lot of TDD and BDD practices were inculcated within the team; a sketch of what one such BDD-style acceptance check can look like appears below.

Okay, so that's what we started maybe a year back, and as I said, it's a 180-degree turnaround for everyone, so it's not easy, right? You are coming from a background where there were departments, and now you are asking everybody to work together and build quality together, rather than a competitive environment where testers just find defects and developers fix them. Now they work together on preventing defects.

So what are some of the challenges we faced, and are still facing with some people? People start losing their identity. This was the first challenge. If the team owns the quality, what is the role of the tester? So far we have been groomed in a mindset where the tester owns the quality, but now it is everybody's responsibility. Let me tell you, personally, I was very happy: I have been doing testing for 15 years, and when a production issue is found, I'm the first target, right? Now the team is the target. Anyway, that's the fun part of it. But if everybody owns the quality, what is the tester's role? Testers start feeling insecure, and developers start thinking: if I am writing so many tests, what is the tester doing? But actually they have different skill sets. The tester comes more from the user's side, and working together with developers they come up with better test cases. Overall it helps, and they are not competing with each other.

I can give you an example. One of my colleagues came to me and asked: should we start a trophy for when a tester finds a lot of bugs in a particular module? He was still in that old mindset, right? I said fine, but how will that help build a good culture? So we brainstormed the idea, and finally we concluded that instead we should start a trophy for a group that released a feature to production with no critical issue found on that feature for three months. That means they delivered a very robust feature; let's have a trophy for the team that built it.

So nobody's losing their identity here. In fact, testers are developing more skills because they are getting exposed to coding; they are writing tests at different layers, and some of them are even writing unit tests. It's a lot of value addition for them. Developers are learning a lot about the domain and about user experience, so they are enriching themselves too. So it doesn't matter, right? You may be losing an old identity, but you are adding a lot of value to yourself, contributing to the quality of the product, and giving a lot of benefit to your company.
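Since TDD and BDD came up above, here is a hedged sketch of what one of those collaboratively written acceptance checks might look like as Cucumber step definitions in Java; the scenario text and the pricing rule are illustrative inventions, not the actual product's behaviour:

```java
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

// Sketch of Cucumber step definitions for an acceptance check written
// together by developer and tester. The Gherkin it binds to might read
// (illustrative only):
//
//   Scenario: Suggest a higher rate when demand is high
//     Given a hotel with 10 rooms and 9 bookings for tonight
//     When the pricing decision is generated
//     Then the suggested rate should be above the base rate
public class PricingSteps {
    private int rooms;
    private int bookings;
    private final double baseRate = 100.0;
    private double suggestedRate;

    @Given("a hotel with {int} rooms and {int} bookings for tonight")
    public void aHotelWithRoomsAndBookings(int rooms, int bookings) {
        this.rooms = rooms;
        this.bookings = bookings;
    }

    @When("the pricing decision is generated")
    public void thePricingDecisionIsGenerated() {
        // Hypothetical stand-in for the real analytics call:
        // raise the rate when occupancy exceeds 80%.
        double occupancy = (double) bookings / rooms;
        suggestedRate = occupancy > 0.8 ? baseRate * 1.2 : baseRate;
    }

    @Then("the suggested rate should be above the base rate")
    public void theSuggestedRateShouldBeAboveTheBaseRate() {
        if (suggestedRate <= baseRate) {
            throw new AssertionError("suggested rate " + suggestedRate
                    + " is not above base rate " + baseRate);
        }
    }
}
```

Because the scenario text is readable by product owners, the same artifact can serve as the acceptance test plan the pair starts from.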
Coming back to the identity question: that's how we are pitching it to people. You are not losing your identity; you are adding a lot of value to yourself, and in fact you are becoming an all-rounder in a sense. It takes time to inculcate this type of thinking. Some of the testers have zero programming background, and initially they are scared of it. Just like when you start learning to drive a car, you are very self-conscious in the beginning, right? You feel like someone will scream at you if you make a mistake. So we had to take care of that; we had to bring them along with confidence through this transformation. It's not: you don't know coding, so we're going to lay off the people who aren't ready to code. We built a plan around them. We started training them in Java, Groovy, Cucumber, whatever technologies we were using for automation; there were sessions for them, and we gave them small goals to achieve rather than humongous ones. Many people responded very well; some of our testers are now really good programmers, and that helps a lot in achieving what we wanted. Some are still catching up, and we are not forcing them to catch up within a fixed period; we are taking it slow.

We've done sessions right from the basics: problem solving, logic, basics of Java, basics of Groovy. We've really tried to give them support and an environment where, I feel, they were deprived before, and now they're getting the exposure to learn these things. We've been doing design sessions for them, because I think it's very important for them to understand design decisions, so they can work with developers face-to-face and compare and challenge certain things. So there's a pretty significant investment happening, because the entire company has bought into the idea that this will lead to a much better future. Unlike a lot of companies I've been to, where senior management pays nice lip service and then leaves people with: well, from tomorrow you're doing automation. And people have no clue; that's not a skill set they have. You're leaving them in the soup, in some sense. That's something we were very cautious not to do. Right. Quick time check.

Okay, so the other challenge we faced: we saw this pyramid, and you understand it conceptually when you're introduced to it. But when you actually start working, when you actually start putting your tests at the various layers, initially there is a lot of confusion. As people practice, they start to see: okay, this test should be here, this one should be called integration, this one is business logic. The extremes are clear: you are very clear about unit tests and end-to-end tests. But for the layers in between, people struggle in the beginning to put each test in the right slice. So that's one challenge we faced.
The only answer to that is practice, more and more, until you get familiar with the concept and imbibe it in your day-to-day work. We also did a lot of reviews, sitting down with people, walking through and asking: why do you think this belongs here? Can you simplify this further? Can you push it one layer further down? Just sitting together and working through specific examples.

Right. So, one friend asked about legacy. That's the real challenge: if you have legacy code that is monolithic, it's difficult to build a pyramid around it, right? In fact, one of my colleagues is giving a session this afternoon, after lunch, which talks about dealing with legacy code and how you can build a pyramid around it; those who are interested can benefit from that. Right now, the way we are trying to address the problem is this: you can't go directly to the bottom and write unit tests around legacy code. But we can still build an intermediate layer of tests. We build workflow tests around the existing functionality, so we have a safety net, and then we plan the refactoring so that those tests can be moved down. That transition happens over a period of time. That's how we are trying to solve this problem. You want to add anything?

I think it's about six months now that we have been trying to do this, and in some areas we've actually been able to get down to the lower layers. Prasad has a session this afternoon; he's going to talk about one specific product, where we went through this and what challenges we ran into. Just to give a quick preview: we didn't have any REST services exposed, or any of that stuff. In fact, you couldn't even do REST services, because the technology we were using had no capability for it. So how do you even get a test around that legacy code that is agnostic to the actual database structure? Because if you tie yourself too tightly to the database structure, your tests become too brittle. Those are some of the challenges you run into, and that session will go into the details of how we resolved them. Here we want to focus more on the people aspect and the organizational culture shift, though we can come back and touch on one specific example a bit more.

Yeah. So, we spoke about the challenges we have been dealing with while adopting this new approach. Now let's see where we have reached, what we have achieved so far. Actually, a quick pulse first: are there any other challenges you have run into when you've tried to do something similar? Do these challenges ring a bell? Do you see that, yes, this is practical?

Yes. So, we specifically don't look at code coverage. You also don't want to be over-tested, covering the same thing many times over, and that's very hard to figure out even with code coverage. Code coverage will tell you whether this code is covered or not; it doesn't tell you how many different tests covered it, right? For us, what is more important is to make sure that, from a business perspective, things are actually captured. And for the manual checks that were automated at a high level: can we take some of those and knock them down to lower layers, and put them where they really belong?
So we're still going through that transition, to ensure the right test sits at the right level and is executed at the right time in your CI pipeline, so that you get faster feedback. We could look at coverage, but we intentionally didn't, because once you start looking at coverage, the problem I've seen in the past is that people start gaming it. You start doing things just to show coverage, and we didn't want to fall into that trap. We wanted to focus on the value of the actual feedback, and coverage is only one part of that story. We have Sonar integrated, for example, and it gives you all the data; but are we going in every day and looking at it? Not really. We have all of that in place, but it doesn't drive our strategy at this stage.

So, that pyramid, those different layers: the numbers come from my experience over the years working on different projects, where we ended up in the end and what felt like a good position. It's basically extracted from projects like that. One of them was the Gmail project, for example, and a bunch of other projects over the last six, seven years, more than that actually, where we've been applying this pyramid and asking: what is a good level? Where did we end up? So those are ballpark numbers, to give you an indication of how much focus you should have at each layer. Again, it's the pyramid structure you should focus on: whether the base is 70, 65, or 85 percent doesn't matter. What you want is a solid base of unit tests, with the numbers reducing as you go up. You don't want a cylinder, and you don't want an inverted pyramid.

So mathematically, it is the ratio of each layer's test count to the total number of tests. It's not feature coverage? Okay, that's the question. You may have thousands of test cases to certify, to check, your application; the percentage is what share of those tests are unit tests. It's not feature coverage or anything like that. No, no; it's neither code coverage nor feature coverage. It's the total number of tests, and a percentage of that. For example, if the whole suite is 10,000 checks and 7,000 of them are unit tests, the unit layer is 70%. That was your question? Correct, correct. The total tests. Okay, sorry, I misunderstood your question. Unit tests as a share of all tests; that's what it means, right? Correct.

I think the important part is that a lot of people talk about three levels of the pyramid: unit, integration, and end-to-end. That's a bit flawed, in my opinion, because there's a lot more in between which actually makes a huge difference. But I think we need to rush. Yeah, we need to rush, sorry. Hang on, the most important part of the story is still not covered, and people are indicating ten minutes, get off the stage, you're boring. So let's go.

Okay, so: what have we achieved so far? We spoke about one feature, right? We had 80 scenarios for that feature, and it was taking 400 minutes. After we broke those test cases down into the various layers of the pyramid, that red slice you see is the time now required to certify the same feature: it's pretty small, around 17 minutes. And to give a further breakup: all the business logic we were testing in those 80 scenarios has been converted into bottom-layer business-logic tests.
And the time those take is now not more than two minutes. Then we have certain integration tests, workflow tests, and some UI-driven tests which cover the other pieces of that feature, and those take maybe 15 minutes. So what we were testing in 400 minutes has come down to 17, and the crux of the functionality is actually tested in less than two minutes, in the business-logic tests. And our Tegyptians are still at work, right? Still improving this. We are not done yet; this is still in transition. We still don't have unit tests here, for example; when we do, the timing will come down further. With legacy code and the other challenges we have, we're still transitioning. But what we've achieved so far is this, and the time reduction alone gives us huge confidence that we're heading in the right direction.

I've taken some snapshots from our Jenkins environment. What you see here: we now have around 9,000-plus business-logic tests running in our CI for each build, taking around 11 minutes. Then we have the intermediate-layer tests, workflow and integration tests, taking around 30 to 40 minutes; again, these run on each and every build. And then we have the end-to-end scenarios, which take two hours. Those are not executed on every build; they run once a day. We take a build at the end of the day, deploy it on our test environment, and run the end-to-end tests. So you clearly see the advantage: thousands of low-level tests run in around eleven minutes, while maybe 100 end-to-end tests take two hours. It's always better to push more and more of your checks down to the lower part of the pyramid, because they run very fast.

This is the overall progress we have made on regression time: from one month, we can now regress our application in less than a week, which is a significant advantage. One of our products is deploying every second week now, and only this has enabled us to do that confidently. So from quarterly releases we are down to bi-weekly. From a business standpoint, that's a huge advantage for the company, because as you improve your analytics and your algorithms, you want that readily available, because it's going to make a better decision for the client today. We want to reduce this further, to real continuous deployment, where we check in code and it goes to production; but as you see, we're still in transition, with more work to be done in that cycle, right? Give us two minutes, let's run.

Number of issues found in production: drastically reduced from where we were two years back. That's the biggest advantage we are getting from this. Other impacts: we already discussed the mindset changes, and it has also affected our recruitment strategy. When we recruit new testers or developers now, we make sure they have both a development affinity and a testing affinity, and that they are ready to work in a collaborative environment.
They are ready to work in a technology-driven environment rather than in hierarchical structures. And the recruiting approach itself has changed, because we don't do so much sitting around tables talking about fluffy stuff. We take the person who comes in and say: here's a person working; can you start pairing with them and see if you like working on this kind of project? And we want to see how it would feel for us to work with you. We're really trying to put candidates into real-world challenges rather than just asking what do you want to do, what do you aspire to; I mean, that tells you nothing. Yeah, they actually spend time on our product with our team. They pair with our team during the interview, work on our products, and get a feel for how we work day to day. And we get to know how hands-on they actually are in scripting, in development, or in writing test scenarios. The unfortunate thing is, if you pick up anyone's profile today, you will see Agile, Selenium, JUnit; all of those keywords are there by default, no doubt about that. How much of it is real? There's "I heard someone say Selenium, so it's in my resume" versus actual hands-on knowledge. For example: I have a ticker on the screen that keeps changing; now, how do I automate that check? Can you help us do that? How much of that hands-on experience do you have? That's where our entire recruiting approach has changed, to focus on bringing in people with the right mindset.

All right. In general, I have seen a lot of positives. Though there were initial apprehensions in the minds of testers and developers, over time I have seen a lot of positive feedback from them. Testers really feel they are getting exposure to the latest trends, to development, to automation, and that keeps them motivated. They have seen the benefits actually showing up in our release cycles, so they are pretty happy. They are getting rid of the manual, repetitive, frustrating testing that still let defects pass through to production. Developers are also learning, because they get exposure to the domain and the end-to-end scenarios. So both are learning and widening their horizons. And this was a real surprise to us: over time, as people start adding value to themselves, they start advocating these practices. If I ask anyone on my team in one product now, would you like to go back to our old style of testing, I believe nobody will say yes. In fact, I'm happy about that; people are enjoying the new practices now.

So with this, the concluding message: your life does not get better by chance, it gets better by change. That's what we have been experiencing, and maybe we can open it up for questions. Two minutes for questions. Yes. So his question is: have we taken any feature where we have automated tests at all the different layers of the pyramid on this project? Right, is that it? Yeah, Arina, let me rephrase it: you mentioned something like 70% unit tests. Out of that 70%, was any target set that, let's say, 80% of them would be automated tests?
Everything is automated, because they're all checks. They're all running on each build, actually; when the build is triggered, all these tests run automatically on Jenkins. Yeah, yeah.

I think with the term negative testing there are a lot of misconceptions. A lot of what people call negative tests are actually what we think of as positive cases, right? Because the expected behaviour of your application is not a negative path; it's actually a positive path, and those have to be automated. Now again, you do an impact-versus-likelihood mapping: what is the impact of this scenario, and what is its likelihood? Based on that, you decide whether it's worth investing in.

There are performance tests too; you mentioned non-functional. There are JMeter tests which are also running in Jenkins. In fact, in the pyramid there are two stages where you can do performance and security testing: one at the domain-logic level, and one at the end-to-end flow level. You can do performance and security tests at both of those levels, and it's very important.

What about the old legacy applications? Every time we touch any portion, we gradually bring it under the pyramid. That earlier effort was for UI automation, right? That was an intermediate stage, where we took two years; but now everybody is doing this. It's not just two dedicated people on the pyramid; everyone is on the pyramid now, developers and testers working together to develop a story. When we are adding new features to legacy code, we try to refactor it; and in fact, if you are fixing a defect, it is mandatory to first add a test that simulates the defect, and then fix the defect. So tests are automatically getting built into the legacy code as and when we touch it; the rest of it is on the project plan: pushing them down.

It is built in, right? If I check in and something breaks, I have to immediately fix my code and make sure the tests pass, because they run on Jenkins on every build now. We're moving away from "developer's responsibility" versus "tester's responsibility"; that's why we coined the new term, the Tegyptian. Basically it is everyone working together to ensure that the build is passing all the time. And by pushing tests to a lower layer, the chances of failure are drastically reduced, and even when tests fail, they give you very pinpointed feedback. It's not "some end-to-end test broke and now I have to go figure out which piece broke for what reason: was it a database logic issue, was it something else?" By pushing tests down, you get much faster feedback and fix things faster. Your ability to quickly fix things improves drastically, and that's where the maintenance cost goes down.

Someone else had a question here. Let's answer one more question; we have a stack overflow of questions. So your question is: if we keep adding new features, how do we ensure the tests keep catching up, right? You don't. You move away from that mentality. Before you add any new feature, you write the test, the check, for the change you need to make. You actually start with the tests.
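The "simulate the defect first" rule described above can be pictured with a small JUnit 5 sketch; the defect, class, and method names are hypothetical, chosen only to show the shape of the practice:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Sketch of test-first defect fixing (hypothetical names throughout).
// Suppose production reported a negative suggested rate for a night with
// zero bookings. Step 1: check in a test that fails on the current code.
// Step 2: fix the code until it passes. The test then lives in the
// business-logic layer and guards the fix on every subsequent build.
class EmptyNightPricingDefectTest {

    // Hypothetical stand-in for the legacy routine under repair.
    static class LegacyPricing {
        static double suggestedRate(int bookings, int rooms) {
            double occupancy = (double) bookings / rooms;
            return Math.max(0.0, 100.0 * occupancy); // fix: clamp at zero
        }
    }

    @Test
    void suggestedRateIsNeverNegativeOnAnEmptyNight() {
        double rate = LegacyPricing.suggestedRate(0, 10);
        assertTrue(rate >= 0.0, "rate must not be negative, was " + rate);
    }
}
```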
If you remember, Sachin talked about test-driven and behaviour-driven development, right? So we've been doing that. So there is no backlog now. I mentioned zero manual checks after release, right? You don't call a feature complete unless the checks for that feature are automated. Correct, correct. We still have work to be done on the legacy stuff; we are gradually moving it over, but that's not done yet.

The question is on the identity crisis, the slide that you had. I have also had such questions asked of me: if I as a developer need to test as well, what would the tester do? How do you handle such questions? I mean, you have to coach them, right? You have to showcase how it is going to benefit them, as well as the company and the product. So essentially, management's buy-in into this approach has to be there? Obviously, yes. You have to have management support, yeah.

Yeah, the next speaker could come up and set up; we are just taking Q&A. So whoever is the next speaker, you could come and set up; we're just going to keep going.

You talked early on about testing encompassing checking, and you talked a lot about checking through the talk. You talked a little bit about process and culture. What have you done about the part of testing that's not checking? Because there's a lot of exploratory angles to it, finding the subtler interactions and so on. And especially as you grow all these all-rounders doing dev and test, how do you address those kinds of issues beyond checking?

That's a good question, right? We didn't actually cover much on the exploratory aspect of testing. What we talked about a lot was how we're trying to move people away from thinking of checking as testing, and get them into a mindset where they can automate things. So a bunch of their time, which earlier went into too much manual checking, is now surplus, a little bit of surplus time. And they are doing things like dev-box testing: the tester sits with the developer as the developer finishes, and they actually run through some scenarios together, which is much more exploratory. Okay, now that I see this built: what happens if someone does this, then closes the browser and starts again? Have you thought about that? Now that I see we call out to some other service and get data back: what happens if that service does not respond? Have we built something around that? Let's try it; let's bring that service down and see. People now have time to do that during dev-box testing, which happens before you call the story done: the developer and tester sit down together to make sure the exploratory testing part is taken care of. And then we've been trying to run hackathons; we've been talking about a security hackathon where people just go and try to break the system. We're trying to inculcate practices like that: you could do a bug bash, things like that, to do more of the exploratory style of testing. Yes? [Audience question, rephrased below.]
Is it better to test a feature with automated tests from the start, or to do manual testing the first time around? Is it better to do manual testing first for a new feature instead of automating it? I am biased; I would say it's always better to automate it. Yes, completely. Maybe I'll describe the process, and many things will become clear. What happens is that developers and testers work together from day one. They have an acceptance test plan for a particular story, and they know that if all those tests are addressed, the story is good to go. They discuss the story from day one, and they figure out how many of those tests can be covered at the unit level; those get covered through TDD, with the tester sitting beside the developer. They work out how many tests get covered in the upper layers, and the tester writes those. So it's test-first, actually. And it's not that they do no manual testing at all. There is something called dev-box testing, right? When the developer says everything is ready, my unit tests are ready, the tester goes through the tests and finds out how many items from his acceptance test plan are covered at the unit level; he doesn't bother manually re-testing those. But he will still do some actual hands-on exploration before they say it's good to go, deploy to the testing environment, and run end-to-end. So it's not that we are averse to manual testing; we do it, but not at the cost of automating the checks. And it is giving a lot of benefits. I'm telling you from experience: I don't think we are missing anything here.

I had a question. If the testing is automated here, why are you still doing manual testing? I think you missed the initial portion where we distinguished between checking and testing. What we are automating is checking; what still needs to be done manually is the exploratory style of testing, right?

Back to your earlier question, I just remembered one other aspect of testing: TDD itself really drives that aspect, because I don't have the code written yet. I'm saying: this is my expectation; I articulate that, and then I go and build the code to meet it. That again has an exploratory flavour: what happens if I do this? You run it, and then you say, oh, okay, now I need to tweak this to handle that. So it's exploratory, but captured in an automated fashion. I think TDD bridges that gap really well.

What kind of roles do you have in your new setup? Is everyone a Tegyptian? Okay, okay. So, we haven't really reworked designations yet. It has to evolve; people still have standard designations like QA engineer or developer. And on a feature, does someone who's a dev and someone who's a tester rotate roles? Eventually we should reach there. As of now we are not there, but that's where we want to go. There is still "you are the tester on this feature and you are the developer on this feature", but our goal is that once people feel comfortable playing multiple roles, we will bridge that. You can already see the lines blurring, though. But that's a good hint, you know.
We can start designations like junior Tegyptian and senior Tegyptian. A junior Tegyptian can only lift the brick, not the rock. All right, thank you very much. Thank you. And we will leave the stage for the next speaker.