How many people here are familiar with this topic? Quick show of hands, OK? So this is not going to be a 101 on the test pyramid. It's going to be more of an experience report of how we went about doing this at a specific company. We have some real data to share with you and real examples. We worked through a lot of permissions in the company to be able to show stuff live here, which is very nice of them, because a lot of companies don't want their actual product and code shown in public. But we're actually going to be showing you some real code. So it is realistic; it's not a cookbook kind of example. All right, my clock says it's 11:45, so we're going to get started. Good afternoon, everyone. We're going to be talking about the test pyramid. You're at a Selenium conference, and I'm going to be talking about why you should not be using Selenium. It's not funny, right? But it's a common trend I have seen over at least the last six years, if not more, that people tend to go from manual testing to end-to-end UI testing. It could be Selenium, it could be QTP, it could be Mercury, it could be Sahi, it could be whatever tool you have. That's the common progression we see: from manual testing to automated UI testing. People feel that's the best thing to do, and certainly it's not a bad option. But here we're going to go one layer deeper and talk about what happens when you do a lot of end-to-end automated tests, like the image shown over there: you end up with what we call the inverted test pyramid, or the ice cream cone problem. This is an experience report of how we have successfully turned the test pyramid over into something that is more suitable, and why it's more suitable. We'll talk about that. Disclaimer: we are not through this journey with this specific case study that we are presenting. We are still in the process, but we've made good headway, and we thought you might be interested to hear how we are going about this, what challenges we faced, and how we overcame those challenges. I'm sure you've heard me talk enough, so I'm going to stop talking and get the guys who really did it. I was just a consultant, and all consultants can talk a lot, so let me shut up and hand over to the folks from IDeaS, where we actually did this, and let them present their example of how they went about doing it. So can we have Kirtesh and Aditya come up and talk about their experience, the journey of how they went about inverting the pyramid? Thank you, guys. I'm done. [Applause]

Hi, I'm Kirtesh. We are both from IDeaS, a SAS company. Today we are going to present our journey towards achieving the right test pyramid. Our product is a revenue management and revenue optimization analytics product for the hospitality industry. In short, it helps hotels worldwide decide their room pricing. The product began its journey more than ten years ago, and now we have 5,000-plus leading hotels in the world using it. Every day, they send thousands of inventory records to our analytics, which runs over that data. We are still going strong with the product. Over the journey that began ten years ago, as we kept adding more and more capability, the product got enriched with new features and functionality. But there was no automation earlier. So Kirtesh, would you like to talk about the state of automation at that point in time? Sure.
So as you can see, there was virtually no automation in the beginning. As the chart indicates, there were only some unit and integration tests in place. Typically, we have a three-month release cycle, and out of that, one month is dedicated to regression. As we started adding more features and functionality, this regression cycle started getting tougher and tougher for us. Since regression is repetitive work, we started letting issues pass through to production. Any guess what happens after you start passing issues on to production? Anybody? Happy customers. Delighted customers. No, you get frustrated clients. Even the clients started reporting issues back to us. If we look at these statistics, they classify the critical issues that were reported from production; we have tried to categorize them into data, cosmetic, and localization issues. If we look at the first bar, approximately 10 critical data issues on average were getting reported from production after each release. At this point, we strongly felt the need for automation. As our manual tests were basically based on UI actions, our first strategy was UI automation for regression. So we introduced Selenium WebDriver, the tool that is the market leader in browser automation. We felt that by automating a critical number of screens, we would minimize our regression cycle. A dedicated team was formed and started working towards this goal. But not everyone is an automation expert; we too had only a few members with automation skills. We have around 50 screens and an equal number of reports which have to be regressed every release. To start with, we picked some heavily used screens and reports and started building automation around them. After this automation was introduced, you can see the pie chart where we now have coverage of the different aspects of the product. The Selenium-based automation tests comprise 45% of the automation coverage. And as we were adding new features and functionality, the development team was also adding unit and integration tests; that comprises 20% of the pie chart. There was still a chunk of the product that was not under any kind of automation. So did we achieve something on regression? Yes, we managed to shrink the regression cycle from one month to three weeks, and we could absorb more work in terms of features. So in spite of absorbing more feature work and doing automation, we managed to shrink the cycle from one month to three weeks. Let's have a look at the data after this automation was introduced: the critical issues reported from production. We can see that, as a trend, there is a significant decrease in the number of issues reported from production. So we all know automation works and helps in all aspects. How much time did it take a dedicated team of two members to do this automation? As a two-member team, it took two years to convert all the critical screens, their functionality and conditions, into Selenium tests. Let's now quickly see the demo of one of the heavily used screens using Selenium WebDriver. But before we do that, let me familiarize you with what we are going to see in the product. As I told you, our product is an analytics product for hospitality revenue optimization. All the leading hotels in the world book their rooms through reservation systems.
Every single day, the historical data from their reservation systems comes to our product via the integrations we have with those reservation systems. Every day that data comes in, our analytics analyzes it, runs on top of it, and produces decisions. These decisions are typically pricing decisions for different categories of customers in the hotel. The decisions are fed back through the same integration to the reservation system in the hotel, so that those rate plans or pricing decisions are under the hotel's control. One such popular decision is known as the best available rate, that is, BAR. That is the price quoted to you when you walk into the hotel. The revenue manager in the hotel typically comes to this screen to review and validate the BAR decisions. As you can see, he would typically enter a future date range, choose to apply or not apply a filter, retrieve the BAR decisions for that date range, and review or validate them with the help of all the data you see there, like the number of rooms on hold currently versus last year, or whether that date is a special event. Based on this, he decides whether he wants to go ahead with the decision given by IDeaS, or, if he has some specific business knowledge for that day, to change it. If he wants to change it, he can upload the changed decision from this screen itself to all the rate distribution systems (we all know the popular ones, like Expedia, Hotels.com, Travelocity, et cetera), so that the pricing decision is available everywhere universally. That's the primary functionality a revenue manager performs on this screen. So now let's quickly run the automation. This automation will open the browser window, perform login, and navigate to the BAR screen; let's see what it does. It enters a date and retrieves the data for that date. While navigating, we are also validating data: we fire a query at the database and then validate the data. After the validation is done, the test is closed. So it took 23 seconds to finish one basic scenario. For a screen or module like BAR, we have 40 different scenarios of different complexities. Following this approach, all those scenarios together take approximately 25 minutes on average to finish. Can I get an opinion from a few of you on what the possible pain points of this approach could be? Okay: we can manage parallel execution; we can introduce multiple machines and execute tests in parallel as well. Anybody else? Mm-hmm. Not really. Yeah, yes. So to list our pain points: our tests were tightly coupled with the UI. Until recently, before our product became cross-browser compliant, it only worked in IE, so our tests were very much browser dependent. As we all know, sometimes IE does not behave as we desire when driven by WebDriver. Clicking on a menu or scrolling through the page, we sometimes got false failures, and it became very difficult to pinpoint who the real culprit was. In parallel, as the product was getting new features, we were also rapidly modernizing the UI of legacy modules with newer technologies. Everything was happening in parallel, so it was really getting difficult to maintain this test suite, and any change in the UI would warrant a change in the test suite.
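To make the demo concrete, here is a minimal sketch, under assumed names, of what an end-to-end BAR scenario of this kind could look like: Selenium WebDriver drives the browser, and a JDBC query validates the displayed decision against the database. The URL, element IDs, credentials, and table/column names are illustrative assumptions, not the actual product code.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.ie.InternetExplorerDriver;

public class BarScreenScenario {
    public static void main(String[] args) throws Exception {
        // The product only ran in IE at the time, hence the IE driver.
        WebDriver driver = new InternetExplorerDriver();
        try {
            // Log in and navigate to the BAR screen.
            driver.get("https://product.example.com/login");          // placeholder URL
            driver.findElement(By.id("username")).sendKeys("rm_user");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("loginButton")).click();
            driver.findElement(By.linkText("Best Available Rate")).click();

            // Enter a future date and retrieve the decisions.
            driver.findElement(By.id("startDate")).sendKeys("2016-01-01");
            driver.findElement(By.id("retrieve")).click();
            String uiRate = driver.findElement(By.id("barRate-2016-01-01")).getText();

            // Validate what the UI shows against the database.
            try (Connection con = DriverManager.getConnection(
                     "jdbc:mysql://dbhost/analytics", "user", "pw");
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT rate FROM bar_decisions WHERE occupancy_date = ?")) {
                ps.setString(1, "2016-01-01");
                ResultSet rs = ps.executeQuery();
                rs.next();
                if (!uiRate.equals(rs.getString("rate"))) {
                    throw new AssertionError("UI rate does not match DB rate");
                }
            }
        } finally {
            driver.quit();
        }
    }
}
```

Every locator in a test like this is a coupling point to the UI, which is exactly the fragility being described here.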
So the suite was getting fragile. Most importantly, all this automation was lagging behind the current development cycle: until the feature or the UI was complete, we could not really concretely automate it. At this point, Naresh helped us understand what all this means and how we should transition towards the right testing pyramid. So over to you, Naresh.

Thank you. So when I first visited IDeaS, one of the things I did was sit down with Kirtesh and the team, and I said: let's pull up the last one year of your bug reports. Can you tell me, out of the last year of bug reports, what was the categorization of your bugs? Where did the majority of your bugs come from? They were asking me questions like, how could we make our Selenium tests better? How can we do things better? And I was like, wait a second. Before we talk about how to make the Selenium tests better, let's look at your bug reports, right? So we pulled out their bug analysis for the last one year. And what we found was that a good 90 to 95% of the bugs being reported were predominantly due to business logic problems or other kinds of data-related issues, and only maybe 5 or 10% of them had to do with navigation, UI, or look-and-feel kinds of problems. So we said, well, if that's what your analysis looks like, should we even be investing more in Selenium, or for that matter in any UI testing tool? What do you guys feel? No, that doesn't make sense, right? What we saw was basically that they had very few unit tests, as they pointed out. They had a bunch of integration tests, and then they had a large number of end-to-end Selenium tests that they were rapidly trying to automate. And in spite of doing that, a good chunk of manual testing was still required, because screens were getting developed, the UI was changing, and this was always a catch-up game for them. So we said, let's look forward five years in time. Do you think we will get out of this problem? The answer is no. If this is our strategy, this is our approach, this is how our state of testing will continue to be. And as Aditya pointed out, there were a bunch of issues they ran into because of this approach. So we said, let's set this approach aside, right? Let's park it for a minute and look at how, ideally, in my experience, it should be. Unit testing. Let's imagine you went to buy a car, and let's pick on Toyota, right? Imagine the salesperson at the showroom told you: we've actually not tested the pistons, the brakes, the systems, none of that, but we would like you to test drive this. Can you please take the keys and sit in the car? How many people would sit in? Maybe if you're an adventure sports freak, right? But not for a test drive. If you're not testing your product at the unit level, there is very little chance that somehow, magically, it'll work when you integrate everything together as a whole system. So we need to focus a lot on making sure that at a unit level, each of your components, each of your classes, each of your methods is actually functioning the way you expect it to work. When I said this, they were like, whoa, that's gonna be a huge amount of effort.
Well, yeah, I understand it feels like starting unit-level testing is going to be a huge amount of effort, and what I'm going to say may sound completely illogical or counter-intuitive, but my experience shows that the investment you actually need to make is quite low, and the rapid feedback you get, the ROI essentially, is much higher, right? So I hope everyone understands what unit tests are and what their value is. Then we talked about the next level of tests we need to look at. At the unit level, we look at units in isolation, methods or classes in isolation, but that doesn't complete your functionality. To complete something, even for a single component to work, you might need a bunch of moving parts that operate together, right? So we talk about business logic acceptance tests. Is your business logic working correctly? If I have this data, and I apply these rules across the five or six classes that operate on this data, will I produce the correct final result or not? That's what we call business logic acceptance tests. So on top of the 70% or so of code that you would cover through unit tests, you might need another 10% of business logic tests, without the mocks, without the stubs, which actually integrate across the classes. So you need business logic acceptance tests. Then we talk about integration tests. Again, there's a lot of confusion with all these terminologies, and maybe I'll confuse you even more. But let me take a very simple example just to highlight the point I want to make about integration tests. Let's imagine a very stupid calculator application; I assume everyone can picture a simple calculator. (Someone's trying to call me. Sorry, can't respond now.) So let's imagine a simple calculator, and for the sake of argument, let's say it's going to call some kind of server which has some database, some standard stuff behind the service. A typical unit test for something like this, on the server side: let's say I have a calculator service. I want to make sure that if I give it two and three and say plus, it gives me back five, right? Basic validations. At the UI level, I would also do unit testing, to ensure that when I actually click a button, the number shows up over here. I don't need the server lying around for that. So I can unit test the UI in isolation from my server or anything else. If I'm doing JavaScript, I could use JsUnit or any other tool you like and test a bunch of things. Now, what would your integration test look like? What would it do? Most often, I hear people say: when I click two, plus, three, and hit equals, it should send the request to the server, I should get back five, and it should show five over here. That's my integration test. And I'm like, wait a second, that's not really an integration test. If that test fails, how do you know what went wrong? Well, I can debug, right? Job security. The point I'm trying to make is that your integration test, in my opinion, should target the layer that talks to your server side, and basically send two numbers, x and y, it doesn't matter what those numbers are, and get back a number.
It doesn't matter what the number is. I sent two numbers, I got back a number, my job is done. I'm integrated with my backend, which means I have the right APIs configured, I have the server, I'm able to talk to it, the right permissions are set up, and that's all my job is. Integration tests really need to ensure that two points can talk to each other; they don't validate whether they give you the right result or not. I might also want to test what happens if my backend is down. How would my system behave? Would it crash? Would it give me a meaningful error? Things like that are something I could do at the integration test level. So that's the next layer we're talking about: integration tests. We have very heavy integrations with a lot of third-party systems, which is where integration tests are extremely important. Payment gateway: I need to make sure I'm able to call the right API on the payment gateway, send something, and have it send me an authorized response back. So that's the next layer of tests. Building on top of that is what we call the workflow API tests. What are workflow API tests? So I'm able to make a payment, but think about a shopping cart scenario. I need to select a product, maybe multiple products. I then need to make a payment for those products. Then I need to send something to the shipping department to say, you need to ship these products. I need to reduce them from my catalog of products, from my inventory, all of that stuff. That's a workflow, and we want to exercise the workflow through APIs, independent of the UI. Does my functionality work from a workflow point of view, from one logical step to another, without even thinking about screens? That's the next layer. And at this point we come to the next one, the end-to-end workflow tests, which are one layer below the UI. The only difference between those two, the last two levels you see over there, is that here we actually stub out external dependencies, which means I'm not going to payment gateways, I'm not going to third-party systems at this point, because I'm only interested in whether the workflow in my system is working correctly or not. And when I go to the end-to-end flow, which is the one here, I actually do it with the third-party systems to make sure that is working cleanly. But as you can see, those numbers keep reducing, because there are a lot more permutations on my side that I can test by stubbing out external systems. If I'm making a payment and the payment fails, I can simulate that from my test at the workflow level, and I can basically cover it there. Then the last level is the UI tests, which are mostly for navigation, layout, and other things where we want to make sure it actually looks fine, the data is being presented correctly, and cross-browser compatibility, stuff like that. So, in a sense, that's what my experience with testing has been: the emphasis on the different levels of testing, and how, as we go up the pyramid, the number of tests keeps reducing. If you look at UI tests, which are typically the Selenium kind of tests, those would really be 1% of the tests that I would write.
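To make the calculator distinction concrete, here is a minimal JUnit sketch under assumed names (CalculatorService and the thin HTTP client are invented for illustration, not from the talk). The unit test owns the arithmetic; the integration test only asserts that the layer talking to the server can send two numbers and get a number back.

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import org.junit.Test;

public class CalculatorTests {

    // Hypothetical server-side service holding the business logic.
    static class CalculatorService {
        int calculate(int x, int y, String op) {
            return "+".equals(op) ? x + y : x - y;
        }
    }

    // Hypothetical thin client: the layer that talks to the server side.
    static class CalculatorClient {
        private final String baseUrl;
        CalculatorClient(String baseUrl) { this.baseUrl = baseUrl; }
        Integer calculate(int x, int y, String op) throws Exception {
            java.net.URL url = new java.net.URL(baseUrl + "/calc?x=" + x + "&y=" + y
                + "&op=" + java.net.URLEncoder.encode(op, "UTF-8"));
            try (java.util.Scanner s = new java.util.Scanner(url.openStream())) {
                return Integer.parseInt(s.nextLine().trim());
            }
        }
    }

    // Unit test: validates the logic in isolation; no server, no UI.
    @Test
    public void twoPlusThreeIsFive() {
        assertEquals(5, new CalculatorService().calculate(2, 3, "+"));
    }

    // Integration test: two numbers in, a number out. It deliberately does not
    // re-check the arithmetic; it only proves the two points can talk (right
    // endpoint, server reachable, permissions set up).
    @Test
    public void clientGetsANumberBackFromTheBackend() throws Exception {
        CalculatorClient client = new CalculatorClient("http://localhost:8080"); // assumed server
        assertNotNull(client.calculate(2, 3, "+"));
    }
}
```

And for the workflow level with external dependencies stubbed out, here is a self-contained sketch of simulating a declined payment without touching a real gateway; all the types are again invented for illustration.

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import org.junit.Test;

public class CheckoutWorkflowTest {

    // Hypothetical gateway dependency the workflow talks to.
    interface PaymentGateway { boolean charge(String cartId, long amountCents); }

    // Hypothetical workflow: charge first, ship only on success.
    static class CheckoutWorkflow {
        private final PaymentGateway gateway;
        boolean shipped = false;
        CheckoutWorkflow(PaymentGateway gateway) { this.gateway = gateway; }
        String placeOrder(String cartId, long amountCents) {
            if (!gateway.charge(cartId, amountCents)) return "PAYMENT_FAILED";
            shipped = true; // hand-off to the shipping department would happen here
            return "CONFIRMED";
        }
    }

    @Test
    public void orderIsNotShippedWhenPaymentIsDeclined() {
        // Stub the external gateway: simulate a declined payment without
        // ever calling the real third party.
        CheckoutWorkflow workflow = new CheckoutWorkflow((cart, amount) -> false);
        assertEquals("PAYMENT_FAILED", workflow.placeOrder("cart-42", 1999));
        assertFalse(workflow.shipped);
    }
}
```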
And a lot of the other tests need to be pushed layers down the pyramid to get the right kind of feedback. One guiding rule we use is the single responsibility principle for our tests: each test should have a single responsibility, which means that when the test fails, there is a clear indication of what went wrong. If a test fails and you have to hunt around to figure out what went on, then your test has multiple responsibilities, which is not a good strategy in my opinion. So, having said that, this is the discussion we had, and they were somewhat reluctant, but they said: it makes sense; we're not very comfortable with what we have today, but we don't quite understand all of this yet. So we said, well, let's time-box it. Let's spend some time, take a particular feature, go through this cycle, and see what it feels like and what kind of results we get. And let's pick the most complex, most critical piece of functionality in your system and do it on that, right? That's when we'll know whether this approach really works or not. So at this point, I'll hand it back to these guys to talk about what happened next. Oh, sorry, any questions at this point before they talk about their journey? [Audience question about the future of tester roles.] Yes, I'm very opinionated on this topic. I was hired by a company once; the founder hired me, and my job was to set up a quality team for him, right? Two years later, when I left, the company had zero testers. Sorry, wrong thing to say here. But they were shipping their product six times a day at that point. Before I joined, they were shipping once in three months. So I believe the path forward is that the distinction between a developer and a tester, or to be more precise, a developer and a QA, is, in my opinion, going to go away. Those boundaries are getting blurred. We already see that, and I think more and more we'll see that there is no such distinction as much as we used to have. That was fine back in the days when we had those rigid phases we used to go through, but as we progress and we want more rapid delivery, more flexibility and all of that, these boundaries start getting in the way too much. So we're trying to move away from that. Yes, sir. Just to rephrase your question: his concern is that when you're starting out, with the effort you put into unit testing, you're essentially doubling, or maybe writing slightly more than double, the amount of code. So can you justify that? My argument, or rather my experience, is this: look at how developers typically work. Say there's a complex piece of functionality. They write a small, simple piece of it, then bring up either the UI or the console and check whether the logic they just wrote actually works. Then they add the next piece of functionality on top of it, the next complication or edge case, and they go back and manually validate whether all the previous steps still work correctly. If you actually look, and I have data from a couple of projects, up to 40% of developers' time is spent manually checking stuff they know should be working. And if you look at the amount of time it would take to write unit tests instead, I would assume it comes to about the same. As you get better, it would probably be even less.
So that's one angle. The other angle is that we don't work in single-member teams. If you were working alone, maybe you could get away without unit tests. But say you're working in a team of several people, and I wrote some code that you need to enhance. You need to know which edge conditions I've already handled, because at this point that's all in my head, right? The amount of time you will spend trying to understand what the code is doing, and making sure that when you make a small change you don't break anything else, is huge. So my argument is, again, there is a good amount of savings you're achieving there. We can talk at length, and I can show you some data I have, but based on my data you actually save up to a good 30% of time by writing unit tests as well. All right, I need to move a little quickly; we'll take more questions towards the end. I do want them to share their story, to actually see what happened and how this journey continued. Over to you guys.

So, as Naresh explained, in order to reach the right test pyramid from where we were earlier, we paused a bit and decided to test the right things at the right place at the right time. If we look at this, it is the typical application layer structure we have. Our earlier automation was around the UI. If I may recount the pain points: the tests were tightly coupled with the UI, they were fragile, and most importantly, the automation lagged behind the current development cycle. Now, to move what we call one level below the UI, in terms of our application, that would be the service layer. In Struts, it would be either the action or the bean below it; we call that the service layer. So we automated one level below the UI, at what we call the service layer. What did we achieve by automating at the service layer? We got fast-executing tests, which started giving us quicker feedback. We got truly UI-independent tests: as scenarios and data conditions got added, we could easily translate them into tests without depending on the UI or its structure. Because of this, the tests became robust and stable instead of fragile. And most importantly, the automation started happening well within the current release cycle. To do that, we introduced BDD using Cucumber for our feature development. Also, our dev and QA started pairing together, so their specific skills, writing better code and writing better scenarios, synergized to create value. We had better collaboration. Most importantly, there was no longer any need for a dedicated automation team. Everybody on the team was a contributor to automation, not just two guys sitting and doing automation. That is something we count as a good achievement in terms of teamwork. So let's now have a quick demo of the same BAR module, the same basic scenario, using this new approach. Before we get to the demo, let's see how that scenario looks in Cucumber. If we read this first scenario, it tells us: retrieve BAR decisions for a property with a given search criteria. So when the revenue manager wants to validate BAR decisions, you can see there is a date range given; if you correlate with the screen we saw earlier, there was a date range there too. These dates are nothing but parameters. And let's say the search criteria is "show all decisions".
That means no filter is specifically applied. The Then part, "then the following BAR decisions should be present", tells us that for that given date range, these are the BAR decisions the revenue manager would see on the screen. So essentially this scenario exactly replicates the behavior the revenue manager performs on the screen to see the required data. Now, if we go to the code: this is the Java code that executes the scenario. This Then step does the data validation part. For the BAR screen, we have multiple scenarios for multiple filter criteria, but eventually this Then part takes care of all those scenarios universally; they all get validated here. What happens is, we are calling the layer below the UI for the BAR module. The BAR module being a legacy module, there was no API exposed for us. So, to get a handle on the layer below the UI, we moved this framework into our code base under the integration-test source, so that we could reach the code one level below the UI. Now, being legacy code, it had very tight coupling with the UI and interdependencies with various other things. So, to be callable from this mechanism, we first had to refactor our code so that we could call it from this method. We did refactor, and then in this method we call the service layer, get the data based on whatever criteria we provide, and assert it against the expected data table given in the scenario. Now let's quickly run the scenario to see what happens. It's not running in a container, so it still has to mock the request and response as part of the basic setup, and then it executes quickly. As we can see, it took 13 seconds to execute the basic scenario, against the 23 seconds we saw earlier. For the BAR module, we have 40 complex scenarios; all of them together take approximately one minute to execute on Jenkins with this mechanism. So now let us look at the current status of our automation. We have succeeded in moving down most of our data test cases, which we can now call workflow tests; they are now 40% of our stack. We still have Selenium test cases as well, because we still have some navigational tests. Meanwhile, we also increased our unit and integration test volume, and 20% is the code that is still not under any automation. So we successfully reduced our regression cycle to a week, and now we can take more features into development. Yeah. Let's now look at the data after this: the critical issues reported from production. The third bar is the comparison after this approach was followed. We see that there is not much difference in the issues reported from production: we maintained parity in the number of issues reported from production while we moved the tests one level below the UI layer. Now, we did not have a dedicated team; the entire team was party to this. So how much time did it take for all of us to do this? The critical screens: three months. Together we did it in three months. This journey was not without its pain points. As a team, we had a learning curve following this new approach and methodology. And, as we saw in the code, dealing with legacy code was really difficult, as it had tight coupling with the UI. That was another pain point.
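For readers following along, the scenario just described might look roughly like this in a Cucumber feature file. The step wording, dates, and rate values are invented for illustration, not the actual feature file:

```gherkin
Feature: Best Available Rate (BAR) decisions

  Scenario: Retrieve BAR decisions for a property with all decisions shown
    Given a property with processed analytics data
    When the revenue manager retrieves BAR decisions from "2016-01-01" to "2016-01-03" with criteria "Show all decisions"
    Then the following BAR decisions should be present
      | Occupancy Date | BAR Decision |
      | 2016-01-01     | 180.00       |
      | 2016-01-02     | 195.00       |
      | 2016-01-03     | 210.00       |
```

And the universal Then step they describe, calling the service layer instead of a browser, might be shaped like this. BarDecisionService and the step wiring are hypothetical; DataTable.diff is the Cucumber-JVM 1.x way to fail a step when the expected and actual tables differ:

```java
import java.util.List;
import java.util.Map;
import cucumber.api.DataTable;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;

public class BarDecisionSteps {

    // Hypothetical facade over the layer below the UI (the Struts action/bean layer).
    private final BarDecisionService barService = new BarDecisionService();
    private List<Map<String, String>> actual;

    @When("^the revenue manager retrieves BAR decisions from \"([^\"]*)\" to \"([^\"]*)\" with criteria \"([^\"]*)\"$")
    public void retrieveBarDecisions(String from, String to, String criteria) {
        // No browser involved: call the service layer directly with the criteria.
        actual = barService.retrieveDecisions(from, to, criteria);
    }

    @Then("^the following BAR decisions should be present$")
    public void theFollowingBarDecisionsShouldBePresent(DataTable expected) {
        // One universal assertion covers every filter combination's scenario:
        // diff() fails the step if the expected and actual tables differ.
        expected.diff(actual);
    }
}
```

Because the step goes straight at the service layer, adding a new data condition means adding a table row or a scenario, not another fragile sequence of UI interactions.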
Now, by now we know the product is data intensive. Every data point, and every combination of them, is a condition for us to test. What we do is prepare that data in a pre-populated database; when the job runs on Jenkins, it runs on top of that database. We call it the baseline database. Every time a new scenario or condition gets added, we prepare its data in that baseline database. That's a pain point, because since everything runs against a single database, we are still not running the tests in parallel. Secondly, as the volume of such conditions and scenarios keeps increasing, it will become really cumbersome to maintain that pre-populated, pre-conditioned database. So that's definitely a pain point for us. So how does the road ahead look for us as a team? Looking ahead, we have already started exposing RESTful APIs for whatever new features we add, so they become really easy to consume, test, and implement against. Even where possible, we are wrapping some of the legacy code in RESTful APIs so that it can at least be tested at the appropriate level. Remember, the journey is not about moving everything one level below the UI; we have to achieve the right test pyramid. So we are, and will continue, moving relevant BDD specs to the lower layers where they belong, like unit and integration tests. That's definitely on our plan looking ahead. The last point is that we want to move away from the baseline database as a team. We don't want a pre-populated, pre-conditioned database; we want our tests to execute independently, creating their own data and finishing off. That will help us execute them in parallel, get faster feedback than we have right now, and write more and more tests. So, in all, this summarizes our journey towards achieving the right testing pyramid. We would now welcome any questions. Thank you.

[Audience question about the challenges of convincing the team.] Oh, the challenges I faced convincing them? I think they already had the pain, right? They already had pain points, which they themselves highlighted: this is a problem. They came asking, how do we make our Selenium tests better? And the question back was: do you want to invest in that, or do you want to address the 90 or 95% of the cases where you actually have a problem? And it made sense to invest in the 90%. I think the most important thing is that a lot of the time we talk, we evangelize things, without data. We talk about stuff in the air. We say, I feel this should be this way, and then it's my opinion versus your opinion. But when you are able to actually pull out the data and put it in front of people, then there is no my opinion or your opinion. Look, 90% of your bugs are coming from data errors. Do you want to catch them at the UI, or do you think you'd be able to catch them at a lower layer? That, I think, was, and I guess I can speak on your behalf here, a "yep, it makes sense, I don't need to be convinced". The real challenge was all those different layers. They were like, why do we need all these different layers? Can we not just have three: the UI tests, the integration tests, and the unit tests? But that's a very naive way to look at it. There are more levels, and there is meaning to those levels. So there was a bit of learning in terms of understanding what those levels are. In fact, even now there are many cases where we have discussions about whether something should be at this level, or should it go one layer below, or at what layer it should be.
So that's still a challenge: helping people understand what test goes at what layer and how you can further break it up. A lot of times we think of the whole flow and we want to test it; we don't necessarily think of how we can break it down and test each piece in isolation. There's still a learning curve involved in that, and that's generally where a lot of people need convincing: yes, it's worth investing the time to break it down, because you want pinpointed feedback, okay? We'll go there; someone had a question here. Sorry, please. [Question about when automation happens relative to the regression cycle.] Regression cycle, yeah. Automation is happening in the development cycle itself, as part of development. In the time when we take on features and work on them, we are at the same time automating them, so that at the end we regress only for a limited period of time, because we have already automated it. We don't have to rework; regression basically is rework. [Question about automating incomplete UIs.] What do you mean by automation? UI automation may be hard if your UI contains five features, right? A UI might contain five features, and you've sliced your stories such that those are five different stories, and until you have all five stories the screen will not make sense. So testing the UI at that point might not make much sense, but each service can certainly be tested. Each feature inside that screen can certainly be tested within your boundaries, right? And that's the point: the more you move things down, the sooner you'll be able to start, because of those dependencies, and the sooner you start, the more parallelization can happen and the earlier you get feedback. So you start seeing some of those advantages. It is certainly recommended, probably not at the UI, but it again depends on how you slice your stories and how you see your UI. If you have, say, a screen with five features coming into it, maybe all five features are not ready and you cannot really automate the UI. But if you've tested 99% below it, then I'm okay to do the remaining 1% in the next sprint. [Question about whether the ROI is only long-term.] I don't know if the ROI is really long-term; I think you start seeing the benefits much earlier. It feels like it's going to be a long-term investment, and that's certainly true in the sense that the long term is when you see the full bang of the benefit. But even early on, as in this case, in three months' time we were able to reduce our regression cycle to a week. And that's a real, clear business advantage. And in spite of moving things down, as Aditya was explaining, we're still not leaking more bugs. That's one of the fears: that you'll start leaking more bugs as you move down. In fact, we were actually able to catch some things that had not been reported but were actual bugs in the system. We were able to catch them when we moved one layer down. Think about it: when you're given a UI, you're fixated on it. But when you go one layer down, you start thinking about what else could go wrong. You work backwards and say, oh, if someone did a view-source and sent something through it, it actually crashed the system. Stuff like that might have slipped through before, but now that you're going one layer below, you open doors to a lot more things. So those are places, again, where you can show benefits immediately; you don't have to wait six months to see things like that. This was never possible through the UI, or would have been very difficult to catch.
Now I'm able to catch more bugs, and I'm able to do more things earlier on. We'll go there. Yeah. [Question about team sizes.] When we started, what was the size, and what does it look like now? I don't think the sizes have changed. No, the sizes have not changed. So the development and QA teams together are like 30 and 30, almost a one-to-one mapping. Everyone's smiling. I'm sorry. Everyone is like, oh my God. I know what's coming next. Fantastic question. Let me repeat the question so everyone understands it: we have two different teams led by two different people, each of the teams is already overloaded, or at least the development team is, and the moment you suggest even remotely that these guys should do more testing, there's obviously going to be a big pushback. Because the myth is that when you do more testing, you're going to spend more time. So first of all, you need to address that myth that if you write more unit tests, or other kinds of tests, the development time will increase. Maybe it will increase initially, because people have a learning curve. Unfortunately, a lot of developers have the habit of writing something and just tossing it over to someone else to deal with; now they'll have to think about a few more things. So that myth needs to be addressed: when you write tests yourself, especially in a test-driven manner, you actually end up faster. It's counter-intuitive, but you will actually be faster, and there's enough data I can share with you, if you want, which shows that. So you need to help them understand that. You need to say, okay, can we pilot this for three months? Let's not change the company today, but can we pilot it for three months and then decide? You need to break the problem down into something smaller. The other thing is the skill set gap that you talked about. At least at IDeaS, what we started doing is helping people learn: we're running Java courses, we are running problem-solving courses, and we are helping them pick up the skills we think are extremely important, so that the disparity in skill sets disappears. And I think as an organization you need to invest in that sooner or later, so it's better to start investing in these things now. The manual testers have a lot of domain expertise and things like that, but don't necessarily think, when a feature comes up: okay, this is the feature, how can I break it down, what are the possible things that could go wrong, if I think from a code point of view. When they get educated, they'll be able to contribute. And the last thing we are doing is actually getting them to pair with each other and work together. So the mindset of "my job is to find bugs": we are strongly driving the message that your job is to stop bugs from getting in in the first place. If you're finding bugs, then you didn't do your job; the bug has already gotten in. Too bad. Your job is to stop the bugs from getting in in the first place. That's the message we have put into this organization. So it requires education at the management level, which was not a problem at IDeaS and a bunch of other companies, but there is education required on the development and testing side as well. Okay. We had someone else over there. Sorry, we'll go here; he's been raising his hand for quite some time now.
[Question about how often the different layers of tests run.] We have introduced a lot of levels in the pyramid now, and his question is: with these different layers of tests, how often do we run them, and how do we schedule them? Right. What we are doing right now is: with every check-in, all of these tests run. Ashish has done some fantastic work to reduce the build time. What would you guess the build took? 25 minutes? It was 90 minutes. The build used to take 90 minutes, and we've done some very interesting work to bring it down to 12 minutes. So at this stage, we are running everything with every build, right? If we see it going up again, there are a few things we can do to reduce it, and you can also parallelize: you can make a build pipeline and split it apart. But our first step was to reduce the build itself. So from 90 minutes we brought the build down to 12 minutes, and with 12 minutes we just run everything on every check-in. So do reach out to Ashish; he has some interesting material on things you can do to reduce the build cycle, like creating in-memory temp file systems, using SSDs, things like that. [Question about code coverage: what was the basis for the coverage numbers, and how was it identified that most issues were data issues?] So, definitely, we have a code coverage job running. The first pie chart, if that's what you're asking about, was from when we started doing continuous integration; there was no automation, so we already knew the coverage was not there. After the automation was introduced, code coverage was the tool by which we could figure out the coverage of the different aspects. As far as identifying data issues goes, the bug report analysis tells us that 90% of the issues were actually data issues. Kirtesh is probably the better person to brief you on that. Actually, while navigating the screens, we are asserting the data against the backend, which is MySQL; we are using MySQL. In most cases, there were no issues like navigation not working, a link missing, some menu missing, or some layout broken. Most of the time, the business logic was not running correctly and was producing false data. That was the situation, so we concluded that most of the cases were data issues. [Question about whether UI tests are simply removed once moved down.] For the UI tests, yes, we knock them off the UI and move them down, and this process needs to continue till you find the right place for these things. That is for the legacy system, because it is already built like that. The new development that is happening is happening test-driven, which means they are writing unit tests to start with, so these kinds of issues might not slip through. There's no guarantee, nothing is 100%, but at least we're saying we have a much higher probability of these data-related issues not slipping past the unit level, or maybe even the business logic level, up to the other layers. But for the legacy system, since we already have the tests, we are going from the top down, and our goal is to eventually push the majority of these to the unit level and then knock them off from the higher level, if that's the right place where we can find them.
Now, how do we go about finding the right level? Test coverage is one thing, but a lot of times when you look at an issue, you'll be able to say: you know what, I can actually test this over here, without the dependencies on these five other things; this looks like a business logic problem. So you should be able to do that. [Audience question, partly garbled in the recording, about test data issues that span integrations: if an issue comes from test data flowing between systems, how do you catch it at the unit level?] So you're talking about test data. Getting the right test data is an extremely important step, right? And I believe that trying to get test data right from the top level is much harder than starting with test data at a lower level, because when you look at a unit, you know there are five different possible things that can go into it, and you can play around with them. Negative-path testing is a lot more effective at lower levels than trying to do it at a higher level. You're able to generate more test data at a lower level than you can at a higher level. The test data is extremely important. [Audience follow-up, partly garbled, about data a middle layer produces for downstream applications to consume, which unit-level data alone doesn't certify.] That identification of what goes at which level obviously needs to happen, and it is important to identify those gaps. Not everything will land at the right level immediately, okay? Can I ask anything? Go there; I can come to you, please. [Question about which tools or libraries were used to build the test system.] So, as I said, we all created it together; we paired on it and created it. We talked about Cucumber, right? So we are using Cucumber; beyond that, Cucumber is only a wrapper over the layer below the UI. We are not using any driver to go to the UI or anything like that. The rest is all Java code. So Cucumber is the only special library I could name; other than that, it's all JUnit, Hamcrest, et cetera, typical testing tools. [Question: where do you do the assertion?] So, yeah. Yeah. [On pairing:] It was very hard at the beginning, because people would say, okay, I'll find this out and I'll come back. No: you're going to sit here in this conference room, and you're not going to leave until this scenario is written. That collaboration is extremely important. There is a natural tendency not to do it, but once you start doing it, well, now we can't be separated; now we really want to work together. Yeah. [Question:] So we do understand that the entire approach and methodology moved from a regular approach to an agile approach, where you basically do a BDD kind of approach. But the initial question was: Aditya, right? Yeah. You came and asked me, saying Selenium is a problem. That's basically the question mark: what made you feel that Selenium was the problem? Where is the connect?
We do understand that the regression cycle was longer and obviously the approach was incorrect. But what made you think that Selenium was the issue? Because you said that, obviously, your tests were not working fine; when you were dealing with the data, data was an issue, and you already knew that. Yes. So what made you think Selenium was the issue? You made a remark; we want to know what exactly the issue was. Yeah. Actually, we were not testing things at the right place: we were testing data at the UI layer. That was our fault, and the correction for us was to move one layer below for the data issues. And, as Aditya mentioned, we were in a phase shift: we were modernizing our screens, and that forced us to maintain our scripts again and again. That's why. We have actually, in fact, manually tested screens which we had earlier automated; such was the phase, where we were modernizing the UI as well as adding additional features. So that's where we thought of