Welcome to this session. My name is Sachin Natu, and I work for Ideas. I believe we've had a couple of presentations from Ideas already, but I'm not sure how many of you were there, so I'll take a quick minute to introduce Ideas. Ideas works mainly in the revenue optimization domain: we help our clients decide their pricing strategies, primarily in hospitality and car parking. I'm glad to mention that the hotel we are sitting in right now is one of our customers, so if you are staying here, Ideas may have contributed to whatever you paid. What you see on the screen is the spread of our clients; you can see our presence on most of the major continents, with more than 6,000 hotels across the world benefiting from us. These are some of the prominent hotel chains working with us. Now, coming to today's presentation: Naresh, can you give an overview of what we are going to do?

Yeah, so let's quickly talk about the agenda. How many people here are in an agile environment and finding that testing in general is a bit of a challenge? Just a quick show of hands. The rest of you don't feel it's a challenge? And how many of you are responsible for, or somehow involved in, driving the testing strategy in your company? Don't worry if you're not; we think we have something for everyone. Whether you're a developer or a tester, there are bits and pieces of information here that will be useful. We're going to talk about a journey that goes back almost three years: where we were when we started, and the kinds of problems we were facing.
What was our initial approach? How did we get started, and how did we try to solve the problem? And why, after a while, did we realize that our approach to testing was not sound; that it had some serious flaws? We then took a hard look at it and changed our testing strategy. That's what we want to talk about: we went down a certain path, found it was not an optimal strategy, stepped back, and took a different direction. Are we done? No, we're far from being able to call it done, but we've seen some early successes. As Linda was saying this morning, we've seen enough data that we'd like to share it with you, and also hear who else has had similar experiences and what we can learn from you. That's the purpose of this session. I'll let Sachin continue.

Sure. If I look back three years, this is how we were delivering to our clients. Typically we released every quarter, meaning every three months we made a major deployment to production, and this is how a release cycle looked. For the first couple of months we developed new features: developers gave weekly builds, which testers would pick up to test whatever features were ready. That continued for about seven weeks, with all the new features being developed and tested. Then we reached the two-month milestone and declared a code freeze: all the new features were ready and reasonably stable, because they had been tested along the way. Then the testers would do regression testing, with almost a month spent on various kinds of regression: UI testing, functional testing.
We would basically check all the modules, both impacted and non-impacted areas of the application, just to ensure that whatever we sent to production would be stable. To give you an idea, even at that time more than 3,000 hotels were using our application, so we had to take a lot of care when deploying, because any small mistake could mean revenue loss for them. That's the background we were operating under. Obviously one month of regression testing looks like a long period, so why did it take that much time? Mainly, as the slide shows, the majority of the testing was manual; not many people had automation skills at the time. And to give you the length and breadth of the application: there are 100-plus screens, so there is a lot of UI testing and a lot of data validation happening on the UI. Every day we receive reservation data from our clients, which we populate into our system; our analytics engine runs on it, and we send the decisions back. So every day we send prices back to our hotels, which they use: a daily optimization for them. That means a lot of functionality, a lot of UI, a lot of backend, and thousands of regression tests that were executed manually. It's not that we had no success with that, but there were a lot of issues, which we'll discuss. For the testers it was a miserable life: every three months they had to execute thousands of regression tests, all manually, and a lot of end-to-end scenarios only got tested in that last month of regression.
Since the tests were not automated, testers couldn't really run them while feature development was going on, so they would hit a lot of cross-cutting issues in that last, third month. Finding those issues that late is a big frustration for everyone: developers would fix them, deliver a new build, and then we'd be back in repeated regression. So in that last month of regression testing, testers were just testing, reporting bugs, receiving fixed builds, and doing the same regression again. A lot of regression, and pretty frustrating for the testers. How many people can relate to this? Only a couple? At some point in your career? Yeah, okay.

The other problem with this approach: everybody's talking about agile now, right? People are deploying monthly, fortnightly, weekly, in some cases even daily; some might even be deploying hourly. We could not even dream about that type of release cadence with the constraint that we had to regress our application manually. So we could not think about being agile, and obviously there was pressure on management: we need to deliver features much faster than quarterly. Can we compress this regression period? What were our variables? Can we reduce the number of tests? Not really; there was no room to compromise there. Can we reduce the repetition of the regression? You can't predict the repetition; you don't know how many issues you'll find in the third month. Can we keep adding more people? Beyond a limit, that doesn't make business sense either, right?
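The variables just listed can be put into a back-of-the-envelope model. All numbers below are illustrative assumptions, not figures from the talk: regression effort is roughly tests × time-per-test × passes ÷ testers, and of those factors only time-per-test can shrink by orders of magnitude, which is exactly the lever automation pulls.

```java
public class RegressionMath {
    // Rough model: calendar effort = tests * minutesPerTest * passes / testers,
    // converted to 8-hour working days. All inputs are illustrative guesses.
    static double regressionDays(int tests, double minutesPerTest,
                                 int passes, int testers) {
        double minutes = (double) tests * minutesPerTest * passes / testers;
        return minutes / (8 * 60);
    }

    public static void main(String[] args) {
        // Hypothetical manual baseline: 3,000 tests at 10 minutes each,
        // 2 full passes as fixed builds arrive, 6 testers: about a month.
        System.out.printf("manual    : %.0f days%n",
                regressionDays(3000, 10, 2, 6));
        // Cutting tests or passes is off the table, and doubling the team
        // only halves the time. Automation attacks minutesPerTest instead:
        System.out.printf("automated : %.1f days%n",
                regressionDays(3000, 0.2, 2, 6));
    }
}
```

Under these made-up inputs the manual pass costs about 21 working days while the automated one takes a fraction of a day, which is why the team saw no way out other than automation.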
So we were kind of stuck there. And there's another problem with this approach: when people do manual testing again and again, they're human beings, they miss things sometimes, and you end up passing issues on to production. If we look at the trend over that duration, covering around three to four releases, we were passing at least seven issues to production per release. That was not a good picture for us; it directly challenged the credibility and quality of our development and testing. Plus, since you're passing critical issues to production, you have to patch: fix those issues, test them, do regression again. A vicious circle, all happening in parallel to your new release development.

So what do you think is the solution for something like this? Automation. Who said automation? Okay, I need to give a pen for that; please come. Thank you. Yes, that's what we thought; it's common sense, right? The only way to compress your regression period is to move away from manual regression and automate all your tests. Automated tests are consistent, they're fast compared to manual testing, and then you should be ready for faster releases. Now, the background here is that we wanted to automate the regression tests; that was the agenda. And who was going to do the automating? The testers. And who were these testers? Until now they had been doing UI-based testing; they didn't have a strong development background or an understanding of the internal intricacies and design of the code.
They were mainly testing from a functional perspective, mainly doing UI testing, so naturally they would think of automating the regression using a UI automation tool. That's what happened with us too. We started evaluating tools that could help automate our regression tests. As I said, the application had 100-plus screens, so UI tests were the major portion of our regression, and we thought that if we could automate those UI regression tests, it would be a quick win, in the sense that it covered a large percentage of our regression tests. We wanted to eventually reach a stage where you press a Go button, the tests execute overnight or over a couple of days, and we're happy with the results. That's what we wanted, and that's what we started: we evaluated tools suitable for our technology stack and began the UI automation.

I'm just going to give you a glimpse of one of the screens; I hope it's visible to all of you. We'll also see a recorded UI scenario to give you a feel for how we automated. This screen is used by our customers to create groups. Some background: a hotel gets customers from various sources. There is group business, individuals making bookings, contractual business, and non-contractual business. Our software needs that type of information from the client so we can categorize the reservation data and find better patterns, which we use to forecast and price the future for them. So what we ask the client to do here is provide those attributes to our application.
They come to this screen, fill in all the attributes, and press the Create Forecast Group button; our internal algorithms then do the grouping and propose groups to the client, which they accept. That's the basis of the UI scenario. What you see now is a Jenkins job triggering a UI automation tool. That status bar you see is the UI automation toolbar. Now it's logging in. It selects the client, then the property. This is the same scenario I just explained: it opens the grouping screen (the attributes are already set up through the backend, so we don't do that through the UI here) and presses the Create Forecast Group button. These checks you see are green for creating forecast groups. The whole point of showing this is how UI tests execute when you automate through the UI, and what you're testing here is one UI scenario with one set of attributes. What do you notice? Boy, it's slow. I think you get the idea, so maybe we can move on. What you see on the left are the proposed groups coming out of the system; the user selects and accepts those groups, and the grouping is saved in the database. So: more than a minute to run this one functional UI test. That's how we started automating our regression tests, and we'll see how it helped us.

This is the release cycle after that, and you can see there is still a regression period of three weeks; we'll discuss why. This is the important question: how did the automation we did with this approach affect us, did it help reduce production issues? Yes, it definitely did.
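The recorded scenario just described can be pictured as a scripted UI test. Below is a minimal sketch; the page-object classes and method names are invented stand-ins for what a Selenium-style tool would drive, with tiny in-memory fakes so the sketch is self-contained. In a real run, every method call hides a full browser round-trip, which is why one scenario takes over a minute.

```java
public class CreateForecastGroupUiTest {
    // Hypothetical page objects; in reality each call drives the browser.
    static class LoginPage {
        ClientPage loginAs(String user, String password) { return new ClientPage(); }
    }
    static class ClientPage {
        GroupingPage selectClientAndProperty(String client, String property) {
            return new GroupingPage();
        }
    }
    static class GroupingPage {
        private boolean created;
        void createForecastGroups() { created = true; }        // presses the button
        int proposedGroupCount()   { return created ? 3 : 0; } // fake proposal
        boolean acceptProposedGroups() { return created; }     // saves to DB in reality
    }

    public static void main(String[] args) {
        // Attributes are seeded through the backend beforehand, as in the talk.
        GroupingPage grouping = new LoginPage()
                .loginAs("tester", "secret")                 // step 1: log in
                .selectClientAndProperty("HotelCo", "Pune"); // step 2: client + property

        grouping.createForecastGroups();                     // step 3: trigger grouping
        if (grouping.proposedGroupCount() <= 0) throw new AssertionError("no groups proposed");
        if (!grouping.acceptProposedGroups()) throw new AssertionError("accept failed");
        System.out.println("UI scenario passed: "
                + grouping.proposedGroupCount() + " groups proposed and accepted");
    }
}
```

Note how the one assertion that matters (were groups proposed?) sits at the end of a long chain of navigation steps; any breakage anywhere in the chain fails the whole scenario.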
This graph shows that issues reported from production went down significantly over releases, from an average of seven or eight to maybe a couple, which I think was a significant achievement. So what do you think, are we happy now with what we've done so far? "It's good in terms of overall progress, but it can still improve." It can still improve, yes. Any other thoughts? "There are a few places where issues still got through, where there was human interaction." He's referring to the sporadic spikes here: after we implemented automation, did issues still leak into production? Yes, that's what this shows; a couple of issues would still leak out to production, but the number was much lower than before, down to one or two issues per release. "And the number of tests increased?" The number of tests increased, yes, and tests for new functionality kept being added too, because you can't keep automating only the existing regression tests. But the general feeling I'm getting from the audience is that people are happy. So let's look at some more data, and then we'll revisit the question of whether we should be happy.

How much time did it take to achieve this? So far we had automated around 50% of the screens of the application, and it took us almost two years. As I said, we started from a background where the testers were not automation experts, and even if they had been, we couldn't put them full-time on automation projects; we had to develop and release new features in parallel. So a group of people did that, and we could dedicate only a couple of testers who were good at automation to write these UI automation scripts. At the end of that period, this is where we were: some unit test coverage, not more than 5%, and around 10% of backend processing scenarios automated.
End-to-end UI tests: as I said, we had automated around 50% of the functionality using UI tests, and there were still a lot of manual tests, around 40% to 50%. That's the overall spectrum of automation we achieved after two years with this approach. And here is the problem with this approach: when you automate through the UI, you are always catching up. Number one, you can't easily automate a new feature through UI screens, because until the feature and its UI are fully stabilized, your automation scripts will break frequently. Number two, even for existing features that are fully automated, small changes keep coming, so those scripts are always under maintenance. It never happens that a feature is ready for production and its regression is fully automated; you are always behind. Also, failures in these UI tests don't give pinpointed feedback. Ours is a web application with multiple layers: logic at the service layer, logic at the database layer, and UI functionality. A failure in a UI test can come from any of those layers, so you don't get clear feedback about where exactly the application is failing, and a lot of investigation is involved. There are still silos: developers develop new features or fix issues, testers write and run the automated tests or test manually, and they don't talk to each other to the level we'd want; many issues arise purely from the difference in perception between the two groups. And the tests are slow.
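For contrast with the UI scenario shown earlier: the same business logic can be exercised at the class level, where a failing test points directly at the layer that broke. A minimal arrange-act-assert sketch; `ForecastGrouper` and its API are invented stand-ins for the production grouping class, and in-memory data replaces the database that the real tests would hit.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class ForecastGroupingTest {
    // Hypothetical stand-in for the production grouping class: it buckets
    // reservations by their business-type attribute.
    static class ForecastGrouper {
        Map<String, Integer> proposeGroups(List<String> reservationTypes) {
            Map<String, Integer> groups = new TreeMap<>();
            for (String type : reservationTypes)
                groups.merge(type, 1, Integer::sum);
            return groups;
        }
    }

    public static void main(String[] args) {
        // Arrange: cook the precondition data (the attributes that were
        // entered on the UI screen in the recorded scenario).
        List<String> reservations = List.of(
                "group", "group", "individual", "contract", "individual");

        // Act: call the class directly; no login, no screens, no browser.
        Map<String, Integer> groups = new ForecastGrouper().proposeGroups(reservations);

        // Assert: the proposed grouping matches the attributes we fed in.
        // A failure here implicates exactly this class, nothing else.
        if (groups.size() != 3) throw new AssertionError("expected 3 groups");
        if (groups.get("individual") != 2) throw new AssertionError("bad count");
        System.out.println("proposed groups: " + groups);
    }
}
```

The scenario runs in milliseconds rather than minutes, and because it skips the UI and navigation layers entirely, a red result can only mean the grouping logic itself regressed.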
Again, slow is a relative term: they are definitely faster than manual testing, but they are still slow. If we run our UI automation on a single machine it takes 14 hours, so you can't plug it into the daily builds or CI builds that happen much more frequently; you have to run it separately on some other box, maybe once a day or twice a week. So the feedback still comes late, not with each and every check-in. And they are costly to maintain. The first release of this application was in the 2003-2004 time frame, and there are a lot of UI changes happening to it today because we have to keep the UI design up to date. So for the 40-50% of screens we automated, whenever those screens change or new suggestions come in for them, the scripts start failing again and there's a lot of rework. The other aspect of the maintenance cost, as I said, is that any failure in a lower layer fails your test, so every time you have to check whether it's a genuine failure, whether you need to fix the script, or whether it's a bug; plus test bed issues, false negatives, and all of that.

So I want to ask that question again: after seeing all this data, can we say we're really happy with this type of automation approach? At least we were not, because even though we had automated 40-50%, we were still struggling to cope with it, and there was still a long way to go. We definitely wanted something different to happen. So what was the way out? What was the question we asked Naresh when we engaged him? Naresh, maybe you can take it. It was interesting; I remember the day. Sachin and I were sitting down and he was asking, we have these automation tests, how can we improve them? And I was like, okay, that's an
interesting question; let's pull some data out. Let's look at the last six months: what bugs did your automation tests find, how many breakages, and do you do any root cause analysis on it? He said, yes, of course we do root cause analysis; we try to find why each test failed and we categorize the failures into different categories. I said brilliant, that's a good step, so let's pull the data for the last six months and see what our test failures tell us. And when we pulled out the data: surprise, surprise. We saw that a good 90% of those failures were data failures, meaning a business logic failure or something else broken underneath, and only a very small percentage actually came from UI navigation failures or the UI misbehaving. So my question to him was: what do you do with this data? Should our focus be on making our UI automation tests better, or should we be looking at something else? Because, as you can see here, the vast majority of these failures have nothing to do with the UI at all. Another way to look at the data is what was slipping through versus what the tests were finding: a big chunk was critical data issues and very few were cosmetic issues. The product runs in different regions, so there is localization and internationalization and some issues related to that, but the vast majority were critical data issues, business logic issues, and the like. So if you look at the state of our testing, this is what we call the inverted test pyramid. People familiar with this term? The inverted test pyramid, or the ice cream cone problem, the testing ice cream cone; this looks like an ice cream, you know. And I said, what
we actually need is a very big focus on unit tests. The foundation of your testing has to be built on small, really fast-running, solid tests that you can rely on; they're fast, they're reliable, they give you pinpointed feedback. On top of that we need what we call domain logic or business logic tests, which help you verify that when you build a particular piece of functionality, all the units in it are working together to deliver, from a business or user's point of view, one piece of functionality. Then we have a whole bunch of third-party integrations: we get data from various places, for example millions of records that we need to take in and process, and it comes in various formats. For those who were here for yesterday's ETL session, we talked about this: we accept file formats, SOAP messages, RESTful calls, lots of different integrations. So we want to validate that the integrations themselves are working correctly; that layer is pretty big for us because of the nature of the application. On top of that you need workflow tests, which go across different pieces of business logic to validate a complete flow. Think of a shopping cart example: you select a bunch of products, you check out, you make the payment, you fill in the shipping details, and you get a confirmation. That's a workflow. With business logic tests you have written one test validating each step of the flow, but we also need to ensure that the entire flow itself works correctly. When we talk about workflow tests we don't integrate with external systems; the workflow test stops at our system boundary and just validates the steps within our system. But then
on top of that we have something called the end-to-end flows, which do go out to and integrate with third-party systems. And finally, at the very top, a very small section of UI tests. So this is the answer to the inverted test pyramid, and that was my response to Sachin: this is the need of the hour, this is what we need. I think his reaction was, that's insane, why would I do that? I have never seen six slices in this pyramid; I mostly have three slices.

We have a question there: do these two workflow layers essentially go end to end inside the system? End to end in the sense of multiple steps to achieve one business flow. The example I was giving is buying a product on Amazon: there are multiple steps you go through, and there are a bunch of variations. I might be an existing customer who doesn't need to sign up at checkout and already has payment details stored, so I don't need to enter them; that's one workflow, an existing customer paying with existing credit card details. Another would be a guest user paying for the first time. But in the workflow test we don't actually go to the payment gateway, the inventory system, or the other subsystems that sit outside the application you're building. In the end-to-end test we do cross over to them, and those tests have a slightly different perspective; it's not the same test written at different levels. Each has its own focus: are the third-party systems updated correctly, are they working correctly with you? In the integration tests we try to do a lot of negative path testing: what if we send corrupt data, what if we get corrupt data back, how will that behave, without worrying
what caused it; we are just focusing on the integration portion: are we able to talk to these APIs correctly or not? Does that include the UI? No, none of these include the UI; only the 1% on top includes the UI. The other layers cover mostly the backend logic; layers one through five don't test the UI. The top layer also has a bunch of UI unit tests: if you're doing JavaScript, for example, that would use JsUnit or QUnit or one of those, with a bunch of automation there. It's better if you speak into the mic, because otherwise it won't come through on the video. Yeah, so I see the percentages add up to 100; how did you actually arrive at these percentages? That's what we're going to talk about next. I think we follow the same approach, but the intent was: we have unit tests, and then we combine workflow, integration, and domain logic acceptance tests as part of stories, and the end-to-end flows are essentially what we call system tests. So why do we even need to differentiate workflow, API integration, and domain logic acceptance tests? For me they are more or less the same; you just use different tools. Your story would essentially be, say, checking out a product, or take another example, logging in to Amazon.com. If that's my story, I'll have acceptance criteria for it, and if I have to automate it using domain logic acceptance tests, I might have service layer tests or UI tests, depending on what kind of tool I use; but for me all three are categorized as one. So the point I'm trying to make is: you talked about login. If I just log in, what value do I get? Zero value; that alone doesn't add value. But every story has some feature associated with it; my story is login, and my product owner or business analyst tells me, just finish this feature for me; the value is, I want the customer lead data. For me it could be the
value for the business; that could be the value. Well, the point I'm trying to make is that your workflow is essentially articulating what value someone will get. A workflow, to me, is about someone being able to buy a product, and that is not easy to capture at the story level, because your stories are much more granular, as you were pointing out. So when would you go back and do that? At the end-to-end level? Not the end-to-end, but the business logic level, leaving out the external dependencies; you might have payment gateway integrations and things like that, forget about those. But where do you ensure your session management is working correctly? When a user selects a product it goes into their session; when they come back and select the next product, in the meantime someone else could have bought it and the inventory could be zero. There are all these scenarios, and those kinds of things we handle at the workflow level. You don't need to complicate your end-to-end tests with all of that, because to get to that point you might have to do a bunch of other setup first, which is unnecessary; you don't have to drive everything from the end to end when you can do it at a much lower level. It also gets slower as you go up the pyramid: the maintenance increases, all the problems we highlighted increase, and you stop getting pinpointed feedback. The more you can push things down, the better, because reliability increases, speed increases, and a bunch of other factors improve. And slicing it into more layers helps you, because the purpose of an integration test is only to see whether the two endpoints can talk to each other. There could be 50 places in my product where I call an external product; I don't need to test that 50 times. I just need to test it once, which is at the
integration test level, just ensuring that I can pass data across. But I also want to do some negative path testing: what happens if I get corrupt data back from them, how does my system handle that? Some of those cases are not even possible to exercise at the end-to-end layer, so the lower you go, the more negative path testing you can do. I think we'll need to run; we have about 10 minutes and we've not even gotten to our story, so we need to move quickly now. Sure, and we'll get back to you; we'll cover more questions later.

So after discussing with Naresh, we realized this was the way to go for us if we wanted to do automation in an optimal way. Can you imagine what the biggest hurdle in this approach would be? Any guesses? "I think it's your 70% of UI tests." Yes, we have some legacy and we have to deal with that legacy. "Looking at the pyramid, you are aiming for just 1% UI there, so that's an ambitious target for the future." Okay, any other guesses? "I think the challenge must be the legacy code, and how much of that 70% will be ready to touch." All of this is true, but I was coming more from another angle: all this automation happens below the UI, and the testing team that is going to automate these tests is not really familiar with the code. That was the biggest challenge we were facing: if we wanted to embrace this approach, how were we going to use our UI-focused team? But we wanted to do this, so we started discussing it with our QA teams, convincing them that this is how they need to automate, which means they need to upgrade their skill sets. There are a lot of efforts in progress, even now, in our organization where we are training them in development and in automation tools; for example, we needed to use Cucumber for this approach. We even wanted them to write, and I will show you an example, something which is not exactly a unit test, because
we are hitting the database there; but those types of tests are typically business logic tests. That's the biggest challenge we faced, and I'm sure many of you will face it when you want to embrace this. I was talking about this example here, the screen shown earlier for the create-groups functionality: when we investigated the code for that functionality, we realized there are only a couple of classes involved, and if we could write below-UI tests for them, we could cover the majority of the business logic without touching any UI screen. In this example we basically cook some test data as preconditions (the set of attributes I showed you on the UI), then call a class to create a group, and then validate that the result is as expected. Let's also look at the actual code and how fast it runs: here there are some 17 scenarios automated at the class level. I trigger them and they start executing at the backend; these actually hit the database, so they are business logic acceptance tests, or domain logic acceptance tests, not unit tests, even though the tool we use is JUnit; that doesn't matter. And that's it, they've already executed: about 9 seconds for 17 scenarios here. And this is a snapshot from our CI build machine, where they take even less than a second, executing in only 0.4 seconds. That's the speed at which you can execute these tests.

I'm now trying to rush. This is where we are: we started this almost a year back, and of the automation we now have for this product, the majority of the tests fall under the workflow, unit, and integration categories. There is still 20 to 30% of the application's UI that is not automated, and we are working on that; the UI piece is shrinking now, because as we
are converting the test to bottom layer we are getting rid of those and we have already started you know ripping benefits out of this approach because benefit is now automation is a part of the development right so when the story is under development developers are writing unit test because they are embracing the TDD testers are also writing behaviour driven test or the test which I just shown you it is written by testers now by developer they are also trying to automate things when the stories are being developed so when the story is ready for acceptance you are almost 60 to 70% of your regression for that piece of code is already automated it is running on CI it is very fast so each and every build is giving us feedback on these functionalities and now UI test are minimal and they are taking care only of UI functionality lot of collaboration happening between developers and testers because from day one they have to sit together they have to understand the functionality they have to slice and dice the functionality and check where the automation can happen for this acceptance space so they are crossing the boundaries now as I said you know testers are understanding technology more trying to understand the code more developers are also sitting with them understanding the domain more so in a way it is adding lot of value this is another you know the scenario which I have shown you you know create and commit forecast group if we want to run 80 UI based scenarios it would take us 300 minutes because there is lot of backend analytics involved in grouping the data but if we try to automate those using this pyramid and you put UI test for minimal UI regression required and you put integration test for workflow and you do all the you know functionality testing or business logic testing at the test level which I have shown you so now it is taking less than 9 minutes so that's the kind of benefit which we are getting when we are you know using this type of regression 
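A below-the-UI acceptance test of the kind described here can be sketched roughly like this. All the names (`Group`, `GroupService`, the attribute values) are hypothetical stand-ins for the application code, the in-memory map stands in for the real database, and the talk's actual tests ran under JUnit; this sketch uses a plain `main` method so it stays self-contained.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-ins for the application's domain classes.
class Group {
    final String name;
    final List<String> attributes;
    Group(String name, List<String> attributes) {
        this.name = name;
        this.attributes = attributes;
    }
}

class GroupService {
    private final Map<String, Group> store = new HashMap<>();

    Group createGroup(String name, List<String> attributes) {
        if (name == null || name.isEmpty())
            throw new IllegalArgumentException("group name required");
        Group group = new Group(name, attributes);
        store.put(name, group); // real code would persist to the database
        return group;
    }

    Group find(String name) {
        return store.get(name);
    }
}

public class CreateGroupAcceptanceTest {
    public static void main(String[] args) {
        GroupService service = new GroupService();

        // Precondition: the attributes a user would have picked on the UI screen
        List<String> attributes = Arrays.asList("rate-code", "room-type");

        // Exercise the business logic directly -- no browser, no UI screen
        Group created = service.createGroup("Q3-Forecast", attributes);

        // Validate the outcome
        if (!"Q3-Forecast".equals(created.name))
            throw new AssertionError("unexpected group name");
        if (service.find("Q3-Forecast") != created)
            throw new AssertionError("group was not stored");

        System.out.println("create-group acceptance test passed");
    }
}
```

Because the test drives the service class straight through to the persistence layer, it stays a "horizontal slice" of the application while skipping the slowest part, the UI.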
This chart is just a comparison. The first bar is the manual regression period, so a lot of regression time was required there. It reduced slightly after we did UI regression automation, and the last bar is the state we are in now: about a week. We can now regress our application within a week, and we are confident that as we automate more and more, it should come down to maybe 2-3 days. That enables us to release much more frequently than the 3 months of the previous cycle. Now we really feel agile; it is much easier for us to move to agile-style release cycles and development models.

There are still some challenges. I already mentioned one; somebody here mentioned legacy code. If you have legacy code, it's not very easy to go directly to the unit level. But what we are trying to do is still build that safety net at the higher levels, and our thinking is that once we have that safety net, it will enable developers to move forward and refactor the code, knowing the safety net is there; and as they refactor, those tests will move down to unit tests. So at least we are making some logical progression.

Another challenge is mapping tests to the various layers. We had that discussion for almost 5-10 minutes; it's the common challenge everybody will face when trying to embrace this approach. Initially you will struggle to map your tests to the various layers, but over a period it should become easier. Then there is building team competencies, which we also discussed: you need to improve the technical competencies of the QAs; they can't be just manual testers. And pairing and collaboration are the backbone of this approach; it cannot happen if people are working in silos. Developers and testers have to talk from day one to understand how they want to automate. So these are some of the challenges, but I think we are happy to face them.
We are out of time, okay. So, the key learnings. Use your automation only where it is really necessary; don't use it just because it is available. Automation is the team's responsibility, not just one department's responsibility; everybody needs to contribute to it. And testability is an important criterion: moving forward, when you are designing new features of the application, keep in mind that you need to build an application which can be tested easily, so that you can build the testing network around it easily. Those were the three key learnings we wanted to share.

Alright, thank you Sachin. We made it sound very easy, but it's been a hard journey, right? It is a hard journey, so hopefully more people will embrace it.

I'll just repeat her question: she asked how we built the team's capability, both developers and testers; in fact not just testers but both. We had a whole series of introductory training sessions, but more importantly we spent a lot of time sitting down and pairing with people. A lot of pairing time was spent working with them, in the context of their own work, on how they could implement some of these things. When you just run training it doesn't really appeal to them; they are not really interested because they don't see the value. When you actually sit down and work with them and say, okay, you're doing all of this manually, let's sit down, we'll automate it, and this is how we'll go about doing it, then suddenly they say, wow, that was so hard and now you've made it so much simpler. They see the value, and then we follow it up with trainings. I think Ashish and his group have been running Java training sessions for testers. We also run programming-logic courses at Ideas, where basically we give a logic problem and people have to come up with pseudocode, so that they think in terms of logic, not just in terms of UI implementation.
Then they have to go back home, work on it, and come back with code the next day. We've been investing quite a bit in that, because we believe it's the path forward. These people have really good, deep domain expertise, and now we are asking how we can leverage that domain expertise in an automated fashion. So: pairing, lots of training in logic and programming, and sending them to various courses; a lot of online courses are available as well.

It's not just BDD. I think there are a lot of misconceptions in general; BDD is just a type of TDD. Dan North will be really upset, he's a good friend of mine, but BDD is just a type of TDD. I don't know if you attended the other session where I was talking about this: TDD starts when you think about a new product idea and you say, I want a cheap test to validate my idea. TDD starts right there and goes all the way down. So BDD is just a type of TDD; it's more of a mindset that you need. What I guess you were referring to is inside-out test-driven development, which is one type of test-driven development.

I just want to understand the categorization of the test cases that you did, with respect to domain-logic acceptance and unit test cases, if you can explain.

A unit test is just one class, tested in isolation. A domain-logic test goes end to end: take a service, and it goes all the way through; it can be considered a horizontal slice of the application. All of them are horizontal slices, yes. Even your business logic is a horizontal slice of functionality, because you can have many more layers of sophistication on top of it, but it is still a horizontal slice. We are not talking about just the UI or just one layer; it cuts across. What we want is feedback, and we want fast feedback.

In the course of preparing test cases for the existing code, was refactoring also being taken care of, considering certain design smells in some of the components?
So when people talk about refactoring, it's a chicken-and-egg problem, right? If you don't have tests, how do you refactor? But if you want to refactor, you need to have tests. We've been teaching people techniques of what we call safe refactoring. There are a bunch of things you can do in your IDE which we call safe refactorings, and there are various techniques you can use where, even if you don't have tests, you can at least break up, isolate, and decouple your code, so that you can then go in, write tests, and refactor after that. Even when we write tests at this stage, we talk about writing scaffolding tests: not the actual tests you would want in production, but scaffolding tests. You get in there, do the job, refactor, then write good tests, maybe even test-drive the code, and then delete the scaffolding tests.

You showed a graph of development time versus regression time, and how development time grew while regression time shrank when you did automation and moved to inverting the pyramid. I hope I make my intention clear here: how much of that development time was spent writing these tests? That is also important information, and I found it missing there, or at least a rough indication of it.

I don't have exact data for that, but if you look at that graph, the overall release cycle still reduced. Even though development time was growing, because you are now spending a lot of time on test development and BDD, your manual regression has reduced significantly. So even though development has expanded a little, the total is still less than your previous release cycle.

Is it that significant, or is that because it's still early stages? What you are talking about is the regression testing part, right? The other part is the time spent during development.
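The scaffolding-test idea mentioned above can be sketched as a characterization test: before touching legacy code, you pin down whatever it currently does, right or wrong, so refactoring cannot change the behaviour silently. The `legacyDiscountPct` method here is an invented stand-in, not code from the product.

```java
// A sketch of a "scaffolding" (characterization) test: capture the current
// behaviour of a legacy routine before refactoring, then throw the scaffold
// away once proper, intention-revealing tests exist.
public class ScaffoldingTest {

    // Imagine this logic buried inside a large legacy class.
    static int legacyDiscountPct(int nights, boolean corporateRate) {
        int discount = 0;
        if (nights > 7) discount += 10;
        if (corporateRate) discount += 5;
        return discount;
    }

    public static void main(String[] args) {
        // Characterize current behaviour -- whatever the code does today
        // is what these assertions record.
        if (legacyDiscountPct(10, true) != 15) throw new AssertionError();
        if (legacyDiscountPct(10, false) != 10) throw new AssertionError();
        if (legacyDiscountPct(3, false) != 0) throw new AssertionError();

        System.out.println("behaviour pinned down; safe to refactor");
        // After refactoring into small, well-tested units, delete this scaffold.
    }
}
```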
There are a lot of misconceptions that when you do test-driven development, you will spend a lot of time writing tests. Why do those misconceptions exist? The first thing, if you look, is that when developers write code, they don't just write code and ship it. They write a little piece, bring up a console, and manually test it. All we are saying is: take that time and automate it. Just reorganize the amount of time. People spend a lot of time debugging, because they go end to end when they debug; here, you can just write a little test hooked right into the place where you think something is wrong, and debug there. So debugging time goes down, manual testing time goes down, and your hand-over time goes down. If I hand over to someone else, they normally have to work out which edge cases I handled, because that's not documented anywhere; if you have tests, now it is. Put those three together and you actually reduce the total time significantly. It's a beautiful way of documenting, too. The only challenge is that there is a learning curve, at multiple levels: you are rethinking how to do programming in the large, and that is a big learning curve. You will see an initial hit because of that, because people are rethinking even how to program; everything they thought they knew is being challenged. That will take time, but once you get over it, you get the savings from the other things I talked about.

I just had a question about unit tests for legacy code. What approach did you follow? Say you have two or three thousand source programs to bring under the umbrella; how do you actually go about it?

Let me try to answer. As I said, there were some pieces where directly putting everything in unit tests was not yet possible, because the functions are big and there is a lot of logic in a single method or a single class.
You can't directly go to, say, 70 percent unit test coverage. But at least what we are doing right now is moving down from the UI tests for that code. So right now we don't have exactly the pyramid; we have something in between. But since we are here now, it will enable developers to refactor the code, because this gives them a safety net, a faster safety net, and as they refactor, these tests will start moving down to unit tests. Maybe eventually we will reach the stage where the unit-test percentage really is much higher than the higher-level tests.

If I understand right, what you have done so far is build a safety net for the legacy code. Correct. And for the new code you are writing? For new code we are anyway starting with more granular tests, so the coverage is as it should be. My question is: say you have a legacy application and you want to invert this pyramid; how do you go about building the unit tests for that legacy code?

Even at Ideas we've looked at such code and said, okay, this is what it is doing, so let's try and write tests for that. There are a lot of different techniques for working with legacy code. There is Michael Feathers' book, Working Effectively with Legacy Code, which talks about the general approach of how you go about breaking things down. I talked about scaffolding tests: we write scaffolding tests, we get in, we refactor, and then we cover it up. You have to have a multi-pronged strategy, in my opinion. You cover some things from a higher level, from wherever you find your inflection point; that's a term used in legacy code work. You find an inflection point, and from the inflection point you can go in, but that would typically be at a higher level, not at the granular unit level.
Sometimes those classes might be private classes, or one big private class, and you can't even access them, so you start there instead. That's why they are called scaffolding tests: they are like the scaffolding that people put up outside buildings. Then you go in and refactor that little piece; you pull it out, you improve the design, and that helps you test-drive it, or at least cover it with tests. That's the approach we've used with multiple teams.

For one of our projects, we also never wrote tests for the legacy code directly as a blanket exercise, but if we touched any part of that code, then we started writing tests, so over a period of time the coverage built up. I was just trying to figure out if there is a similar approach one should follow. That is what we are doing: as soon as you touch the code, you start writing tests. If a bug comes in, you start by writing a test that reproduces the bug, fix the bug under the test, and there you also have the opportunity to refactor that piece; otherwise you keep going in such a way that you always have some material to improve.
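The touch-it-then-test policy described here might look like this in practice: a reported bug is first reproduced as a failing test, the code is fixed under that test, and the test stays behind as permanent coverage. The occupancy calculation and its integer-division bug are hypothetical illustrations, not code from the product.

```java
// A sketch of bug-first testing on legacy code: reproduce the bug as a
// failing test, fix it, and keep the test as a regression guard.
public class BugFirstTest {

    // Fixed version. The original (hypothetical) legacy line was:
    //   return 100 * (occupied / total);   // integer division -> always 0 or 100
    static int occupancyPct(int occupied, int total) {
        if (total == 0) return 0; // guard added along with the fix
        return (int) Math.round(100.0 * occupied / total);
    }

    public static void main(String[] args) {
        // This assertion failed before the fix; now it guards against regression.
        if (occupancyPct(45, 100) != 45) throw new AssertionError("bug is back");
        if (occupancyPct(1, 3) != 33) throw new AssertionError();
        if (occupancyPct(0, 0) != 0) throw new AssertionError();

        System.out.println("bug reproduced, fixed, and covered for good");
    }
}
```

Each bug fixed this way leaves one more test behind, which is how coverage accumulates in code you only touch occasionally.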