All right, so we'll get started. Good afternoon, guys. First of all, thanks: in spite of the cricket match happening, you guys are here to listen to us. My name is Prasad. I work for a company called Ideas, where I manage two product lines. And this is Naresh; of course, everyone knows him. Today we are going to share a journey, the transition of legacy code to make sure we follow agile engineering practices in a legacy code base. I'll start with a question: what do you mean by legacy code? Anyone? "Code without tests." Perfect. What else? "Code which has existed for years and years together." Okay. What else? "Code which we are scared to touch." If there are ten if-else conditions, a new requirement makes us put in the eleventh else condition, right? We are just scared of legacy code. Okay. Code with no continuous feedback: no test cases running, no CI/CD integration, so we don't know what will happen if we make any changes. A lot of manual testing. All right. So here's our journey. As I said, I manage two products, and on one of them we faced exactly this situation: no code coverage, no test cases running, no CI/CD integration. As India is batting in Sydney, we'll bat for the next 35 minutes; hopefully we won't get out. Quickly, to set the context: I'm sure people have stayed at hotels, and we understand that the price quoted to you when you come into a hotel keeps varying, depending on who you are, where you come from, and how frequently you stay. That, in a nutshell, is what Ideas does. One of Ideas' products helps hotels quote the optimal price to different kinds of customers so that they can maximize their revenue.
While they started in that space, there was a lot of demand for the analytics engine they had built, and they spread out into another vertical: car parking in general, but specifically airport car parking. Just as a data point, when you go to London Heathrow Airport, the car parking there is managed by this software, and that was the first deployment we had of this product. Back in 2007 we built this product with Heathrow as the first beta client, and we were very excited. You can imagine the car parking industry: if I have to sell a hotel room, I can only sell one room for one night at a time, right? But for a car parking lot, if the average length of stay of a car is one hour, technically I can sell the same parking spot 24 times a day. So there is a whole lot of scope for helping our clients make more money by making sure that we quote the right price for the right parking lot at the right time. We were very excited. Heathrow was our first client, the product was targeted at car parking lots near airports, and there are hundreds of airports in the world, so we thought we were going to get many clients. Only to soon figure out that the market was not ready to take up car parking revenue optimization as an opportunity to make money. So since 2007, other than Heathrow, we couldn't get any other client, and our product went into maintenance mode; the product was working fine, and hardly any support was required. Until last year, when we figured out that the market is conducive now: it is picking up on the idea that you can make money by charging the right price for parking lots as well.
That's where we had a couple of clients last year, already in the pipeline but not yet completely onboarded. So product management and the market-facing team sat together: what are the new things we want to put into our product? Our system needs data. The more data, and the more granular the data you give it, the better our price quotes will be. So they said, okay, we have new requirements coming up: get more and more data, and make other new features available to the end client. Basically, the more data we have, the better we can forecast and the better analytics we can run, which meant that between what the product was doing in 2007 and what it needed to do in 2014, there were going to be quite a lot of changes in terms of the data we consume, the forecasting algorithms we run, and so forth. And then came the first surprise. Can you guess what it could be, especially given that no builds had happened for the last seven years? The build server had crashed. We said, okay, let's start building some features, let's see where we are right now, and the first thing we figured out was that the server had already crashed. Fortunately we could recover the code base from a backup, and we put it into SVN, because our other products were already in SVN, so that way we could manage all the product lines consistently. So we moved it to SVN using the tools available. And just like what you guys mentioned, here is the legacy state of our product: zero running test cases, no CI/CD. The production schema, and this is a little embarrassing, was not checked in; a bunch of scripts were missing, because of which we were not able to build the database. The seed data was not in good shape either.
An old stack: JUnit, the version we were using, yes, I've written it correctly, 1.5; Spring 2.0; and Ant. That was the situation of our product. Last year we engaged Naresh, and we were all excited that we had to implement agile engineering practices. But how do you do that when nothing is there? That was the challenge, and actually we were excited; we took it as a challenge. The fact is, the business is not going to wait. It's not going to say, okay, take six months, get the agile engineering practices in place, and we'll give you the requirements later. So we said, okay, we are up for it; let's see how it goes. These were the typical engineering practices we thought we should have in place: dev setup, safety netting for the legacy code, the test pyramid, clean code, and CI/CD integration. These are very basic practices that we wanted in place before we started jumping in, adding new features and enhancing things, while keeping the existing client, which is running, untouched. We don't want to break that product while we keep improving these things. As an owner, I wanted all of it, right? It's obvious: the more of it we implement, the better our code will be, and the better we'll be able to scale, take on new requirements, and support new feature development. But our management said: don't forget, it's all good, but by itself it doesn't matter to us. What matters is the business requirements, and when are you going to deliver them?
If we say we are going to deliver next year, that we are going to take twelve months because we need six months to get the practices in place for the legacy code and then another six months to actually develop the features, nobody is going to listen, right? You're trying to close two big clients, two big airports, and in our case you need to actually crunch a lot of data that comes in from their side, show them some results, show them what kind of improvements you can make, before the deal actually gets closed. That's why he said it's still in the pipeline: we run about a three- to six-month pilot before they have confidence to say, yes, this is actually giving me better predictability than what I can calculate in my head. So now we were already facing a challenge. We had these engineering practices to implement; which one do we pick first? All of them are important. What we did is treat it just like a product backlog; call it the agility backlog. When prioritizing a product backlog, you pick up the most important item, the one that is going to deliver something fruitful. So we did our sprint zero, as we call it, where we did the backlog prioritization. We sat together for a week or two, I guess, and came up with the list: these are the things we are going to implement, and we'll tackle them one at a time. While we were doing the backlog prioritization, we did a few more things, because we wanted to start well; developers and QA should not be scared of the legacy code. So we spiked out how to write the workflow test cases.
We also worked on removing unnecessary, outdated items, because we were moving to agile: HLD and LLD documents, EAP project files, HTML mockups, and even a JBoss instance, a JBoss deployable, that was checked into SVN. We deleted all of that, simply because it is no longer required in the way we are moving ahead. We went from a 2.1 GB code base down to 500 MB. Did that deliver any business value? No, but as far as maintainability is concerned, it definitely increased. In fact, the first step is just to get a handle on your environment: clean things up and get it into a shape where you have confidence. So we said, let's go do that; let's get rid of the noise and focus on the signal. We also spiked out how to spin up Jenkins, and whether the current code base could even be built. Forget about running test cases; are we able to build using Jenkins? It was a pretty quick spike to figure out how to go about it. That was essentially what you would call sprint zero, or a pre-sprint: before we even got started, we wanted to sort out some of these things. Then we actually had to prioritize how we were going to tackle these challenges. Next. One of the core principles that we follow: just as doctors take the Hippocratic oath, we as software professionals also take it. First, do no harm. You don't leave the code base in a much worse situation than when you took it over. So how do we go about it? We were looking at spec 1, which does a whole bunch of things, and now we are going to get spec 2, which we need to implement for the new airports that are coming online.
One easy way to do this is to go to wherever in the code base you are going to deviate from the original and put in an if-else condition: if old version, do x, else do y. You could go through your code base, find every decision point, and have those conditional flows throughout. But tomorrow, when we decide to knock off version 1, spec 1, which we eventually want to retire, we would have to go back through the entire code, look at everywhere we had those conditions, and start deleting them. Clearly, at this stage, we said that's not how we want to do things. What we ideally wanted was a better design technique where we don't have this conditional logic in the code: maybe dependency injection, or something as simple as polymorphism, different techniques that could rescue us from falling into the trap of if-else conditional logic. But then Prasad said: if I'm going to go and tweak the existing code, how do I know I didn't break something? The last thing I want is, while refactoring or while fitting in this new requirement, to break the existing airport's code base; then we lose the existing customer and we don't even get the new one. So we had that dialogue, and we said we need some kind of a safety net. In the test pyramid we talked about this morning, we wanted to start with workflow tests: can we write at least a few workflow tests which cover a few scenarios for the existing spec? Then we have some handle on what is going on currently, and as we make changes, we can make sure that remains intact; we don't break spec 1, at least. That's what we decided. We had some challenges at the database level, which we'll come to.
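To make that design alternative concrete, here is a minimal Java sketch of the direction described above: spec branching replaced by polymorphism and constructor injection. All class names and the pricing rules are hypothetical, purely for illustration; they are not the actual product code.

```java
// Hypothetical sketch: instead of scattering "if (spec1) ... else ..." branches
// through the code base, the variation point is captured behind an interface
// and injected once.
interface PricingSpec {
    double quote(double baseRate, int hoursParked);
}

// Existing behaviour (spec 1) stays untouched behind its own implementation.
class Spec1Pricing implements PricingSpec {
    public double quote(double baseRate, int hoursParked) {
        return baseRate * hoursParked; // flat hourly pricing
    }
}

// New behaviour (spec 2) lives in its own class; deleting spec 1 later means
// removing one class, not hunting down conditionals.
class Spec2Pricing implements PricingSpec {
    public double quote(double baseRate, int hoursParked) {
        double price = baseRate * hoursParked;
        return hoursParked > 24 ? price * 0.5 : price; // illustrative long-stay discount
    }
}

public class QuoteService {
    private final PricingSpec spec;

    // The active spec is injected (by hand or via a DI container), so callers
    // never branch on which version is running.
    public QuoteService(PricingSpec spec) { this.spec = spec; }

    public double quoteFor(double baseRate, int hours) {
        return spec.quote(baseRate, hours);
    }

    public static void main(String[] args) {
        QuoteService v1 = new QuoteService(new Spec1Pricing());
        QuoteService v2 = new QuoteService(new Spec2Pricing());
        System.out.println(v1.quoteFor(2.0, 30)); // 60.0
        System.out.println(v2.quoteFor(2.0, 30)); // 30.0
    }
}
```

Retiring spec 1 then means deleting one implementation class and its wiring, rather than re-auditing every conditional in the code base.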
Our idea was to have an environment where we could run the workflow tests for spec 1, at least, before we started making design changes and so on. A few things you can notice here: the tests still hit the database; they still had information about which database to connect to and all of that. That's what we went with; I think Naresh was not aware at the time that we were connecting to a database. But we went ahead and said, okay, at least we have some safety netting for spec 1. Hang on, we will try to touch on that; we just finished a session this morning explaining all of this, and I'm afraid if we get into it we might run out of time. So let's assume some kind of business-logic validation; we'll explain it a little more as we go. We identified those test cases and said: no matter what, these test cases should not fail, because if they break, there is a potential that we are going to hurt the existing client, which we cannot afford. The next thing we did was make sure every developer had these test cases running locally, because they are going to touch and modify the existing code, and while they do, they should at least be making sure the existing workflow does not break. Now all was good: we had some high-level safety netting for spec 1, and the test cases were running locally. We said, okay, that's fine. But who knows whether a developer actually runs the test cases or not? And what if they break, and we come to know after three days that a check-in made three days back by developer X actually broke the code?
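As a rough illustration of what a spec-1 workflow test looks like in spirit, here is a self-contained Java sketch: feed in a day's data, run the whole pipeline, and assert on the decisions that come out. Every name and the decision rule are invented stand-ins; the real tests drove the actual application end to end (and, at this point, still talked to a database).

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of a spec-1 workflow test: a day's feed goes in, the
// whole pipeline runs, and we assert on the decisions that come out.
public class Spec1WorkflowCheck {

    // Stand-in for the real end-to-end pipeline: ingest -> forecast -> decide.
    static List<String> runDailyPipeline(List<Integer> hourlyArrivals) {
        int total = hourlyArrivals.stream().mapToInt(Integer::intValue).sum();
        // decision rule kept trivially simple for the sketch
        return Arrays.asList(total > 100 ? "RAISE_RATES" : "HOLD_RATES");
    }

    public static void main(String[] args) {
        // a busy-day feed should end in a rate-increase decision
        List<String> decisions = runDailyPipeline(Arrays.asList(60, 30, 40));
        if (!decisions.contains("RAISE_RATES")) {
            throw new AssertionError("spec 1 busy-day workflow broke: " + decisions);
        }
        System.out.println("spec 1 workflow intact: " + decisions);
    }
}
```

The point of such a test is not precision but coverage of one whole scenario: if any step in the chain changes behaviour, this one check trips.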
There was no quick feedback. So we thought, how about spinning up Jenkins for it? Let's make sure that as we check in code, the build is made, the build is deployed, and the test cases run. If they fail, it means the last check-in has some issue, right? So we took that approach: let's go the Jenkins way. Of course, to spin up Jenkins and have the build jobs available in Jenkins, we first needed Ant targets, so that's where we spent time on the Ant targets. We have a build-CI job, which builds the code base for the CI environment; it is a little different from the local environment because of the parameters it uses. Then we have a deploy job, which picks up the artifacts uploaded to the artifact repository, downloads them, and deploys them onto our continuous-test box, the CT box as we call it. And then the next job that gets triggered runs the workflow test cases. Once we had those targets ready, the next thing we did was spin up Jenkins and configure the jobs in it. This was the state when we had things in place: on every check-in, Jenkins would build, deploy, and at least run the workflow test cases. All right, so we've got a basic safety net in, and we've got it hooked up with CI, so we're getting feedback now. What is the next thing we want to attack along this journey?
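The chain just described (build for CI, deploy to the CT box, then run the workflow tests) could be wired up with Ant targets roughly like the following fragment. All target, property, and path names here are illustrative assumptions, not the actual build file.

```xml
<!-- Illustrative Ant targets; names and paths are hypothetical. -->
<target name="build-ci" depends="clean,compile"
        description="Build the artifact with CI-specific parameters">
    <jar destfile="dist/app.jar" basedir="build/classes"/>
</target>

<target name="deploy-ct" description="Deploy the downloaded artifact to the CT box">
    <copy file="dist/app.jar" todir="${ct.deploy.dir}"/>
</target>

<target name="workflow-tests" description="Run only the workflow test suite">
    <junit haltonfailure="true" printsummary="on">
        <classpath refid="test.classpath"/>
        <formatter type="xml"/>
        <batchtest todir="reports">
            <fileset dir="test" includes="**/*WorkflowTest.java"/>
        </batchtest>
    </junit>
</target>
```

In Jenkins, each of the three jobs then simply invokes its target (for example `ant build-ci`), with the deploy and workflow-test jobs chained as downstream builds so every check-in flows through all three.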
I work with them on alternate weeks, so I was off that week, and the next week when I came back I was looking at how the workflow tests had been implemented. What we found was that there were places where we were actually hooking into the database and doing other such things. We said, well, we'll ignore all this for now; let's just focus on refactoring and simplifying the code so we can make progress on adding spec 2. At this stage we are good enough; let's start delivering some business value. So we started focusing on three levels of tests. We looked at unit tests, essentially at the granular level: can we build a safety net of units? Then, if you take a set of business objects together, they provide some kind of business validation. For example, we want to know at what price to sell this particular car park lot; that is one set of objects that interact and decide, based on some algorithms. So we said, can we encapsulate that and test it as business logic? The last level was the workflow, which applies to each spec: we want to ensure one scenario works completely, end to end. You get some data on a daily basis, you crunch that data, you generate certain kinds of decisions. There could be scenarios where you don't get all the data, only a delta of the data; can you handle those? There were a lot of different scenarios we needed to cover, and that's the workflow test. Just a very quick introduction, but those were the three levels of tests we said we'd attack next. And that's what we started focusing on. Sorry? Yes, we wrote them by hand. So that was the mantra. Why did we do that? We had legacy code, right? Typically it's huge. It's always huge.
Writing unit test cases by hand for the whole thing has never been an option, because it's just so big; I would have three large project teams just sitting and doing nothing else for three months. That's exactly what we wanted to avoid. We didn't want to do a big up-front effort. What we wanted was: as and when you touch a piece of code, you unit test it. So if you notice, for the legacy code we were not writing unit tests wholesale; we only wrote a bunch of workflow tests and stopped at that level. The workflow tests helped us understand the end-to-end flow of what we want to achieve. For spec 2, the new spec we were building, we wanted to test-drive it. At that stage we were saying: now that we've achieved the safety net for the existing thing, for the new thing let's start writing unit tests. We were not going to spend a significant amount of time trying to build the entire safety net, because that's not going to give us the biggest bang for the buck. We wanted to focus on the parts we were going to touch, and put those into unit tests, business-logic tests, and workflow tests. And I am a strong believer that using tools that generate tests for you is a disaster; I've done that and burnt my hands multiple times, and it's not a good strategy. We can talk in more detail if you're interested, but in this case we consciously decided not to use any generation tools. Wherever we touch, we write unit tests. While you write unit tests, you're actually going to refactor the code and improve the design; our purpose there was essentially to improve the design, not just to write tests for the sake of writing tests. 10 to 15 percent. So the mantra was: if you're touching the existing code for a new enhancement, first see if there are test cases available.
If not, try to write the unit tests. So instead of taking a complete horizontal cut, writing test cases for the entire thousand class files of the legacy code, what we made sure was: if we are touching it, let's see whether we can write unit test cases. Most of the time it's not possible, because with legacy code you can have a single method of a hundred lines. But then we would go ahead and see if we could break it down into unit-sized parts and write the test cases around those. That was the approach: while we are moving, let's see if we can make some improvement to the legacy. Just incremental steps. People typically talk about the chicken-and-egg problem when you are trying to write tests for legacy code: to write tests you need to refactor the code, and to refactor the code you need tests. There is a way out of that, which is to use the IDE to do a bunch of safe refactorings. There is a set of refactorings the IDE will do for you that does not require you to have the safety net of tests. For example, extract a method: I don't think anything will break, and the IDE will warn me if something will go wrong. So we could extract methods, we could move things around; there are pretty safe ways of doing those, and then we could mushroom out something and unit test it. We used a bunch of those techniques to write unit tests wherever required. If a unit test was not the right level, then we would move up a layer. The idea was to go as low in the pyramid as possible, rather than starting right at the top and trying to do end-to-end tests. The next item on our wish list was the test pyramid. If you attended the 10:30 session this morning, you already have the idea. We wanted to build this pyramid where 70% of the scenarios are covered by unit tests,
10% in business logic, and so on and so forth. Why did we want to do that? First of all, tests should be written such that they give the correct feedback at the correct level. If my unit-level logic is failing, it makes no sense for a UI test case to be what tells me something has failed, because if I get the feedback from the UI layer, I'm not sure which part failed; the UI calls into many different pieces of code underneath. So feedback should come at the correct level. That's why we wanted to implement this test pyramid. Another advantage is that it avoids test duplication: if something is covered at the unit or business-logic level, there is no point in writing a UI test case for it again. For example, if we run some algorithm at the back end which gives us output at the front end, there is no point in testing that algorithm through a UI test case when it can easily be tested at the unit or business-logic level. Third, if you write hundreds of UI test cases, as you could see in the 10:30 session, the build time grows too long, because UI tests come at a cost: you have to load the web driver, log in to the system, access the screen, punch in a bunch of inputs, and then check the feedback. So we wanted to keep the number of UI test cases to a minimum. Yes sir? The team works together on this; we didn't really have that distinction, because by now in the organization we have made the transition in most places. We are not saying there is a QA department solely responsible for this; people work together to put this pyramid in place. Correct. That's correct.
No, the point is that the testers' confidence increases when they start actually working at the core layers of the tests and getting insight into what is actually covered at each layer; they then have more confidence in the overall tests. So you are adding to the advantages listed here. How do we define code quality? How was it? It was poor. It was poor, yes, but is there something specific? Obviously legacy code is going to be poor; no one hands you nicely written legacy code. I will come to that. The thing is, when code quality is poor, identifying a unit and then having confidence in that unit is very difficult, because you are not writing unit test cases on the legacy code; you are writing them on the newly altered code, the piece you are touching. So I just want to understand what the baseline was. See, we were writing unit tests on the legacy code as well: wherever we touched, we would write unit tests on the legacy code too, not only on the new code we were writing, because the changes we were making were in the legacy code; it was not a separate system on the side. And that's the hard part: you have to go in and make changes where you have no safety net other than the workflow tests. By then we had put some safety net in place, but we didn't have a safety net at the very granular level. We had 800-line methods; we had files of a thousand, ten thousand lines of code; and you basically have to break those apart before you can write tests, which is why, as we said earlier, you have to refactor and write unit tests at the same time.
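Here is a small Java sketch of that move: an IDE "Extract Method" pulls a pure calculation out of an imagined 800-line legacy method, and only the extracted unit gets a test. The method name and the calculation are hypothetical examples, not the real code.

```java
// Hypothetical before/after sketch of the "safe refactoring" step. Imagine
// this logic buried inside a few-hundred-line legacy method:
//     double ratio = (double) occupied / capacity;
//     if (ratio > 1.0) ratio = 1.0;
// After an IDE "Extract Method" it becomes a small, testable unit.
public class OccupancyReport {

    static double occupancyRatio(int occupied, int capacity) {
        if (capacity <= 0) {
            return 0.0; // guard that was implicit in the legacy flow
        }
        double ratio = (double) occupied / capacity;
        return Math.min(ratio, 1.0); // never report more than 100% full
    }

    public static void main(String[] args) {
        System.out.println(occupancyRatio(50, 200));  // 0.25
        System.out.println(occupancyRatio(250, 200)); // 1.0 (capped)
    }
}
```

Because the extraction itself is a tool-driven, behaviour-preserving refactoring, it needs no prior test coverage; the unit test is then written against the newly exposed seam, and the surrounding legacy method shrinks one extraction at a time.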
Basically, at this stage there is one thing deployed in production, running for a particular client, and we are now building something for two new clients that are going to come on board; for those, we are not yet deployed to production. So how did we go about regression? The way to avoid the long regression cycle was to take the core workflows that were being regressed and convert them into workflow tests. We automated those to avoid the long regression cycles, to start with. The point was to write the right test cases at the right place, rather than starting everything from the UI, because if you write UI tests, everything will eventually be covered, but as we showed in the 10:30 session, it takes too much time and the regression cycle becomes too long. On another product in the same company it took us 7 hours to run the automated regression tests, and that still covered only 40% of the entire code base; it was always a catch-up game, and things like that. So that is basically not a good strategy. What we are trying to get at here is that we wanted to move to the lower layers and focus on the lower layers. Now, the lower layers are fine when you are spinning up a completely new product, because you have total control and you can do complete test-driven development. The challenge here is: when I have legacy code and I want to make changes to it, what is the approach, what is the strategy we use? To summarize: we started with a few workflow tests, which essentially gave us the safety net for spec 1, the existing specification.
For spec 2 we said: whenever we have to touch any code, the first question is, can I write a unit test which covers the legacy code and the new thing that I am doing? Can I capture that? If that can be done, great, write it. If it cannot be done, because of dependencies or other kinds of things, then move one layer up and write it at the domain-logic level. If we can write it there, perfect; we write it and move on. If we can't do it there either, then move one more layer up and write it at least at the workflow level. Now, we can get into some more details. For instance, I want to validate something, but I don't have access to it, so I have to dig into the database to find out whether what I just did actually made a difference or not. In some cases you couldn't write a unit test because it's a completely black-box algorithm running on, say, a SAS deployment. We had challenges around that, and that's where you start moving up. But wherever it was Java code, we could actually rescue it out and unit test it.
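For the middle layer of that ladder, a business-logic test exercises a small cluster of collaborating objects rather than one method. The following Java sketch is entirely hypothetical (the real pricing algorithms are far richer), but it shows the shape: a forecast and a set of rate rules collaborating to produce a price decision, testable without any database or UI.

```java
// Illustrative business-logic-level test target: a small cluster of objects
// (forecast + rate rules) that together produce a price decision. All names
// and numbers are made up; the point is testing the collaboration.
class DemandForecast {
    private final double expectedOccupancy; // 0.0 .. 1.0
    DemandForecast(double expectedOccupancy) { this.expectedOccupancy = expectedOccupancy; }
    boolean isHighDemand() { return expectedOccupancy >= 0.8; }
}

class RateRules {
    private final double baseRate;
    RateRules(double baseRate) { this.baseRate = baseRate; }
    double rateFor(DemandForecast forecast) {
        // surge pricing when the lot is forecast to be nearly full
        return forecast.isHighDemand() ? baseRate * 1.5 : baseRate;
    }
}

public class PriceDecision {
    // The seam a business-logic test would drive: several objects, one outcome.
    public static double decide(double baseRate, double expectedOccupancy) {
        return new RateRules(baseRate).rateFor(new DemandForecast(expectedOccupancy));
    }

    public static void main(String[] args) {
        System.out.println(decide(4.0, 0.9)); // 6.0, high demand, surge applied
        System.out.println(decide(4.0, 0.5)); // 4.0, normal demand
    }
}
```

A test at this level is still fast and deterministic like a unit test, but it validates a business rule end to end across the objects that implement it.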
So by this time we had all started writing the three layers of tests. The obvious next thing: we wanted to make sure the pyramid stays a pyramid as we build it over a period of time, that it doesn't become a cylinder, or a diamond, or something like that. We wanted to monitor how we were performing: are we writing 70% unit test cases, and so on? So we looked for a plugin in Jenkins, and we did find one, called the Label Test Group plugin. The only problem was that it grouped the test cases by its own terminology: unit, smoke, integration. We wanted our own terminology: unit, business logic, integration, workflow, UI, end to end. So what we did is tweak that code. We downloaded it from its SVN URL and changed it so that we see unit, business logic, integration, and so on. I have published that code, along with the HPI file, on this GitHub link; you can use it if you want. After installing that and configuring Jenkins further, this was the state: we are able to see how many unit test cases, how many business-logic test cases, and how many workflow test cases have been written. On every check-in it runs and gives us the report in the sort of pyramid format that we wanted to monitor. There is still improvement to be made to the visualization, but at least it helps us understand, out of the total tests, how many there are at each level, how frequently they get run, whether they are increasing or decreasing, and whether they are passing or failing. It helps you at least visualize that. So that was the first step towards visualizing the pyramid: how are we making progress on the pyramid itself? This was actually very important for us, because while this was an effort to pay off the technical debt that we had inherited, we also
had to make sure that the business was happy with the outcomes we were providing, because otherwise we would lose two big customers. One customer is essentially all we have, and if you are getting two more customers, that's a big deal for a product like this. So we didn't want to go off into our own world, keep paying down the technical debt, and lose the customers. And what we heard from the business, because of the approach we took, is that they were actually happy with the progress we had made. We had fulfilled the set of business requirements that they had, while we were able to check off, or at least make some progress on, the three elements we originally started with. Had we done everything? No, not yet. But at least we were trying to balance the two, which I think is the hardest part of dealing with legacy: how do you slowly pay off the technical debt and make some progress in that direction while also making sure the business is happy with the outcomes you are providing? That, I think, was a very important step for us, winning the business's confidence, because it meant we could go and invest a little more time paying off some of those technical debts. They could also see some of the advantages of it; for example, there were no new issues actually getting reported, and things like that, so confidence went up. And that's a very important thing: when the business's confidence in the team goes up, you can do a lot of interesting things. So we picked the next item, dev setup. Just a little background on the dev setup. What we used to do on an individual developer's machine was: a new developer joins, okay, go copy-paste, or take someone's help to set things up on your machine. We also did not have, as I said, a benchmark database, so again, copy the database onto your own machine. But that's not a scalable way. As we
bring in more developers. We wanted a standardized process: I wished I could have a single command, an ant target or whatever (we chose ant because we already had ant), such that if I run that command, it downloads JBoss, MySQL, and all the other artifacts onto my machine, everything gets deployed with a standard folder structure, and then it becomes pretty simple. Whoever joins, it becomes a piece of cake: download certain folders, just run the command, and everything will be fine. So we did that activity; of course, DevOps was involved in it. We also worked on the database we had. We did not have a baseline schema, so what we did is copy the database structure from production, identify the basic minimum seed data required to run our system, and check that in. Now we have a target ready to auto-deploy the baseline seeded database onto individual machines. So we made some progress in making sure the dev setup is streamlined.

One other challenge we had: for running the workflow test cases, we wanted a populated database. What we used to do was use a massaged production DB, keeping confidentiality in place. And what was the size of that DB? 200 GB. That's not scalable; it makes no sense for me to copy 200 GB of database onto every single developer's machine as new developers come in. What we did to fix that is we wrote a tool (in fact, QA wrote the tool) to extract only the relevant subset from the 200 GB database, and then another command actually populates, or runs those scripts into, an existing database and makes it ready with only that chunk. The outcome: from 200 GB down to 1.7 GB. That's still a huge improvement, which means getting a developer's setup ready is pretty fast. Quick time check: we have around... no time for
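The extraction tool itself wasn't shown in the talk. As a rough illustration of the idea, here is a minimal sketch: copy the full schema, but copy row data only for small reference/seed tables, leaving the large fact tables empty. The schema, the table names (`price_rules`, `parking_events`), and the use of SQLite are all invented for this example; the real tool worked against the product's own database.

```python
import sqlite3

def extract_seed_db(source: sqlite3.Connection,
                    target: sqlite3.Connection,
                    seed_tables: list[str]) -> None:
    """Copy the full schema, but row data only for the named seed tables."""
    # Recreate every table's schema in the target database.
    schemas = source.execute(
        "SELECT sql FROM sqlite_master WHERE type = 'table'").fetchall()
    for (ddl,) in schemas:
        target.execute(ddl)
    # Copy rows only for the small reference/seed tables; large
    # fact tables stay empty, which is what shrinks the copy.
    for table in seed_tables:
        rows = source.execute(f"SELECT * FROM {table}").fetchall()
        if rows:
            placeholders = ",".join("?" * len(rows[0]))
            target.executemany(
                f"INSERT INTO {table} VALUES ({placeholders})", rows)
    target.commit()

# Demo: a 'production-like' DB with one small seed table and one huge table.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE price_rules (id INTEGER, rule TEXT)")
src.execute("CREATE TABLE parking_events (id INTEGER, payload TEXT)")
src.execute("INSERT INTO price_rules VALUES (1, 'peak'), (2, 'off-peak')")
src.executemany("INSERT INTO parking_events VALUES (?, ?)",
                [(i, "x" * 100) for i in range(10_000)])
src.commit()

dst = sqlite3.connect(":memory:")
extract_seed_db(src, dst, seed_tables=["price_rules"])
print(dst.execute("SELECT COUNT(*) FROM price_rules").fetchone()[0])      # 2
print(dst.execute("SELECT COUNT(*) FROM parking_events").fetchone()[0])   # 0
```

The target database ends up with every table present (so the application still starts) but only the seed rows populated, which is how a 200 GB copy can shrink to something small enough to ship to every developer machine.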
questions. Okay, so six minutes. Yeah, so one of the weeks when I was back, I noticed that the workflow tests we had were actually digging into the database and starting to read stuff out of it, and we said, well, this is not good: as we start building more of these, you have a very tight coupling with the database structure. And with version 2, or with spec 2, what we were trying to do was actually introduce a lot more things in the database and change its structure, so these tests would become very fragile. So Prasad and I were talking: how did you guys slip this in? I didn't realize this.

So at Ideas we also have a hackathon, a quarterly event we call Ship It Day. We picked this up: okay, we have one day, 24 hours; how about trying to understand if we can RESTify this application, if we can demonstrate just enough of it to make sure the application is ready to support REST endpoints? We tried it. It was actually a tough night for us, but finally we cracked it, and we exposed our API such that, at least for the workflow test cases we write, we have the REST endpoints ready. So instead of hitting the database to understand whether the workflow test cases pass or fail, they use the REST endpoints, which makes the workflow test cases database-agnostic. Yeah, one whole long night just to get this basic REST endpoint enabled, because of the legacy versions of things we were using.

The point is, the test itself should not directly interact with the database; it should be agnostic of where you run the database. What you want is to rely on whatever the application exposes to you. The application internally talks to the database, but we don't want the test to also depend on the structure of the database, because that makes the test tightly coupled with your existing
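None of the REST code was shown in the talk. As a toy illustration of the "assert through the API, never through the tables" idea, here is a self-contained sketch: the `WorkflowApi` handler, the `/workflows/<id>` path, and the payload are all invented stand-ins for the real application, which internally may talk to any database it likes.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical stand-in for the application: internally it could be backed
# by any database, but the test only ever sees this REST surface.
WORKFLOWS = {"42": {"id": "42", "status": "COMPLETED"}}

class WorkflowApi(BaseHTTPRequestHandler):
    def do_GET(self):
        workflow_id = self.path.rsplit("/", 1)[-1]
        body = json.dumps(WORKFLOWS.get(workflow_id, {})).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), WorkflowApi)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# The workflow test: assert on the REST response, never on table rows.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/workflows/42") as resp:
    workflow = json.load(resp)
assert workflow["status"] == "COMPLETED"
print("workflow 42:", workflow["status"])  # prints "workflow 42: COMPLETED"
server.shutdown()
```

Because the assertion is against the endpoint rather than a `SELECT` on a specific table, the schema can change underneath (as it did for spec 2) without breaking the test.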
version of the code, which could move, and then the test could break; that leads to brittleness. So yeah, I think with that we were able to check off another thing, which was essentially ensuring that we had the dev setup sorted. Let's quickly move to code.

We said, okay, as we move on, we want to make sure we reduce the violations in the system, reduce the if-else conditions in the code. How do we do that? So we quickly spun up a Sonar integration, using JaCoCo to feed coverage into Sonar, and then we at least started seeing the status of how good, bad, or worse our code is. To answer your question, the code coverage is still pretty poor, but going from zero percent to that percentage is still not as bad. The important point is having confidence, right? I've worked in codebases where we had a very high percentage of coverage but the confidence was still very low. So I think what's important is having confidence rather than the coverage numbers. You can also see now the number of test cases going up and violations coming down. How did we achieve that? I'll come to that.

So we move on. We said, okay, while we do that, how about covering one more layer of the pyramid? We can see that the workflow test cases and the normal day-to-day process are fine, the application works fine; let's make sure we don't break the UI part of it. So we wrote around eight UI automation test cases, just to ensure that from the user's perspective they're able to navigate through different things and are able to use it. We didn't want to put too heavy an emphasis on the UI tests; we just wanted to make sure that portion is covered as well. So it's not integrated into the CI yet; we are making sure it stays well within limits, just about a minute for the screens to be covered. So, so far, what did we achieve? Basically, at this stage, I think we had a safety net built,
so people had confidence in being able to go in and modify the existing code and build on top of it, rather than taking copies of it. We were able to get fast feedback once people check in, because of the CI and Jenkins integration. We were able to cover four layers of the pyramid, one step at a time; three are integrated into the CI, one we are still working on, but at least we have that feedback cycle in place. And we were able to standardize the development environment, so everyone's working on the same development environment, and it's scalable: someone new comes in and it takes less than maybe a few hours for them to be up and running, which earlier used to be a few weeks just to set everything up on a legacy project. So I think those were the four key things we were able to achieve, from the end of December until now. And the most important thing: still keeping the business happy about it. So I think we'll pause here; there are further steps and stuff like that, but I do want to give people the opportunity to ask some more questions at this stage.

We have not yet looked at performance testing, but at some point we do need to do that. The nature of our application is basically a nightly batch job that runs, so per se, performance right now is not a bottleneck for us, but as we start building in and scaling more and more airports into this, we will have to look at that element. So right now we are not worrying about that.

Your question is: where was most of our business logic? Most of the business logic was distributed all over the place. This is an analytics product which had a whole bunch of stuff happening essentially in the back end, if you will, because it's data: a whole bunch of data comes in overnight, we take the data, we crunch it, we run analytics and statistics on top of it, then we generate some numbers and shove them into another database; someone else comes and picks it up. So a lot of logic was in
the back end, but we also had other places where logic lived; the databases themselves had a lot of logic in them.

The standard development environment was to ensure that everyone's working on the same versions of things and has things set up in the same way, so that when someone comes in new, it's similar and they can run things in a consistent manner. Everyone's using similar Eclipse versions with the right sort of plugins, all of those things, to make sure the environment is consistent across people. He also talked about how, instead of each one having their own copies of the database, we all work on consistent copies of the database. Those are scripted now: the moment you check out code, you run the ant task and it basically brings your database version up to the latest and greatest. Things like that are basically what we meant by standardizing it. Somebody was running JBoss as a console application, somebody else was running JBoss as a service; that's not going to help. I mean, in the short term, when we are a small team, that's okay, but in the long run it's not going to help.

Someone over there had a question. The UI, the top layer, is focused on navigation, so a user could log in and navigate through all of these things. Which automation tool? For UI automation, Selenium is what we used, and we wrote just six, eight tests in total; that's about one percent of the entire thing, actually. Other products are already on Jenkins, and we have active DevOps support already aligned, so it was a pretty simple choice for us to move this to Jenkins. It doesn't matter what colour pants you wear, I mean, when it comes to focusing on this; I don't think Jenkins versus TeamCity would make any difference, pretty much all of them provide the same functionality.

All right, thank you very much. We are going to let the next speaker come in. Thank you very much.