I'm glad that today we have Anand with us for this session, Rewrite versus Refactor. Over to you, Anand.

Thank you. Thanks, Vishal. Hi everyone. I'm very excited to be here with you, sharing my experiences and thoughts about rewrite versus refactor. Usually I don't talk much about myself as an introduction, but in this case I think a little information about my journey is relevant to the topic at hand. So let's get started.

I'm Anand Bagmar. I've been part of the quality space for more than 20 years now. I started off as a test engineer doing some automation, but very quickly I evolved into playing any and every type of role that helps a team build a quality product. I've worked with product organizations and services organizations. I've worked on open source: I'm a contributor on the Selenium project, and I've built, and contribute to, a number of other open source tools as well. I also work with product companies from a solutioning and architecture perspective. And for the past three-plus years I've been consulting, which gives me a chance to work with many more organizations on their automation, testing, and quality journeys: how to make the product more available and a better experience for end users. What I'm going to share today is based on what I've seen over these years, and on what it really means, when it comes to test automation, to take it to the next level.

With that context in mind, let's get started. The core topic for today is refactor versus rewrite. We will focus on test automation code in this session, but everything I say is just as relevant in a developer context. My examples come from the test automation space, but this is not about dev versus test; it's about any type of code we write, and how you take it forward from there.

Let's start with a quick introduction to a couple of case studies so you can relate to what I'm going to be speaking about. One client I was working with gave me a charter: come and help revamp our testing and QA practices, and automate everything that can possibly be automated; we don't want to do any manual testing at all. Those who know me know my typical reaction when I hear such things: okay, what is going on here? Fortunately I was able to work through this, and we got started with an assessment of sorts: validating the charter, and understanding by investigation what was working well, the challenges, and the constraints. It's very important to distinguish a perceived constraint from a real constraint. A perceived constraint is something you can work through relatively easily, but the real constraints are where the interesting solutions will probably need to be created and implemented. The investigation approach typically starts with speaking to the team members and doing code reviews. It is also very important to run the tests yourself.
In the case of devs, you build the product, run it, launch it, whatever that type of product might be, and understand what is really happening: validate for yourself what you have heard from the team members. Understand the challenges, and challenge the assumptions themselves, the ones team members may have tried to nudge your mindset towards. Understand the setup and infrastructure issues that might exist, because many a time these are a huge bottleneck to doing the right thing.

In this particular case, the findings were: they had a CI server, but none of the tests were running as part of CI. They had some form of automation, and here I'm talking about the functional and API automation, but it was not true automation, because a lot of manual intervention was required. It ran only from local machines, with setup and execution issues all over the place and lots of tweaks needed before the tests would run. One other large finding: there were large, separate code bases for different types of automation.

I was fortunate in that my stakeholder on the client side was willing to listen to this assessment and these arguments, so I was able to work with them to evolve the charter. You cannot just revamp the testing practices, because testing is a narrowly focused activity owned by a small subset of team members. What we needed to look at was quality practices, which cut across all the different team members and all the different roles working on the product. The client was open-minded about this. We also reframed the goal as: automate everything that makes sense to automate. You don't want to automate something for the sake of automation if it's not going to add value. We'll talk more about this, but these two changes are a fundamental shift in the very first step of thinking about how you can approach this, and that was a good starting point.

Another case study, quickly, to give you a broader picture. Here the charter was: the client had thousands of automated UI tests and they wanted fast feedback from them. That by itself was a big flag in my mind. When you have thousands of tests and you're looking for fast feedback, the first answer is obvious; I'll let you put it in chat if you think you know what it is. The second objective was clarity of intent: looking at the tests, it was not very clear what exactly was being validated. And third, they wanted maintainable and scalable automation. That makes sense, because once you have thousands of tests, things are going to start slowing down.

The investigation approach was similar to what I shared in the first case study, but the findings were slightly different; each context is different. Here, the finding was that the automation code quality was subpar, and subpar is a very polite word. There was excessive duplication, massive files, and no encapsulation of any sort in the implementations, and that was a huge problem.
There was also excessive use of sleeps, which was the first place I looked when asking why the tests take so long to run. Of course the tests are slow if you have a lot of waits in your automation code just to make the tests pass, and that is not a good implementation pattern. The code was also extremely complex, which is bound to happen when you have that many tests automated. And there was no trend analysis of test execution. This is a long-living product with thousands of tests: is there a way to understand which tests have been flaky, which test starts failing at what point and then needs fixing again? That can reveal very interesting patterns, and it simply wasn't there in such a large code base. And, obvious from the original charter, there was too much focus on UI, end-to-end automation, which is going to be a problem. "No idea of what is being tested" relates back to the charter objective itself, but after looking at the details of the code base we figured out the more specific issues around it.

So, the evolved charter. Based on the assessment, it's very important to go back and relook at what your objective really is and whether it makes sense. Fast feedback, clarity of intent, maintainable and scalable automation: of course those make sense. But based on the findings, we also wanted to start automating tests at the lower levels of the pyramid, whether unit tests, API or UI component tests, or API workflow automation. We were not even considering security, performance, or visual testing at that point; we just focused on basic functionality. That was the evolved charter, based on a conversation that needed to happen, and then you get into it.

Hopefully you can now relate to some of these examples from the case studies. Let's look at the details of the analysis: how do you really understand this type of ecosystem, and how do you start taking decisions based on that? I'm sharing this from an external, third-person perspective: as a consultant I come in and work with the teams to understand it, but this could very well be you working on your own team's quality or automation, taking a step back and doing a similar kind of analysis.

Typically, here is how a code base evolves, and unfortunately I have seen this in automation code as well. In many cases I've seen references to Hello World still in the code base: it starts off from there, and you never even delete the Hello World example as your framework evolves. The tech stack and programming language are immaterial here; it's the concept I'm sharing that is important. It starts off with a Hello World, but very quickly, as you implement more and more tests, or write more and more product features and build an actual usable product, your code base becomes very complex, and at times extremely complex. I have seen such complexity unfortunately on the test automation side as well, which is a very scary proposition.
If my test automation code is so complex, I can imagine what the product code might be like. So it's very important to take a look at where exactly you stand, and whether that architecture is really right for you. In a few cases you started fresh, and as the automation grew, the code base became very complex; you've been working on it from the ground up. But in many cases you join a new team where you need to implement, extend, or evolve an existing complex code base, and that can become a big problem.

In my experience, based on the case studies I've shared and many other experiences, on the test automation side at least I have encountered multiple copies of such code, simply because we have not been able to use a version control system in a decent fashion. There will be multiple copies: maybe one code base for Android, another for iOS, a third for web, and so on. It just doesn't make sense, in various ways. These could be actual multiple repositories, or physical copies of code bases that aren't even in version control, which is an even scarier proposition, or, in many cases, multiple long-living branches that can never really be merged back into the master or main branch because they have diverged so much and carry so much complexity.

Now think about this: what is the typical reaction if you have to start working on such a code base? What would you do? For me, just doing an evaluation to help a team do the right things, looking at such examples, my head explodes. I actually go for a spin; I have to take a step back and take a breather when I see such complexity. Okay, what is going on here, and how do I start solving this problem, or help the team solve it, in an easier fashion?

But we need to step back and think about why this type of reaction happens. I don't know if you have had that reaction before, but it's definitely the case for me, and if you analyze it, different aspects come to mind as reasons. The first is complexity. Why do you even need such a complex code base? It is the evolution of the code base, and such a code base definitely indicates complexity in the thought process as the code base evolved. There can be different reasons why the thought process itself became complex, but that complexity is going to cause you a lot of pain. It produces complexity in code, and, not surprisingly, spaghetti code: it's all over the place, with no frameworks, no design patterns, no structuring of the thought process and implementation that would make it logical to grow.

With such a complex code base, one typical approach teams take is: we should document our framework, or our code. That again is a huge problem, because documentation is important only where it makes sense. It is very important to have lean and meaningful documentation. For example: what was the reason you chose this way of implementing that particular logic, maybe with references to requirements, or to Stack Overflow or other relevant articles, saying these are the references my implementation is based on. But if you add comments in code just to explain what the implementation logic is, sorry, you're doing the wrong thing: your code itself should be readable and understandable. You don't need a comment saying "this method is to log in"; it just does not make sense. Don't add comments for the sake of adding documentation; add meaningful comments that explain the thought process or the rationale behind certain decisions. That becomes very helpful.
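To illustrate the difference, here is a minimal Java sketch. The payment gateway scenario and the types in it are invented purely for illustration; the point is the contrast between a comment that restates the code and one that records a rationale:

```java
public class PaymentClient {

    // Minimal stand-in types so the sketch compiles; purely illustrative.
    static class Payment {}
    static class TransientGatewayException extends RuntimeException {}

    // Bad: this comment merely restates what readable code already says.
    // Submits the payment
    public void submitPayment(Payment payment) {
        send(payment);
    }

    // Good: this comment records the rationale, which the code cannot express.
    // We retry exactly once because the (hypothetical) gateway intermittently
    // rejects the first request on an idle connection; see the team wiki entry
    // where this decision was discussed.
    public void submitPaymentWithRetry(Payment payment) {
        try {
            send(payment);
        } catch (TransientGatewayException e) {
            send(payment); // second attempt; if this also fails, let it propagate
        }
    }

    private void send(Payment payment) {
        // network call elided in this sketch
    }
}
```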
So having decent, lean, and meaningful documentation is very important, especially as a code base becomes complex. There are different ways to address this as well, but that is a separate conversation.

The other aspect of complexity comes from perception, and perception in a huge way is a state of mind: am I an optimist or a pessimist? Those words might sound harsh or negative, so let's put it this way: do I look at a glass with some water in it as half full or half empty? That indicates your state of mind: are you going to approach things positively, or take different decisions based on it? A big part of perceived complexity is also the emotional state of mind. Which side of the bed did I get up on today? Did I have an argument while driving to work? Which, I'm guessing, most of us are not doing these days, but traffic is crazy everywhere and road rage is very common. There are many ways our moods change, and that change in emotional and mental state starts affecting our work as well; you end up seeing things differently based on that state of mind. So keep it in mind: if you think something is very easy, step back and relook at that decision; if you think something is extremely complex, you still need to step back and relook at it in a rational fashion.

The point of view is also very important. To someone who is down in the trenches you cannot talk philosophy or make big-picture, grandiose statements, because they are in the deep end of the pool. They know the pressures and the workload they are going through. It is very easy for someone else to make very generic, sweeping statements, because their point of view is very different. Again, this is not about right or wrong, because each person has their own point of view based on where they approach the problem from. But it is important to eventually come together and understand each other's points of view, to figure out where we really stand and what needs to be done next.

Another big aspect of perception is experience level. Someone fresh out of college will have a different perspective from someone who has spent time in the industry, and again from someone who has spent time in the industry in a similar domain. If I have not worked in the medical domain and I get onto a project in the medical domain, my experience level is not the same as it would be on a project in a domain I have contributed to before.
So experience level again makes a very big difference: not just the number of years of experience, but the relevance of that experience and its applicability to the domains you move into. That perception makes a big difference in understanding what needs to be done and how you want to solve it. Diversity contributes a lot to perception as well. Someone who has worked in different domains, at different experience levels, in different industries, across different parts of the world, with people from different cultures: it makes a very big difference in how they approach a problem and its solutioning. So perception is a very important aspect. These are the various factors that contribute to complexity, and to the solutioning side of it.

And of course we have to understand learning capability. As humans we keep learning continuously, and we learn the good and the bad; they are two sides of the same coin, and you cannot have one without the other. What is very important is that what I thought was good yesterday may not necessarily be good today, and here I'm talking just about myself. The learning and the experiences we go through change us and evolve us. What matters is that we keep learning from our past mistakes, and carry those learnings, the good and the bad, into the next context to try and make things right. And of course you cannot disregard the fact that people move in and out of teams. That also brings different aspects of learning, and we should be very open-minded about learning from new team members who come on board, because their experiences are going to help us grow tremendously.

Now, coming back to our problem statement. So far we've been talking about a lot of soft factors that contribute to this complexity. Now we get to the challenge: what do you need to do to work on such code bases? Typically, where would I start? I would look at where the change needs to happen; on a complex code base, even that can take some time to figure out. Once you find the piece or section of code where the change needs to happen, you then need to start thinking about the impact of making the change there, and the risk of something else breaking because of it. You need to do that impact analysis: the nature of the change and what it will result in. You also need to understand the timelines you're working against for this change. Will they allow you to take the right decisions, or are you already set up for failure?

The pressure of time actually leads to a lot of common anti-patterns, if you really look at it. Because of time pressure, you start forgetting past experiences and you end up making the same mistakes again and again. That is a big problem. So how can you help yourself take that step back, keep learning, and apply your learnings from past experiences to what needs to happen next?
Is there a way to avoid taking shortcuts just because it is too complex, or will take too long, to do the right thing? "I'm going to forget all that, take a shortcut, get my work done, and now the problem is someone else's responsibility." That is a very big anti-pattern, because that way you never start fixing things or doing things the right way; you are always pushing things, and pushing more complexity, onto another set of team members to handle. Spaghetti code is the classic way this evolves: taking shortcuts results in spaghetti code.

Because of the effort required to do the right thing, we end up not wanting to modify the existing code. We say: changing or refactoring this particular method is going to take a long time, and it has unforeseen behaviors and risks. So let me just create a copy of the existing code, make the relevant changes, and my work is done. But you're not seeing the tech debt you are incurring, the problems you are adding to the system with this approach. Do not reinvent the wheel; do the right thing based on what change is actually required. What happens otherwise is large code bases with multiple copies of the same thing, where you can't find what you need in terms of reuse. And this is cyclic: because you're not able to find what you need for reuse, you re-implement by copy-paste with some tweaks, and you just keep adding to the complexity of your code. These are very, very common anti-patterns in large code bases.

On the test automation side in particular, a classic problem in UI automation, and in various ways API automation too, is handling flakiness: to make the tests stable, we keep adding sleeps or waits in the code to make sure the test passes. Why? "Because the goal of this test is not performance. It doesn't matter if it takes 30 seconds or five minutes to do the same action. I'm testing functionality, not performance. As long as my functionality works correctly, I'm okay." That's a classic rationale I've unfortunately heard from a lot of people for adding waits in the code, and it is a big, big problem, because what you're not realizing is that performance is an inherent aspect of functionality. If your login test is reported as passed after taking two minutes to log in, and you say your functionality is working correctly, I'm sorry, we need to have a very deep conversation about why that thought process is completely flawed. So don't introduce sleeps or fixed waits in your code to handle flakiness; there are better ways to handle it, as the sketch below shows.
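To make this concrete, here is a minimal Selenium (Java) sketch; the element locator and the page class are illustrative. The fixed sleep always pays its full cost and still breaks when the app is slower than expected, while an explicit wait proceeds the moment the condition is met and fails fast with a clear timeout otherwise:

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class LoginActions {

    private final WebDriver driver;

    public LoginActions(WebDriver driver) {
        this.driver = driver;
    }

    // Anti-pattern: a fixed sleep costs 30 seconds even when the element is
    // ready in one, and still fails if the app takes 31.
    public void clickLoginTheFlakyWay() throws InterruptedException {
        Thread.sleep(30_000);
        driver.findElement(By.id("login")).click();
    }

    // Better: an explicit wait polls for the condition and clicks as soon as
    // the element is actually clickable.
    public void clickLogin() {
        new WebDriverWait(driver, Duration.ofSeconds(10))
                .until(ExpectedConditions.elementToBeClickable(By.id("login")))
                .click();
    }
}
```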
But now, if you look at it, why do we really end up with these anti-patterns? Because it works. My problem is solved; I don't care what happens next. My story is done, my task is complete, I have moved on, and I have proof that the test passed in CI. If it fails after some time, it's not my responsibility; someone else must have changed something. We don't want to take ownership of the problem, or equal ownership of building a good code base, and that's where we end up doing this.

That's not to say the processes and practices on your team are necessarily effective at helping people do the right things; that may very well be a huge contributing factor. For example, how much time is actually available to do things the right way can be a very big contributing factor as well. But still, what are we really doing to make things right? We want to let someone else figure out a better way, at a later point in time, because today the release needs to go out. I have to give my report today on how much is passing and whether everything is fine, and if something is not working correctly, I might need to stay up late and try to fix things. A lot of these factors contribute to the anti-patterns. In India, at least in Hindi, there's a very apt word for this mindset: jugaad. I will do whatever it takes to fix the problem right now and proceed, without worrying about whether it is the right thing. Thanks for the comments in chat saying you agree with some of this; I appreciate that. And of course, collective responsibility and accountability are very important aspects here as well.

So let's come to the refactoring aspect. What does refactoring really mean, before we get into the solutions? Refactoring, and this is from Wikipedia, is the process of restructuring your code without changing its external behavior. That is why having tests on your product code base is a huge asset for refactoring it: your external-facing tests should continue to pass exactly as they did before the code change was made. Refactoring is a very important aspect of any form of coding; it is extremely essential.
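To make that definition concrete, here is a minimal Java sketch. The names and the 18% tax rate are invented for illustration; the point is that the "before" and "after" versions return the same outputs for the same inputs, so the existing tests keep passing unchanged, which is exactly what proves the refactoring preserved external behavior:

```java
import java.util.List;

public class OrderCalculator {

    // Before: the manual loop and inline tax arithmetic obscure the intent,
    // but the behavior is pinned down by the existing tests.
    public double totalWithTaxBefore(List<Double> prices) {
        double total = 0;
        for (double p : prices) {
            total = total + p;
        }
        total = total + total * 0.18;
        return total;
    }

    // After: same inputs, same outputs, clearer structure. The tests that
    // passed against the version above must still pass against this one.
    private static final double TAX_RATE = 0.18;

    public double totalWithTax(List<Double> prices) {
        double subtotal = prices.stream().mapToDouble(Double::doubleValue).sum();
        return subtotal * (1 + TAX_RATE);
    }
}
```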
But it is also very possible that by refactoring we end up creating a different mess: complex code just for the sake of changing the architecture. A classic example is changing a monolith to a service-based architecture. Our monolith is a huge legacy code base; it is difficult to build, maintain, and scale. So we break it up into components and microservices. But what we don't realize is that we very quickly get into a different set of challenges: managing those services, and creating an architecture that is actually conducive to scaling, growing, and doing the right thing. So whether refactoring happens at the product architecture level or at the level of a small code base, it is very important to bear in mind the end objective we want to achieve, and then make very focused refactoring efforts.

What happens otherwise is that when we start addressing tech debt, it just explodes, in different directions. Coming back to the automation context: because of the tech debt incurred in the implementation, the large complex code bases, we end up in situations where it's difficult to add new features or tests, and difficult to scale the automation. Supposedly simple changes in product functionality, which should be simple to implement and update in your tests as well, end up taking a very long time because of the complex, spaghetti code base. You make one small change and it has unexpected consequences in the execution. Because of this, your automation starts losing value, and the net result is that you end up doing more and more manual testing, simply because your automation is not doing the right thing.

So, my approach, tying all this context back together and coming back to the case studies: what do I do when I have to look at a complex code base, whether I'm working on it myself or consulting with teams on how to make it better? It is very important to look at the facts of where we are, and understand the objective of where we want to be. The facts have an important structure to them, and that structure depends on your team context and your stakeholders. You do the assessment, look at the reports, and create a summary: was testing truly a team responsibility, or was it a QA-team responsibility? If the summary is that testing was a QA-team responsibility, that is of course going to be a problem, and you are not going to get value from automation. In the report I would then have details of what is working well and what is not. These are some samples of what I've shared with clients before; some aspects are anonymized so as not to give out proprietary internal details. You look at all the different aspects of your SDLC, from requirements to release and supporting the releases: what is working well, what the challenges are, and what in each stage of that process needs to evolve to make it better. This report can become pretty long, depending on the context of your assessment and what you are doing with the code base. Then you summarize it from a testing perspective as well, because that's where your key focus was; the other factors are contributing factors, but what is really happening from a testing perspective? You come up with that summary and understand where the gaps might be from a team implementation perspective.

Then you tie it back to the reality of the existing tests. It's not just about the tests or the code base; it's also about how you are working. If it's UI automation, how are you managing your drivers and devices, for example? Is it really run-on-demand, fully automated execution, or are there manual interventions? Is there slowness just because of poor implementation, or is your environment itself the problem? Are you using a lot of static methods or singletons instead of using OOP concepts well? These are concepts about how you manage your code base and create those frameworks: are you doing that well or not? Are you running your tests only sequentially, or can you run them in parallel to get faster feedback? Making tests run in parallel takes a good framework architecture, to make sure your tests are truly independent and not sharing any state. Which brings us to a very important concept: test data management. How are you doing that effectively?
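To illustrate, here is a minimal JUnit 5 sketch; Account and its createForTest factory are hypothetical stand-ins for whatever your domain provides. Each test builds its own data, so no test depends on state left behind by another, and parallel runs stay safe:

```java
import java.util.UUID;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class AccountWithdrawalTest {

    private Account account; // hypothetical domain type, for illustration only

    @BeforeEach
    void createOwnTestData() {
        // Each test creates the data it needs; a unique owner name avoids
        // collisions with other tests running in parallel.
        String owner = "user-" + UUID.randomUUID();
        account = Account.createForTest(owner, 100); // hypothetical factory
    }

    @Test
    void withdrawalReducesBalance() {
        account.withdraw(40);
        assertEquals(60, account.balance());
    }
}
```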
And of course, the branching. Do trunk-based development. You will still end up creating small, short-lived branches to implement your own tests, fix certain tests, or evolve them as required, but how quickly are you going to get them merged into your main branch and run your tests continuously from it? That is going to be very important. Understanding all of this and presenting it to your stakeholders, this is the current state and this is what is happening as a result, is very important. It's not just talking about the problems, but about the impact those problems are having.

From a code analysis perspective, review the code manually, and also use the many tools and plugins available in your IDE to get insights into the quality of the code. Are there methods returning a default value? Then why does that implementation even exist, if it has to return a default value? Is the code extremely complex? Look at complexity: in IntelliJ, for example, plugins are available that show you, right in the IDE, the complexity of each method and each class, and whether that is an indication that you need to make changes in your code base. Naming conventions again become important. And swallowing of exceptions is a huge problem, definitely in our test automation as well, not just product code. If you swallow an exception, you will not know where the problem really happened; you have lost the opportunity to fix the problem at the source and do the right thing there. So do not swallow exceptions, and watch out for sleeps, static methods, and the like; these are big problems.
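To show what I mean by swallowing exceptions, here is a small Java sketch using only the standard library; the properties file name is illustrative:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class TestConfig {

    // Anti-pattern: the exception is swallowed. The run continues with an
    // empty config and fails later, far away from the real cause.
    static Properties loadSilently(String path) {
        Properties config = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            config.load(in);
        } catch (IOException ignored) {
            // swallowed: we have lost the chance to fix the problem at the source
        }
        return config;
    }

    // Better: fail fast with context, keeping the original exception as the cause.
    static Properties load(String path) {
        Properties config = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            config.load(in);
            return config;
        } catch (IOException e) {
            throw new IllegalStateException("Could not load test config: " + path, e);
        }
    }
}
```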
In my report I typically also include code snippets, so I'm not just talking concepts or making things up on the fly; I'm showing examples from the code of what is going on. For instance, code analysis from IntelliJ telling me how many warnings and errors there are, with the complexity highlighted. And if there is some CI, it's very important to look at the CI as well. Just saying "my code runs in CI" is not sufficient: how effectively are you really using that CI? How many tests are in each pipeline? When was it last run? When was the last stable run? Tests are going to fail, and that is a sign of a good test: it is indicating there is a problem. But what are you doing with that test result? Are you fixing the test, reporting a defect, or working with the team to fix the problem? These are very important aspects to keep in mind.

So with all this, how do you really turn the ship around? There are a lot of challenges we've been talking about, and just talking about challenges is not sufficient. You need to start thinking about the solutions: how do we achieve our objectives and get to the next stage? But a big part of achieving your objectives is knowing what your objectives are. In many, many cases we think we know our end state, but that is a very short-term vision. You need a vision for the end state: what is it you really want to achieve with this code base you are working on? If you know your objectives, you'll be able to get to the next stage of implementation.

In the context of automated tests, my expectations from any test automation framework are: fast execution and a very fast feedback cycle; tests that are independent, because only then can they run in parallel; tests that run automatically on every single change, whether a product code change or a test code change, so you get feedback that nothing else broke; clarity about what is being tested from the end user's or consumer's perspective, especially in UI automation and API workflow automation, where the intent of the test has to be extremely clear, because that is what you are really trying to achieve; and the ability to release with confidence. In short, I should be able to trust my automation test results. That is the key aspect.

The approach to get to this stage: you increase awareness within the team of the challenges and the better ways of doing things. You upskill, pair with, and train team members in better ways of working. You evolve the overall way of working itself: it's not just the people writing the code, it's all the other roles combined, working in synergy. That becomes a very important aspect. In the case-study examples, what we often ended up doing was carving out a dedicated team to rewrite or re-implement the automation as an independent project: we knew the challenges, we had a huge backlog, and we had an objective set out. You start with an independent team working on it, and eventually you merge this way of working into the ongoing day-to-day. That's how you are able to rebuild and revamp the existing automation and eventually start covering the new changes that are going on.

From a coding perspective, it's very important to look at which design patterns are going to help you. Framework structuring and architecture matter. Set up coding guidelines and coding styles that help the team understand how code needs to be written in this particular framework. Use the power of the IDEs: they have become so powerful that you don't have to do a lot of things on your own; in many ways they will tell you, and even do, the right thing automatically. Use them to take help and clean up your code. Also automate the infrastructure setup for test execution: on the automation side there can be a lot of library dependencies, and in CI the tests may end up running on any random agent, so you have to automate the execution environment setup. That is going to be a key aspect. And of course, keep your tests independent and manage your test data.

Thinking about the test automation pyramid is very important again. What are the layers of the pyramid that make sense for you? Focus on building a very wide base for your pyramid and progressively having fewer tests as you go towards the top; that will help you a lot. The criteria for automating test scenarios are again very important when you think about the pyramid: at which layer of the pyramid am I automating?
And based on that, know your users, and simulate user behavior more and more as you go towards the top of the pyramid. That is going to be very important. Have a clear and visible intent in the automated test. Think about the multi-browser or multi-device support you are going to need. How are you going to run on every change, against local and deployed builds, locally or in CI or in the cloud, depending on your automation strategy? You need to keep all of these aspects in mind. Build for parallel execution; build for rich reports and trend analysis. These things are much easier to do when you have a smaller code base: you set them up, verify them, and then the focus remains on just writing good tests, and the results start showing automatically. You can also refer to the article I wrote on InfoQ about the criteria for test automation and how to build it in an effective fashion.

So now, the big question: do you rewrite or refactor? In many cases it is possible to refactor on the job while you are doing other things, but in an equally high number of cases I've seen that it is very difficult to refactor an existing code base without impacting the rest of the team. In such cases you really need to think about an approach that lets the team keep getting continued value from what is there, while also making progress on evolving the code base to make it better for the future. Whatever the approach, you have to get buy-in from the stakeholders and buy-in from the team members. My approach in most cases, for an automation code base, is rewrite with selective reuse. In case study two, for example, with thousands of tests, it was difficult to stop everything and rewrite, so we created a new repository, selectively reused code from the earlier framework, and wrote things in a better way. That is what helped us. It is very important to lead by example: don't just tell people what is to be done, show how it is to be done as well.

There are different ways to approach a rewrite versus reuse, and build versus buy. I will not go through all these points here; the slides will be available, so you can definitely go through them, and feedback from your own experiences will be valuable for me to learn from as well. The framework architecture is of course a crucial thing to set up, as a big-picture goal for how we want to implement it. And this is good documentation to have: state what the architecture is and annotate it with examples of what goes in each layer, so it becomes easy for team members to use and evolve it. Code quality has to be maintained from step one itself; it cannot be an afterthought.

And remember, as a last point: it is very easy to say "I'm going to rewrite or refactor and build it great", but it is equally possible, and quite likely for that matter, that you end up building another complex code base. In some ways that is okay, because that is the nature of software; perception and complexity are human aspects that come into play when judging complexity and spaghetti code. The real question is: is it really doing what it is expected to do? If not, what are the limitations, and how do I need to evolve it and make it better? That's what you need to look at.
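To close the loop on clarity of intent, here is what I mean by an intention-revealing test. This is a sketch only: LoginPage, HomePage, OrderConfirmationPage, and DriverFactory are hypothetical page objects and helpers, named to show the shape such a test can take, not a real framework API:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// The test reads like the business scenario it verifies; anyone can tell
// what is being validated without digging into locators or waits.
class CheckoutJourneyTest {

    @Test
    void registeredUserCanCheckOutASingleItem() {
        // All page objects below are hypothetical stand-ins.
        HomePage home = LoginPage.open(DriverFactory.create())
                                 .loginAs("registered-user");

        OrderConfirmationPage confirmation = home.addToCart("SKU-123")
                                                 .checkout();

        assertEquals("Order placed", confirmation.statusMessage());
    }
}
```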
So with that, I'm going to stop here. I don't know if we've got time for questions, but I will be in the hangout for sure to talk more about this. There is a question about a link to the InfoQ article: I will be updating these slides on the conference proposal page for this talk, within a couple of hours at most, so you will get the full slides and the link to the article from there. I think that will be the better way to get it.

Thank you, Anand. Thanks, Anand. We have three questions. If we can just take one minute and answer those, that would be great.

Yeah, I think the first and the third questions we addressed along the way. For the InfoQ article, just search for "InfoQ Anand Bagmar" and you will be able to find it. I don't have a lot of articles there, so that's a good way to look it up right away if you want to.

Abhishek has a question: managing the complex architecture of microservices versus managing the complex code base of a monolith, which is tougher? Well, as I said, it's perception-based, Abhishek. It depends on your understanding of the code base, your understanding of the complexity and the scale, and of course on the net objective of that code base: are you able to meet it or not? Accordingly, you take the decision about whether a monolith or microservices is better for you. What I understand from Martin Fowler and various other leaders in this space is that, from a microservices architecture perspective, you cannot really design the right services from scratch on a new product. You start off building a monolith, and then a logical understanding starts to appear of the independent pieces that can be carved out into separate services.

We could probably wrap this up. Thank you, everyone.