Welcome, everyone, to Agile India 2021. Thank you for joining this session with us today. We have this session on technical debt prioritization by Naresh, so I'll hand it over to Naresh to give us a quick introduction and start the session. Thanks, Parveen. And welcome, everyone. Thank you for joining the session today. We're going to be talking about technical debt prioritization, one of the topics that I think many people are really confronted with, especially in today's day and age. Tech debt in general seems to have gotten a bad name; people have a bad taste in their mouths when they think about it. But as Ward Cunningham, our keynote speaker, spoke about this morning, tech debt is actually a great metaphor, and it's not inherently bad. We all, from time to time, take a short-term, low-interest loan to speed certain things up. If you think of technical debt from that perspective, it can be quite powerful. It can give you additional working capital to do things that maybe your competitors won't be able to do. Of course, if you end up taking what I call long-term, high-interest loans, which means you just keep piling on technical debt, it will bring you to a grinding halt. And that's something I'm sure most of you have experienced as well. So with that, let's jump into the topic. I'm hopefully going to give you some techniques for how you can visualize tech debt, then prioritize it and get it addressed. All of this work is very much based on my experience coaching and helping teams, and also building my own products; for example, for the platform that we are currently using, ConfEngine, we used this same approach. So the first question that often comes to my mind is: how do we know we have tech debt? If you can please use the chat and put in your thoughts, I would appreciate that.
So what do you think? What indicators tell you that you have tech debt on your team? Let's see the chat. Outdated versions of software, leakage issues, lots of bugs, manual testing needed. Wow, OK. Taking shortcuts, prod issues, again, bugs. Outdated versions of third-party software, yeah, so vulnerability-related issues. Developers keep asking to rewrite code, missing clean code, low productivity. Fantastic. Those are absolutely some good indicators that we have tech debt. Let me hide this chat again. So here's what we put together over the last few years to help us see whether we have tech debt and whether it's impacting us or not. The first thing, which some of you mentioned, is a way to visualize your bug leakage and the root causes associated with it. So you want some way to see, maybe over the last 30 days, the number of bugs that leaked through the various environments you might have. You might have a test, stage, and prod environment, or some other combination. What quantity of defects leaked through these stages? What were the root causes, possibly filtered by priority and so forth? This can be an indicator of technical debt, and as you can see here, this team does seem to have a technical debt challenge. The next one you might want to look at is what I call effort distribution: what percentage of overall time is the team spending on feature development (even within features there may be different kinds) versus fixing bugs and so forth? As you can see here, the team seems to be spending 80% of its time on defect fixes, which means the reliability of this software is quite questionable. So this is another indicator that could tell you that you may have a tech debt issue.
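An effort-distribution view like this can be approximated from any work-tracker export. Here is a minimal Python sketch; the item types and effort-in-days tuples are illustrative assumptions, not any particular tool's schema:

```python
from collections import Counter

def effort_distribution(work_items):
    """Summarise what share of completed effort went to each work type.

    work_items: iterable of (work_type, effort_days) tuples, e.g.
    ("feature", 3) or ("defect", 1). Returns {work_type: percentage}.
    """
    totals = Counter()
    for work_type, effort in work_items:
        totals[work_type] += effort
    grand_total = sum(totals.values())
    return {t: round(100 * v / grand_total, 1) for t, v in totals.items()}

items = [("feature", 2), ("defect", 5), ("defect", 3), ("feature", 1), ("defect", 8)]
print(effort_distribution(items))  # {'feature': 15.8, 'defect': 84.2}
```

A team seeing defect-fix percentages anywhere near the 80% in the talk's example has a strong, simple number to open the tech-debt conversation with.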
But one of the most powerful things to look at is the cycle time for different kinds of work. For example, here you will see that for new features, this particular team is taking 158 days on average to get a feature out. That seems like a lot of time in today's day and age. Of course, this is contextual, depending on the scale and complexity of the product you're talking about, but I would still say 158 days is a very long cycle, an idea-to-cash kind of number. And here we're not even really measuring idea to cash; we're measuring from actual design start to production deploy. What you can also see in this graph is that the blue line indicates the average, and there is a whole bunch of dots above it indicating certain outliers. If you analyze them, you might find certain root causes behind the delay, and it's quite likely that tech debt is one of them. So that's another way to quickly visualize it. The last one I would look at is a metric known as flow efficiency: if a particular item takes, let's say, 100 days to finish, what percentage of that time was actually spent in the work centers versus waiting and so on? If your flow efficiency is pretty low, that means you have accumulated a lot of process debt, and that might be an issue to handle separately. Again, trying to segregate the different kinds of debt might help you here. So these are some of the things. And of course, this one is not really measurable, but I'm sure all of you can relate: when developers show this expression, you know you have a serious technical debt problem. I wish we had a way to measure these.
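The flow-efficiency metric just described is simple enough to compute yourself once you know an item's active and elapsed time. A small sketch (the day counts are made-up examples):

```python
def flow_efficiency(active_days, total_days):
    """Flow efficiency = time actively worked / total elapsed time, as a %.

    An item that took 100 elapsed days but was only worked on for 15
    of them has 15% flow efficiency; the other 85 days it sat waiting
    in queues, which is process debt rather than coding time.
    """
    if total_days <= 0:
        raise ValueError("total elapsed time must be positive")
    return round(100 * active_days / total_days, 1)

print(flow_efficiency(15, 100))  # 15.0
print(flow_efficiency(25, 158))  # 15.8
```

The hard part in practice is capturing `active_days` honestly, which usually means tracking when items enter and leave "in progress" columns rather than trusting estimates.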
Unlike the previous ones, this is not really something we've found a way to measure. Of course, you can do surveys and other kinds of things, but nothing beats looking at people's expressions when you show them some code. Maybe AI can help us. But then the question is: OK, I understand that we have a technical debt problem, but how do we deal with it? How do we get started? Where do we start? What do we do about it? How do we convince other people? You may have a lot of those questions. Before we jump into anything, it's important to take a commercial break and talk for the next 30 minutes about myself, and then we'll come back to this topic at the end of the five minutes. Yeah, makes sense. All right, my name is Naresh Jain. I live in Mumbai. Don't act in Bollywood yet. I run a consulting company called Xnsio. I started my career building neural networks for ISRO. Of course, 20-plus years ago neural networks were nowhere near where they are today, so I didn't get anywhere with that. I then joined a company called ThoughtWorks; that's where I learned a lot of extreme programming practices, and in fact started the Agile India conference back at ThoughtWorks in 2004. I was then fortunate to be part of Directi, again an amazing company that grew, built many successful products, and sold them. I also happened to be part of Hike Messenger's leadership team, helping them implement a hypothesis-driven development culture within the organization. I was part of Industrial Logic, building e-learning to help developers learn technical skills. I started my own product, or platform if you will, to help kids learn mental mathematics; it didn't really work out, and I had to shut it down. And as some of you know, I've been involved with running various conferences in India, starting with Agile India but then growing into a whole bunch of different kinds of conferences.
Because of the conferences, of course, and to scratch my own personal itch, I started building ConfEngine, and a small team of folks have come together to build this platform. And today, here we are, running the entire conference on this platform. Who knew when we started? I've been fortunate to have worked with some fantastic companies and learned from them over the years. But for the last two years, a little over two years now, I've been very deeply involved with Jio, helping them with the digital transformation they're driving in India. In fact, tomorrow you will see Anish and Kiran come in, the CEO and the President, and hopefully we will have some very interesting insights from them. Mark Zuckerberg talks about "move fast and break things"; at Jio, we talk about "move fast and scale things". So we'll see that in action. Anyway, enough about me and my nonsense; let's come back to talking about debt, technical debt. One of the things that often gets me thinking is that people think of technical debt in almost a single dimension, but there are many different types of debt here. I've put just a few examples on the screen; there could be many more. What we've tried to do is visualize, on the left, a set of architectural components, if you will, and on the right, certain categories of debt that can come in. You can start classifying things into these buckets, just to help you understand everything you should pay attention to. Often we look at things like Sonar, and they give you only a partial view of the world. They may give you a certain view of coding standards and so on, but there are actually a lot of interesting tools out there (and, in some cases, not enough tools) that can help you see that tech debt is far more than just code quality.
Tech debt could also be, as I think someone mentioned in the chat earlier, related to your tech stack. You could be using an outdated language and want to move to the latest, greatest one, because the programming language could give you a big benefit in terms of developer experience, safety, and so forth. There's also debt related to performance and so on. I don't want to spend too much time here, but what I want to highlight is that it's important to think about debt in multiple dimensions, not one dimension, and that technical debt is more than Sonar code quality. Just wanted to highlight that. All right. If there is one most important slide in whatever I'm going to talk about, I think it's this one. This is what we've been using for a few years now as the thought process for addressing technical debt. Step number one is to visualize: use statistics, tech debt metrics, whatever you can get; get some numbers to help you see what situation you are in. Once you have visualized the situation, you can triage and decide whether you want to go down the refactor route or the rewrite route, and we'll talk a little bit about each of those. Then, once you've decided which path to take, you prioritize: of course we don't want to do a big bang of anything, so how do I prioritize so that I get the biggest bang for the buck, incrementally pay off my debt over a period of time, and also get validation on whether what I'm doing is actually helping or not? And finally, it's about the strategy, the approach you're going to use. Are you going to go down a strangler pattern? Are you going to look at a tool to guide you? We will talk about different strategies that can help you.
So that's the overall zoomed-out view of addressing tech debt and prioritizing it. Now let's deep-dive into each of these sections. The first one is visualize, and I'm going to give you various snapshots of tools that I have personally used that help me understand and visualize the scale of the tech debt we are in. Often it's a question of: how do I convince other people? I think the first important thing is to actually visualize and help people understand how bad the situation is. If you attended Linda Rising's keynote yesterday, she talked about how we think we are data-driven but are mostly making emotional decisions. While I agree with that, in her own influencing strategies there is a pattern about using data to strike that emotional chord with people. So sometimes, when talking about technical debt and trying to convince people, the most important thing is to help them visualize how big the problem is, what the consequences could be, and how it is actually hurting you. Visualizing it becomes very critical for convincing other people, in my opinion. Of course, we can start with the standard Sonar way of looking at things, and this certainly gives you a good first step. On this screen, along the bottom axis, Sonar plots how long it would take to fix the tech debt. On the left axis, it plots the code coverage you have, from zero to 100. And the size of the bubbles is equivalent to the lines of code we are looking at. So the one here in the top right corner is a big bubble: it has the highest tech debt in terms of the duration it would take to fix, and it also has the lowest coverage. This might help people understand: hey, look at this.
This is the kind of tech debt situation we are in. We do have some things here in the bottom left corner that are actually pretty good, but there are things in the top right corner that are problematic. And this is a standard tool telling you that. Once you have something like this, you can go for something more advanced. There is a tool called CodeScene. Last year we had Adam Tornhill, who built CodeScene, speak at the conference; he also ran a workshop. It's a fantastic tool to help you visualize hotspots in your application. It helps you see whether the health of your code is improving or not. It's a great way to start putting a metric behind a lot of data points combined together, helping you look at hotspots, health, et cetera, and it gives you some very actionable things. So beyond SonarQube, CodeScene could be your next level up, if you will. Long back, I also built a tool called C3. It's an open-source tool that helps you visualize the quality of your code along three dimensions: C3 stands for coverage, complexity, and churn. First, it looks at code coverage. Second, it looks at cyclomatic complexity. And third, it looks at churn, which is how often a particular file or method has changed. My point was that if you look at each of these metrics in isolation, they actually don't tell you much, so it's not a great idea to look at code coverage or any of these things alone. For example, I might have very low coverage on code that is extremely simplistic; if you're using Java, it's maybe just getters and setters, and that code never, ever changes. Once it's there, nobody ever touches it. Do I really need to worry that it has low coverage? No, probably not, right?
What I need to worry about is code that is extremely complex (the cyclomatic complexity is very high), has very low coverage (so there is no safety net), and is very frequently being changed. It could be changing because there are a lot of bugs, or because it's an area in which a lot of developers are working. So this is a very simple way of identifying hotspots in your application. This is a treemap visualization: each of these outer boxes is a package, and within it you can see smaller files and so forth. You can drill down further into each of these, and it may show you exactly which area is troublesome. The idea was to look for the biggest black or red spot; black here is basically a black hole, beyond recovery in some sense. You would look for the biggest black area and prioritize fixing that. Those were all purely based on the quality of the code. Now I'm going to slightly shift focus: you could use something like Lighthouse for frontend stuff, for web applications. You can look at performance, and it gives you some very useful metrics by which to measure how good your page is. This is certainly a form of debt that needs to be addressed; it fits into the performance side of things. So this is a different kind of debt, and unfortunately you won't get it out of Sonar, so again you need to look at things beyond Sonar. You could also look at the Network tab and try to understand how long each of the APIs is taking and how long it is taking to render certain things. Beyond that, you can start looking at further performance metrics.
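The core C3 idea, that a file is only a hotspot when low coverage, high complexity, and high churn coincide, can be sketched as a simple composite score. The multiplicative weighting below is an illustrative assumption for the sketch, not C3's actual formula:

```python
def hotspot_score(coverage_pct, cyclomatic_complexity, churn_commits):
    """Rank a file as a hotspot only when all three signals agree:
    low coverage AND high complexity AND frequent change.
    Any single bad metric in isolation scores low."""
    risk_from_coverage = (100 - coverage_pct) / 100  # 0..1
    # Multiplying the factors means a trivial, never-touched file
    # (churn 0) scores 0 no matter how poor its coverage is.
    return round(risk_from_coverage * cyclomatic_complexity * churn_commits, 1)

# Plain getters/setters: low coverage but trivial and never changed -> not a hotspot
print(hotspot_score(coverage_pct=10, cyclomatic_complexity=1, churn_commits=0))   # 0.0
# Complex, untested, frequently edited code -> the "black spot" to fix first
print(hotspot_score(coverage_pct=10, cyclomatic_complexity=25, churn_commits=40)) # 900.0
```

Churn is cheap to obtain from version control, for example `git log --since="90 days ago" --name-only` piped through a counter, which is roughly how churn-based tools gather it.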
Again, there's an open-source tool that Hari and I have built called Perfiz. It gives you a standard dashboard straight out of the box: you just need to give it your Docker container, and it can start running performance tests around it. It can help you understand, as you scale your RPS or TPS, how your resources are performing and so forth. This can be another way to visualize whether you have performance debt or not. On the database side, you have things like AWR reports (in Oracle, with equivalents in most databases) that can tell you about slow queries, problems with your indexes, and other kinds of problems, giving you insight into whether you have debt accumulating on the database side of things. So again, I'm just giving you a few examples of the various things you should look at to help you visualize what kind of tech debt you have, and to build a business case around what is important. I'm going to fast-track a little bit now, but once you have visualized things, the next step is to ask: what am I going to do about this? Is this something I would be able to refactor and rescue, or would it need a rewrite? Both of them have their pros and cons; there isn't a single best answer here. There are risks associated with rewrites; I've listed a whole bunch of them here, and most of you must be familiar with them. And there are risks associated with refactoring. Both are problematic. So in my experience, I generally prefer a hybrid approach. What is a hybrid approach?
A hybrid approach basically says: I refactor to decouple the pieces, and then, looking at the individual pieces, at times it's just much easier to rewrite those smaller components. If you're aware of Michael Feathers, he's the guy who wrote the book Working Effectively with Legacy Code. Michael and I were once talking about this notion: what if all your code came with an expiry date? On a certain date, the code would just disappear. If that's the notion with which you design things, if you imagine your code has an expiry date, then chances are you will design things in a very decoupled manner, as small pieces that each have a different expiry date, with new pieces constantly plugged in to replace them, right? That's the way I think of a hybrid approach: I decouple the pieces using refactoring, and then I think of each of those pieces as having an expiry date, which means I just replace them with the latest, greatest stuff. So that's one way to think about this whole refactor-versus-rewrite question: a hybrid of the two. Now let's come to the prioritization piece, because that's where I really want to spend some time. Given that you've visualized, and then triaged to decide which approach you want to use, now you want to decide how you're going to prioritize things. You don't want to do things in a big bang manner; you want to work incrementally so you get feedback and iterate.
So one way to help you prioritize, and this is something I recently did on a project, is to take the access log file, if you're using something like Nginx or whatever. We wrote a simple script to help us visualize, for every day, the most frequently hit API calls. It shows, for example, that this blue line here is the most used API call, and the next line the second most used. So you can get a sense of the most used API calls. I'm just giving an example of prioritizing, on the backend, what you're going to refactor or rewrite: one way is to visualize the logs and see what is most frequently being hit. You could also compute a cumulative number and pick the endpoint with the highest number of requests; that gives you a starting-point prioritization mechanism based on usage statistics. On the frontend, there are a lot of tools that show you the hotspots, the most-clicked areas of your application, and you might decide: hey, this component, which accounts for 10% of the overall clicks, is pretty important and has some technical debt, so this is what I want to start refactoring or rewriting first, and then I'll deal with the other ones. So that's another example of prioritization for the frontend, looking at the click rate and using it as a data point for what in your frontend application to tackle first. You can of course also look at Sonar; again, Sonar gives you a very good way to visualize where your technical debt is, and you can say: I want this big red spot over here to become smaller and greener, and to do that, I need to start moving it down and to the left.
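The access-log analysis described above needs only a few lines. A sketch of such a script, assuming the common Nginx combined log format (the sample entries and paths are made up):

```python
import re
from collections import Counter

# Matches the request line of a combined-format Nginx access log entry, e.g.:
# 10.0.0.1 - - [12/Mar/2021:10:00:01 +0530] "GET /api/orders?id=7 HTTP/1.1" 200 512
REQUEST = re.compile(r'"(?:GET|POST|PUT|PATCH|DELETE)\s+(\S+)')

def top_endpoints(log_lines, n=5):
    """Count hits per endpoint path, ignoring query strings."""
    counts = Counter()
    for line in log_lines:
        match = REQUEST.search(line)
        if match:
            path = match.group(1).split("?")[0]
            counts[path] += 1
    return counts.most_common(n)

sample = [
    '10.0.0.1 - - [12/Mar/2021:10:00:01 +0530] "GET /api/orders?id=7 HTTP/1.1" 200 512',
    '10.0.0.2 - - [12/Mar/2021:10:00:02 +0530] "GET /api/orders HTTP/1.1" 200 512',
    '10.0.0.3 - - [12/Mar/2021:10:00:03 +0530] "POST /api/login HTTP/1.1" 200 128',
]
print(top_endpoints(sample))  # [('/api/orders', 2), ('/api/login', 1)]
```

For real traffic you would stream the file (`for line in open("access.log")`) and probably bucket counts per day to get the trend lines shown on the slide.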
So that's another way to prioritize: of all of these things, I will start with the one where I'm going to get the biggest bang for the buck. I already talked about CodeScene earlier. CodeScene also gives you an already-prioritized list of things that, according to it, are clear hotspots you need to address, and it looks at things like what is most frequently changing. It's similar to the C3 tool I talked about; it looks at similar data, but more of it, so it's a bit more advanced, and it gives you a prioritized list of things to attack from a code quality point of view. So those are a few techniques I've used for prioritizing where to start and what the most important thing to look at is. On the database side, you can also look at the most frequently hit queries and the most frequently used entities in your database, and again use that for prioritization. In short, what I'm saying is: depending on which architectural layer you're looking at, you use usage statistics to guide you on what is important, and then you blend that with the statistics you get from things like Sonar or CodeScene or C3 or other tools. Then you say: OK, given all of this, this is, let's say, the most important microservice, and within it I'm going to deep-dive into Sonar or one of these tools to help me refine it further. Assuming you've done all of that and identified what you want to fix, now it's time to decide how you're going to go about fixing it. We don't quite know what to call this; it's neither a framework nor a template, somewhere in between, but just for the heck of it, we call it continuous evolution.
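Before moving on: the blending of usage statistics with tool metrics described above can be sketched as a simple score. The normalisation and the component numbers below are illustrative assumptions, not a formula from any of the tools mentioned:

```python
def priority(usage_share, debt_hours, max_debt_hours):
    """Blend how heavily a component is used with how indebted it is.

    usage_share: fraction of total traffic/clicks hitting this component (0..1)
    debt_hours:  tech-debt estimate for it (e.g. from a static-analysis tool)
    Debt is normalised to 0..1 and multiplied by usage, so a heavily
    indebted component nobody uses ranks below a moderately indebted
    one on the hot path."""
    return round(usage_share * (debt_hours / max_debt_hours), 3)

components = {
    "orders-service":  {"usage_share": 0.40, "debt_hours": 120},
    "reports-service": {"usage_share": 0.05, "debt_hours": 300},
    "login-service":   {"usage_share": 0.30, "debt_hours": 40},
}
max_debt = max(c["debt_hours"] for c in components.values())
ranked = sorted(components,
                key=lambda name: priority(components[name]["usage_share"],
                                          components[name]["debt_hours"], max_debt),
                reverse=True)
print(ranked)  # ['orders-service', 'reports-service', 'login-service']
```

Note that reports-service carries the most raw debt but still ranks below orders-service, which is the point of blending usage into the score.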
So we've prioritized and identified a backlog item. From there, we capture certain baseline KPIs. Then you define what your target KPIs should be for this refactoring exercise: after the refactoring, where would you like to be? Then you come up with certain hypotheses about what will take you from here to there: what is the simplest way to go from the current baseline KPIs to the target KPIs? You define an experiment to prove or disprove your hypothesis. After the experiment, you observe the KPIs, you learn from this, and then you repeat the cycle, OK? I want to give you a couple of realistic examples so you can understand how we go about this. Let's assume I have done some static code analysis and picked this closed-loop redemption file, which seems to be problematic. Why is it problematic? Because, according to Sonar, let's say, the tech debt is quite high, the code coverage is very low, it's a fairly big file (2,442 lines), the reliability rating is pretty low, and so forth. So those are the baseline KPIs I have. Based on this, I define the end state I want to achieve: what do I want to invest in right now and move towards? Once we define that, we ask: what is our hypothesis? What will help us move from here to there quickly? We can look at it and say: there's a large class smell, there's a black sheet smell, there are a lot of static methods, there's very high conditional complexity, and so forth. These are the problems that, if we address them, should move us from here to there.
So we identify certain hypotheses, let's say in terms of code smells, and then we put an experiment together: I'm going to timebox X amount of time, and I want to address these code smells by applying, say, the single responsibility principle, or by writing certain scaffolding tests, and so forth. Then (here the slide is missing the actual KPIs) there is the validated learning, for instance that your IDE can help you quite a bit with a lot of these refactorings; that's what the team had captured. And then there would be certain action items, such as setting certain quality gates, so this is prevented from happening to future code. You can iterate through this cycle a few times. I have a similar example here for DB-related stuff, which we can use to help us improve, say, poorly performing queries or missing indexes and so forth. So those are a couple of examples. Sorry to interrupt, just a quick time check: 10 minutes to go, sorry. Yeah, yeah, thanks for that. I'm actually pretty much wrapped up, two slides to go. I went faster than I probably should have, but I want to leave 10 minutes for the Q&A, so absolutely. So yeah, these are a couple of examples of how we use this continuous evolution approach. Just to reiterate: prioritize the backlog item, which we've done so far with the various prioritization techniques we discussed. Once we've done that, we establish a baseline KPI from the tools we have, we set a target KPI, and we define a hypothesis about what is causing this and what could be addressed to get away from the problem.
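The loop just recapped (baseline KPIs, target KPIs, hypothesis, timeboxed experiment, observed KPIs) can be captured in a small record so each iteration is explicit and checkable. This is only a sketch; the field names and KPI values are illustrative, not a formal template from the talk:

```python
from dataclasses import dataclass, field

@dataclass
class RefactoringExperiment:
    """One iteration of the continuous-evolution cycle."""
    backlog_item: str
    baseline_kpis: dict
    target_kpis: dict
    hypothesis: str
    timebox_days: int
    observed_kpis: dict = field(default_factory=dict)

    def validated(self):
        """The hypothesis holds if every observed KPI met its target.
        (Assumes all KPIs here are 'lower is better', e.g. tech-debt
        hours or lines of code; invert the comparison for coverage.)"""
        return bool(self.observed_kpis) and all(
            self.observed_kpis[k] <= v for k, v in self.target_kpis.items())

exp = RefactoringExperiment(
    backlog_item="ClosedLoopRedemption",
    baseline_kpis={"loc": 2442, "debt_hours": 90},
    target_kpis={"loc": 800, "debt_hours": 20},
    hypothesis="Splitting by single responsibility removes the large-class smell",
    timebox_days=5,
)
exp.observed_kpis = {"loc": 650, "debt_hours": 15}
print(exp.validated())  # True
```

Keeping each iteration as a record like this makes the "repeat the cycle" step concrete: the next experiment's baseline is simply the previous one's observed KPIs.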
Then we define a small experiment to run; in the case I was talking about, I would run an experiment of applying the single responsibility principle to eliminate some of those smells. The same approach can be applied elsewhere: we've applied it to replacing a language with a newer language and to replacing a framework with a new framework. For each of these: if you want to replace a framework, what is the reason? What baseline KPIs would you define in terms of performance, developer experience, and so on, so you know the current state? And if you were to replace it with something new, what is the target state you're trying to achieve? You should be able to take a small slice, maybe a small part of your code, and try to prove the hypothesis there before you decide to start changing the framework across the entire code base. We've used this approach quite heavily, and we've found it very useful for breaking large things into small chunks and making those small things data-driven. So, to quickly wrap up my talk, since I'm running a little short on time for the Q&A: just to recap, you start with visualization of your current usage statistics and the tools telling you where you may have tech debt; there are also certain kinds of tech debt the tools will not tell you about, which you have to figure out, as in the examples I gave. Then, step two, you decide between refactor and rewrite, and I would say a hybrid of the two is generally useful. Then you prioritize using the several techniques we covered, from a usage perspective and from a tooling perspective.
And finally, you strategize using the continuous evolution approach I talked about. That is pretty much it, what I had. So I will quickly stop sharing my screen and look at the questions folks may have. Okay. I see that we have about nine questions. Okay, cool. I'll quickly go through these, and if folks have more questions, please do add them. The first question I see here, from Venkat, is basically: what is the source of this information about the sources of debt? I'm assuming Venkat is referring to the early slides, some of the graphs that I showed at the very beginning. Those come from a little open-source tool, again one that we've built, called Angioscope. As of today, Angioscope hooks into Azure DevOps, which is an overall DevOps tool similar to a lot of others you might be familiar with: there is the Atlassian stack, as in Jira and all the tools that come with it, and Microsoft has its own stack called Azure DevOps. We hook into that and pull all of these stats, and it's an open-source project you can look up; it's called Angioscope. So that's the source of the information. All right. The next question is: please share the source of this information about tech debt. OK, I've already done that. Moving to the next question: many teams have started creating tech debt deliberately to reduce time to market, but that leads to too much rework and cost at late stages. What could be the right time to start prioritizing and working on the self-induced debt? If a program or project is one year long, managing tech debt at the end of the project release may cost too much. So this is, again, an interesting question. Let me actually mark these as answered. This is an interesting question.
In fact, this morning after Ward's talk, we were at the hangout table discussing exactly this: is self-induced debt a bad thing? There are two ways to look at self-induced debt. One form is this: the business or product folks say that a particular feature is going to drive a huge amount of business, or move certain business KPIs by a large number. And you say, well, that's great — that's a bet, a hypothesis. So what I want to do is run a very cheap experiment where I'm not going to care about code quality or things like that. I just want to build a skateboard version of this feature — hack something together very quickly — and validate whether the business hypothesis actually holds or not. I don't want to worry about performance just because the business thinks this will bring in 100 million users; I really want to see if even five people use it. At that stage, I might not need to worry about performance or similar concerns at all. If this is the form of deliberate debt, then I think it's awesome. I would encourage teams to absolutely do this, because — again, going back to the keynote — Ward mentioned that we have a lot of problems where nobody knows the full problem; in many cases we're all still discovering it. And it's a complex adaptive world, which means things will also evolve as you go along. So if this is the form of debt you're referring to, I think it's cool. Some experiments will be successful, some business hypotheses will be validated — which means, yes, we now need to scale this up, and we need to go back and improve it. It really doesn't matter whether you think of this as debt or not: you tried a quick experiment, you validated it, it succeeded, you scale it. If it didn't succeed, you discard it.
The important thing is to discard the code. And that's where Michael Feathers' expiry-date analogy is important to keep in mind: always think of code as having an expiry date — someday it will go away, someday you will have to delete it. So, to the anonymous attendee: if that is your definition of self-induced debt, then I think it's great and you should do it. But if your definition of self-induced debt is "I have the time, I know exactly what is required, there's no ambiguity, but just for the fun of it I'm going to write extremely sloppy code so that I can come back later and spend more time on it" — that, of course, is not recommended. Personally, though, I don't believe developers would do that, because it's obviously more work just for them. So self-induced debt can often be confused with people just playing around, but the way I like to look at it is that people are trying to validate whether something is worth engineering at all, or whether they can get away without it — and that's a good thing, in my opinion. Hope I've answered that. Next: interesting chart, which tool are you using for this — flow efficiency, feature versus bug, et cetera? A lot of these are from the tool I mentioned, Angioscope. It's open source; you can go and have a look at it, and I'll also put up a link if anyone's interested. Then there is another question from Venkat: in the context of microservices, it seems to me that the impact of tech debt would be less significant as opposed to, probably, business or architectural debt. What are your thoughts on how tech debt correlates to implementation architecture? So, as I was trying to clarify earlier, to me tech debt is not only about code. If you have architectural issues, that is also tech debt for me.
So I look at tech debt in a much broader sense than just code quality. Architectural problems, performance-related issues — these are all forms of tech debt, and they all have to be looked at and prioritized holistically. Sometimes the architectural ones will hit you much harder than the code-related ones; in other contexts, the code-related ones will hurt you more. The architectural ones may only start to matter when you really scale out, but at a smaller scale, if you have a lot of issues with the code — it's too complex, it's buggy — then that's probably what's going to hurt you. So it's contextual, but in my view, look at tech debt holistically. All right — the next one... sorry, I'm out of time. I greatly appreciate all the questions. Fantastic — thanks again, everyone, for joining in.