All right, I'm going to be talking about what I think is a fairly important topic that a lot of organizations struggle with: how do you improve the quality of the incoming code? We've had all this knowledge about extreme programming for at least the last 15 years. But as Jess was pointing out in his keynote, not a lot has changed. We still see the same set of hands go up, and honestly I was surprised so many hands went up; my experience in the industry is that half of those people were lying. It's far from that. This talk is based on a few really popular startups I've been involved with recently, at least in the last four or five years. At every startup, you go in, you look at the state of affairs, and you think, this is taking us 20 years back. But that's just the reality of how things are. Let me get this off. That's just the reality of how things are. So what do we do with this reality? How do we make progress? How do we get people to look at some of the things that are important from a code quality point of view? And it's also time for us to introspect and ask whether we are getting too dogmatic as a community: you should do this, this, this, this. Maybe some of those things are not relevant anymore. I'm going to talk about test-driven development in particular, and my experience of not having done test-driven development since at least 2013, so it's been a while, and some of the thought process around why that is. But more importantly, why do we care about code quality? Going back to the topic and just setting the context: why do we care about code quality? The reason I'm asking is that when you go in and see organizations which are very successful, you ask yourself, is your understanding of code quality itself flawed? Or is something going on here that is just very superficial, and these guys are going to crash and burn? You keep asking yourself these introspective questions. So I ask myself: why do I care about code quality? I care because none of us want expensive disasters to happen in our software. If your software is accepting payments, you don't want to be taking people's money and then losing that money. You don't want bad things to happen to you. That fear is one of the things that drives people to build high-quality software. Then there are delays: delays because of issues in your software, not being able to put it out there on time, and the things people were expecting not being released. Stress and burnout is another issue, because with a lack of quality you generally see people always in firefighting mode, dealing with issues all the time, and nobody likes to be in that situation. It also causes a lot of frustration for users when the software does not work the way they expect it to. That leads to an erosion of trust when the software doesn't do what it's expected to do, and you start losing your credibility as someone building the software. There may be a lot more reasons, but these are some of the important reasons why companies really care about quality of code, and whenever you're in doubt, I would encourage you to go back to these first principles and ask: in this company, is it one of these issues that we are dealing with?
We may see that the code quality is really bad, and it may not be up to the standards we expect, but how is that actually visible? Is it one of those things that is actually coming out? A lot of times, I'm sure you're familiar with the ice cream cone problem, the ice cream cone testing problem. If you map out the tests in your product percentage-wise, a lot of times you find very few unit tests, a whole bunch of integration tests, a very heavy focus on things like Selenium or Appium or one of these UI automation frameworks, and then a whole bunch of manual checking on top of that. That's referred to as the ice cream cone problem. I'll come to whether this is bad and how we deal with it.

The problems I talked about earlier lead us to typically approach them in certain ways. I'm sure everyone has faced some of these problems. Anyone who doesn't face these problems? Probably you should be speaking here, not me. So what is the typical approach when you see one of those problems: delays, bugs, frustration, people being stressed out? How do you deal with those issues in your company? I want this to be interactive, so I know I'm not talking to walls. What are the typical approaches you've used? Yes, so we introduce a lot more quality gates, a lot more checks, a lot more processes to make sure bad things don't get out of control. All right, automation and quality gates, okay, that's a good one. So you get your developers to see actual users' pain and experience that pain. Yes, get them to feel the customer experience. Sometimes dog-fooding, right? Encourage dog-fooding: use your own products; if you can use it, other people can use it. Anything else? So, trying to create a culture in the company where quality is emphasized and becomes a key selling feature, as you said. That's cool. So you put a picture of your parents on your desk, and before a developer says "ready for test" you ask, would you show it to your mom? Yeah, that's cool.

What I typically see is people doing things like this: send all your developers to a two-day training on how to write better-quality code. Yes? And what do you see after that, if there's any difference at all? It's not a sustainable thing. Yeah. It works for calculator problems, but doesn't work in our context. Yeah. Design-pattern soup. You come back excited from a training and now you want to try all those things, which is what's referred to as RDD, resume-driven development. Right? Okay, so that's another interesting technique; I'm going to touch upon that. We tried that; it didn't really work very well. We mandate that people have to write unit tests and hit a certain amount of code coverage, without which the code will not get accepted. Someone in management decides that 80 is a good percentage. We don't know what it actually means, but 80 sounds like a reasonable number, so let's force it down: 80% code coverage, otherwise no appraisals this time. And how does that work? It works beautifully. You get 90% code coverage. Then you delete the code, run the tests, and everything still passes. It's like, wow. Why? Because there are no assert statements. Or if there are assert statements, it's assert true equals true. All you're making sure is that no exceptions are thrown, and we know how to deal with that too: try-catch around the test and boom, there it goes.
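Just to make that concrete, here is a minimal sketch of the kind of "test" that games a coverage gate. This is illustrative only; the function and names are made up, not code from any of these projects. The lines get executed, so coverage goes up, but the test can never fail:

```python
# Illustrative only: gaming a coverage mandate.
# A stand-in for real production code; any path through it counts toward coverage.
def process_payment(order_id, amount):
    if amount <= 0:
        raise ValueError("amount must be positive")
    return {"order_id": order_id, "charged": amount}


# The kind of "test" that keeps the 80% gate green without checking anything.
def test_process_payment():
    try:
        process_payment(order_id=42, amount=100.0)   # lines execute, coverage goes up
        process_payment(order_id=43, amount=-1.0)    # even the error path is "covered"
    except Exception:
        pass                                         # swallow failures: the test can never go red
    assert True                                      # the only assertion, and it is always true
```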
So whenever people try to force these things down, and we've all done this in our own individual capacities at various points, we've seen that it actually doesn't have much of an impact. Then we say, ah, code reviews. Let's mandate code reviews in the organization. Let's get every junior developer who writes crappy code to be reviewed by senior developers. But senior developers also write crappy code, so let's hire someone who can review everybody's code and make sure we get great code quality. And then often they're just debating whether the parenthesis should be on this line and the curly braces should go here, and stuff like that. There is limited success with code reviews, but often the ROI is questionable. I don't know about your experience, but that's been mine. Not to say these things are bad; I'm just trying to highlight that while they're good in theory, in reality we often struggle with them.

The other thing a lot of people in the XP community will talk about is pair programming. So let's watch this old video of me and J.B. Rainsberger. How many people here know J.B.? A few people, yeah. Not a founder of XP, but one of the people who has worked really hard to make it popular, and a Gordon Pask Award winner from the Agile Alliance. Let's quickly see this. [Video] Decimal to binary. Decimal to binary, all right. So given five, you want to convert it to one zero one? One zero one? All right. And this is the ugly pairing? All right. Oh! I'm not even fixing it. All right, I'll start again. God damn. Where am I going to live in here? Where am I going to get the pair of a jet? No suggestions for it. All right. So, what are we doing, man? Why don't you drag? You're closer. Come on. The hell is wrong with you? Okay. Yeah, it's the third time. That's cheating, dude. We can multitask, dude. Yeah. So what are we talking here? Decimal to binary, I think. [End of video] Anyway, that was just a fun video we did at a 2010 conference, trying to show the typical ways you see people pair programming in organizations. It's literally one person working and the other person dozing off most of the time, or it's used as a way to train somebody on the job, and things like that. Those aren't necessarily bad things, but they don't solve the code quality issue right away, which is something to be aware of.

The other common response is to ramp up the testing team. I don't know if you've seen this, but you often find that whenever you have code quality issues, the typical answer is: let's hire more testers. That's not going to address your code quality issue. It may stop bad things going out of the door, but it also stops anything going out of the door. People try things like Sonar, SonarLint, various other static code analysis tools, and try to integrate them. Some of these are, again, quite useful, but have their own problems. I want to quickly touch upon some challenges we see with all of this. One of the big challenges we typically run into is that these initiatives end up happening in isolation, so they don't actually help much.
They stay in pockets, they don't create a culture around this, and your overall code quality is still bad. People live in an age of instant gratification: if they do something, they want instant gratification. And the challenge with some of the things we were talking about, pair programming, unit testing, test-driven development, et cetera, is that you need to invest. It takes at least six months, sometimes a year, before you can actually see real advantage. Has anyone implemented TDD and seen the benefit, like, next week? One person? It's usually a much longer journey before you actually get some of the benefits out of it. A lot of times you have tools, I've put SonarQube here, but it could be any other tool, that overwhelm you with all kinds of feedback and so much non-actionable data that you don't know what to do with it. And people just start ignoring it, like we ignore traffic lights. It's just too overwhelming to deal with. Often these are all-or-nothing kinds of things: you need to do all of this or you get nothing, and to me that's a non-starter. It's not a great way to approach this. So I'm going to look at some possible alternatives. Again, I'm not suggesting this is a silver bullet that will work for everyone, just some things we've tried and had some success with.

But before that, I think we should talk about the most important part of this presentation, which is to talk about me, right? My name is Naresh. I live in Mumbai. I don't act in Bollywood, yet. I started out long back with a company called ThoughtWorks, which is where I got exposure to all these cool techniques of extreme programming, and we did some really great work learning extreme programming practices and so on. One of those terms is considered a bad word, and it still is. Then I was part of a company called Directi, which, again, is one of these places where I showed up from ThoughtWorks thinking code quality, CI, CD, version control, these are just the standard way of doing things, who would question them? And then you show up at this company and you're like, what? But they are amazingly successful. So you ask yourself, what's wrong here? Is something wrong with me, or with them? Then I went on to be a partner at Industrial Logic, where we built e-learning. I taught a lot of people at Google, at Amazon, and a bunch of other companies how to write good-quality code. Then I started my own startup. Frustrated with trying to help people, developers especially, improve, I figured it's better to focus on kids: they listen, right? So I started a company called Edventure Labs, where we built games for kids to learn mental arithmetic. It didn't work out; we spent a lot of money, crashed, burnt. I continued to run conferences, and that's where my passion for building the ConfEngine platform that we use comes from. I happen to be one of the developers on that project, and right now the only developer on it. ConfEngine actually started in 2012, so it's been going on for a while. We've done about $2 million worth of ticket sales through it, so it's beyond pet-project status. We have about 100,000 users actively using it, and we have zero automated tests. I'll talk about that a little later in the talk.
But then I went and joined a company called Hike Messenger as a consultant, trying to help them. And again, one of those moments: this was the fastest unicorn, but some of the code quality was pretty bad. And you're sitting there thinking, how could this possibly be working, at 100 million users, when the code quality is so bad? So again, you go back to the question: is there something wrong with them, or something wrong with you? Anyway, these days I run a consulting company and I'm helping a large investment bank with their data strategy. So that's a little bit about me, and hopefully it helps you understand why I'm probably at least a little qualified to talk about these topics.

Let's come back to some possible alternatives. One of the things we saw is that when you put Sonar in place and it produces some interesting data, nobody goes and looks at Sonar. So the first hack we tried was: what if Sonar could post that feedback on your pull request? The moment someone creates a pull request, what if Sonar could put some of this feedback on the pull request, so the person who's going to approve or merge the PR is at least going to look at it and ask, what's all this noise on this page? So the first thing we did was a quick hack using PR Builder. PR Builder watches for new pull requests; whenever one comes in, it runs the changes against a Sonar instance that you have, takes the comments from Sonar, and puts them as comments on the PR itself. Instead of asking people to go and look in a different place, bring that feedback into the pull request itself. And what we saw is that people complained a lot: this Sonar thing is nonsense, why is this a problem, why is that a problem, what does this even mean, why is this stopping me from merging my code? And that was a great starting point, because people now started taking at least some interest in understanding what these things even mean. We went through a lot of refinement, Arun was there, Arun remembers, to come up with a minimal set of rules in Sonar that we thought were absolutely necessary, rules that any new incoming code should meet. And it would only check the delta, not everything: if changes were made, it would only verify the quality of those changes. So it would produce things like this, and if there were critical issues, you wouldn't be able to merge the code. That came as a bit of a top-down push, but it at least got people to start paying attention. Did this improve the quality of the code? Not really. But it did start making people a little aware of some of the concepts we were trying to talk about, in a way that was practical and contextual to what they were doing day-to-day, instead of running a theoretical class on static code analysis and what cyclomatic complexity means. It's more of a pull-based approach than a push-based approach.
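To give a sense of the mechanics, here is a minimal sketch of that hook. This is not the actual PR Builder code; the webhook payload shape, run_sonar_analysis, and post_pr_comment are hypothetical helpers standing in for the Sonar scanner invocation and your Git host's review-comment API.

```python
# Sketch only: pull Sonar findings into the pull request as review comments.
# run_sonar_analysis() and post_pr_comment() are assumed helpers, not a real API.

def on_pull_request_opened(event, run_sonar_analysis, post_pr_comment):
    """Called by the CI job (or webhook) whenever a new PR is created."""
    changed_files = [f["path"] for f in event["changed_files"]]

    # Analyse only the delta, not the whole code base.
    issues = run_sonar_analysis(branch=event["source_branch"], files=changed_files)

    # Bring the feedback to where the reviewer already is: the PR page.
    for issue in issues:
        post_pr_comment(
            pr_id=event["pr_id"],
            path=issue["file"],
            line=issue["line"],
            body=f'{issue["severity"]}: {issue["message"]}',
        )

    # Fail the build (and hence block the merge) only on critical findings.
    critical = [i for i in issues if i["severity"] in ("BLOCKER", "CRITICAL")]
    return len(critical) == 0
```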
Once we did this, we realized we could actually do something slightly better, because the challenge we had was that the people merging a pull request didn't really know what the impact of merging it would be. So we wanted to give them some kind of measure, a sense of how risky the pull request they were about to merge was, because, again, remember, what we're trying to do is stop bad things from getting into the main trunk. What kind of tools can we give people so they can make those decisions better? The next step we tried is what we call the PR risk advisor. Any time a pull request is created, it looks at all the historical data and gives you a report like this: this is what the pull request is about; these are the files that have changed in this pull request; this is the number of lines in each file; this is the churn, as in how many times this file has changed recently; here is the number of bugs this file has been involved in recently, as in, whenever a bug was fixed, this particular file was part of those bug fixes. It would have things like the cyclomatic complexity and the percentage of duplication. All of that information is just pulled out of Sonar, plus some information from whatever project management tool you're using. We had a convention that any time you make a commit or raise a pull request, you put the ticket ID in it, so we could see whether this was a bug or a new feature, and we could build some intelligence like that. What this did is help the reviewers who were reviewing the code make much more informed decisions. And at this point, people started paying a lot more attention to some of the things Sonar was telling them in terms of static code analysis. Still no tests, still none of that stuff; it's very basic. It starts with making people aware of code quality and helping the people reviewing the code make more informed decisions. Clear so far?
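A simplified sketch of the kind of risk signal the PR risk advisor put together. The inputs come from the sources mentioned above, but the normalisation and weights here are invented for illustration; the real report showed the raw numbers to the reviewer rather than one magic score.

```python
# Sketch only: combine per-file history and Sonar metrics into a rough risk signal.
# FileStats, the thresholds, and the weights are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class FileStats:
    path: str
    lines: int              # size of the file
    churn: int               # commits touching this file recently
    bug_fixes: int           # times this file changed as part of a bug-fix ticket
    complexity: int          # cyclomatic complexity from Sonar
    duplication_pct: float   # duplicated-lines percentage from Sonar


def file_risk(stats: FileStats) -> float:
    # Normalise each signal to roughly 0..1, then weight them.
    size = min(stats.lines / 1000, 1.0)
    churn = min(stats.churn / 20, 1.0)
    bugs = min(stats.bug_fixes / 10, 1.0)
    cplx = min(stats.complexity / 50, 1.0)
    dup = stats.duplication_pct / 100
    return 0.30 * bugs + 0.25 * churn + 0.20 * cplx + 0.15 * size + 0.10 * dup


def pr_risk_report(changed_files: list[FileStats]) -> list[tuple[str, float]]:
    """Highest-risk files first, so the reviewer knows where to look."""
    return sorted(((f.path, round(file_risk(f), 2)) for f in changed_files),
                  key=lambda x: x[1], reverse=True)
```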
The next thing was a big influence from a particular talk at the Functional Programming conference, and I happen to be wearing the t-shirt today, by Aaron, who is a professor and PhD researcher at Indiana University. He talked about design patterns versus anti-patterns in APL. How many people have heard of the programming language APL? It stands for "A Programming Language", which tells you how old this language is. But it's still one of the most fascinating languages out there. It was originally created by Kenneth Iverson to help mathematicians represent mathematical notation, and the language has been around for 50-plus years. The reason I bring this up is that these folks have been living under a rock, and they've been doing some very fascinating things. Everything that we consider a best practice, they consider an anti-pattern. There's something really fascinating happening there, and I drew a lot of inspiration from this talk and tried to bring some of these techniques back to work.

For example, one of the things they talk about is "abstraction considered harmful". We consider a lot of what we've done over the years in terms of abstraction to be very important, but they consider abstraction harmful. Their point is that if you can make things more transparent rather than abstract, more visible, then it's that much easier to understand what's going on. Otherwise, in the name of abstraction, a lot of times we just jump through hoops and hoops before we understand anything. The other interesting thing they talk about is "libraries considered harmful", especially black-box libraries. The thought process is that if you use these black-box libraries, you don't know what's going on inside them, and you don't know what impact they're going to have on your software. One more thing they consider harmful: the language does have if statements and control structures now, but generally an APLer wouldn't write code with if or switch or any of those things. The way they write code is very interesting: you write code as if there is no conditional logic in it, and the way they achieve that is by turning the conditions into a data structure. Then you just do data structure manipulation.

To give you an example, you've all looked at the schedule in ConfEngine, the multi-track version of it. Now imagine, as a programmer, if you were building something like this, the way you would typically do it is: you'd have a list of sessions in the database sorted by time and by track, you'd have a big for loop, and you'd iterate over it. You'd take each session off that list and check what type of session it is. Oh, it's a keynote, which means it needs to be shown across all four tracks. It's a 90-minute workshop, which means it needs to span multiple slots. Things like that. So you'd have a big for loop and a bunch of conditional logic inside it for each of the special cases you want to handle. An APLer would look at that code and just delete it, because they don't write code like that. Instead, they write small functions, each maybe five characters long, that say: if this is this, this is what should happen. Essentially you get the data from the database, convert it into a two-dimensional matrix, you have a list of functions which need to be applied, you do a matrix multiplication on that, and it spits out the two-dimensional schedule. It's fascinating to see: no if conditions, none of that, just a matrix multiplication. So we started applying some of these techniques in the way we were writing code, as a way to help people improve and simplify it. What I'm trying to say is that we get carried away with design patterns and all the fancy stuff, but sometimes when you look at these old languages and draw inspiration from them, they have a very different and very simple way of writing code, which I think can really help improve the quality of the code.
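To make the "conditionals as data" idea a bit more concrete, here is a rough Python sketch (not APL) of the same style: the special cases live in a data structure mapping session type to a layout rule, and building the grid is just applying those rules over the data, instead of a big loop full of if/else. The session types and rules are invented for illustration, not ConfEngine's actual model.

```python
# Sketch only: conditionals turned into data. Each session type maps to a small
# rule saying how many tracks and how many time slots the session occupies.
TRACKS = ["Track 1", "Track 2", "Track 3", "Track 4"]

LAYOUT_RULES = {
    "keynote": {"span_tracks": 4, "span_slots": 1},       # shown across all tracks
    "talk": {"span_tracks": 1, "span_slots": 1},
    "workshop_90": {"span_tracks": 1, "span_slots": 2},   # spans two 45-minute slots
}


def build_grid(sessions):
    """sessions: list of dicts with 'title', 'type', 'slot' (row), 'track' (column)."""
    rows = max(s["slot"] for s in sessions) + 2            # leave room for spans
    grid = [[None] * len(TRACKS) for _ in range(rows)]
    for s in sessions:
        rule = LAYOUT_RULES[s["type"]]                      # data lookup, not if/else
        for dt in range(rule["span_tracks"]):
            for ds in range(rule["span_slots"]):
                grid[s["slot"] + ds][s["track"] + dt] = s["title"]
    return grid


schedule = build_grid([
    {"title": "Opening Keynote", "type": "keynote", "slot": 0, "track": 0},
    {"title": "TDD Revisited", "type": "talk", "slot": 1, "track": 0},
    {"title": "APL Workshop", "type": "workshop_90", "slot": 1, "track": 2},
])
for row in schedule:
    print(row)
```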
All right, people say, all this is fine. If I'm writing new code, I can put in a lot of these checks and so on, but the real problem is that we have to deal with legacy code. There's a lot of stuff that was written, you know, yesterday, and it's already legacy. Nobody wants to touch it and people fear touching it. Why? Because we don't understand enough about it, there are no tests, we don't know what will break if we change something, all the typical challenges. So what do you do when you have legacy code? Where do you start? We've been trying to solve these problems for 20 years, but in every company you go into, the question is still: where would you start? Sometimes consultants go in and say, let's just take six months out and refactor all your code, and then we'll be good. We'll have a team of people writing tests, a team of people refactoring your code, and at the end of it everything will magically come together and life will be great. I've never seen that work, unfortunately.

So what we do is use an open source tool that I built quite a few years ago called C3. It helps you visualize the quality of your code. Let me take a minute and explain what this complicated graph is. This is a tree map. Each of the big boxes you see is a package or folder inside your code base, and this is a fairly large code base. Each of the black-outlined rectangles represents a file. So if you see a big box, that whole folder is fairly big, and inside it, this particular file is much larger than the smaller files. It gives you a hierarchical view of your code base and then uses colour to signify the C3 score. The way we calculate the C3 score is: we look at the complexity of the code, which is cyclomatic complexity, the number of linear paths through your code. If you have an if-else in your code, your cyclomatic complexity is two; there are two linear paths through it. If you have more conditions, the cyclomatic complexity is higher. There are a lot of studies showing that the higher your cyclomatic complexity, the more likely people are to make mistakes in that code, because it's harder to understand. The next parameter it looks at is code coverage, which with legacy code is most often zero or minimal. The third parameter is churn: how many times this particular file has changed in the last 30 days. It takes those three parameters and does a weighted average of them to give you the visualization. So the big red areas are problem areas, and the biggest red spot is probably the hot spot in your application. There may be two or three files that everybody touches on a daily basis, that have very high complexity and very low coverage. That is where I would want to start investing time and addressing issues, rather than saying, oh, this file doesn't have code coverage, so I'm going to go and write some tests against it. It's trying to make this a bit more pragmatic: if you want to figure out the hot spots in your application, you run C3 over it and it gives you a visualization, and as you make improvements in your code you can see whether you're actually reducing some of those hot spots or not. As new code comes in, you want to see how this graph evolves over time. It seemed, to me at least, to be a very handy tool to visualize code quality and start having discussions with people about how we can improve it.
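As a rough illustration of the scoring idea, a minimal sketch. The normalisation and weights are invented here; the real C3 tool computes its score from Sonar, coverage, and version-control data and renders it as the tree map described above.

```python
# Sketch only: a C3-style score per file from complexity, coverage, and churn.
# Weights and normalisation constants are illustrative, not the tool's actual values.
def c3_score(cyclomatic_complexity: int, coverage_pct: float, churn_30d: int) -> float:
    """0.0 = healthy, 1.0 = hot spot."""
    complexity = min(cyclomatic_complexity / 50, 1.0)   # high complexity is risky
    uncovered = 1.0 - coverage_pct / 100                # low coverage is risky
    churn = min(churn_30d / 20, 1.0)                    # frequently-touched files matter more
    return 0.4 * complexity + 0.3 * uncovered + 0.3 * churn


# A file everyone touches daily, with high complexity and no tests, shows up dark red.
print(c3_score(cyclomatic_complexity=42, coverage_pct=0, churn_30d=18))   # ~0.91
print(c3_score(cyclomatic_complexity=4, coverage_pct=85, churn_30d=1))    # ~0.09
```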
The other tool that we used, which was very helpful, I don't know if people attended Puneet's talk, Puneet was here a couple of years ago, he's the author of a tool called Diffy. It was built at Twitter when they were trying to move from their monolithic Rails back end to a microservices kind of architecture. They wanted some way to know that, as they broke the system into microservices, they weren't breaking the contract and the results weren't coming out differently. So the interesting thing it does is compare what the old API gives you versus what the new API gives you. They take the live traffic that's coming in, fork it, send it through both the new instance and the old instance, get the responses back, and then compare them to see what the difference is. So as you build your microservices, without actually writing any tests, you now have a way to validate the difference between the two. But when they started doing that, one of the problems they found was that IDs or timestamps could be slightly different, and that would start failing the comparison, saying, hey, something is different. How do you deal with that? That's where Diffy comes in: they set up three instances. The candidate instance is the new microservice or new API instance. Primary and secondary are both running the existing old code. The difference between primary and secondary gives you the non-deterministic differences, and the difference between the candidate and the primary gives you the raw differences. Then you take the difference between those two sets to eliminate the non-deterministic noise. They've also done some very interesting stuff using a bit of machine learning to figure out what is deterministic versus non-deterministic, and what is important versus not. This is something we also used heavily, especially with legacy code: when people start making changes, without them having to write tests, you run the traffic through Diffy, you get the differences, and you can show them, visualize the difference between the two instances. Eventually we hooked it up with CI, so that every time someone checked in, they got that kind of feedback. But originally it was just a pilot; we would run it on the side and see whether, as you make changes, you're breaking backward compatibility or not. Now, the limitation is that this works well for services and back-end stuff, not necessarily for UI. I know Puneet is working on some ideas to make it extensible to UI as well. But it's a pretty interesting tool.
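Here is a small sketch of the three-instance idea, just to show the noise-cancelling logic. It assumes plain dict responses and key-level comparison, which is far cruder than what Diffy actually does, but the shape of the reasoning is the same.

```python
# Sketch only: Diffy-style noise cancellation with three instances.
# primary and secondary run the old code; candidate runs the new code.
def diff_keys(a: dict, b: dict) -> set:
    """Keys whose values differ between two responses."""
    return {k for k in a.keys() | b.keys() if a.get(k) != b.get(k)}


def regressions(primary: dict, secondary: dict, candidate: dict) -> set:
    noise = diff_keys(primary, secondary)   # non-deterministic differences (IDs, timestamps...)
    raw = diff_keys(primary, candidate)     # raw differences, old vs new
    return raw - noise                      # what remains is a likely real behaviour change


# The same request replayed against all three instances:
primary   = {"user": "asha", "balance": 120, "request_id": "a1", "ts": 1001}
secondary = {"user": "asha", "balance": 120, "request_id": "b7", "ts": 1002}
candidate = {"user": "asha", "balance": 125, "request_id": "c3", "ts": 1003}

print(regressions(primary, secondary, candidate))   # {'balance'} -> a real difference
```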
So let me quickly pause; I've spoken at length about a bunch of different ideas that we tried. The first idea, just to recap, was to take whatever feedback was already there and pull it back into the PR itself, so people can see it and at least start getting curious about it. Then, give information to the reviewers via the PR risk advisor, to help them understand what they're looking at when reviewing code; if it's a high-risk PR, they'll at least check out the code locally, run it, and see what's going on. That helped improve the quality of the incoming code. After that I talked about the influence from APL and array-oriented programming, the focus on simplicity, and trying to help people simplify things: not focusing so much on TDD or classroom trainings, but on practical things people can appreciate. Then I said, all that is great, but if you have legacy code, what do you do? So I talked about C3 as a way to visualize your hot spots and use that to drive some of these changes. And I talked about Diffy, which we used in the back end to see, as people make changes, whether they're breaking things. Again, all of this is without investing a lot of effort in writing test suites; at least for APIs, it's almost getting testing for free. So far with me? Yep.

Then, coming back to the testing pyramid: this is something I used to show quite often, the different types of tests we would put into a system. And what I realized is, while this is great, how do you start with it? You want this kind of test pyramid in your organization, but where do you start, given that developers are busy and not interested? One thing that actually worked very well for us: we had a team of automation engineers who were very enthusiastic but felt like second-class citizens in the organization. Do other people have the same experience? Yeah. So we said, okay, these folks are good with automation, they care about quality, they really like some of this stuff. How about we work with them, help them learn some of these techniques, and have them actually go into the code and start putting some of these tests in themselves, without needing the developers to come in? Because again, unless you see the value in your own context, you won't be convinced, and trying to convince developers to invest in this is much harder than working with automation engineers who are already invested in it. It's easier to help them, and they also feel like they're learning new skills and growing in their careers. So that became a nice hook for us. Me, Arun, and a bunch of other people started working with the automation engineers, helping them understand how they could write component tests, integration tests, and other kinds of tests, and then check that code back into the same repo, not into a different place, but into the same repo. This is one of the examples, I'm running short of time, but this is one of the examples where I explain the different levels and how we went about doing that. This is something, again, that has existed for a long time.
In fact, I remember this slide is from 2003, one of my slides from 2003. It used to work really well back in the day, but now trying to get this kind of commitment from people is much, much harder. So we kind of hijacked it, and we just started getting the automation engineers to help us put some of these things in place in retrospect, rather than up front. Once we had been doing this for about three months, developers started taking an interest. They started seeing how it actually helps, and then there was some interest in the organization. I wouldn't stand here and lie and say everybody in the organization was doing it; they were not. Only a few people took an interest, and then it started spreading. But I was happy, because at least you get some momentum going. You can't get everybody on board, but you can get some people to start doing some of these things.

One thing that worked really well was the whole discussion around acceptance criteria. That was something we saw growing a lot more, and I would encourage all of you to push that back into your organizations and build on it: focus on acceptance criteria. That will, again, help improve the quality of the incoming code, because some of the assumptions get caught at that stage. This is my standard slide on how we do continuous integration; I'm going to skip some of it because I want to jump to questions. I know it's a little rushed, but I want to take some time for questions. I've talked about a few ideas that might be useful to consider as you try to influence things. The point is: don't go big bang, getting everyone to do trainings and then expecting magic to happen, because that never works. Instead, look at some of these simpler, smaller starting points to create a culture where people start caring about quality and simplicity in the code. That can be a much better starting point for improving the quality of the incoming code. All right, with that, we have five minutes for questions.

Yeah. So your last couple of slides kind of touched my topic, but I still want to ask: how do you incentivize a developer to write unit tests? How to incentivize developers to write unit tests, and what is the real value in unit tests? You can check every path at a function level, so every line of code is getting exercised, and you can handle all the error scenarios at the unit level rather than expecting the system level to handle all the error paths. Okay, so it's getting much more pinpointed feedback at the source, rather than permutations of different combinations coming at the top level. So you think that's the real value of unit tests. And do developers really care about that? From the pyramid, it's probably the biggest return you could get for the time you've invested, because, as you said, the feedback is so early; if developers could see it that way, maybe that's one way of helping them do it. So if developers saw that it would help them with feedback and improve things, then they would be interested. But the catch there...
Compared to the entire outcome, not just their own program, yes. But the catch there is developers writing good unit tests, right? Because you could write unit tests and still not get that kind of feedback. So how do you incentivize developers to write good unit tests? That's a hard problem, in my opinion. I spent 10, 15 years trying to do that, and I gave up. I think it's hard to incentivize developers that way. I'm not saying it's wrong, or that we should not be doing it; I'm just saying it's a much harder problem. So instead, what I'm suggesting is that you don't start with unit tests. You start with some of these other peripheral things that help people get a feel for what it means to get quicker feedback. It may not be through unit tests; it may be through other mechanisms: static code analysis, the PR risk advisor, or other kinds of things. Then you get them addicted to this habit of getting some kind of feedback every time they check something in, and then you can say, hey, you can do something better in the code. But there's also a whole contrasting thought process in the industry that we should be aware of. Fred George talks about this: when they started doing microservices at Forward, the Internet company, their microservices were not more than 100 lines of code, and if you can't write 100 lines of code without making a mistake, then there's a problem. But more importantly, the way you build the system is resilient: even if you make mistakes, it's OK, you can deal with it. So sometimes, depending on how you architect things, you may say, well, you don't really need that much pinpointed feedback, and you can deal without it, at least to get this whole thing started. So yes, there is a different kind of thought process out there as well.

Yeah, I have a specific question on Diffy. Once the server goes down, whatever differences we are seeing, we're not able to see them anywhere. Do we have a log for that? When a server goes down, like one of the Diffy... No, where the candidate is running, basically. OK, so you have the candidate and the two others, the primary and secondary, and one of those goes down. Yeah, where Diffy itself is running, where we deployed the Diffy jar. So if it goes down, it shuts down, and we're not able to see the differences in the UI that it normally shows, right? Correct, if it goes down, you're not going to see them. Is it possible to save that to a log somewhere? Oh, so it does output whatever is going on as a log, and you can pull that log out on a live basis. One of the things we did in our CI is we would keep watching that, and if Diffy went down, we would pause the tests, bring Diffy up again, because sometimes it does go down, and then run it again. So yes, you do have some of those problems. They have a hosted solution now where they put all of these things in place for you. All right, I'm out of time. I will be around, so I'm happy to take more questions outside, and happy to talk about why I don't write tests anymore; we didn't get into that topic, but I'm happy to take some of those questions. Thank you.