Welcome everyone to the session, Can a Dictator Enhance Agility, by Radesh Radhakrishnan. A short introduction about Radesh: he works as the head of engineering for hospitality at IBS Software Private Limited. We thank Radesh for his availability today, and without any further delay, I'll pass it over to you, Radesh. Thank you, Karthik. Hello, dear agile enthusiasts. As Karthik said, to quickly introduce myself: I started as a software engineer around two decades ago, and I have worked in different roles, mostly in the software product engineering space. Right now I'm heading engineering for the hospitality line of business at IBS. IBS is celebrating 25 years since its inception, and it's really a proud moment for me to talk to you this evening. For all the leaders and agile enthusiasts listening in today, we all know that while agile makes sense for most of us, if you don't pay really close attention to continuously enhancing agility, it can sometimes have side effects and even backfire on us. And specifically with agile, we cannot expect that everything will go smoothly; we can also get into times of crisis. In those kinds of situations, what kind of leadership style will you use? I'm going to focus on some of those areas today. I'm sure you have also heard of things like peacetime CEOs, wartime CEOs, and so on. So let's go and find out. Okay. So, the question: can a dictator really enhance agility? If you're looking for a quick yes-or-no answer, I'm sorry, I have bad news for you: we will not have that answer right away. But no problem, we will come back to this question and settle it together later. For now, let me walk you through a real story that actually happened, one that I'm sure many of you, as leaders of software engineering, will find very easy to relate to. Ready to listen to a story? Okay.
Once upon a time, not too long ago, there was an engineering team tasked with a mission to build a top-class software product. The team was extremely talented, with the very high level of bonding that we look for in all our agile software engineering teams; they would go to any length to support each other. The captain of the team was definitely one of the best servant leaders you could ever find, truly a blend of product expertise and technical expertise. The team identified the best tech stack to build their software product on, and of course they followed agile methodology, with the very popular two-week sprint model. Everything was just perfect; the team was progressing nicely towards its end goals. Now, as in a typical movie, the story takes a turn. In one of the sprint retros, the product owner says, "Hey, look, we are slipping on stories; our goals are shifting." The QA engineer sitting next to him says, "Hey, we are seeing a lot more bugs now, and we are really getting blocked as well." "It's also so frustrating with all this context switching," is what the developers are saying. Everyone looks hopefully at the scrum master, but she says, "Gosh, look, too many blockers. How many blockers do we even have? I don't even know what to do with most of them. Long cycle times, merge hell, releasing is really, really getting harder." The team leader was listening very patiently, and he said, "I get it. Let me talk to the management and see if I can get some help." So he goes to the management. Management looks at all the data, and of course they're not convinced. The question is, "Why are you slipping?" That's a normal response from us as leaders, especially when our team comes and tells us this same story.
And we tell them, of course, "Come back and share a fixed plan to bring the project back on track. We cannot miss the sales timelines; we cannot miss the revenue numbers." So our team leader goes back to the team and asks for help. It's a very closely knit team; they really care for each other. After a lot of thinking, they decide they need to figure out a solution, and they say, "Okay, let's work harder. Extra hours, long weekends, it doesn't matter. We will fix it." The leader goes back to the management and delivers the promise that they will work really hard to bring things back on track. Now, you know what will happen, right? If you have worked in the software industry long enough, you know how rarely we really see things coming back on track this way. It's really, really hard. Many of you might have also read books like The Phoenix Project, about how things can go from bad to worse, deep into the red, and so on. The same happens here. Long hours and sleepless nights become the norm. Some people get sick, work-life balance is impacted, quality gets worse day by day, release dates keep slipping, the software is not scaling, and management is losing patience. It's like a big pressure cooker building up. And of course, one fine day, the best engineer decides that's enough and quits. The load falls on the next set of people, who are stretched even further. Finally, people are leaving one by one. The leader is feeling a lot of pressure from both sides, and he also decides to quit. The product is just halfway through, operational costs are very high, and all the dates and forecasts have gone for a toss. Now the auditors, the board, the investors, everybody is questioning: do we really continue with this project or not? Does this sound familiar to many of you?
How many times have you seen this as software leaders? And I can tell you, in such scenarios you are really not alone. I have had several first-hand experiences with similar stories, and I'm sure most of you have too: the same plot; maybe the cast changes, maybe the products or projects change, companies or engineers change, but really the same thing. So, as true agile practitioners, let's see what really happened here. Let's do the typical introspection. First, let's look at the positives, what went well. With the retro sessions, one thing was that the team really knew what was happening. They saw the warning signs, they saw the problems, and they were surfacing them. They were telling their leader, and the leader was going back to the management and telling the story. All of that is good. Now, what was wrong here? The only thing was that they just decided to work harder. Is working harder really the solution to all these problems? That's the biggest question we need to answer. Okay. So let's see what was happening in this case. If you do a deep dive as a leader, you'll see there really were warning signs: too many bugs, quality issues, and so on. Your developers were under tremendous pressure; they wanted to finish their work faster, but they were merging things and changing things too fast. And what about our QA folks? Too many changes coming their way; they are really overwhelmed. Can they test that fast? Our product owner is losing patience: the stories are not ready, he cannot accept things, the releases are halted, sales timelines are slipping, all those issues. And we leaders all know that if we continue to work the way we do and expect different results, we know what that syndrome is called.
So in the rest of the session, we will look at some of the fixes: very simple, common fixes, but very effective leadership interventions that you can apply when you get into such situations. The first problem is the quality issue we just saw. How do we as leaders fix this problem? If you ask any leader how to solve it, you'll get an immediate answer: let's prevent the bugs. But how? Preventing bugs is just a philosophy; how do we put that philosophy into action? Let me make a statement here: bugs are created because developers did not know their code had quality issues when it was merged. It's not intentional; let's trust the developers, please. Okay. So can we help developers to know about the quality issues before they merge, and fix them before they become bugs? We have all known about automated unit tests since the XP days, and for a long time now we have had build tools with quality gates, static code analyzers, static vulnerability analyzers for images, and so on. And we don't allow merges when these checks fail, do we? However, when chaos sets in, when we have timeline pressures and cost pressures, that's when teams fall back into bad practices. And what do they do? It's not intentional, but they try to find ways to fast-track things, and the first thing they do is bypass the quality checks. That's where things start to go really wrong: quality drops, and we get into that downward spiral of never-ending problems. So what should we do as leaders? What can be an effective leadership intervention here? What I'm going to tell you is very simple, you can almost always apply it, and it is based on years of practice: don't just tell the team, go and see it yourself. We can actually look at the quality bars.
How do we raise the quality bar? Instead of just telling the team, "Hey, go raise the quality bar," why don't you take a closer look yourself? Look at your build system. How is it configured? When was the last time you checked the quality gates? Are they really operational? Are they really working? If the quality bar is not met, are the gates blocking the changes from being merged? Good. Now ask one thing: is that quality bar enough? Do we need to raise it? Are we missing some checks? Should we add more? Maybe the last merge had, say, 90-plus percent test coverage for new lines of code. Great. But are those tests effective in catching quality issues? When was the last time you reviewed the unit tests? Developers may have created them with the right intentions, but have you checked the effectiveness of the unit tests? How do we check that? Are there techniques you have applied so far? There are good options here among the latest tools that are emerging, things like mutation testing, for example. It's a very effective mechanism you can add to improve the effectiveness of your unit testing. Or you can add multiple levels of code reviews. There are tools available for all of this, and we should use them. So that's probably the first intervention you can make as a leader: improve your quality gates and reinforce them. Okay. Let's look at the next item. I'm just giving you a couple of examples of good leadership interventions, and we'll look at some of the others later. As software people, of course, we don't love bugs, because the moment a bug enters the system, we know it takes away our precious time: triaging, prioritizing, grooming, planning for the next iteration, setting up the environment, reproducing the problem, finding the root cause, finding the fix. And then, does it impact the existing design?
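To make the mutation-testing idea concrete, here is a toy sketch of what tools like mutmut or PIT automate at scale: take a function, flip one operator in its source, and check whether the existing test still passes. If the test passes against the mutant, the mutant "survived", which means the test suite is too weak to notice that change. The function, test, and mutation here are all invented for illustration.

```python
# Toy mutation-testing sketch: mutate one operator and see if the test
# "kills" the mutant. Real tools generate many mutants automatically.
import ast

SOURCE = """
def apply_discount(price, pct):
    return price - price * pct / 100
"""

def run_test(namespace):
    """The existing 'unit test': returns True if it passes."""
    return namespace["apply_discount"](200, 10) == 180

def compile_and_test(source):
    ns = {}
    exec(compile(source, "<src>", "exec"), ns)
    return run_test(ns)

class FlipSubToAdd(ast.NodeTransformer):
    """Mutation operator: replace the first '-' with '+'."""
    def __init__(self):
        self.done = False
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if not self.done and isinstance(node.op, ast.Sub):
            node.op = ast.Add()
            self.done = True
        return node

tree = ast.fix_missing_locations(FlipSubToAdd().visit(ast.parse(SOURCE)))

original_passes = compile_and_test(SOURCE)          # True: test passes
mutant_passes = compile_and_test(ast.unparse(tree))  # False: mutant killed

print("original test passes:", original_passes)
print("mutant killed:", not mutant_passes)
```

If the mutant had survived (the test still passing with `-` flipped to `+`), that would be the signal the talk describes: coverage looks fine, but the tests are not actually effective.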
If not, good; then implement the fix, test it, run it through the test cycle. It goes to the QA backlog, then into their work queue: they need to understand the bug, validate the test cases, re-adjust, re-test, and if nothing is missed, then merge it. The fix is demoed, and if everything aligns well and all the stars line up, then we can go through the release procedure. If something fails, we start all over again. If you add up all of this time, it's really consuming a lot of precious engineering time and effort; so much of our software engineering lives is spent messing around with bugs. As engineers, the best wish we could ask of our God is, "Oh God, can I live my software engineering life in a world where bugs do not exist?" Can we create a bug-free world? Interestingly, it was probably not possible before, but today it is. The trick is basically to get it right the first time. Let's see how. (Radhesh, time check for you. Okay, thank you.) So let's first thank the advances in software engineering: for the first time in its history, it is possible for us to imagine a bug-free world. How do we do that? We have concepts that you have probably heard of, like shift-left testing. Maybe it's new to you; maybe some of you are already doing it. But let's see what it is. It's a very simple technique: we are just moving the cream of our automated tests from executing post-merge to executing pre-merge. Before we do that, we need a good test strategy, a strategy in which your automated tests are catching most of the problems pre-merge. We already know and talked about static code analysis and code review quality gates; they are catching the quality issues automatically pre-merge.
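The shift-left idea above can be sketched as a simple pre-merge gate: a series of checks, all of which must pass before a merge is allowed. The individual checks here are hypothetical stand-ins; in a real pipeline each would shell out to your static analyzer, unit-test runner, and API test suite, and the CI job would block the merge on a nonzero exit.

```python
# Minimal pre-merge gate sketch: every check must pass or the merge is
# blocked. The check bodies are placeholders for real tool invocations.
def run_static_analysis():
    return True   # e.g. run your static analyzer and inspect its exit code

def run_unit_tests():
    return True   # e.g. invoke your unit-test runner

def run_api_smoke_tests():
    return True   # shifted left: these used to run only post-merge

PRE_MERGE_CHECKS = [
    ("static analysis", run_static_analysis),
    ("unit tests", run_unit_tests),
    ("API smoke tests", run_api_smoke_tests),
]

def pre_merge_gate():
    """Return True only if every check passes; a CI job would fail the
    build (and so block the merge) on a False result."""
    ok = True
    for name, check in PRE_MERGE_CHECKS:
        passed = check()
        print(f"{name}: {'PASS' if passed else 'FAIL'}")
        ok = ok and passed
    return ok

print("merge allowed:", pre_merge_gate())
```

The key design point, as the talk emphasizes, is that the gate runs before the merge: problematic code never lands, so the defects never enter the system in the first place.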
We also talked about unit tests and their effectiveness at catching issues at the functional level, and about mutation testing frameworks. Now, do you have APIs in your software? Many of these new-age products do, and most likely you have also automated tests for them. But where are they running? Is your API automation suite running pre-merge or post-merge? A very simple technique is to bring those tests pre-merge and execute them there. If you do that, we can actually prevent the problematic code from being merged, and so prevent the defects from entering the system. This is a very effective, simple technique. The next technique: all of us have identified our critical user journeys (CUJs). Are they automated? And if they are automated, when are they executed? If they are executed post-merge, why don't we just shift them and execute them pre-merge, and only if they pass allow the code to be merged? Again, by doing that, we can prevent a whole set of bugs from entering the system. There are a couple of things to watch out for. We should ensure that the pre-merge execution time is not too high; if it is, your developers might not really like it. There are also a few anti-patterns to watch out for, such as automating everything, which is a typical anti-pattern that can get you into trouble. Another typical anti-pattern is executing all automated tests pre-merge, which can also be a problem because the execution time becomes very high. There are a couple of techniques you can apply here. For example, if you know that only one class has changed, you could run the unit tests for only that class; you don't need to run all the unit tests.
Or if only one API changed, or there is a change only in one functional area, you don't need to execute all the CUJs; you can execute the CUJ automated tests only for that functional area. This way you can speed up your execution time by computing the blast radius, or impact radius, or whatever you want to call it. Now, there is one more thing. When you do all these things, we also want to change the way we reward our people. For example, if you're rewarding your engineers for finding more bugs, you may want to rethink that strategy. Instead, change it so that you ask them to prevent bugs rather than generate more of them. And how do we do that? You can change the OKRs to something like, say, zero critical bugs per program, or the flakiness percentage of the automated tests should be zero, or all CUJs should be automated, or the elapsed time to find the offending code for a build failure should be near zero. Things like that. There are a few more examples, but I might skip them in the interest of time. Let's now quickly summarize what we learned and see how we can put it into practice. We talked about the typical solutions followed by successful teams, but based on the stage of your agile maturity, or where you stand on the adoption curve of these latest software engineering practices, your interventions might vary. If you're still catching up with the latest trends, like DevSecOps, ephemeral environments, shift-left testing, and so on, your interventions might be a lot simpler or more basic. If you're already doing many of these things, your interventions can be more advanced. The examples I was sharing were very common, general things, and we saw them in bits and pieces.
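The blast-radius test selection described above can be sketched as a simple lookup: given a map from source modules to the tests that cover them, run only the tests impacted by the files changed in a merge request. The module and test names here are invented; real setups derive the mapping from coverage data or the build graph rather than writing it by hand.

```python
# Toy blast-radius test selection: run only tests covering changed files.
# The coverage map is hand-written here; real tools derive it from
# per-test coverage data or the build dependency graph.
COVERAGE_MAP = {
    "billing.py":  ["test_invoice_totals", "test_discounts"],
    "search.py":   ["test_search_ranking"],
    "checkout.py": ["test_checkout_flow", "test_discounts"],
}

def select_tests(changed_files):
    """Union of the tests covering any changed file, de-duplicated,
    in a stable order."""
    selected = []
    for changed in changed_files:
        for test in COVERAGE_MAP.get(changed, []):
            if test not in selected:
                selected.append(test)
    return selected

print(select_tests(["billing.py"]))
# -> ['test_invoice_totals', 'test_discounts']
print(select_tests(["billing.py", "checkout.py"]))
# -> ['test_invoice_totals', 'test_discounts', 'test_checkout_flow']
```

With three changed-file inputs the full suite never runs; only the impacted slice does, which is exactly how pre-merge execution time is kept low enough that developers tolerate the gate.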
So, quickly, in 30 seconds, here is how to put this into practice. It's a very simple approach; you can use the DPSR technique. The most important thing is to ensure that you document everything: all the decisions you made should be written down, because especially in these times, most of us are working remotely, so documentation is very important. Once you have that, you can pilot it with maybe one or two squads. If you see they are having issues, you can revise and refine. Then, finally, you roll it out to everybody else. And once you do that, you need to figure out how to sustain it. The nice thing about all these latest tools and technologies is that we can automate a lot of this: we can automate the quality gates, we can automate most of what we just talked about. Finally, we talked about reward schemes: you may want to change your OKRs, you may want to change your reward scheme, such that you are rewarding the right behavior. With that, I think we are almost at time, so just a quick summary of the takeaways. As leaders, make sure that you intervene at the right time and do the right things for your team. Show them the way; they are really looking to you. You have to deep dive, understand the problem, find the technical fixes, and use techniques like the simple DPSR technique to roll out the right interventions. And if you are a team member, you can also help by being a change agent and supporting the implementation. So that's it. We'll stop now. Any questions? Thank you so much, everyone, for attending this session, and we thank Radesh for sharing his experience with us today.