Today, we have with us Adam Tornhill with his talk, A Crystal Ball to Prioritize Technical Debt. This session is sponsored by Agile Alliance, and we would like to thank Agile Alliance for sponsoring it. Without much delay, Adam, if you can quickly share your screen, I will leave the stage to you.

Hello everyone, and welcome to this session, where we will go on a quest to find a crystal ball that helps us not only identify but also prioritize technical debt. This is a broad topic, so let me jump right in and start by defining what technical debt actually is. The definition I use for technical debt is Martin Fowler's well-known one, and the idea is that, just like we can take on financial debt to buy something we want right now, we can do the same thing with technical debt. Right? We can take shortcuts in our solution, and maybe we will be able to deliver quickly in the short term, but, and here's the key, there's a price to pay. And to me, this is the most interesting aspect of technical debt: technical debt incurs interest payments. And this is something I've found that we in the software industry tend to oversimplify, and occasionally we even misuse the concept of technical debt, meaning that the concept might not be as helpful as it could be.

So let me explain why by having you do a very small experiment. What I'm going to do now is put up a small code snippet on the screen, and your task is to quickly judge the quality of that code. So have a look at this beauty. What do you think about this code? Yeah, I think you agree with me that this is not good code. In fact, this is code that's going to be very, very problematic in case it grows; that programming style won't scale at all, it will be a mess. And of course, we would never ever write something like this ourselves, right? But is it a problem? More importantly, is it technical debt? The thing is, from code alone, we just cannot tell. And the reason we cannot tell is because it's not technical debt unless we have to pay interest on it. And interest, interestingly enough, is a function of time. Meaning that if we want to decide whether this code is technical debt or not, we need a time dimension in our code. How can we get such a thing? Well, in a few minutes, I'm going to show you one possible approach.

But before we go there, I would like to share a story with you that relates to the code snippet I just showed you. I analyze code for a living. I develop tools for software analysis. And occasionally, I go to different organizations and I analyze their code and try to prioritize their technical debt. And this is something that happened to me four or five years ago. At the time, I was visiting a large client. And prior to my arrival, they had done something very interesting: they had used a static analysis tool capable of quantifying technical debt. The way those tools work is simply that they scan the source code, and each time they find a violation of a rule, there's a cost assigned to it. For example, you have overly complicated logic in a function, and that takes you two hours to refactor. So now we have two hours of technical debt. Or maybe your public function lacks documentation, and that takes five minutes to fix. So now we have two hours and five minutes of technical debt, and so on. And they come up with a number.
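To make the mechanics concrete, here is a minimal sketch in Python of how such a tool arrives at its number. The rule names and remediation costs are invented for illustration; they are not taken from any real product.

```python
# Illustrative sketch of how debt-quantifying tools work: each rule
# violation carries a fixed remediation cost, and the reported "debt"
# is simply the sum. Rules and costs here are made up.
REMEDIATION_COST_MINUTES = {
    "complex-function": 120,  # overly complicated logic: two hours
    "missing-docs": 5,        # undocumented public function: five minutes
}

def total_debt_minutes(violations):
    """violations: list of rule names found while scanning the code."""
    return sum(REMEDIATION_COST_MINUTES[rule] for rule in violations)

print(total_debt_minutes(["complex-function", "missing-docs"]))  # 125
```

Scale that summation up to a large legacy codebase, and you get the kind of headline figure described next.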
And in this organization, they had taken one such tool and thrown it at their 15-year-old code base. And the tool reported that on this 15-year-old code base, they had accumulated 4,000 years of technical debt. 4,000 years of technical debt. Just to put it into perspective for you: 4,000 years ago, that's here, that's the start of recorded history via the invention of writing. So, you know, almost as a side note, I'm curious what kind of programming language they used. Most likely Fortran, right?

Now, jokes aside, let's actually think about 4,000 years of technical debt in a 15-year-old code base. I understand that a lot of that potential debt grew in parallel, by having hundreds of developers work on the code. But in the end, how useful is it to know? It's depressing, for sure. But how do you act upon it? Is all debt equally important? Besides that, I think it's a mistake to try to quantify technical debt from code alone. And the reason you cannot do that is because most of the things we call technical debt just aren't technical at all. More specifically, what I tend to find when I work with different organizations is that we, as a development organization, tend to mistake organizational problems for technical issues. And the consequence is that we start to address symptoms instead of the real root causes. I'm going to give you some examples later in this session, but for now, I just want to state that I think the main reason we keep making misattributions like this is because the organization is invisible in the code itself. And that's unfortunate, because there is a strong link between technical debt and the organization. We just cannot address technical debt unless we take a holistic view of it. And that holistic view has to include an organizational and social view of the code base.

So what I wanted to do now is raise awareness of the challenges in trying to prioritize technical debt. Based on what I've covered so far, I put together a wish list with the ideal information that I think we need to prioritize technical debt. So let me share that list with you, and let's see if you agree with it. I really hope you do, otherwise it becomes a very short session. First of all, when prioritizing technical debt, if we're going to address any debt, we should focus on the debt with the highest interest, right? The most expensive debt. So where is it? The second thing I would like to know is: what about the software architecture? Does the software architecture support the way our system evolves? Are we working with or against our architecture? Working against it is actually quite common, and you will see examples of that as well. Finally, I talked about organizations. What does technical debt look like from the organizational perspective? Are there any productivity bottlenecks? You know, those parts of the code where five different feature teams constantly have to coordinate their efforts.

Now, looking at this list, I hope you agree with me that this is information that's going to be potentially useful for us to prioritize and address technical debt. But what I want to point out is that none, really none, of this information is available in the code itself. From code alone, we just cannot answer any of these questions. So how and where can we get the data we need to answer them? Well, those of you who might have read one of my books or attended any of my previous sessions know that I'm a big, big, big fan of version control data.
Yeah, I know. We all have our odd hobbies. This one happens to be mine. But the reason I'm so fascinated by version control data is because it's something we have used as a backup system, an overly complicated backup system, and then occasionally, maybe, as a collaboration tool. But in doing so, we have built up this absolutely wonderful behavioral data source of how we, as an engineering organization, have interacted with our code. And we can tap into version control data to calculate a lot of interesting metrics and statistics that can help us answer those questions I raised, and that lets us prioritize technical debt. So let me jump right in and show you a concept called hotspots, based on version control data, and I'm going to walk you through it.

Let's start with an evolutionary view of our code base. What you see here is something I call a hotspot analysis, and I'm going to walk you through the visualization. It shows a well-known Java code base, Tomcat. And there's nothing special about Tomcat; I just wanted to pick something that some of you are likely to be familiar with. The way you read this visualization is that each of those large blue circles, the ones you see on screen right now, represents a top-level folder in the code base. And inside each folder, you have other folders that are also visualized as large circles. So this is a completely hierarchical visualization that follows the structure of your code. It's also interactive, so I can zoom in on the area I'm interested in. And once I get to the lowest level of detail, I see each file with source code visualized as a circle. You might see the different red circles here; they have different sizes and different colors. So let me explain how that works.

The size of the circle that represents a file is used to indicate code complexity. Now, what is code complexity? Well, fundamentally, it's about how hard this code is for a human to understand. And if you've worked in the software industry for a while, you know that there are multiple ways of measuring code complexity. We have things like cyclomatic complexity, cognitive complexity, Halstead's volume metric. And the truth is that you can use any metric you have easy access to, because what they all have in common is that they're equally bad. Code complexity is simply too complicated to measure with a single metric. And when you look at the research, you will see that the moment you start to control for the number of lines of code in a file, more elaborate metrics rarely add any further predictive value. So unless you already have some of those more advanced metrics in place, I recommend that you just count the number of lines of code. And the reason I say this is because code complexity is the least, I repeat, the least important dimension. Because complexity has this property that it's only interesting when we need to deal with it. So we need to figure out: does this potentially excess complexity have an impact on us? And this is data that we can pull out of a version control system. We can look at each module and see how many changes, how much development activity, we have in each part of the code. We can simply calculate the number of commits for each file and use color to represent that. And by combining these two perspectives, we can identify complicated code that we have to work with often. And those are our hotspots.
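For those who want to try this on their own code, here is a minimal sketch of a file-level hotspot analysis in Python. It shells out to git, so it assumes it runs against a local git repository, and it uses lines of code as the complexity proxy, exactly as recommended above.

```python
import subprocess
from collections import Counter

def hotspots(repo="."):
    """A sketch of a file-level hotspot analysis: change frequency
    from the git log, paired with lines of code as a complexity proxy."""
    log = subprocess.run(
        ["git", "-C", repo, "log", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Each non-blank line in this log format is a file touched by a commit.
    change_frequency = Counter(l for l in log.splitlines() if l.strip())

    def lines_of_code(path):
        try:
            with open(f"{repo}/{path}", errors="ignore") as f:
                return sum(1 for _ in f)
        except OSError:
            return 0  # the file was deleted or renamed at some point

    # The hottest spots: the most frequently changed files, with size alongside.
    for path, commits in change_frequency.most_common(10):
        print(f"{commits:5} commits  {lines_of_code(path):6} lines  {path}")

hotspots()
```

The sorted output also makes the change distribution directly visible: a few files at the top account for most of the commits, which is the power law pattern discussed below.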
So, returning to the visualization, now that we know how to read it, we see that there are a number of potential hotspots in the Tomcat code base. And to me, from the perspective of technical debt, a hotspot simply means that that's a part of the code where it's really, really important that the code is clean, simple, easy to understand, and easy to evolve. Because we spend so much development time in those areas. However, in practice, more often than not, the opposite is true. And that tends to make hotspots excellent refactoring candidates.

And there's a fascinating reason why hotspots work so well as refactoring candidates. It's a reason captured by one of my favorite authors, Mr. George Orwell. Maybe some of you have read George Orwell's work. Maybe you're familiar with Animal Farm. If you've read Animal Farm, then you might recognize the next quote, because in Animal Farm, George Orwell states that all code is equal, but some code is more equal than others. And I'm pretty sure I understand what George Orwell meant by this. It's actually a deep insight into software development, because George Orwell was most likely referring to the next slide.

So please have a look at these graphs. The three examples you see at the top all show the same kind of data. On the x-axis, we have each file in a codebase, sorted according to change frequency, that is, how many commits were made to that file. And the number of commits is what we see on the y-axis. Now, if you look at those three examples, you'll see they are from completely different codebases, developed in different programming languages, by different organizations, targeting different domains, with different life spans. Everything is different, right? And yet, they all show exactly the same pattern: a power law distribution. And this is something I've found in every single codebase that I've ever analyzed, and I've probably analyzed around 300 codebases by now. So this seems to be the way software evolves. And this is important to us, because what it means is that if we want to address technical debt, we should prioritize improvements to the head of the curve, where most of the development activity is. And this is actually a positive message, because what these graphs show us is that most of our code is going to be in the long tail. That means it's code that's rarely, if ever, touched. And that's the part of the code where we can actually live with a certain degree of technical debt. We can live with it safely because the interest rate is so low. In contrast, even a minor amount of technical debt in a hotspot is likely to be expensive. So this is why hotspots work so well to prioritize technical debt.

However, occasionally you come across codebases where file-level hotspots just don't do the trick. And I want to give you an example by turning to a different codebase. This is a codebase that some of you might actually have running on your laptops right now. This is part of the .NET Core runtime from Microsoft. And if you look at this visualization, you see that there is almost like a collection of hotspots in a package called jit. That's the just-in-time compilation support in the .NET Core runtime. And we also see that to the right of the jit package, there seems to be a large hotspot that's almost like an island, and I've highlighted it there. The name of that hotspot, as you can see in the visualization, is gc.cpp.
And gc.cpp is the .NET garbage collector. Now, gc.cpp might look a little bit innocent. Sure, you see that it stands out, but it still looks relatively innocent in this visualization. That's only due to the scale of .NET Core, though. .NET Core is a gigantic codebase; what you see on screen here is four million lines of code. And gc.cpp is a big file. How big? I don't know. Let's look it up on GitHub, shall we? Yeah. Turns out that gc.cpp is simply too big to visualize with syntax highlighting on GitHub. So we have to look at the raw text file, and when we do, we see that gc.cpp is a single C++ file with 37,000 lines of C++. So, has anyone of you ever worked in C++? It's not as popular these days as it used to be. I have to admit I spent approximately a decade of my career, 10 years, as a C++ developer. And I know, I know, those are 10 years I will never get back, right? But what that means is also that I can really, really sympathize with the people maintaining this code, because 37,000 lines of C++ is a scary thing. Let's face it.

And besides, how useful is it to know that gc.cpp is a hotspot? I mean, it's easy to confirm that it's correct, but how do you act upon this information? It's a little bit like the 4,000 years of technical debt. It's simply too much to be actionable. So what I do when I find a large hotspot is that I use a technique I call X-Ray. Here's how it works. We take a large file, we parse it into its separate functions, and then we look at the git log and see where each commit hit over time, and we sum it up, and we get hotspots at the function level. And this is what I use to prioritize specific and actionable refactorings. Let's look at the data from gc.cpp. We see that when I did this analysis, the number one hotspot in gc.cpp at the function level was a function called grow_brick_card_tables. We see that grow_brick_card_tables is the most worked-on function, and we see that it consists of 332 lines of code. That's a lot of code for a single function, isn't it? It is. It is, for sure. But 332 is much, much less than the 37,000 lines of code that make up the total file. And 332 lines of code is definitely less than the 4 million lines of code that make up the total system. More importantly, we are now at a level where we can act upon this information and do a focused refactoring, based on data on how we, as an organization, actually work with the code.

All right. I'm going to come back to the hotspots again. But before I do that, I would like to take a step back and make a small reflection. Because how do you get to a single file with 37,000 lines of code? How does this happen? Why doesn't anyone refactor it much, much earlier, before it gets to that size? I think the best explanation I've seen is in this book, The Challenger Launch Decision. It's a book I highly recommend. It's not about software at all, but it relates a lot to what we do. That book is written by Diane Vaughan. Diane Vaughan isn't a software developer; she's a sociologist. And what she did was coin a theory called the normalization of deviance. As a case study, she used the Challenger accident. For those of you who might not remember the Challenger accident, what actually happened back in the 1980s was this. If you look at the picture to the right, you see the Challenger at its launch. The actual space shuttle is to the left, with the United States text printed on it.
The orange part you see in front of it, that's the main fuel tank, right? Full of rocket fuel. And in front of that you have this white object. That's a solid rocket booster. Those solid rocket boosters are huge, huge rockets, so they are transported in separate segments that are then assembled before launch. And what happened with Challenger was that, if you look closely to the right on that solid rocket booster, you can see a puff of gray smoke. That's not a good thing. It's not supposed to be there. Because what actually happened was that the joints between the different segments of that solid rocket booster simply failed to seal at liftoff, so hot rocket gases could escape and impact the structure of the main fuel tank. And that meant that once Challenger got into the air, due to aerodynamic forces, it simply broke up and disintegrated. The result was, of course, a tragic loss of human lives, and something that, it turned out, could have been prevented.

Because what Diane Vaughan tells in her book on the normalization of deviance is that already in the 1970s, when the first design inspections of the solid rocket boosters were done, it was found that the actual performance of the solid rocket booster joints didn't match the predicted and expected performance. So this is not a good thing, right? You're building a spaceship, after all; it sounds dangerous. So what do you do? Well, if you're NASA, you form a committee. And they did, and they discussed the problem, and they decided to pass it off as an acceptable risk. Years later, in the early 1980s, during the first actual in-flight tests, again there were measurements that clearly showed that the actual performance of the solid rocket booster joints deviated from the predicted performance. Again it was discussed, and again it was passed off as an acceptable risk. And in 1986, the space shuttle exploded.

What's so fascinating about this is what Diane Vaughan calls the normalization of deviance: each time we accept a deviation, we get a new point of reference. We get a new normal. And this is really, really dangerous, and I would like to say that we have exactly the same phenomenon within software. This is how we get to 37,000 lines of code. I mean, imagine that you start on a new project and maybe inherit a file, a complex hotspot, with 5,000 lines of code. You might not be happy about it, but if you spend enough time with that module, after a while it starts to become familiar; you start to find your way around. And besides, if you have 5,000 lines of code in a single hotspot, what difference does a couple of hundred extra lines of code make? So soon you have 6,000 lines of code, then 7,000 lines of code, and this is the way it continues. So how can we catch and detect the normalization of deviance in a hotspot?
What I use is a technique I call complexity trends. Complexity trends are, again, calculated from version control data. When I find a hotspot, I go to version control, I pull up each historic revision of that code, and I measure two things for each revision. The blue line is a simple accumulation of the lines of code. The red line is a specific complexity metric: I focus on something I call nesting complexity, that is, deeply nested conditionals like if statements inside if statements, because that's the kind of complexity that actually has some predictive value, and it's responsible for roughly 20% of our programming mistakes. But most of the time you will see that lines of code and complexity tend to follow each other. And besides, the interesting thing here is never, ever the absolute numbers. The interesting thing is the pattern, the trend.

So how do you interpret this? Well, the way I interpret it is that if we look at the data, we see that a year ago we had this dip in the red line and the blue line. So this looks like some kind of refactoring; maybe some code was removed, or maybe some code was cleaned up a little bit. And then it grows slowly until we reach a pretty steep increase, then we plateau for a bit, and then we have a really, really steep recent increase in code complexity. And that steep increase is something I use as an early warning system. Because the normalization of deviance is one reason that whistleblowers are so important in an organization. And I have found that the complexity trends of our hotspots make great whistleblowers in code, so that we can prevent our hotspots from spiraling out of control.

So, to sum up this first part of the talk: hotspots help us identify the code with the highest interest rate, so that we can inspect it and pay down the technical debt in those areas where we have the largest return on investment. And the reason this works is because all code isn't equal. The development activity varies widely across different modules, and hotspots help us separate the stable parts of the code from the more volatile parts that need our attention. And finally, I showed you how to use complexity trends to supervise your hotspots, so that we can prevent the normalization of deviance.
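As a rough illustration of the trend calculation, here is a sketch that walks the historic revisions of one file and measures lines of code plus a crude nesting proxy based on indentation depth. A real tool would use a proper parser to count nested conditionals, so treat this as an approximation of the idea.

```python
import subprocess

def complexity_trend(path, repo="."):
    """Sketch of a complexity trend: for each historic revision of a
    file, measure lines of code and a crude nesting proxy (indentation).
    path is relative to the repository root."""
    revisions = subprocess.run(
        ["git", "-C", repo, "log", "--reverse", "--format=%h", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    for rev in revisions:
        source = subprocess.run(
            ["git", "-C", repo, "show", f"{rev}:{path}"],
            capture_output=True, text=True, check=True,
        ).stdout
        lines = [l.expandtabs(4) for l in source.splitlines() if l.strip()]
        # Indentation as a stand-in for nesting depth, 4 spaces per level.
        nesting = sum((len(l) - len(l.lstrip())) // 4 for l in lines)
        print(f"{rev}  loc={len(lines):6}  nesting={nesting:6}")
```

Plotting the loc and nesting values per revision gives the blue and red lines described above; what matters is the shape of the curves, not the absolute numbers.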
Now, I hope you found this interesting and that you would like to try it on your own code. So what kind of tools do I use to create these visualizations and analyses? Well, you have to remember that this is a young discipline. When I started to work with hotspots, maybe 10 years ago, there weren't any tools available that could do the kind of analyses I wanted to do. So I had to write my own tools. I open sourced my first tool suite; it's something I call Code Maat. It's available on my GitHub page, and it's entirely free. What I've been working on for the past 5 years is a more advanced tool called CodeScene. CodeScene can also provide code-level metrics, automatically separate good hotspots from bad hotspots, and, of course, automate all the steps in the analysis. CodeScene is available at codescene.io. It's entirely free for open source, and available for closed source projects as well. So if you're interested in these techniques, please consider having a look at CodeScene and supporting it.

Finally, I always find it so fascinating that you can actually get quite far with just the command line. I have lots of examples of this, of producing X-Rays, in my latest book, and my favorite example is this: did you know that with git log, if you pass it the -L flag, you can specify a function inside a file and have git trace the evolution of that function? So this is basically the basis of an X-Ray, right at your command line.
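For reference, a small sketch of that command wrapped in the same illustrative Python style as the earlier examples; the function and file names in the comment come from the gc.cpp example above, and the exact path depends on your checkout.

```python
import subprocess

def function_history(func, path, repo="."):
    """Trace the evolution of a single function with git's -L option,
    the command-line basis of an X-Ray. path is relative to the repo root."""
    return subprocess.run(
        ["git", "-C", repo, "log", f"-L:{func}:{path}"],
        capture_output=True, text=True, check=True,
    ).stdout

# Equivalent to running, e.g.:
#   git log -L :grow_brick_card_tables:gc.cpp
# from inside the repository that contains gc.cpp.
```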
All right, you're going to get the references at the end of the talk as well. But before I get there, I'd like to scale things up a little bit and talk about architecture, and more specifically, organizations. And I would like to start with a concept from social psychology called process loss. Process loss is a concept that social psychologists borrowed from the field of mechanics, and the idea is that, just as a machine cannot operate at full efficiency all the time, due to things like friction and heat loss, neither can a team. The way this model works is that, let's say you have a number of individuals; together they have a potential productivity. However, what you get out of a team is never the full potential. The actual productivity is always somewhat smaller; part of the potential is simply lost. What kind of loss is that? Well, it depends on the task, but in a complex endeavor like software development, where we have lots of interdependent tasks and lots of people, a large part of our process loss is simply due to coordination and communication overhead. And the thing is that we can never, ever get rid of process loss. It's a very well researched topic within social psychology. The trick is, of course, to try to minimize it. That's why we have things like processes and collaborative practices. And the first step towards that improvement is to understand how severe your process loss is today. That's something I'd like to cover.

One of the most common reasons I've seen for process loss within software is something called the diffusion of responsibility. This is another topic from social psychology. And here's the thing: the diffusion of responsibility has been studied in the real world, and it's something that you might have experienced yourself if you've been unfortunate enough to witness an accident or an emergency. What you might have noticed is that the larger the group of potential bystanders, the less likely it is that any individual will offer help. So this is really, really scary. It turns out that if we ever find ourselves in an emergency, we're better off if there are just one or two other people who can help us, rather than 50 or 100. The more bystanders, the less likely anyone will help. And this also explains a lot of the observations we can make in software, because the diffusion of responsibility is a very human thing, and it's humans like us that write software. We have the same biases there.

So one thing that surprised me was that when I analyze code and I find overly complicated hotspots, what I tend to see is that those hotspots have been problems for years, and still no one has really acted upon them. But when I do a second inspection, using version control data, I can calculate how many people have been working on that part of the code. And what I typically find is a very strong correlation: the more people working on that code, the less likely it is that anyone will refactor it. And again, I think the root cause is the diffusion of responsibility. So if we have an important social bias like this, why don't we measure it? Because the diffusion of responsibility leads to process loss. How can we measure something like that?
Well, remember, we are in version control wonderland, meaning a version control system knows exactly which developer wrote which piece of code, and when they did it. So what I do is I simply go to version control, and then I look at, well, I can do this on an individual level, but I typically find it more interesting to take the individuals and map them to their development teams. So I simply aggregate the contributions from all developers that work on a specific team. And by using that, I can calculate things like team coupling: how many different teams work in the same part of the code at any given time. And I use color to visualize that. The visualization you see is the same style that I used for hotspots, only now the colors carry a different meaning: the more red something is, the higher the overlap between different teams in that part of the code.
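Here is a minimal sketch of that team-coupling measure. The author-to-team mapping is hypothetical and would come from your own organization; everything else is derived from the git log.

```python
import subprocess
from collections import defaultdict

# Hypothetical mapping from commit author names to teams; in a real
# analysis this would come from your own organization.
TEAMS = {"alice": "red team", "bob": "red team", "carol": "blue team"}

def team_coupling(repo="."):
    """Sketch of team coupling: for each file, the set of teams whose
    members have committed to it."""
    log = subprocess.run(
        ["git", "-C", repo, "log", "--name-only", "--pretty=format:@%an"],
        capture_output=True, text=True, check=True,
    ).stdout
    teams_per_file, team = defaultdict(set), None
    for line in log.splitlines():
        if line.startswith("@"):           # a commit header: the author
            team = TEAMS.get(line[1:], "unmapped")
        elif line.strip():                 # a file touched by that commit
            teams_per_file[line].add(team)
    # Files touched by the most teams are the coordination bottlenecks.
    ranked = sorted(teams_per_file.items(), key=lambda kv: -len(kv[1]))
    for path, teams in ranked[:10]:
        print(f"{len(teams)} teams  {path}  {sorted(teams)}")
```

In a real analysis, you would also restrict the log to a time window, since coupling is only a problem when the teams work in the same code at the same time.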
And you see an example here. This is an example from another Microsoft codebase, ASP.NET Core MVC. And we see, to the right, that there are a number of different teams working in the same parts of the code in parallel. And this is problematic, because not only does it expose us to the diffusion of responsibility bias, it also makes changes more risky. Did you know that the number of developers touching a piece of code in parallel is one of the best predictors you can have of the number of defects you will find in that code? And when you have multiple teams working in the same part of the code, that also increases the risk of things like unexpected feature interactions, which are some of the worst bugs we can have.

So this is clearly data I would like to act upon. How do we do that? Well, it's challenging; it's more challenging to act upon than the hotspots, because there might be multiple root causes. One is, of course, that we might have a shared responsibility in our codebase and a lack of clear ownership. So maybe the solution is to introduce a new team to take on that shared responsibility. Or, a very common finding is that the code attracts contributors from many different teams because it has good reasons to do so: the code typically has multiple responsibilities. It's low on cohesion; it does too many things. So the reason multiple teams have to work on it is that they might work on different features, but they all end up in the same part of the code, because that code has so many responsibilities. This is something you can find with a technical analysis or a code inspection. And if you find that, what you might have to do is take that code and split it according to its different responsibilities. That separation also helps you align the teams with distinct modules in the code, so that you can minimize parallel work. So it's interesting that you can sometimes solve what looks like an organizational challenge with a technical refactoring.

But this is really just a starting point. Just like we can find team coordination issues like this, we can also start to measure things like Conway's law, which we normally just talk about. Here, we can get actual, objective data. Conway's law, at its essence, is about how well aligned our organization, our teams, is with the way our software architecture works. So what I did in this analysis was, again, to use version control data to figure out where each individual has worked. I aggregated that into teams, I assigned each team a specific color, and the team that has worked the most in a specific part of the code gives its color to that part of the code. What I want to see here, from the perspective of Conway's law, is an alignment between the architecture and the teams, because the better aligned they are, the cheaper and easier the communication is going to be.

And if I look at this visualization really, really quickly, it looks like an almost perfect alignment. Each one of those larger bubbles that represent features has a single color, meaning one team is responsible for each feature. So this is the way it actually looks from the code, and it's beautiful. This is Conway's law in action. However, I have to admit that all the examples you see in this presentation are from real-world codebases, except this one. I had to make it up, because I'm yet to find an organization that's that well aligned with its architecture.

So let me show you what it might look like in a real-world case study. This is the same kind of visualization, the same analysis, but from a commercial codebase. It's an example I'm allowed to show as long as I keep the data anonymous, in order to protect the guilty. What happened here was that this was a large organization, and they were using component teams. And by using component teams, they found that they had a lot of handovers; they had to do a lot of coordination at the interfaces between the different components, and they had pretty long lead times. But then someone went to an agile conference, and they decided: hey, you know what, let's switch it around. Instead of component teams, let's do feature teams, so that one team can implement a feature end to end. Beautiful. So what they did was they took their existing organization, sliced it into 12 different feature teams, and then they started to assign features to each team, almost round-robin style, and they let the teams loose in the codebase. And what you see here on screen is what the work distribution between the different teams looked like for just one sprint. Can you see any patterns here, compared to the previous image? No. You cannot see any pattern, because there isn't one.
What this visualization shows you is that you have contributions from all the different teams, across the whole codebase, all the time. You have 12 teams working in the same parts of the code, but for different reasons, since they work on different stories. So not only is this going to be incredibly expensive for the different teams to coordinate, because they're going to run into a lot of conflicts in the code, it's also going to be a communication disaster. And to make it worse, what happens here is that you miss synergies between different features; that is, you miss opportunities to make simplifications in the solution, where simplifications really, really have a big impact. So whatever you do, and I'm going to give another session on this tomorrow, where I go much deeper into the social and organizational aspects of code, but whatever you do: align your architecture and your organization. Your code is going to thank you for it.

So, I've come to the end of this session, and I hope you enjoyed this journey through the fascinating field of evolving code and what I call behavioral code analysis, where we emphasize the behavior of the organization with respect to the code as much as we emphasize the code itself. Ultimately, it's all about writing better software: software that's able to evolve under the pressure of new features, novel usages, and changed circumstances. And I'm pretty convinced that writing code of that quality will never, ever be easy. So I think we as developers need all the support we can get, and I hope that this introduction to behavioral code analysis has inspired you to investigate the field in more depth. To get started, I have a number of resources here. I have my new book, Software Design X-Rays, which covers all of these techniques in much more depth. I also blog about it regularly at codescene.com and on my private blog, adamtornhill.com, where I have different case studies and examples. And if you're interested in trying this out, you might want to have a look at codescene.io, where I have some interactive analysis examples; you can actually explore different well-known open source projects. So now, before I take questions, I just want to take this opportunity to say thanks a lot for listening to me, and may the code be with you. Thank you.

Awesome, Adam. I don't know if you can see it, can you see the likes coming in? There's a rain of likes coming in right now for you, which actually shows how much people loved the session. And trust me, throughout the session I saw a lot of interaction; a lot of people were trying to let you know that they're loving every concept that you're picking out on the screen. Unfortunately, since you were in presentation mode, you were not able to see it, but I was actually watching it happen, and the likes are still coming in. I'm going to start picking out questions, because there are tons of them. Apologies if we are not able to take all the questions, but you're surely going to have Adam available for another 15 to 20 minutes at the lounge. He's going to grab a table, so you can grab a seat there and watch the conversation as well; there's a watch link on the side, next to the table. But Adam, let's get to the questions, because there are quite a few of them out there. I'm going to start with the first one: did you mention a tool that creates the hotspots?

Yes, so for the tooling, there are several options. What I recommend, if you want to get started quickly, is to have a look at CodeScene; that's the most advanced tool.
Then I have my open source project, Code Maat. If you go to my GitHub page, adamtornhill on GitHub, you will find it. It's a command line tool that's easy to get started with as well, and it can do the hotspot analysis. And you have those links; if you google me, you will find them, and I can give you specific links in the lounge as well.

Awesome. Yep, find him on GitHub. Is the complexity referred to here cyclomatic complexity and/or cognitive complexity, or anything different? That's a question from Selevi.

Yeah, so of those, I like cognitive complexity the best, but what I tend to use is, I have two different complexity metrics. You know what, I'm actually going to share a link to one of my blog posts where I discuss this, because I can't cover it all here.

Awesome, you can actually put it in the audience chat if you'd like.

Yes, I'm going to do that. Here, I've sent it in the audience chat. Here we go. So what I do is, basically, I don't care so much about cyclomatic complexity. Cyclomatic complexity is a very rough metric; basically, the only thing it's useful for is to estimate the number of unit tests you might need. What's interesting is to look at the number of nesting levels. So I basically count how many nested conditionals I have in each function, and that's very closely related to cognitive complexity as well. So I recommend those, but if you don't have them, then just a simple count of the number of lines of code takes you pretty far and points in the right direction.

Awesome. So the next question is from Anaka. She's asking: does CodeScene work for application code developed in any technology or language?

Basically, most languages. I think it now supports 25 different programming languages, so all the big languages: Go, C#, C++, Java, Python. And it's growing; we're adding support for more and more languages all the time. The only requested language that I know we lack today is Rust. So that's going to be next up.

Next up, okay. The next question is from Justine. She's asking: how do we get the X-Ray of our code, or the complexity of the code?

You need to use tooling for it. I mean, the calculations themselves are not hard, but they're very cumbersome for a human to do. To do the X-Ray, you have to parse the source code; that's the most challenging thing. To calculate hotspots at the file level, you just need to look at the change frequencies, and I have examples in my book where I do this just from the command line. But the X-Ray requires you to understand the source code, so you can parse it into different functions and then simply look at the git log to see where each function has been modified. So that's where you need the tooling support. And I gave the example of how you can build your own scripts if you want, using git log and the -L flag, and trace functions and use that as an X-Ray. Or you can look at, I mean, CodeScene automates that, for example.

Interesting. That's another option. I'm going to take one last question, Adam, because there are just too many questions, and I'm so sorry for the people whose questions are not being picked right now; time is our villain here. Okay: CodeScene looks at technical debt related to what aspect, maintainability, reliability, or security? Is there a particular aspect?

It's maintainability, and the financial aspect: how expensive is it to maintain that code. So if you want to increase your delivery efficiency, CodeScene will point you in the right direction. CodeScene doesn't address security;
there are so many good security tools already. But I don't find much tooling support for this financial aspect of software development: how do we become more efficient, how do we make our job more fun? That's what I rate as the most important.

I'm going to ask the audience one time: I know it's the lunch break now, so can we take another five minutes? If you guys are okay staying back for another five minutes, hit a thumbs up, please. I see a lot of thumbs up, so let's take another question. Sorry, Adam, I'm just going to, I mean, this is anyway the thing we were going to do at the lounge, so let's just have some more of it done here. Is there any way we can visualize the effectiveness of collective code ownership?

Yes, there are ways of visualizing that. I have to say, this is an area that's still evolving. Again, I have a blog post on it; let me share that one if you want to dig deeper. Let's see here. I'm going to talk about it tomorrow as well. So I'm just looking for the link to the blog post, and I'm posting it in the chat now. This is something where we visualize groups as well. So what you can do is basically look at how much output you get, because that's the interesting thing; you need some kind of business-related metric. What I typically do is integrate data from life cycle tools like Jira, Trello, or Azure DevOps, and then I look at how much we manage to deliver, what our throughput is. And then you can look at what happens when we change the teams: if we go to a collective code ownership model, does it have a positive impact, or does it lead to less output? So that's the quantitative metric I'm interested in. But it's also very, very important to have some kind of quality metric, and the one I tend to use is the number of bugs that slip through, because it's very easy to increase throughput if you're sacrificing quality. So you actually need both to balance each other, and this is what I tend to see works really, really well in practice. So I hope I managed to clarify; otherwise, I'd be happy to elaborate in the lounge.

That question was from Rajiv. Rajiv, you're listening to this, and he just mentioned that he's going to have a talk tomorrow, so you know what to do next. Rest of the folks, thank you so much for your questions; Adam is going to be available at the lounge. Thank you so much, Adam, for the wonderful session. I'm sure everybody was glued to their screens, and that's pretty evident from the thumbs up that are still coming in. I would also like to take a moment to thank the sponsor for the session, Agile Alliance; thank you for sponsoring the session for us. Guys, if you loved the session, which I obviously felt you did, alright, thank you so much, Adam. Thank you for joining in.