So welcome everyone to Meet the Social Side of Your Architecture. And the reason I want to do this talk is that I've found that we as an industry often talk about the importance of teamwork and the power of team autonomy. But I've also seen that those things are notoriously hard to get right, and when we fail, we tend to fail miserably. So in this session we're going to approach software development from the people side, from the side of social psychology. And you might very well ask yourselves: why do I, as a software developer, feel qualified to talk about this? Why do I feel qualified to talk about psychology? Well, my background is a little bit different. I've been in the software industry for more than 20 years, but I also have a degree in psychology, which happens to be one of my big, big interests. And what I do these days is take my psychological perspective and apply it to my technical work. I've written two books about it: Your Code as a Crime Scene, and my most recent book, Software Design X-Rays. So we're going to talk about a lot of psychology today, as well as software architecture. And I'd actually like to start with a controversy. Because when I studied psychology, one of the most fascinating things I came across was within the field of personality psychology. And the reason I got interested in this was that some of you might have done a personality test, maybe as part of a recruitment, right? So you do a personality test and you get a score in different dimensions: are you introverted, are you extroverted, how open are you to new experiences, things like that. And there's one group of psychologists that claims that personality is really, really important to measure, because it can kind of predict how we will act and how we will behave, right? It kind of makes sense. That's why we do these tests, right?
But then there's a different group of psychologists who claim that personality really isn't that important. And the reason it isn't that important is that the situation, right, the context where we work and live, is so much more complicated. The social forces and the situational forces will dominate any personality traits. And the first time I heard about this controversy, I was like, wow, I've seen this before. This is static versus dynamic typing, right? This is Emacs versus Vim. This tendency towards controversy and strong opinions, it's a human thing. But of course, as with most controversies, the truth is somewhere in the middle, with a strong tendency towards giving weight to the situation. Because it turns out that 91% of the differences in people's behavior in different situations just cannot be accounted for by personality tests. Personality is still important, because it kind of predisposes us to act in a certain way, but the way we will act is mostly determined by the situation. And the reason I share this with you is that this has influenced how I view software systems. To me, a software system has both a personality and situational forces. And to me, the architecture, the way the system looks today, that's the system's personality. And just like a personality does for a person, our system's personality, its architecture, kind of predisposes what we can do with the system. It determines whether a given change will be easy or hard to make. But I like to put the weight on the people and organizational side, the situational forces. Because if we get that one right, then we can succeed no matter what our architecture looks like. Right? With the right people and the right organization, we will succeed. It might be challenging, it might be inefficient, but we will succeed. If we get the people side wrong, we will fail no matter how good our technical architecture is. And this is something that affects us all the time.
And I'd like to share a story about something that happened to me almost 10 years ago. So this is a story about a company, and it's a company that was in a very, very good position. They were in a good position because they had a successful product in the market. That product was almost 10 years old and generated a lot of cash for them. And what they needed to do now was to modernize it. So they decided to re-implement the system from scratch. And they were in a good position because they had very detailed historical data, and that data showed them that, you know, re-implementing the system on a modern platform would take their five in-house developers roughly one year. I know, I know what you think right now. A software project that's predictable. Crazy, right? And of course, someone from marketing came down and said, you know, one year to implement the new system, that's not good enough. We have this important trade show in just three months. Can't we do this in three months? How do you take something that you know will take one year and compress it down to just three months? Easy, you just throw four times the number of people at it. And they did. So they hired 25 software consultants in addition to the five in-house people and let them loose on a software architecture that was already set with the original developers in mind. I wasn't part of the project from the beginning. I came in later for, how should I put it, more of a postmortem, because this project didn't finish in three months. In fact, it didn't finish in one year either. So I analyzed the system and I interviewed a lot of the people, and one recurring theme in those interviews was that everyone was claiming that the code was so hard to understand. And this kind of surprised me, because I had been reviewing the code, and I mean, it wasn't perfect in any sense of the word, but it was pretty good. But then it occurred to me that the reason that code is hard to understand is because of this.
Because as software developers we never remember the details of the code, right? What we remember instead is a mental model, an imperfect high-level abstraction of what the actual code looks like. And when you have lots of people working on a project, when you're overstaffed like this particular project, what tends to happen is that even if you write a piece of code yourself today, three days later it might look completely different, because you had five other developers working on it in the meantime. So it becomes impossible to maintain a stable mental model. This is one of the reasons why code might be perceived as hard to understand; it's also a frequent source of defects and delays. And this shouldn't come as a surprise to anyone who has read The Mythical Man-Month. The Mythical Man-Month is one of those classic books, and it's perhaps best known for what we now call Brooks's Law. Brooks's Law states that adding more people to a late software project makes it even later. And Brooks even has a formula where he explains this, and I've tried to visualize Brooks's formula; here it is. So what you see on the x-axis is the number of people we can add to the project, and what you see on the y-axis is the number of months to completion. And what we can see here is that up to a certain point, we can add more people to the project and we get a shorter completion date. But at some point, the coordination costs of adding more people outweigh the extra hours we get from adding them, so the completion time gets pushed further into the future. It simply takes longer and longer to implement new stuff and get it done. According to Brooks, the main reason for Brooks's Law is the increase in coordination and communication overhead when adding many people. But I like to claim that there are other factors as well. There are also what I like to call soft risks with overstaffing. And these are mainly motivational factors.
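To make that curve concrete, here's a small toy model of the shape behind Brooks's argument: each person's share of the divisible work shrinks, but coordination overhead grows with the number of pairwise communication paths, n(n-1)/2. The constants below are invented purely for illustration; Brooks gives the reasoning, not these numbers.

```python
# Toy model of Brooks's Law: calendar time = divided work + coordination cost
# that grows with the number of pairwise communication paths. The constants
# (60 person-months of work, 0.05 months per communication path) are made up.

def months_to_completion(people, work_months=60.0, coordination_cost=0.05):
    """Estimated months to completion for a team of the given size."""
    communication_paths = people * (people - 1) / 2
    return work_months / people + coordination_cost * communication_paths

times = {n: months_to_completion(n) for n in range(1, 41)}
best = min(times, key=times.get)
# Completion time improves up to an optimum team size, then coordination
# overhead dominates and pushes the completion date out again.
```

With these made-up constants, the minimum lands somewhere around a dozen people; past that point every extra hire makes the projected completion date later, which is exactly the U-shape the visualization shows.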
So what tends to happen on overstaffed projects is that you get exposed to something called social loafing. Social loafing is a motivational loss. It's known from social psychology, and it's a well-studied phenomenon. It's something that happens when we feel that the success of our team depends very little on our own actual efforts. So what tends to happen with social loafing is that, you know, we pretend to do our work, we sit there and try to look busy, but in the end we're really just trying to keep up appearances and hope that our peers will put in the actual effort. And it sounds horrible, but it's actually a very, very human thing. And social loafing is something that happens when the goals of a particular project maybe are not clearly communicated, or when arbitrary deadlines are enforced upon us. When that happens, people lose motivation in a task. And on a large project it simply becomes harder to see how my individual contributions make a difference, right? So why do we keep repeating the same mistakes over and over again? Why do we keep falling into the trap described by Brooks's Law? I think I can explain that to you. I'd like you to take a look at this small piece of code here, which is part of a much larger module. Looking at that code, can anyone tell me if this code is a coordination bottleneck for our five different development teams, or if it's code written by just a single individual, so that we have a key-person dependency? We just cannot tell, right? Looking at the source code, there's no way of answering those questions. And this is something I like to call the great tragedy of software design: that we, the people and the organization that build a system, are invisible in the code itself. And this means that quite often we tend to downplay the importance of the social side of code, and we start to treat symptoms instead of the real issues.
So if you've ever worked on a project, you may have seen some of these symptoms: things like frequent and complicated merge conflicts, unexpected feature interactions leading to tricky bugs, or maybe very long-lived feature branches. I like to claim that for each one of those symptoms, the real root cause is always an organizational issue. And to truly improve, we need to uncover those issues. So where do we start? How can we uncover the social side of our code? Well, the first thing I would like to do is to find a way to detect whether we have coordination bottlenecks in the code due to the way we work with it as an organization. And the metric I'm going to use is something that's called fractal figures. Here's what they look like. So let me walk you through this visualization. What you do is take each architectural building block, could be a service, could be a layer, could be a component, and represent it as a box. Then you look at which teams have contributed to that code. The more a team has contributed, the larger its area of the box, and each team is represented by a distinct color. At the bottom of this slide, I have a reference to the research paper that explains all of this. There's actually a formula in the paper. I won't spend much time on the formula, but the reason I'm presenting it is that visualizations are great for inspection, but to be able to compare different patterns, to compare different findings, we need to normalize. We need to take these visualizations and reduce them to a number. So that formula, basically all you need to know about it is that you can take the amount of fragmentation, the contributions from different teams, and calculate what's called a fractal value. A fractal value, you see it in the bottom left corner, is just a number, right? Zero means that all the code in this component is written by the same team. There's no additional coordination with other teams.
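That formula can be sketched in a few lines. Here's my own Python illustration of the fractal value, 1 minus the sum of each team's squared share of the contributions; the team names and commit counts are made up for the example.

```python
# Fractal value sketch: FV = 1 - sum((n_i / N)^2), where n_i is team i's
# contribution count and N is the total. 0.0 means a single team wrote all
# the code; values approaching 1.0 mean maximum fragmentation across teams.

def fractal_value(contributions):
    """contributions maps team name -> contribution count (e.g. commits)."""
    total = sum(contributions.values())
    return 1.0 - sum((n / total) ** 2 for n in contributions.values())

print(fractal_value({"red": 120}))                                     # 0.0
print(fractal_value({"red": 30, "green": 30, "blue": 30, "gold": 30})) # 0.75
```

A single contributing team gives exactly zero, and the more evenly the work is split across teams, the closer the value creeps towards one.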
And the closer to one we get, the more fragmented the development effort, the more different teams we have contributing to that code. So I hope I managed to clarify that. The next challenge, now that we have a way of visualizing and measuring this, is to find out where we get that data, and how we know which team has written the code. Those of you who attended my session yesterday know that I'm a big fan of version control data. And I'm a big fan of version control data because version control data is social data. We know from our version control which developer has written which piece of code, and we know when they did it. So what I do is simply take the individual contributions, aggregate them depending on which team each developer belongs to, and sum it up, and I get the total amount of code contributed by each team. The limitation of version control data is that it's very file-centric. You look at Git or Subversion, and it basically manages files. But the interesting unit of analysis for us is the architectural level. So just like I aggregate individuals into teams, I also aggregate individual files into architectural building blocks. I take all contributions to a directory that might represent a component, or to multiple directories, and I aggregate the contributions so that I can visualize fractal figures on an architectural level. All right, I'm going to show you some examples of how you actually use this, and I think that will clarify a lot. Because armed with the fractal figures and the fractal value, we can start to identify some social ways in which a software architecture can fail. And what I want to do now is to present a few different architectures and see how they typically hold up when worked on by different organizations. I'd like to start with one of the classics. Is this something you recognize? Yeah, I think so, right? This is the classic model-view-controller pattern.
But if you've ever seen a model-view-controller architecture implemented, you know that this is never, ever what it looks like in a backend. Because we know that we need to have some kind of services layer. And we often need to have a repository layer. Why do we need a repository? Well, I've been writing code for almost 30 years, and I have to admit that I still haven't found a good reason for a repository. But let's just skip that and call it a best practice, so I don't have to motivate it. And of course, we want an object-relational mapper, because we don't necessarily want to deal with SQL directly. But then we would need one more layer, right? We need a data abstraction layer, because otherwise we don't get to use those really cool mock frameworks to write unit tests. And then, of course, it's always good to have a business layer, right? So we can express some business capabilities there, and we might have our models and whatnot. So in reality, we'll often find nine, 10, 11 different layers. Now, given the view to the right here, I'd like to ask: what's the cost of change? What happens if we want to add something to this architecture? Let's say we want to add something to the system, maybe just a checkbox that the user can tick to persist some specific option. What would that code modification look like? It would look like this. It would ripple through the entire architecture, and we would have to make a small, predictable change to each one of those layers. And it turns out that I have a lot of data on this. I've studied this on real systems, and I have found that in a layered architecture like this, somewhere between 40 and 70% of all commits modify multiple layers. And I find that fascinating, because one of the motivations behind layered architectures is separation of concerns. But to me, given this data, it looks like with layers, it might be the wrong concerns that we're separating.
And this is visible if you look at a layered architecture using fractal values. In the visualization to the right, you see those circles. This is the same visualization style as I used in my session yesterday when I talked about technical debt. For those of you who weren't in that session: each one of those large circles represents all the code for a specific subsystem, and the larger the circle, the more code we have in that subsystem or component. The color is the fractal value, you know, how much fragmentation we have between different teams. The closer to one we get, the maximum fragmentation, the more red the corresponding circle. And I can then have a look at one of those components and see the fragmentation and coordination between different teams. So when you have a layered architecture, everything tends to become a coordination problem. And this is the reason behind one of the most frequent questions I get. That question is: should we use component teams or should we use feature teams? We kind of tried both, and none of them really seems to work. The reason for this is that if you have a layered architecture and you put component teams on it, you might have a team per layer, things like that. So you have a database team, an application team, a UI team. What tends to happen is that you get really, really long lead times, because you need to coordinate at each one of those interfaces to do a handover to the next team. So it's tempting to switch to feature teams, so that each team can work across the whole stack. And what tends to happen then is that your whole architecture becomes a gigantic coordination bottleneck, because you now have multiple teams working in the same parts of the code, but for different reasons, since they work on different features.
So I actually think that the only good answer I have found to this question, component teams or feature teams in a layered architecture, is this one. It's a quote from one of my favorite childhood movies, WarGames: the only winning move is not to play. You cannot solve what is fundamentally an architectural issue with purely organizational changes. Architecture and organization always need to evolve together. And I think this failure of layers to easily accommodate multiple teams is one of the reasons for the popularity of microservices. So microservices have been incredibly popular over the past five years. And when I talk to organizations that adopt microservices, they often claim three different benefits that they expect. The first one is that the organization expects the microservice architecture to be loosely coupled. Because by having loosely coupled microservices, they can have autonomous teams, where each team can take full responsibility for one service and its evolution. And by doing that, they gain a number of benefits: they can deploy services independently, which can help shorten lead times, but they can also scale services independently. So a lot of interesting benefits here. Over the past five years, I've analyzed lots of different microservice architectures, and I'd like to share some of the findings and see how well they hold up against the expectations. The first one is something I noticed, I think I saw this two years ago. It's real data, but I've changed the names a little bit to keep it anonymous. Now, each one of those circles you see here is a microservice. And you cannot see it from this view, but each one of those circles represents a lot of code. Each one of those services might be 20, 30, 40,000 lines of code. So 20 big services, right? Not so micro. And we see that a lot of them are red. And the reason they are red is that they have a high degree of team coordination in them.
A high fractal value, lots of team fragmentation. And you see an example to the left of the contribution patterns between different teams within the same service. When I saw this the first time, it was kind of obvious that you don't have any kind of team autonomy or independence between the services here. And I think you can see the reason for that in the names of those services. You have things like access control server, transaction server. And to me, those don't really sound like good service boundaries. So what we have here is, I would say, a failure because the architecture is technically oriented. We have a lot of technical building blocks: transaction server, diagnostics server. And that technical partitioning misaligns with the work of the teams, which tends to be feature- and use-case-oriented. So when I work on a new feature, my team needs to modify multiple different services, because the actual business scenario is distributed across the technical building blocks. This failure to align the architecture with the way we work with it means that, first of all, we can't get anything like team autonomy. We cannot have a team work independently of the others; they will all have to coordinate, just like they had to do in the layered architectures. And the reason they cannot work independently is that we don't have any loose coupling between the services. They are tightly coupled by necessity when we choose technical building blocks as service boundaries. And as a consequence, we lose all the potential advantages of a microservice architecture. We cannot scale services independently, and we definitely cannot deploy them independently, because they all depend on each other. So, like I said, each one of those circles representing a service, they are pretty big in reality, 20,000 to 40,000 lines of code. So again, one could claim that's not so micro. So maybe this is a failure due to lack of modularity.
Maybe small and modular services will solve the problem. Before I discuss that, I'd like to talk a little bit about complexity and how we tend to think about it. Because to me, complexity comes in two different shapes. In particular, I mean accidental complexity: the parts that aren't inherent to the problem we're trying to solve, but come from the way we solve it. And the first shape is when we have complex parts. This is the kind of complexity I talked about in the session on technical debt yesterday: we pick up a piece of code and it's incredibly complicated in its implementation, right? Deeply nested logic, low cohesion, all that stuff. But then there's another kind of complexity, where each piece of code is easy enough to understand in isolation, yet the emerging system behavior is anything but simple. The complexity is still there, only now it's distributed in the interconnections between the different parts, between the different services. So how can we highlight things like complexity due to dependencies between different services or different layers or components? One way of doing that is by using something I call change coupling. This is something I cover in Software Design X-Rays at length as well. Here I just want to give you a simple example. Because change coupling is very different from the way we typically talk about complexity. Change coupling is something that can only be measured from the evolution of the code, from version control data, from behavioral data. So we have a simple system here consisting of just three services, to the left. And the first time we make a change to that system, we're modifying the subscription service and the sign-up service together. The next time we modify something, we touch another service. And then the third time, we're back to modifying the subscription service and sign-up service together.
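That co-change counting can be sketched in a few lines. This is my own simplified illustration, where each commit is just the set of services it touches, and the coupling is expressed as shared commits relative to the average number of revisions of the pair; the service names and commit history are invented.

```python
# Change coupling sketch: how often do two services change in the same commit,
# relative to how often either of them changes at all? Commits are made up.
from collections import Counter
from itertools import combinations

commits = [
    {"subscription-service", "signup-service"},
    {"reporting-service"},
    {"subscription-service", "signup-service"},
    {"subscription-service", "signup-service", "reporting-service"},
]

changes = Counter()      # commits touching each service
co_changes = Counter()   # commits touching each pair of services together
for commit in commits:
    changes.update(commit)
    co_changes.update(combinations(sorted(commit), 2))

def coupling(a, b):
    """Shared commits as a percentage of the pair's average revision count."""
    pair = tuple(sorted((a, b)))
    avg_revisions = (changes[a] + changes[b]) / 2
    return 100 * co_changes[pair] / avg_revisions

print(coupling("subscription-service", "signup-service"))  # 100.0: always co-evolve
```

In this tiny history, the subscription and sign-up services change together in every commit that touches either of them, so their coupling comes out at 100 percent, even though no static dependency between them would ever show that.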
Now, if this is something that continues, we know that we have a logical relationship between the subscription service and the sign-up service, because they're always co-evolving like this. And this is something we can use to measure and visualize the cost of change in our architecture. I'd like to show you another example. This is, again, real data from a real microservice system. I did a change coupling analysis of it and visualized it this way. Each one of the labels that you see around that graph represents a microservice. The real visualization is interactive, so I can hover over one of the services and see its dependencies light up: the other services it's coupled to, according to change coupling. And in the first example we see here, if we modify a service called estimated conversions, we have seven other services that need to be modified as well. That's some tight coupling, isn't it? And just to show you that this isn't a fluke, to the right you see another service, estimated profit. Change that one and you have to modify five other services. And this tight coupling leads to what I would like to call change coupling bonds, which have rather dramatic consequences for how we can work and organize as teams with this kind of architecture. So here's the team perspective. Again, same visualization style: each circle represents a service, and the color represents the team fragmentation, the team coordination. And what we see here is that even though these are small, small services, right, again, you can't see it from the visualization, but if you look at the data, each service here is maybe 1,000, 2,000 lines of code, really, really small services, we still have this incredible coordination amongst different teams. Multiple teams work in the same service. How can that be, now that we have small, simple services? Well, again, I think the answer is in the names of those services.
If you look at them, you see that they have names like subscription costs, payment accepted, payment received. And those would be pretty good names for objects, but they don't serve as well as service boundaries. So what I would like to claim here is that, again, we haven't found the right boundaries. What we have here is not so much different services as distributed objects. And as a consequence, we will again have very tightly coupled services, meaning that we just cannot get any true team autonomy. We cannot get teams to work independently; they will have to coordinate their changes to the code. And as a consequence, again, we are not benefiting from any of the promises and expectations of our microservice architecture. Now, of course, you could claim that what I have presented so far is not so much microservices as a distributed monolith, right? And that's kind of the point. Because microservices are incredibly hard to get right. And when we design a microservice architecture, we are also doing social design. If we forget that, everything is lost. So let me try to move to the solutions part and share some of the tips that I've seen work well in practice to address these potential problems. And I would like to start by actually referencing Conway's law, right? The way we are organized influences the kind of architecture we design. And what's important to me in Conway's law is that Conway's law is about modularity. But modularity alone doesn't guarantee a successful software architecture; we just saw that in this microservice example. What I have found is that when we manage to align our architecture with the problem domain, you know, the business problems we are trying to solve, when we manage to take those business problems and express them as architectural building blocks, then we also, as an almost perfect side effect, create natural team boundaries.
And to me this is the core of successful software architecture. So is it possible with microservices to align each service with the problem domain? Yes, definitely. I've seen teams and organizations be very successful at that. But what I'd like to point out is that it's not a guarantee. Microservices won't in any magical way cure our dependency blues. In fact, I would claim that it's harder to manage dependencies in a microservice architecture than it would be in a traditional monolith. Microservices are also an expensive and high-discipline architecture. So it's still up to us to identify the proper service boundaries based on concepts from the problem domain. What I've seen work well in practice is that each service is team-sized, and that services are partitioned by business capability, not by data and not by technical responsibilities. And I've also seen, I mean, this is like the eternal question: how large should a microservice be? I think it should be as large as the problem requires. But what I think is important is that the services are kept team-sized, so that a small team can work on each one. And by a small team, I mean three, maximum four people. Because with three to four people, you more or less minimize any excess coordination overhead, right? You don't even need a development process, because you're just three people. And you also minimize those motivational risks I talked about earlier. You minimize the risk of social loafing, because with three to four people it's very, very easy to agree upon the shared goals and the shared principles, and also to see how your work has an impact on the outcome of the team's work. But of course, not everyone is doing microservices. Myself, for example, I'm working on a system that is more of a traditional monolithic architecture right now. And of course you can align that kind of system with the problem domain as well. There are multiple ways of doing that; I just want to show you one example here. This is a pattern I've seen successfully applied multiple times.
It's a pattern called package by component. There's a fantastic write-up referenced at the bottom of this slide. It's a write-up by Simon Brown, and I highly recommend it. The idea here is that instead of slicing our architecture into different technical layers, we identify the business capabilities and we package the code according to those business capabilities. By doing so, we will again create natural boundaries for the teams. However, in a monolithic architecture there's one piece that you need to put extra care into, and that is the database. I like to view the database as almost a black hole of maintenance efforts and change coupling. It just tends to drag multiple teams in. It's of course possible to avoid sharing a database, and with it those coordination needs, but it's a trade-off, because personally I prefer to use databases for what they are good at: searching, sorting, merging data, stuff like that. However, if in my architectural context a principle like loose dependencies between building blocks is more important, then what I might have to do is take some of the responsibilities from the database, pull them up into the application code, and encapsulate them there. It's all a trade-off. It's doable, but I really recommend that you put extra care and reviews into making sure the database doesn't end up coupling the code in the different components together. What I think are the main benefits of package by component is that it gives you a consistent macro architecture. Everything is a business capability; your top-level design elements are business capabilities. But it gives you the freedom to provide different implementations of those building blocks. So you might, for example, see that one particular building block representing one business capability is a little bit more complex, so you might decide that, all right, we could really benefit from a bit more technical structure in this component, so let's introduce layers here.
Then you might have another business capability that's really, really simple. It might in fact not even need a database, so why not implement it as a single file? And this is a big advantage, because one of the reasons I don't really like layered architectures is that they kind of enforce the same architectural style on all features, right? No matter if the features are simple or complex. You know, adding a single checkbox to a user interface should be ridiculously simple, right? And there's no reason to enforce the same change patterns on that single change as on a more elaborate and complex feature. In my view, package by component kind of gives you the best of both worlds. And in particular, when we look at the package by component architecture from the perspective of the teams: if we identify the right business capabilities, then those will serve as natural boundaries for the different teams. It helps you align your architecture and your organization by creating natural team boundaries. Now, I hope you found this interesting, because we can actually evaluate an architecture using the kind of data I've shown you. So how can you collect that data? Well, you need tooling support on top of the raw data, because it's quite complex to accumulate and calculate these different numbers. In my books I have several examples of how you can mine data with Git, and in the session yesterday I gave another example, but this one is a little bit different. You can use git shortlog and pass it the -s flag, and then you can specify a path to a particular directory representing a component. By doing so, you will see the contributions of each developer, and then you can easily write some scripting on top of that to group them into teams and calculate the fractal value. Then there are tools that can actually do this for you. One of those tools is an open source project I released many years ago called Code Maat.
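The scripting step on top of `git shortlog -s -- <component-path>` can be sketched like this. The sample output, the developer names, and the developer-to-team mapping are all invented for the example; in practice you'd feed in real shortlog output and your own team roster.

```python
# Sketch: parse `git shortlog -s -- <path>` output (count, tab, author name),
# roll the per-developer counts up into teams, and reduce them to a fractal
# value. All names and numbers below are made up.

shortlog_output = """\
  112\tAnn
   97\tBjörn
    8\tCecilia
    3\tDavid
"""

team_of = {"Ann": "payments", "Björn": "payments",
           "Cecilia": "onboarding", "David": "platform"}

per_team = {}
for line in shortlog_output.splitlines():
    count, author = line.strip().split("\t")
    team = team_of[author]
    per_team[team] = per_team.get(team, 0) + int(count)

total = sum(per_team.values())
fractal = 1.0 - sum((n / total) ** 2 for n in per_team.values())
print(per_team)  # {'payments': 209, 'onboarding': 8, 'platform': 3}
print(round(fractal, 2))  # low value: one team dominates this component
```

For this made-up component, one team dominates, so the fractal value comes out low; a component with several teams contributing evenly would score much closer to one.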
So Code Maat is a command line tool that takes Git data and can calculate fractal values and fractal figures for you. It's available for free on my GitHub account. If you want to get serious about this and actually start to evaluate things like complex microservice architectures, then I recommend that you take a look at CodeScene, which is where I do most of my work these days. CodeScene is the evolution and the next generation of these kinds of tools, and it comes with the visualizations and all of that out of the box. So please have a look at CodeScene if you're interested in this, and consider trying it out. Now, I still have a couple of minutes left before I'm going to take questions, so I want to take this opportunity to claim that Conway's law, which we often hear about at software conferences, is actually an oversimplification. Because when we take Conway's law to the extreme and isolate the different teams too well, when we minimize their interaction too much, then we run into other problems, and those problems are social in nature. More specifically, we run the risk of running into the fundamental attribution error. The fundamental attribution error is when we attribute the same observable behavior to different factors depending on whether it concerns our own team or another team. Just to give you an example, let's say that your team breaks the nightly build. Yeah, I know, I know, this is a highly hypothetical scenario, right? But please play along. So your team breaks the nightly build. You know that was because that nightly build has never really been that stable to start with, and besides, you were under tremendous pressure to deliver a specific feature, and you really did your best and showed your commitment. So there are a lot of situational factors that explain why you broke the build.
But you also know that when my team breaks the build, it's because we are a bunch of careless developers, right? Suddenly that's our personality. And this is the key to the fundamental attribution error: we overestimate personality factors when explaining the actions of others. As you have seen in this session, I like to visualize things, so I tried to visualize the fundamental attribution error in software development, and this is the best visualization I came up with. I do think it captures the essence, but it leaves us with a challenge. Because what this means is that yes, we do want to keep the teams independent, we want to minimize their coordination needs, but at the same time we need some bridge between them. And that sounds a little bit like a contradiction, but it's only a contradiction because we have failed to distinguish between the operational boundaries and the knowledge boundaries of the teams. So let me show you how I typically approach this. I like to keep the operational boundaries of software teams small. The operational boundaries are the parts of the code that we as a team are responsible for, where we do most of our work; that should typically be small and well defined. However, our knowledge boundaries, the parts of the code that we are familiar with, the other teams that we know as persons, should be much, much wider and include all the subsystems, all the components, all the services that we need to interact with. And there are several ways of making this happen. One thing is to simply invite people from other teams to your code reviews. Make that a habit. You get a valuable different perspective, and you get to know them as persons, which minimizes the risk of the fundamental attribution error. Another thing that I always try to do is to encourage people to rotate teams. You shouldn't enforce it, but if someone wants to work on a different team, they can do that. Make it a habit. Finally, what I have also seen work well at scale is
to adopt an open source ownership model, where each team is responsible for a particular building block. They are the ones that own it, they are responsible for the quality of that code, but anyone can make contributions to it, and if those contributions are good enough, then we are going to accept them and merge the change. This gives us the best of two different options, because if the team that owns a piece of code is too busy to implement something that you need, then you can do it yourself, and if you do a good enough job, it's going to be included, and you become familiar with another piece of code. So this has been a session where we have covered a lot of different topics. I want to wrap it up by claiming that there is no such thing as a good or bad architecture in isolation, not even layers. An architecture is good when it supports the changes we want to make to a system and when it fits our context. And the key message of this session is that an architecture never, ever exists in a technical vacuum. As we have seen, the social side of your code will impact so many important aspects, and what I wanted to show you today is that we can actually measure that social side of code and make decisions, organizational decisions, architectural decisions, based on data. If you want to dive deeper into this topic, I have a lot of material on it in Software Design X-Rays, and I have my blog, my company blog, my personal blog, with different case studies and examples. Of course, I also have some interactive analyses if you want to take a look at them inside CodeScene and see what these different communication and coordination graphs actually look like. So I'm going to leave a little bit of time for questions if you want to take this opportunity. Thanks a lot for listening to me, and may the code be with you. Thank you.

Hey Adam, I think that was a fantastic session. There are a few questions; I think we have time to take a few of them right now. Should I call out the
questions? Okay, so the first one is: can people be shared by multiple feature teams, DBAs, DevOps members, etc.?

I mean, then it's not really a feature team, right? I'm struggling a little bit with this, because it would typically mean that there is a cross-cutting layer of teams as well, which might also become a coordination bottleneck, and the tooling would then count the same person in multiple different teams. But you could of course try it out with different configurations, different definitions of the teams. What I would do is treat cross-functional support, like DevOps, for example, if you have that, or testers, as a separate team.

Cool, so there's another one, I think we still have some time: what kind of refactorings or decisions have you seen this kind of coordination data enable most frequently, in your experience?

So it's fascinating, because it's often a technical action that's the result. It might simply be, particularly in a microservice context, that you use this kind of data to identify services that actually belong together, so you might collapse different services into one. You might also occasionally see that the reason a particular service or building block becomes a coordination magnet is that it does way too many things, so you can use this data to get insights into that, and the result I've seen is that services and components also tend to get split to find that better alignment. I think the key here is that we also need to follow up on those changes and measure that we actually get the expected benefit. Hope I managed to answer that question.

Yeah, okay, last one: when you refer to broad knowledge boundaries with limited operational boundaries, could you help us understand the term knowledge boundary? Would it mean that we have to heavily rely on one team per operational boundary?

So what I've seen is that yes, you have one team per operational boundary, and I didn't talk about that in this session, but it's actually
very important from a motivational perspective and from a long-term perspective that you as a team get the chance to take personal ownership of something. And the knowledge boundaries, what I mean by that is simply the other parts of the system that your team doesn't own, but where you're familiar with the code, you know a little bit about the operations, you have seen the code, you know the people on the other teams, you know who to talk to. I think this is really, really important: even if you don't need to modify that code yourself, you still want to communicate with the other teams. So I hope I managed to clarify that.

Sure. So I think we are done with the questions here. There's one more... yeah, but we're out of time, so there are a lot of thank-you notes for you, Adam. So thanks, Adam, for sharing your experience with us today.

Thank you. Thanks a lot, everyone.