So today I am going to talk about breaking down your silos using metrics. First, a bit about me. I have been in tech for over 25 years now. I am currently partnering with Bitergia, representing them in East Asia. I am also the acting vice president of the Open Tech Association of Thailand, where we are trying to promote open source culture in society and organizations. And because I love open source so much, I am also very much into InnerSource, which is about applying open source culture inside your own organization. Now, a bit about Bitergia. They have many years of expertise, very smart folks there. They co-founded the CHAOSS project in 2017, which is about metrics for software development and communities, and we will talk a little bit about that later. They maintain GrimoireLab, the open source framework of metrics tools, and they are the official metrics partner of several open source foundations. There's a suite of services from Bitergia, from the analytics part as well as customizations, consulting, and so on, around all of this. A lot of companies have trusted Bitergia with their metrics. So: silos are an amazing place to store your grain, and your code. Most organizations are silos. What this means is that when you work in your company, you work within your own silo. Who here works in a large organization, let's say 100 people or more? Then you would understand that reaching out to someone across your organization is extremely difficult. For the most part, you have very little interaction with other teams if you're working in a silo. Your only interface with the rest of your organization is your boss, and it is up to your boss to coordinate and communicate whatever you need. If it's anything outside your team, talk to your boss. This is a feature, not a bug: it was developed in the pre-internet age, and it's often a very easy way of managing people and teams.
Use your boss as an interface. We write modular code with loose coupling; this is a loosely coupled organization, sort of. So let's say you work in your silo, and you have developed a tool. Let's say the tool is a hammer and a spanner: you build, or acquire, or buy a hammer and a spanner for use within your department. Cool. Another silo, on another continent maybe, also uses a hammer and a spanner. But neither of you knows that the other has already built their hammers and spanners, so they start building their hammer and spanner from scratch. A third silo only happens to need the hammer, so they build their own hammer. So in the end you have your silos, each with their own variations of hammers, each made and maintained by themselves. And they are there to solve a problem, right? But you have to go back, really go back, and ask yourself: do your customers pay you for your hammers? Because you have been spending a lot of time and effort and resources making hammers. Unless you are a hardware company making hammers, your customers are not paying you for your hammers. They are a tool that you need in order to get your job done, but your customers are not paying you for them. They are, however, paying you for the end product that your hammer produces. Now, if these were real physical hammers, this would be justified, right? Every silo needs its own hammers. But we're talking about digital artifacts; the hammers are only metaphors. So back to our hammers: in order to remove waste here, let's break down these silos. We put all your tools in a central place, and since the sky conveniently has clouds, we put them in the cloud, right? The difference from sharing physical items such as hammers is that physical items are a zero-sum game: you give someone a hammer, you don't have a hammer anymore. However, when you give away code, both of you end up retaining the code.
So the tool benefits everyone who participates in the sharing, and because it's been improved together, the tool gets better: it has been tested under diverse conditions and improved under those conditions. Rather than each silo maintaining the entire tool chain from scratch, everyone shares it. You can still have your silos, right? One can do bug fixes, another one support, another education, security, and so on. So the resources available to improve your hammers are multiplied, while at the same time the resources required to maintain them become a fraction of the original, which is why breaking down your silos increases efficiency. And InnerSource helps you to free up resources to pursue more challenging things that your customers actually value. Now that we have goals, let's talk about metrics. What are metrics? Metrics are tools that help us measure how well something is doing. They give us information to evaluate progress and make smart decisions. However, a metric is only useful if you know what you're going to do with it. In InnerSource, good metrics are easy to understand. They are representative of the question or the expected outcome. And they are most useful if they are actionable; if not, they should at least be informative. They don't have to be too precise, but they should be informative enough, because remember, we are managing a product, we are not scientists. Ultimately, the precision doesn't matter that much. So we have here some pretty good goals: enhancements, levels of support, and so on. Now, when you have your metrics, it is good to be strategic about them. There are a couple of techniques we can use, which I'll cover. But first, a story. A police officer stumbles across a drunk man crawling around under a street light. The policeman asks, what are you doing?
The drunk man says, I'm looking for my keys. The policeman asks, where did you last see them? The drunk man says, over there in the alley. The policeman asks, then why are you looking over here? The drunk man says, because this is where the light is. This story illustrates the danger in many types of scientific and data projects. If you want to measure productivity, for example, you can't stick an MRI machine on the heads of all your employees to see how motivated they are. You have to measure secondary information, and sometimes that's just the best you have. You can only derive information by looking under the street lights and trying to figure it out from there. So: Goal-Question-Metric. That is why it's important to start with what you're looking for, not what you're looking at. This strategy, Goal-Question-Metric (GQM), was developed by Victor Basili in the 1980s, I believe. With it, we can have more meaningful conversations about what we're trying to measure. We start with the goal, what we're trying to achieve; then we move on to the questions that support that goal; and ultimately we come up with the metrics that answer the questions. For example, the question might be: how often do customers encounter issues with our software? A metric you can use is the number of customer-reported issues. Another question might be: how easy is our software to use? The metric: usability ratings, for example. So you see: goal, then questions, then metrics, and all of the questions and metrics support the goal. If, however, you start with the metric, say, customer-reported issues, you might have difficulty figuring out why you're tracking it. It's much, much more productive to start with your goals. Another strategy to use is the Plan-Do-Check-Act (PDCA) cycle. You start with a plan. For example, you identify a high rate of customer-reported bugs and set a goal to reduce the number of bugs by 15%. That's the plan.
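To make the GQM idea concrete, here is a minimal sketch of a goal broken down into questions and metrics as plain data. The goal, questions, and metric names are the illustrative examples from the talk, not a standard schema.

```python
# A Goal-Question-Metric (GQM) breakdown as a simple nested structure.
# Every metric stays traceable to the question it answers and the goal it serves.
gqm = {
    "goal": "Improve perceived software quality",
    "questions": [
        {
            "question": "How often do customers encounter issues with our software?",
            "metrics": ["number of customer-reported issues per month"],
        },
        {
            "question": "How easy is our software to use?",
            "metrics": ["usability rating from user surveys"],
        },
    ],
}

# Walking the tree top-down makes the "why" of each metric explicit.
for q in gqm["questions"]:
    for m in q["metrics"]:
        print(f"{gqm['goal']} -> {q['question']} -> {m}")
```

Starting from the metric leaf and working upward is exactly the trap described above: you have a number, but no goal to justify collecting it.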
Do: you implement a new testing process and collect data on bug rates. Check: you analyze the data and find that the number of bugs has actually increased by 40%. Then you act, which means you make adjustments to the testing process and continue monitoring the bug rates. Because you are most likely not going to get the metrics right to begin with, you need to be open to the idea of iterating through this cycle, constantly re-evaluating whether you are on the right path or not, and improving. You may never finally find those keys, but you will have gathered enough information to know which part of the dark alley they're in, which may be enough. So here's a lovely example. These slides were originally presented by Jeffrey Bore, talking about their transformation to InnerSource. You can see that in this part we have the contribution phase, where you contribute to the InnerSource project, and then you have the adoption phase, which makes it very clear who is doing what in this ecosystem. Now, the metrics are interesting. On the contribution side, you have the contribution numbers: 50 contributors. And then we also have numbers regarding adoption: 15 products used it in the first 18 months, with 75% less time to deliver, and $10 million saved through reuse alone. Those are significant numbers and very, very good metrics; this is probably the kind of thing your managers would love to hear. So let's look at one such approach for our silo-breaking effort. We want to see which business units contribute to and interact with each other the most. For that, we use network analysis metrics. Let me break it down. We start with a project; this can be code, documentation, anything really. The project has a contributor, and in this case we'll say it's a developer.
We draw a line to show that the developer has contributed, or committed, to the project. And we can keep drawing those lines to show each interaction. Ultimately, we end up with graphs like these, which show the patterns of interaction within your organization. Then you can start seeing where the silos are, right? You can see the islands: these folks are working by themselves on the periphery. You can see two or three major projects over here, and there is some cross-interaction. And then you can see that these are all continents, so not so much silos as continents. With a visualization like this, with metrics like this, you can start having an idea of how to make changes, and you can assess the health of collaboration within your organization. This is all generated by the free, open source GrimoireLab tooling. So here: continents and communities versus archipelagos, right? This is one project with one developer, versus many developers contributing to many projects. Here you can see collaboration, you can see the lines of collaboration, while these are isolated projects on the periphery. And here are silos and disconnected communities: you can see that there's a clear dividing line between that part and that part, up and down. So this is the tool. It's open source at the Linux Foundation. It collects and displays data, it supports 30-plus collaboration platforms, and there's a whole bunch of metrics included. And within the InnerSource community there are patterns for this. So if you choose to take this approach and you start implementing metrics in your organization, please don't do it alone.
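The contribution graph described above can be sketched with nothing but the standard library. This is a toy model under my own assumptions: the unit names, developers, and projects are hypothetical, and the edge weight between two business units is simply the number of projects they both contribute to. A real deployment would pull this data from a tool like GrimoireLab rather than a hard-coded list.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical contribution records: (business_unit, developer, project).
contributions = [
    ("payments", "alice", "auth-lib"),
    ("payments", "bob", "auth-lib"),
    ("logistics", "carol", "auth-lib"),
    ("logistics", "carol", "routing"),
    ("retail", "dave", "retail-site"),  # an "island" on the periphery
]

# Which projects does each business unit touch?
projects_by_unit = defaultdict(set)
for unit, _dev, project in contributions:
    projects_by_unit[unit].add(project)

# Edge weight between two units = number of shared projects.
edges = {}
for a, b in combinations(sorted(projects_by_unit), 2):
    shared = projects_by_unit[a] & projects_by_unit[b]
    if shared:
        edges[(a, b)] = len(shared)

print(edges)  # units that appear in no edge at all are the isolated silos
```

Here "retail" ends up with no edges, which is exactly the isolated-island pattern the visualization makes visible at a glance.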
We have a community, the InnerSource Commons. Come and talk to us about the metrics that you choose to implement, and share some of your knowledge, right? Because that way, as I said, the more street lights out there, the more likely we'll find our keys together. So here are metrics in InnerSource; this is an example. You can see that these are patterns of contribution and collaboration: darker numbers mean more contributions. And then you can see here, for example, the number of PRs opened and commits made per person. These people are probably good candidates to become your trusted committers, which in InnerSource is something like a maintainer. So you can invite them to be a maintainer, or trusted committer, for your internal projects. Now, policy metrics measure the application of a policy. Let me start from the goal. The goal here is to increase coding collaboration, right? When you are making that metric, you want to increase collaboration. One of the things you can do is use a root cause analysis to find the factors that reduce collaboration. After the root cause analysis, you might come to the conclusion that the commits are too big: they are difficult to review and fix, so they just get ignored, and the person making the contribution ends up quite sad and upset. One way to fix that is to start reducing the size of the commits, for example. Then we continue with the analysis. We have questions, right? How well are we applying our policy? So we have policy metrics: the evolution of lines per commit. We can track lines per commit and see whether they're going down or not; then we can see if the policy is succeeding. We can also look at the median review time. Then: are we being misled by circumstances? Are we causing unwanted side effects?
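The two policy metrics just mentioned, lines per commit and median review time, are easy to compute once you have the raw events. A minimal sketch, with hypothetical commit sizes and review timestamps standing in for data you would normally extract from your forge:

```python
from statistics import median
from datetime import datetime, timedelta

# Hypothetical lines-changed-per-commit samples, before and after the
# "smaller commits" policy was introduced.
before = [850, 1200, 640, 980]
after_ = [220, 180, 310, 150, 260]

# Median is more robust than mean here: one giant refactoring commit
# should not mask the overall trend.
print(median(before), median(after_))

# Companion metric: median review time in hours. Smaller commits should
# be faster to review, not merely smaller.
reviews = [
    (datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 15)),  # (opened, first reviewed)
    (datetime(2024, 1, 2, 10), datetime(2024, 1, 3, 10)),
    (datetime(2024, 1, 4, 8), datetime(2024, 1, 4, 12)),
]
review_hours = [(done - opened) / timedelta(hours=1) for opened, done in reviews]
print(median(review_hours))
```

Tracking both together is what guards against the side effects mentioned above: if lines per commit drop but review time does not, the policy is probably not achieving its real goal.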
So we can track other metrics for context, for example the number of PRs issued per timeframe as a stress metric. Now, with metrics, with great power comes great responsibility, right? Because you're looking only where the lights are, it's a slippery slope with a lot of pitfalls. So I just want to stress again: seek help from the community, share what you're trying to do, and let us build up a database of metrics together. We are standing on the shoulders of giants here. There has been a lot of work done both by the InnerSource Commons as well as the CHAOSS community. Let me get this moved. This is the CHAOSS community website: C-H-A-O-S-S dot community. You can go in there and look through the metrics that they have. Let's look through, for example, the contribution metrics. You can see we have metrics on types of contributions, activity dates and times, time to first response, and so on. Let's take time to first response, for example. We have a description: the first response to an activity can sometimes be the most important response. We have objectives: time to first response is an important consideration for new and long-time contributors to a project, along with overall project health. We have implementation guidelines, visualizations, et cetera. A lot of information about these metrics is available, so please look through the website. They work for open source projects as well as InnerSource projects. Please look through it and see what you find helpful. And subsequently, if you find that you can contribute an improvement, please open a pull request. So, back to our slides. In summary, I suggest that you implement your metrics, and that you implement them early on; if you're a startup, I suggest you do it early in your journey. And please don't roll your own metrics. It's a bit like crypto: don't roll your own, because you are very, very likely going to make mistakes.
Please seek help from the community while you're doing this. Before I go, I just want to acknowledge these entities for their contributions to these slides. And thank you very much. Thank you. Yes. Okay, so to that question: basically, I believe those are combinations, so we can extract data from GitHub and GitLab as well as other platforms. You're from the operations side, coming from incident management tooling, and you're asking whether the technology is available to do integration. To do the integration with incident tickets? Yes, yes. It integrates with over 30 platforms, and it has a plug-in architecture so it can support even more. But most of the popular platforms are supported by GrimoireLab. Thank you. Any other questions? Okay, well, thank you very much.