Let's get started. Hello, everyone. This is the next session, and the topic is how visualizing flow changed the way we work, presented by Roman Dekal. Feel free to start.

Hi. Yeah, so my name is Roman. I'm from Vienna, from Austria. Until a few months ago, I worked at Elektrobit, which is an automotive software supplier. I now work at GraphMasters on making traffic jams a thing of the past, but today I have the honor to speak about some of my personal epiphanies of recent years with my talk, Are We Really Moving Faster? How Visualizing Flow Changed the Way We Work. I want to tell that story today, but I'm also very interested in the discussion and look forward to your questions.

So in 2018, we faced a major problem: delivering value in a flexible way, at speed and high quality, to internal and external customers. We were hindered by long development cycles, six to twelve month budgeting periods, high workloads, and priorities that were often changing. For example, a full cycle of building and testing one of our products took more than 24 hours. So when you did something in the afternoon, you sometimes didn't get feedback the following day, but only the day after. This had a negative impact on developer morale, and it felt like quicksand: the more we fought it, the more it pulled us in. We knew there must be a better way.

By experimenting and by identifying the bottlenecks in the build and test runs, we were able to cut the full build-test cycle by a factor of three in the first few months of 2018. Moving our code to a Git monorepo and containerizing our build environment in 2019 allowed us to give our developers feedback on every commit within minutes, not hours. Furthermore, automating our delivery allowed us to provide a new version of our software at the click of a button. That's great, right? But after putting in all these countless hours, improving the deployment pipeline, investing in automation, and deploying new technologies, it was time to ask a very fundamental question: are we really moving faster? Which, as it turned out, was quite hard to answer.

So the first step we took was making the current status and our work visible. We were missing something called ambient awareness. I think I first read about this in Michael Nygard's 2007 book, Release It!. The idea is to create an ambient display, an interface between people and digital information, which represents data, for example the health of the system, with the help of sound, visuals, movement, or other cues. If you look on the internet, there are various ideas out there: simple displays, ambient orbs, lava lamps, build lights, USB rocket launchers, traffic lights, you name it. These kinds of information radiators should be put in a highly visible location to promote responsibility in the team. They also show that you have nothing to hide.

I had already been collecting the data I tracked for a weekly status meeting on a wiki page by hand for a few months. However, I wanted to visualize our work on an automated dashboard. When we moved to a new office, there were, as usual, some logistical problems, so I had some time to play around with an open source dashboard framework called Smashing, and I got hold of a spare Raspberry Pi. Based on the metrics I had collected by hand for a few months, I visualized the following: the next milestones, important dates, and releases; the number of open pull requests; the number of open support tickets; the Jira tickets per status, as also visible on our Kanban board; and the status of our check-in jobs, including build times and failing tests.
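If you're curious what feeding such a dashboard looks like: Smashing widgets accept data pushed as JSON over HTTP. Here's a minimal sketch in Python; the host, widget id, and the "current" field are assumptions for illustration, not the actual code from the talk.

```python
# Minimal sketch: push one metric to a Smashing dashboard widget.
# Smashing exposes an HTTP API: POST /widgets/<widget_id> with a JSON
# body containing the dashboard's auth_token plus the widget's data.
# Host, widget id, and field names here are assumptions for illustration.
import json
import urllib.request

DASHBOARD_URL = "http://localhost:3030"  # e.g. the Raspberry Pi in the hallway
AUTH_TOKEN = "YOUR_AUTH_TOKEN"           # set in Smashing's config.ru

def push_widget(widget_id: str, data: dict) -> None:
    """Send one data point to a Smashing widget."""
    payload = dict(data, auth_token=AUTH_TOKEN)
    request = urllib.request.Request(
        f"{DASHBOARD_URL}/widgets/{widget_id}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(request)

# Update the hypothetical "open pull requests" number on the display.
push_widget("open_pull_requests", {"current": 17})
```

A cron job or CI hook calling something like this after every build is enough to keep the display current.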
I implemented the first version of the dashboard and set it up in the hallway, ready for the official opening of the office, where it sparked a lot of interesting discussion. So we somehow succeeded in bringing back ambient awareness, but that's when I noticed a problem: we were creating too much inventory. Looking at the dashboard, the inventory was staring me in the face, as the number of tickets done but not released kept increasing every day. Imagine all these items were boxes lying around in the hallway; they would have been much harder to ignore than a number on a dashboard. These items, features, patches, or whatever, don't have any value as long as the users cannot get their hands on them, so it doesn't really make sense to add more. The bottleneck in our development had shifted to testing and releasing, and we were creating a lot of inventory. However, when I read about the typical progression of bottlenecks in the DevOps Handbook, I was reassured that our efforts were at least going in the right direction. But how could we measure that?

I stumbled upon a tweet by Jez Humble, commenting on a presentation by Adrian Cockcroft, which made things clearer to me. If your team is cross-functional, it is acting in two domains at the same time: it has to deal with the fuzzy front end of product design and development, as well as with product delivery, so build, test, and deploy. The key is that you have to fix product delivery first, because once you have a low-variability, reliable delivery process, you can work in smaller batches in the design domain and take an experimental approach to product development and process improvement.

There are metrics available for both domains. Let's have a look at product delivery first. In DORA's Accelerate State of DevOps report, the authors identify four key metrics to differentiate low, medium, and high performers: lead time for code changes from check-in to release, deployment frequency, time to restore service, and the change fail rate as a measure of the quality of the release process. There's also availability, but they did not include it in their cluster analysis, as it does not apply in the same way to different software products. The authors show that these metrics do not represent trade-offs between throughput and stability; rather, high performers succeed in improving all four metrics at the same time, and stability and speed enable each other.

Given that we had created the capability to release our software at the click of a button and were able to provide hotfixes within days when necessary, I knew we had already made some progress in this domain. However, even though we were able to ship more often, inventory went up, so I wanted to take a closer look at the product design and development domain as well to better understand that.
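Before getting to the design domain, a quick aside: the four key delivery metrics are easy to compute once you log commits and deployments. Here is a minimal sketch with invented sample records; it is not how we actually collected them, just an illustration of the arithmetic.

```python
# Minimal sketch: the four DORA key metrics from assumed deployment records.
# All records below are invented sample data for illustration.
from datetime import datetime, timedelta
from statistics import median

deployments = [
    # (commit_time, deploy_time, deployment_failed)
    (datetime(2020, 3, 2, 9),  datetime(2020, 3, 3, 14),  False),
    (datetime(2020, 3, 9, 11), datetime(2020, 3, 10, 10), True),
    (datetime(2020, 3, 16, 8), datetime(2020, 3, 16, 17), False),
]
restores = [timedelta(hours=4)]  # time from each failure to restored service

lead_times = [deploy - commit for commit, deploy, _ in deployments]
period_weeks = max((deployments[-1][1] - deployments[0][1]).days, 1) / 7

print("Lead time for changes:", median(lead_times))
print("Deployment frequency :", round(len(deployments) / period_weeks, 2), "per week")
print("Change fail rate     :", sum(f for *_, f in deployments) / len(deployments))
print("Time to restore      :", median(restores))
```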
At this time, I had just finished reading the Project to Product book by Mik Kersten, where he introduces the Flow Framework. The Flow Framework defines four different flow items, features, defects, debts, and risks, which together reflect all the work you do. It also defines four flow metrics, which I set out to measure: flow load, which measures the number of flow items being actively worked on; flow time, which measures the duration it takes for a flow item to go through the value stream; flow efficiency, which measures the proportion of time flow items are actively worked on relative to the total time elapsed; and flow velocity, which measures the number of flow items completed in a given time. I also wanted to visualize flow distribution, which shows the allocation of flow items of each type in a particular flow state.

So, back to the drawing board: I created another dashboard with Smashing that visualizes these metrics. I also started tracking business value, cost, quality, and team happiness, which are also part of the Flow Framework, with a survey, and correlated them with the flow metrics as proposed by the framework.

After the next deployment, I stood in front of the dashboard and looked at the numbers, and it dawned on me: we were shipping more often, but we didn't deploy from master, rather patches from a release branch, and on average we had gotten slower. We had established a fast lane for fixes, which were fixed on master and backported to the release branch, but it still took us too long to ship the features that were waiting to be released. So we looked into cutting our release cycle for major releases from every half year to each quarter or even shorter. Still, it seemed as if we were always late, with priorities and requirements changing in between these cycles. I felt like we were improving our development process, constantly running but remaining in the same spot, as in the Red Queen's race in Lewis Carroll's Through the Looking-Glass, and What Alice Found There. "Well, in our country," said Alice, still panting a little, "you'd generally get to somewhere else if you ran very fast for a long time, as we've been doing." And the Queen answers: "A slow sort of country! Now, here, you see, it takes all the running you can do to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!"

But then I had another epiphany: we will never be able to run fast enough. As I later found out, we were indeed trapped in the local optimization and urgency paradox described by Jonathan Smart. He says that valuable ideas sit in twelve to eighteen months of big upfront planning with no sense of urgency, with no questioning of whether they might add more value than what has already been locked into the plans for the year. But as soon as these ideas reach the product development team, they're urgent. It was not the first time I had heard about this difference between agile development and business agility, but the "we are so freaking agile" picture that you see on the screen, from Klaus Leopold, really drove it home for me. We had a limited system focus, and we had to turn to another powerful tool: value stream mapping.

The first way of DevOps emphasizes systems thinking. It highlights the importance of flow from ideation to deployment and support. While process improvements focus on where value is added, so inside of these boxes, value stream analysis focuses on identifying bottlenecks and eliminating waste between them, and this approach often has much higher leverage. If you take a look at all the activities and processes creating the product, you can split them into two main types: value adding and non-value adding activities. Value adding activities are those that must be completed to satisfy your customers. Non-value adding activities are production or service related activities that simply add cost to, or increase time spent on, a product or service without increasing its market value. But not all of them can be eliminated; some are non-value adding but still necessary, for example to comply with regulations or organizational policies. So the goal is to eliminate non-value adding activities, minimize necessary non-value adding activities, and optimize value adding activities.
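As a toy illustration of that classification (every activity and duration below is made up), tagging each step and totaling the time per type immediately shows where the leverage is:

```python
# Toy sketch: total time spent per value-stream activity type.
# VA = value adding, NNVA = necessary non-value adding, NVA = pure waste.
# All activities and durations are invented for illustration.
from collections import defaultdict

activities = [
    # (name, hours, type)
    ("implement feature",           16, "VA"),
    ("code review",                  4, "VA"),
    ("wait for build slot",          6, "NVA"),
    ("rework after unclear handoff", 8, "NVA"),
    ("compliance documentation",     3, "NNVA"),
    ("release approval",             2, "NNVA"),
]

totals = defaultdict(float)
for _, hours, kind in activities:
    totals[kind] += hours

total = sum(totals.values())
for kind in ("VA", "NNVA", "NVA"):
    print(f"{kind:>4}: {totals[kind]:4.0f} h ({totals[kind] / total:.0%})")
# Eliminate NVA, minimize NNVA, optimize VA.
```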
Our value stream mapping exercise led to another epiphany: we needed to change the way we work. Our work typically started with the product manager or owner gathering requirements based on customer requests or business needs. The development team then added the feature to the backlog, planned it for an iteration, and implemented it. The code was then built, integrated, and tested, and finally got deployed and released to the customer, where, if everything worked well, it created the desired value.

In value stream mapping, the goal is not to map every single step in detail, but rather to get an overview with five to fifteen process blocks. For each process block, the team that performs it, the activity, and its name are recorded. Real data is gathered about the current state: the people involved, barriers to flow, the amount of work in each process block, as well as the queues and inventory sitting between these processes. Additionally, three key metrics are recorded for each block: the lead time, the process time, and the percent complete and accurate, which is the proportion of times a process receives something from an upstream process that it can use without requiring any rework. Based on the current state, the future state is designed and the necessary improvements are identified.

The biggest benefit of this exercise was visibility. I have the feeling that most of the problems were already known and in our heads, but it really helped to visualize and discuss them with a larger audience. It was obvious that we could not make all the necessary changes ourselves, so we had to socialize the maps. The best way to do that is not distributing a digital version or posting them on the wall, but rather talking about and discussing them. So that's what we did. We also gathered the estimated effort and the anticipated benefit for each improvement idea. With this data, we created a Priority, Action, Consider, Eliminate matrix, the PACE matrix. Using the findings from the value stream analysis as the one list to rule them all allowed us to pull in the same direction. You typically start with the priority items, the just-do-it items, which have low effort but very high benefit, and then you continue with the action items, which, for example, need more planning and have to be put on the roadmap.

So let's look at the numbers. What we already knew is that having two to three major releases each year and providing monthly patches did not provide the quick feedback we needed from the market. This also showed in this graph. We looked at the flow time of the tickets, or items, released in the six months before February, and what you see here actually fits our deployment schedule: a fast lane for patches in the zero to 19 day and 20 to 39 day ranges on the left, and another peak around 120 days, which can be explained by our two to three big feature releases per year.
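If you want to reproduce that kind of histogram from your own ticket data, the bucketing is only a few lines. A sketch with invented flow times:

```python
# Sketch: bucket flow times (in days) into 20-day ranges, with everything
# beyond 120 days collected in one bucket, mirroring the talk's histograms.
# The flow times below are invented sample data.
from collections import Counter

flow_times_days = [3, 8, 14, 25, 27, 31, 118, 121, 125, 130]

buckets = Counter(min(days // 20, 6) * 20 for days in flow_times_days)
for start in sorted(buckets):
    label = f"{start}-{start + 19}" if start < 120 else "120+"
    print(f"{label:>8} days: {'#' * buckets[start]}")
```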
So looking at the results, it was clear and backed by numbers that we needed to release features more often. Thanks to various further improvements, we were able to move to a monthly cadence starting from March, and to create the release branch from master at the latest possible date, just in time for the release, as proposed by trunk-based development, by the way. Then, in June, we did the same analysis of flow time again for the previous six months. You can see that there's still a very long tail, and there's also some overlap with the previous analysis, so there are still quite some tickets in the 120 day bucket. But what is also obvious is that most of our changes are now nicely aligned in the 30 day bucket, which fits our monthly cadence closely.

So we were now able to answer the fundamental question I had raised two years earlier: are we really moving faster, to get faster feedback, learn quicker, reduce risk, monetize earlier, and maximize outcomes? Yes, we finally are.

To summarize, during the last two years I had three epiphanies at my job. First, making work visible revealed that we were creating too much inventory. Second, visualizing flow showed that we will never be able to run fast enough. Third, value stream mapping made it obvious that we needed to change the way we work. Only through the insights from all of these exercises were we able to move faster.

If you want to know more, I look forward to the discussion. At the same time, I have to say that we just tried what the authors of these books are saying, so I think you should really read some of them, or watch the authors' talks, which are available on YouTube, for example. If you're interested in the dashboard that I created with Smashing, you can find the code on GitHub and blog posts with some more technical details on my website. However, I also have to say that after I implemented the dashboard, I found that there are, of course, several professional solutions dedicated to the same or similar needs. So if you have a more complex setup or want to do something more serious, you might want to look into those. In any case, implementing this myself really helped me understand the underlying concepts. If you want to know more about value stream management, there's also the Value Stream Management Consortium now, which is a very good resource to start with, and I've put the website on the slide as well.

Last but not least, I want to say thank you. Thanks to you for listening, thanks to the DevConf organizers and the sponsors, thanks to the Smashing community, and also thanks to all my former colleagues who were part of this story.

Thank you very much for your presentation. As I'm looking at the Q&A section, I don't see any questions, so we still have five minutes; if you want, you can use them as you wish. For everyone: if you have any questions, feel free to put them in there. And if no questions come, you can go after the session to WorkAdventure. It's a virtual platform where you can interact with each other and discuss anything you want.

Perfect. Thank you, Lugumi, for hosting this, and thanks also for the very positive feedback in the chat. Thank you very much.

Okay, thank you. So please go there if you want. The next session starts in nine minutes. Thank you.