Welcome to the Pragmatic DevOps session by Vaidik. For those who don't know Vaidik, this is his third year at Agile India. He's an experienced engineering leader with a focus on building products, infrastructure and high-performance organizations. He has a deep interest in DevOps, SRE, cloud-native technology, software architecture and technology leadership as a craft. Currently he's working as a freelance technology consultant, helping startups in India rethink their approach to software delivery, operations, architecture, DevOps and technology leadership. Prior to this, he worked at Blinkit, formerly known as Grofers, as their VP of Engineering for DevOps and security. Without further ado, I would now like to invite Vaidik to talk us through Pragmatic DevOps.

Thanks for that intro, Richard. Hi everyone. I hope you're enjoying the content so far. I know it's a weekend morning, so thanks a lot for making it to the session, and I hope to make it worth your while. Like Richard said, my name is Vaidik Kapoor. I am an independent technology consultant. Before this, I spent six years leading engineering at Blinkit, formerly Grofers. We sell groceries online.

When Grofers started, our technology practices were far from ideal. A lot of things went wrong in how we set up our technology teams, and thankfully we managed to course-correct in the following years. Today, I want to talk about why and how we kept going in the wrong direction, how we course-corrected, and what lessons of pragmatism we learned from that journey. Finally, we will discuss the much-disputed topic of maturity models and how we used them at Blinkit. Hopefully along the way we will engage in an interesting conversation, during and after the talk.

Now, I usually start my talks with a story, because stories help us understand the context of an environment. My talk today is all about context. I am going to tell you a little story first, and the core of the talk will follow a little later. Please bear with me.

Grofers started in 2013 as a hyperlocal grocery marketplace delivering orders in 90 minutes. You could go on our app, find the nearest grocery store, place an order, and we would deliver it to you. We started with a simple architecture: three services, or you could call them applications, if not quite microservices. One was the backend for our consumer-facing app, one was for catalog management, and one was for everything to do with the order footprint: order tracking, support, and so on. This was simple and good enough to start with. It worked fine for us initially to build quickly and roll out new features. The business was also simple enough that it hadn't really become very complex. Everything was on AWS from day one.

As the business grew and we solved more problems, and especially as more people joined our tech team, it naturally became hard to work with all these applications. A lot of business problems were being worked on in parallel, so more people working on the same code bases at the same time usually meant overstepping and handshakes at some level. More bandwidth seemed like the biggest bottleneck, so we acquired another company to double our engineering strength. The timing was also such that we needed to move really fast as a startup. We were not going to take a pause and reflect on how we were going to collaborate on these code bases that were becoming more complex every day.
So we couldn't build the tooling, the practices and the developer experience that would allow all teams to move fast with that setup. And while there are companies that have managed to successfully work with monolithic code bases with far more developers than Grofers, I think we were a young team, not mature enough to really understand what we were dealing with, especially with the growth pressure that we had. Being able to divide and parallelize seemed simpler than figuring out how to make monoliths work for us. So adopting a microservices architecture seemed like the next best step.

We started breaking our application into microservices to enable teams to work on problems independently. Every time we saw a new problem that could be worked on independently by a team in a domain, without dealing with the complexity and chaos of our existing code base, we'd spin off a new microservice. Teams starting new microservices would choose their own stack to attack the problem at hand, so that we were choosing the best technology for the problem. And to make our teams truly autonomous, we felt it was important to give them ownership of systems end to end. If someone needs to run a tech experiment and gets blocked because they don't have access for it, we're just not moving fast enough. So we pushed the idea of developer and team autonomy as far as we could: we adopted the "you build it, you run it" philosophy and enabled teams to make their own technical decisions and manage the entire stack, including infrastructure and operations like configuration management, scalability, resilience and even handling incidents.

The first bottleneck was provisioning infrastructure resources. We were on AWS, but not really leveraging it as much as we should have. Requests to launch and configure EC2 instances, set up databases, and so on would come directly to the extremely under-resourced infrastructure team, which was essentially just one person: me. With the product engineering group growing rapidly, it was impossible for us to keep up with incoming requests. So we decided to get out of the way of developers as quickly as possible. We built automated tools that allowed developers to safely create infrastructure resources without intervention from the DevOps team. It really opened up possibilities for our teams to quickly try things out in test environments and put them in production. There was pressure to move fast, and a big artificial bottleneck was now out of the way.

The DevOps team was responsible for governance: for providing processes and tools for developers to really own the entire application lifecycle end to end. One big responsibility for the DevOps team was to coach developers and help them architect for the cloud. For example, we felt the need for configuration management as a practice. It was really important for us, as it would help us manage changes better, help with CI/CD, and help with automation for autoscaling properly across services. To scale config management well, we decided we would coach all developers on how to use Ansible for their applications, for config management, CI/CD and autoscaling. We got to a stage where pretty much every developer could work with Ansible, to the extent that almost every application was managed using Ansible. And this is a great place to be: the DevOps team was not really coming in the way of developers.
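To make the self-service idea concrete, here is a minimal sketch of what a provisioning guardrail like the one described above can look like. This is an illustration, not Grofers' actual tooling: the allowlist, required tags and AMI parameter are made-up policy values, and only standard boto3 calls are used.

```python
# Illustrative self-service EC2 provisioning with guardrails.
# Policy values (allowlist, required tags) are hypothetical examples.
import boto3

ALLOWED_INSTANCE_TYPES = {"t3.micro", "t3.small", "m5.large"}  # hypothetical policy
REQUIRED_TAG_KEYS = {"team", "service", "environment"}         # hypothetical policy

def launch_instance(ami_id: str, instance_type: str, tags: dict) -> str:
    """Launch an EC2 instance only if it satisfies the guardrails."""
    if instance_type not in ALLOWED_INSTANCE_TYPES:
        raise ValueError(f"{instance_type} is not in the approved list")
    if not REQUIRED_TAG_KEYS.issubset(tags):
        raise ValueError(f"missing required tags: {REQUIRED_TAG_KEYS - set(tags)}")

    ec2 = boto3.client("ec2")
    response = ec2.run_instances(
        ImageId=ami_id,
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": k, "Value": v} for k, v in tags.items()],
        }],
    )
    return response["Instances"][0]["InstanceId"]
```

The point of a validation layer like this is that governance lives in code, so the platform team no longer has to sit in the request path.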
Every day, developers were able to build features, reconfigure their applications for those new features, and just ship them to production. We were proud that we were able to build these kinds of behaviors in the team. And all this worked really well. Or at least that's what we believed was happening.

In early 2018, we realized that we had an illusion of agility. Teams were working independently on their microservices, deploying multiple times a day, but there were not enough guardrails for quality. We were creating a lot of waste. We were shipping poor-quality products that were frustrating our customers, internal users and management. Our engineers were burning out, as they were busy firefighting most of the time instead of shipping value to customers. Systems had become so complex that tech debt was getting worse. Writing code was a terrible experience for most teams. Developers wanted to leave because it was not fun working on those code bases.

We used to think that solving for autonomy alone, by creating boundaries and saying "you build it, you run it", was enough, and that our teams would own the quality of what they shipped. And to an extent they did. Our teams did what they felt was right within their control and within their boundaries. But they did not have a systemic view of what was happening in the overall environment. And as a leadership team, we did not do enough to provide oversight over our entire architecture. We ended up with serious problems that changed the course of our work.

We had a proliferation of microservices: with too much freedom and absolutely no guardrails, teams could create new microservices at will. We were continuously making our systems more complex, and microservices eventually became hard to develop, test, release and monitor in production. In many cases, the boundaries between teams were not clear enough, leading to handoffs, slow releases and a complete lack of ownership. Our quality feedback loops became extremely poor, so poor that we were mostly getting to know about bugs from our customers, customer support, and sometimes directly from the CEO.

We also ended up with an extremely diverse tech stack. This slide doesn't even paint the entire picture, because honestly it is not easy to list out everything. Since technical decisions were localized and democratized, we ended up with a pretty diverse stack. You name it, we had it. We had several tools that fulfilled the same purpose. It just didn't make sense. The unnecessary diversity stopped us from achieving economies of scale, because of the lack of standards and common tooling, and most importantly, the lack of mastery over anything at all. Every tech stack required a unique way of thinking about continuous delivery and the management and maintenance of applications, which made our journey a lot more painful. And the worst part was that it took us a lot of time to figure out what was wrong.

When we realized that quality was an issue for us, we immediately created organizational focus to improve quality. The entire technology leadership was driving quality as an agenda. Teams were excited about improving quality. Writing tests was largely believed to be debt that we must pay off. It was not just pushed down from the top; teams really wanted to do it. We were using OKRs at the time for setting goals. So we would have OKRs like "improve test coverage", with key results like 80% coverage in all the services, and nothing would get done. We were like, all right, maybe it's our first attempt.
Let's try once again and be more realistic. So we took another attempt with more realistic goals and reduced the scope to fewer services. We made almost no meaningful progress even after changing our goals. Maybe a couple of teams made some significant progress, but largely we were not moving forward. With all the organizational support and alignment, we couldn't make any meaningful progress. And it was quite demotivating for our teams, because they really wanted to improve things but were not able to. They were not seeing the success they were waiting for.

So we felt we had taken a big goal that was impossible to achieve in the time frame. We decided to take another quarter with more realistic goals: a reduced coverage target, focused only on a few critical services. But we made one more change: we allowed teams to pick up some of their localized problems. We had a very interesting observation in this experiment. We saw teams make progress on several fronts. Onboarding new engineers was a problem, so READMEs got better. Integration across services and sharing of contracts was getting hard, so some API docs got written using Swagger. Recurring issues in production were hard to debug, so we improved our logs and monitoring. All of this happened in bits and pieces, wherever the teams felt these were their problems. But testing still did not get better. We did not really improve our test coverage on any front. So for some reason, we were not able to make progress on testing while we were able to improve on other fronts. The next quarter, we were even able to attack new problems, like load testing and improving some architectural issues to support our largest-ever online sale, where we expected 3x of our regular traffic. We made things happen, and the sale was extremely successful.

So this was the story I wanted to tell you, with as much detail as I could share in the interest of time. Of course, there's a lot more that I have skipped. What I want to do next is share some of the lessons we learned in this journey.

The first lesson is that there's no such thing as a best practice that you must follow. Best practices should probably be called recommended practices. We almost never achieved any of the goals where we wanted to fix a practice across all the services. Goals like "let's increase coverage to 80%" or "let's define SLOs in all the services" were never achieved. In retrospect, we didn't get those things done because we didn't need to follow all those practices all the time, in all the places. The value was just not clear enough, or it was not worth it. For example, we started managing infrastructure as code many years back, but it was not necessarily done for everything. It was usually the parts that were fast-moving, frequently changing or expected to change frequently, too critical for manual errors, or that had to be democratized. And that was good enough for us.

Another story was with config management. Before Grofers, I was coming from the world of Puppet. I loved Puppet for what it was: declarative config management. But when I introduced Puppet at Grofers, our teams really struggled to get started with it quickly.
Our reality back then pushed us to look for something that was much simpler for our teams to understand, quicker to adopt, and extensible for most people. Ansible proved to be the better choice for us, even though I still hold the belief that Puppet, as a configuration management tool, was probably better than Ansible at what it does.

Lesson two: DevOps practices that have a clear plan for adoption get adopted faster, especially when the plan is attached to outcomes. Case in point: the time when our teams decided to improve their documentation. If you don't have a culture of documentation, you have to be careful about how you introduce it and change the culture. What problems are you trying to solve? We went from saying "we need to improve our documentation everywhere" to "we need to improve documentation to help onboard new engineers faster". Our teams felt that without minimal documentation, onboarding new engineers was becoming a big problem. It was affecting the teams directly. The outcomes and the associated tasks were clear enough: every repo should have a README with a brief description, clear and well-tested setup instructions, recommended tooling for development, and clearly defined owners. And so it got done without a lot of stress. We made good progress.

At the other extreme of this was testing. There were several holes in our plan to get better at testing. One big reason we were not able to progress on testing was that most engineers on our team didn't know which tests are valuable enough. Unit versus functional testing was a constant debate. Another big challenge was that getting better at testing was a complex problem, deeply rooted in the problems of our microservices architecture, which required a completely different strategy for testing. We figured this out after constantly retrospecting over our many failed attempts to improve testing. We spoke about some of these challenges at another conference, the DevOps Enterprise Summit, last year.

So Siddharth has a question. Sure, I'm allowing him to talk. Siddharth, you can ask your question. Hey Siddharth. And also EPAM team three. I have unmuted both of you, Siddharth and EPAM team three, if you have a question you want to ask Vaidik directly. I guess they are AFK. Maybe we can connect again. Sure. Yeah. Or if they come back, I can always pause before the next part. Yeah. All right. So moving on.

Lesson three: we found ourselves prioritizing instead of blindly following all the practices across all the services. The cost of paying off all the debt altogether was very high. Whenever something felt like it came in the way of delivering value, or was a big risk, there would be someone on our team pushing hard for solving it, and those problems would get solved. Phrases like "critical services" became common in our conversations. That meant something, right? Our failures pushed us to adopting practices in critical services instead of all services. And even when we wanted to make changes in all services together, without a clear execution strategy nothing would ever get done to an acceptable level, the level where you could say we were finally getting value out of the investment. So having some prioritization framework helps convey the urgency and make progress. And progress is more important than being perfect. Every team, and by extension the services and code bases owned by them, could be dealing with different problems and might have different needs.
And the solutions for those problems need to be looked at differently as well, or the prioritization of problems to solve can be different. I've often seen teams get stuck on objectives like standardization. And while standardization is a great idea, standards and systems can also come in the way of moving fast. To what level should you standardize? It should depend on the economies of scale you want to achieve, not on doing things the same way just because that is how it should be. And often there could be something better to do than just driving standards; there could be other places where you can get more value. For example, our consumer-facing systems had scale-related challenges, whereas our supply chain systems had challenges of correctness and reliability. Every time we decided on an organization-wide technology investment that was not a real priority for every team, like adopting SLOs, we would make progress where it was a priority, but other teams might not be able to keep up.

And lastly, as team leaders, a lot of our job is to control the entropy of the entire system. As the organization and teams grow bigger, we produce a lot more cruft in our systems. The growth of entropy is largely inevitable, but the rate of growth of that entropy is in our control. Entropy goes out of control when everything is easy. It should be easy for us to do the right things, not just anything. For example, yes, microservices do enable us to choose different technologies for specific problems, but that does not mean it is okay to do it all the time without a pretty good reason. Unfortunately, reason and logic are hard to scale. So what do you do? I recommend that you make things harder, especially making technology choices. When you're small, you can make things harder by reasoning about everything, say with the head of engineering or the senior-most engineer on the team. But when you're growing into a larger team, you cannot be in every conversation. So you have to use economic levers to control entropy, so that when a person or a team decides to introduce more entropy, they understand that it has to be worth the effort. It should not be easy to introduce more entropy into the system. What could these levers look like? Baseline expectations like consistent contracts and different types of tests, or reviews like an architecture review, if not with the senior-most person then in a more democratized form, like reviews by adjacent teams of the same division.

Reflecting on our journey taught us some of the places where we were going wrong. Then we had to figure out: where do we go from here? How do we internalize these learnings in our execution across our teams? Unfortunately, we couldn't think of an easy way. We started to realize that there was a lot for us to learn. And this is the point where we got introduced to the concept of DevOps maturity, mostly by reading a bunch of really nice books that I'm sure this community is already aware of. Here's the first maturity model that someone on my team shared on a Slack channel. This is the continuous delivery maturity model from the book Continuous Delivery by Jez Humble and Dave Farley. In this, we found a way to articulate what we had learned: DevOps practices do not get adopted on day one. You move towards a vision, and there are intermediate steps.
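As the next part of the talk explains, the rows of such a model are practice areas and the columns are maturity levels. A minimal sketch of how that structure can be encoded is below; the area names loosely follow the continuous delivery model, but the entries and intermediate level names are abbreviated placeholders, not the published model.

```python
# A maturity model is essentially structured data: areas (rows) x levels (columns).
# Level names follow the talk's "basic to expert" framing; the middle names
# and the expectation texts are illustrative placeholders.
MATURITY_LEVELS = ["basic", "beginner", "intermediate", "advanced", "expert"]

CD_MATURITY_MODEL = {
    "culture_and_organization": {
        "basic": "Teams organized around components; infrequent releases",
        "expert": "Cross-functional teams; continuous improvement is routine",
    },
    "build_and_ci": {
        "basic": "Manual or nightly builds",
        "expert": "Every commit builds, tests and produces a deployable artifact",
    },
}

def next_step(current_level: str) -> str:
    """Return the next level a team should aim for in a given area."""
    idx = MATURITY_LEVELS.index(current_level)
    return MATURITY_LEVELS[min(idx + 1, len(MATURITY_LEVELS) - 1)]
```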
This framework highlights the different aspects that are important for turning the concept into execution. Each of the rows is an area that is important for practicing CD, and the columns, from basic to expert, are levels of maturity. You start from the left, and the expectation is that you're moving towards the right on each of the rows, maturing your CD practice. An important call-out in this framework is the first row, which is culture. Maturing in CD, and in most engineering practices, is not just about how you use tools and technologies but also about your ways of working. With a framework like this, you can clearly define those intermediate steps and also use them as internal or external benchmarks.

This was a good direction, and it made a lot of sense to us. But we couldn't really take this to our teams and expect them to use it, because it was too high-level and not prescriptive enough about practices in our specific context. The solutions were missing. And it is aspirational, in the sense that following the practices can become a goal in itself rather than delivering value. So the question we were asking was: how do you operationalize a maturity model? How do you go from "hey, we wish to become an elite team" to a plan and a system that pushes you to get better every day?

Here's probably one-sixth of the maturity model we developed at Grofers, inspired by many other maturity models and incorporating our learnings. We called this the microservices maturity model. The idea was to look at all the practices involved in building systems, instead of just one practice like continuous delivery. From a distance it seems similar to the one we just saw, but there are a few differences worth noting. Let's look at what we have here first. In the first column, on the left, we have something called pillars. In the second column from the left, we have something called areas, which are basically areas within pillars. This way it is not as high-level, and gets into a little more detail on what kind of practices we want to see. From the third column onwards, we have levels of maturity, level one through level four, level four being the most mature state. So structurally it is very similar to the previous maturity model. The pillars are sort of macro engineering practices, and the areas are more specific practices within those pillars. These are things you can borrow from other maturity models, like we did from some of the models already out there. But the key thing to understand is that what you decide to put in your model has to be important for your business, instead of focusing on everything. Remember, it's a journey: progress matters, not perfection. Depending on your business, industry and journey, you can craft a maturity model that focuses on the practices that are important for you today, and on the ones that are sort of infinite games that you must start playing now, or should have been playing already. For example, if you're an e-commerce business like Grofers, things like the ability to release fast and run many experiments in parallel without breaking customer experience are what matter. So you create focus on agility, experimentation, quality and resilience. That's what we essentially did.
But maybe you're a fintech business. Then things like correctness, transactional guarantees, security and compliance matter a lot more than experimentation. You're probably okay slowing down a little, especially when you're growing, rather than compromising on correctness and things like security. Or maybe you're a SaaS business. Then maybe reliability and compliance are more important. Maybe cost is very important for you, so you can create a systematic focus on that.

Moving on. From the third column onwards we have levels, just like the continuous delivery maturity model. The difference is that each level has two sub-columns. One is called "expectation" and the other is called "supported at Grofers". I will come to what "expectation" means later, but for now let's just read it like we did the previous maturity model. For example, for synthetic monitoring at level two, the expectation is that synthetic monitoring is to be used in production with alerting. The adjacent "supported at Grofers" column specifies the recommended way to meet that expectation. In this case, we suggest that services must implement a well-defined smoke suite with P1 test cases that can run in all environments and can be run periodically in production using Jenkins. So we don't just set the expectation but also describe how those expectations can be met. That's what makes a maturity model more prescriptive than open-ended. When a team looks at this, they know where they have to go and how they can get there in Grofers' context.

One of the key differences in our approach, which comes from our learnings, is that our maturity model is not aspirational. It's risk-driven. We don't try to make our services and teams more mature just because they should become more mature. It's not like a career growth plan that we might follow as professionals to grow in our careers. We get better because our business needs us to get better. And this is where we factor in different kinds of risks in our assessment of each of the services we're looking at. So the levels in the columns are not levels that you try to get to; the levels are pre-decided for every service, because they come from the criticality of the service in our environment. This change comes specifically from our learning that we found ourselves talking about critical services in our conversations, even before we introduced the microservices maturity model. We're not going to get better because we should get better; we will get better because we need to get better in some way. And this is why we call the first sub-column on the previous slide "expectations". A service at a given level is expected to meet certain expectations. For example, we have an area called service resilience, under which level-three services are expected to have circuit breakers implemented to avoid cascading failures, while a level-four service must practice chaos engineering to continuously validate that failures don't lead to cascading failures. The levels are pre-calculated on multiple parameters, like the frequency of code changes, the number of active collaborators, and whether the service is in the critical path of serving the customer. We try to calculate the risk automatically and centrally, using common logic as much as possible, assign a level to every service, and then see which services are where in their DevOps journey.
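Here is a minimal sketch of what this risk-driven level assignment can look like in code. The signals mirror the parameters mentioned above, but the weights and thresholds are invented for illustration; the talk does not describe Grofers' actual scoring logic.

```python
# Hypothetical sketch of "pre-calculated levels": score each service's risk
# from centrally collected signals, then map the score to an expected level.
from dataclasses import dataclass

@dataclass
class ServiceSignals:
    deploys_per_week: float   # frequency of code changes
    active_committers: int    # number of active collaborators
    in_critical_path: bool    # directly in the path of serving the customer?

def risk_score(s: ServiceSignals) -> float:
    """Combine signals into a 0.0 (low risk) .. 1.0 (high risk) score."""
    score = min(s.deploys_per_week / 10.0, 1.0) * 0.4   # made-up weight
    score += min(s.active_committers / 15.0, 1.0) * 0.3  # made-up weight
    score += 0.3 if s.in_critical_path else 0.0
    return score

def expected_level(s: ServiceSignals) -> int:
    """Riskier services are assigned higher levels, i.e. higher expectations."""
    score = risk_score(s)
    if score > 0.75:
        return 4
    if score > 0.5:
        return 3
    if score > 0.25:
        return 2
    return 1
```

With something like this, a checkout service deploying daily with many collaborators lands at level four and inherits expectations like chaos engineering, while a low-churn internal tool stays at level one.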
Once the levels are assigned, teams can self-assess and set out on their journey to reach the expected level, as guided by the maturity model. This started to make a lot of sense. It tied in really well with a structure where microservices are owned by teams. After teams did a few self-assessments for their services, it started to become clear to them what areas they needed to focus on, depending on the nature of their services. Right after, we had the first quarter where most teams organically arrived at the most relevant goals matching their reality, with minimal hand-holding.

And again, just prescribing is not enough. Teams today have to deal with so many decisions and so many different kinds of tools and technologies that expecting everyone to make the best decisions for everything they do is unjustified. This is where platform thinking comes in: clearly defining how we can help teams adopt various DevOps practices without spending a lot of time making decisions, and reducing the cost of transformation by achieving economies of scale. All the things highlighted in red and yellow are in the "supported at Grofers" column. These are possible solutions the platform teams came up with, with which they could potentially help teams adopt practices easily, but which are not productionized today. This way, platform teams also got a clear roadmap of things they needed to build. And of course, we made sure we were not tied to specific solutions. This is more of an outcome roadmap for platform teams than an exact list of things to build. When they get to the execution of each of these problems, the solutions may differ a little; this is just the best understanding of the solution given the information available to us today. It also stopped a lot of debate over "we should do this" or "we should do that". We now had a framework to accept or reject ideas and focus on platform execution. And I feel that is extremely important for platform teams, especially because the impact of their work is usually not clearly visible, sometimes even to themselves. An outcome-driven framework like this can help keep the platform teams aligned with product engineering teams and the business.

The utility and effectiveness of maturity models has been debated before, so the doubt naturally arises: do maturity models really work? In 2017, Dr. Nicole Forsgren, lead author of Accelerate, argued that maturity models don't work because they go out of date too fast, given the quickly changing technology landscape in an enterprise. And while I don't disagree with the point about technology moving fast these days, doesn't everything we do today get out of date very quickly? Isn't that true of technologies, ways of working and organization policies, with or without a maturity model? Are maturity models effective in the way we deployed them at Grofers? I think only time will tell, but we committed to doing this and also committed to revising the maturity model itself over time. And because platform teams derive their goals from this model, the relevance of everything the model says has been reviewed and questioned several times since we released the first version. For example, it was not that this maturity model was built in isolation.
It was first built by a small group of leaders in the technology organization, with some platform engineers and senior engineers. Then it was socialized and reviewed multiple times with people representing different teams, especially engineers who build things hands-on, because they face the challenges every day. It was very important to incorporate their perspectives, and also to share our perspectives with them on how we see the technology landscape needing to evolve and how this might help. This really helped us get adoption quickly, because we had practically covered every team before we announced that this was a framework we were going to run as an experiment. What we also noticed is that practices that have stood the test of time don't really change. The technology supporting a practice can change, and that's fine, because we deal with that kind of change anyway. We approached a maturity model as our solution to help us scale engineering management, with a team that was young and lacked experience but was highly motivated to get better. Your reasons could be different. You'd have to look at your reasons and approach building and deploying a maturity model according to them.

So yeah, that's it, folks. I hope you enjoyed the session as much as I did presenting it. I would love to take questions or hang out with you after.

All right, we have a couple of questions. First, from Siddharth. Siddharth, I'm unmuting you; you can ask your question.

Yeah. So could you please share your thoughts on security as it relates to DevOps, presently known as DevSecOps, and where we stand on that?

Hey Siddharth. Yeah, sure. I mean, I don't know where to start and where to end there. It's a really wide topic, right? And I think the answer also depends on, again, what your industry is and what product you're building. In some cases, like I said, say you're a fintech: your answer to that question would be that you have to drive controls a lot more tightly, right? But in the case of an e-commerce company, it might not require that kind of tight control. Irrespective of where you are, though, my answer is going to be slightly abstract, in the sense that a business always wants to move fast. How fast, again, depends on what kind of industry you're in. And security in the context of DevOps has to be more about building the right guardrails than coming in the way, right? You want to unleash your tech team to solve problems. This is very similar to how dev and ops used to be before: even if you're on AWS, if your team has to come to you to provision a new EC2 instance or get a new S3 bucket, that is basically an artificial bottleneck. The same thing goes for security. So, can you craft policies and processes that help your engineers do things in a secure fashion? I think that is the only way you can scale with cloud, because sooner or later your business is going to scale a lot. Let's hope that it happens. You'd want to leverage more of the cloud, and then things can go out of control. So DevSecOps, for me: DevSecOps is often construed as "we put security in CI pipelines". This is one aspect of it, I think.
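As a minimal illustration of that aspect, a pipeline step like the following could fail a build when any S3 bucket is missing a full public-access block. It is a sketch assuming standard boto3 APIs; real policy-as-code setups (OPA, AWS Config rules, and so on) go much further than this.

```python
# Illustrative CI guardrail: fail the pipeline if any S3 bucket lacks a
# full public-access block. Deliberately simple; a sketch, not a product.
import sys
import boto3
from botocore.exceptions import ClientError

def insecure_buckets() -> list[str]:
    s3 = boto3.client("s3")
    bad = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            cfg = s3.get_public_access_block(Bucket=name)
            if not all(cfg["PublicAccessBlockConfiguration"].values()):
                bad.append(name)  # block exists but is only partially enabled
        except ClientError:
            bad.append(name)      # no public-access block configured at all
    return bad

if __name__ == "__main__":
    offenders = insecure_buckets()
    if offenders:
        print("Buckets without a full public-access block:", offenders)
        sys.exit(1)  # fail the pipeline: a guardrail, not a human gatekeeper
```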
I think the other aspect of this is embracing the fact that we are on the cloud. A lot of things are going to be automated, or rather should be automated. And if they're automated, that means we are going to be doing those things faster. And that's fine. But let's make sure that when we are doing something repeatedly, over and over again, we are reducing risk in it, and we are doing that through automation too. That will only happen once you embrace the cloud. If you're moving towards the other trend, which is cloud-native technologies, you embrace that you're in a cloud-native landscape. You're not going to control every single thing every day. You're going to have to let go of control by creating good guardrails. I don't know if that answers the question. It might be slightly abstract, but that's the best I can answer right now.

All right. Yeah. Thanks, Vaidik. Another question is from Avi. Avi, can you ask your question?

Yeah, thanks, Richard. Hey Vaidik, really good session. So one question I have is: with the recent trend where we see 10-minute delivery happening across the e-commerce space, does technology have something to do with that, or is it more about the business model?

Oh, it's all technology. It's all technology, because I guess it's similar to cloud and infrastructure management. When you have so many servers and so many engineers interacting with those servers, you cannot have another person come in the way, because that just doesn't scale. That's where cloud comes in; that's where virtualization and automated interaction with infrastructure have taken their place. A lot of what we did at Blinkit is really that. Of course, not everything can be done using technology, but the business is fundamentally unscalable if you're not driving it with technology. For example, and this is a very interesting topic for me, supply chains in e-commerce businesses are not built the way you build data centers and cloud; rather, cloud and data centers have learned a lot from physical supply chains. We were previously in a warehouse-led model, where we could control a lot of things because we could have people on our payroll, and we would deliver the next day. But when you move to a 10-minute delivery model, you're not delivering from a warehouse. You're delivering from something called a dark store, which is much closer to you. There's no other way that the delivery is going to happen. Now, when we want to deliver to millions and millions of users, we are going to need hundreds and thousands of dark stores. How many people can you have on your payroll? And even if you have people on your payroll, how are you going to control their behavior? A lot of that just happens through tech, by observing how things are happening. You can apply the same principles used in scaled tech organizations: I want my engineers to do the right things, but I can't go and stand over their heads, so I build observability into behaviors. That's what brings you to things like engineering metrics, like the DORA metrics, an elegant set of metrics. A lot of things, for example, it's pouring here in Gujarat right now. Most of the tech interventions in a situation like that are going to be automatic, by observing the real-time stream of data coming from the software deployed on the ground. It might not be raining in Bangalore today, so the operations in Bangalore would be fine.
But in Gujarat, things are adjusted automatically at a locality level to handle that kind of disruption.

Thanks for sharing your experience with us today.