Good day and welcome, everyone. Today we have Vaidik Kapoor, who's going to talk to us about continuously deploying a distributed monolith. Over to you.

Hi everyone, thanks a lot for making it to the talk. I know it's 9am on a Saturday morning, but I really appreciate your participation and I hope to make it worth your while today. Thanks a lot to Agile India for giving me this opportunity — this is the second time I'm attending Agile India and the first time I'm speaking, so I'm super excited.

When I was considering proposing a session for Agile India, continuous deployment came up as a topic I've actively wanted to speak about, because it's also something I work on very closely at my current workplace. But continuous deployment has been discussed in the community for decades, so what could I talk about that would be unique for this audience? While adopting continuous deployment and continuous delivery practices brings unique challenges for every team, I think the complexity of working with a microservices architecture makes it even more challenging — and that's what we were experiencing at Grofers. So today I want to talk about our journey, our challenges, and some lessons we learned while trying to adopt continuous delivery practices while working with a microservices architecture.

Like I said, I work at Grofers. We are one of the largest online grocery services in India, and we've been around for eight years. We started as a hyperlocal marketplace of grocery stores, where customers could order online and get their orders delivered within 90 minutes. There was a need for a convenience service like this in India, we were solving this important need, and we grew rapidly. Like most startups, we started with a crazy idea and faced multiple challenges along the way, which changed our core value proposition and business model five times in the last eight years: moving from a hyperlocal marketplace, to a centralized warehousing model for fulfillment, to becoming a quick-commerce business that now delivers orders in under 10 minutes. Each of these changes was pivotal in our journey. We would not have been able to survive and be where we are today without the tremendous agility built into our organization, and we realized that agility and speed is one of our core strengths if we are to continue to stay relevant and keep transforming ourselves, supporting the entire organization in an extremely competitive business landscape.

This makes continuous delivery practices extremely important for the entire organization, because we are a tech business. Today we have about 2,000 employees directly working at Grofers, of which about 150 people make up the technology function, including engineering, product, design, and data. So while the organization is large, the technology function is relatively small. The business is driven by technology: decisions are taken by technologists, and a lot of decisions by the rest of the business leadership depend on what the technology team produces and makes available to them to function on a day-to-day basis.
So technology, and agility in our technology practices, is extremely important for us to continue to stay relevant. When we started, just like any other startup or any team that is trying to innovate, we started out with a very simple architecture. There were three services — if not microservices, then rather three applications. One was the backend for the consumer-facing app, the mobile apps you would use to place an order on Grofers; one was for catalog management; and one was for everything else to do with order fulfillment: order tracking, support, and assignment of orders to the operations staff who fulfill each order. This worked fine for us initially to build quickly and roll out features, and I don't think we could have done much better here, because we were very new to the retail business and we didn't really understand the domain.

As the business grew, more challenges and problems came, and as we hired more people to work on those problems, it eventually became hard to work across these applications. The complexity of our business domain needed us to work on multiple problems at the same time, and more people working on the same codebases at the same time usually meant overstepping on each other and handshakes at various levels to make sure we were meeting some standard — if at all we had the time to maintain those standards during a rapid growth journey. Because the business environment needed us to move really fast, we just couldn't take a pause and reflect on how we were going to collaborate on these codebases; we couldn't build the tooling and the development experience that would allow teams to move fast with this setup. While we were facing all of these problems, we did make sure on at least one level that we had fewer problems: we were mostly on the cloud from the very beginning — we were on AWS — so infrastructure-level challenges were not as big a problem.

And while there are many stories of teams successfully working with monolithic codebases, I feel we were not mature enough as a team to make a monolithic codebase work under the growth pressure we had. Being able to divide and parallelize always seemed like the most natural way to attack multiple problems and give different teams problem statements, autonomy, and enough of a boundary so that they could attack those problems at the same time. So we looked at adopting microservices as the natural progression for us. We started breaking our applications organically into microservices to enable teams to work on problems independently. Every time we saw a new problem that could be worked on independently by a team in a domain, without dealing with the complexity and chaos of the existing codebases we started out with, we would spin off a new microservice. And just like that, teams were spinning off new microservices and choosing their own stack to attack the specific problem at hand. The mandate largely was: use whatever you feel is the right tool for solving a particular problem.
And to make our teams truly autonomous, we felt it was vital for agility to give our teams ownership of systems end to end. So it was not just the applications — we also wanted our teams to own the infrastructure and the entire delivery process. We pushed the idea of developer and team autonomy as far as we could, adopted a "you build it, you run it" philosophy, and enabled teams to take technical decisions across the entire stack, including infrastructure and operations: configuration management, scalability, resilience, and even handling incidents. The DevOps team was responsible for governance and for providing processes and tools so developers could really own the entire application lifecycle. We were deploying to EC2 instances using Ansible and Jenkins, along with some other infrastructure tooling. Most common tooling was standardized but not locked down, so even if teams had a need to deviate from infrastructure standards, they could if they had a good reason, and nothing would really come in the way. All of this was designed from the perspective that we were going through our growth journey and we didn't want technology decisions to come in the way of chasing that growth.

All of this worked really well — or at least that's what we believed was happening. But in early 2018, we realized that we had an illusion of agility. Teams were working independently on their microservices, deploying multiple times a day, but there were not enough guardrails for quality. We were creating waste and shipping poor-quality products that were frustrating customers, internal users, and management. Our engineers were burning out, as they were busier firefighting than shipping value to customers. We used to think that solving for autonomy alone — by creating boundaries, simpler infrastructure management, and a delivery pipeline — was enough; that saying "you build it, you run it" was enough and our teams would own the quality of what they shipped. And to an extent, that happened. Our teams did what they felt was right and was within their control and boundaries, but they did not have a systemic view. In fact, nobody consciously provided oversight of the overall architecture.

I see Karen is raising a hand. Yeah, I checked with them — let's continue, I haven't heard back. In case that's a question, we'll take it at the end of the presentation.

So what we ended up with was a proliferation of microservices that teams created to manage technology and problems within their boundaries as they understood them. More often than not, the teams were not really considering the impact of introducing new microservices on our overall architecture and the overall state of quality. We ended up with autonomous teams creating microservices independently to solve problems within their boundaries and under their control, but due to the missing guidance of an overall architecture, we had microservices that were hard to develop, test, release, and monitor. In many cases, the boundaries were not even clear, which was leading to slow development cycles and releases. Our quality feedback loops were extremely poor — so poor that we were mostly getting to know about bugs from customers, customer support, and sometimes directly from the CEO. This was simply unacceptable.
On top of that, we ended up with an extremely diverse tech stack — the slide doesn't even capture the entire picture, because honestly it's not easy to list everything. When you see MySQL and Postgres being used together in the same company, and a fairly small company at that, the question that comes up is: why two RDBMSs? And the answer was never really clear to anyone, even on our team. That's what happened with us: we just lost control of technical decisions that could have been much better.

The downside was felt in our overall engineering practices. Since technical decisions were localized and democratized, we ended up with a diverse stack — anything you could name, we would have it. We were using all the modern technologies, but the diversity meant a lack of standardization and no economies of scale. Every tech stack required a unique way of thinking about continuous delivery, which made the journey a lot more painful. And the clear, evident presence of delivery pressure to chase our growth also did not create enough room for the teams choosing these diverse stacks to figure out how they were going to continuously deploy their applications.

Retrospectively, no matter what we think now, it felt pretty empowering — the teams were empowered. If we had a new problem, we would mostly find the right place to write code for it, or we would create a new microservice so that the right team could attack the problem independently. And for engineers it was awesome: it meant the freedom to try anything, and if it doesn't work, well, change it — that's what microservices allow you to do. It's a different matter that going back and changing those decisions was always really hard.

The worst part was that it took us a lot of time to figure out what was wrong. When we realized that quality was an issue, we immediately created organizational focus on improving it. The entire technology leadership was driving quality as an agenda, and our engineering teams were excited about improving quality. Writing tests was widely seen as a debt we should have paid off already, so our teams were not hesitant in taking up improving test coverage as a goal. We would take it up in OKRs, as we sometimes take up monthly goals or what have you, and those goals would be aligned towards improving test coverage. But even with all the organizational support and alignment, we couldn't make any meaningful progress. The teams could not achieve their goals, and we failed quarter after quarter. This was something we could not understand: with all the intent from leadership and the teams, what was stopping us from making meaningful progress?

I think we were simply not able to navigate a way to improve quality meaningfully within our architecture. Writing tests didn't really seem to add meaningful value — or more specifically, the way we were adding them and the kind of tests we were adding didn't. A lack of technical oversight had made the microservices architecture complex, and on top of it, the lack of a clear testing strategy didn't yield any clear results.
At the infrastructure level, our processes were decently managed, but application releases were manually tested and orchestrated, and testing some behaviors was just not possible with the kind of complexity we were dealing with. When we tried to approach testing microservices, we could not make any meaningful progress, because testing one microservice was never enough to cover the behaviors that the customer cared about. Our microservices had lost their defined boundaries over time. We figured out that we were dealing with a distributed monolith instead of microservices — one that had become hard to reason about. This realization took quite a lot of time and cycles of failure; most of us were not able to clearly see that we were dealing with the challenge of a distributed monolith. It was not the case that you could write tests in one microservice and see the value of it, because of the way some of our features had, over time, become mismanaged across our architecture.

The organic way our microservices architecture grew led to components being created purely according to the convenience of the teams we had. While to a certain extent this is desirable, we realized that our team architecture had become hostage to our microservices architecture. Most of our team restructuring decisions were centered around who was going to own which component, instead of being centered around customer value and outcomes. We even tried to fight this by forcing our team restructuring around customer value, and in the beginning it did work, but a few quarters down the line we realized that our service boundaries were getting even more blurry, as building most new features required touching four or five microservices that may or may not be owned by the same team.

So we started seeing poor decision-making in our low-level design and in the code we were writing, because of the cognitive load engineers had to carry while implementing features day in, day out. Naturally, our teams started slowing down, as making sense of where a new feature should go was not easy, and debugging bugs or incidents in production was even harder. But the biggest signal of a crumbling architecture was orchestrated deployments. We had to orchestrate deployments to decide which services should be deployed first, and mostly it would depend on the feature and how it was implemented. If a feature cut across multiple microservices, then depending on how the feature was implemented, you would deploy either service A first or service B first. If you didn't orchestrate that well, it would lead to an outage in production. None of this was simple, consistent, or deterministic anymore, and all of it was wearing us out as a team. Developers were unhappy because of poor developer experience; they were regularly dealing with bugs and incidents in production instead of creating value. I must say it was overall a very stressful atmosphere for a fairly long period of time.

One lesson we learned in all of this: if you allow your teams to launch more components autonomously and independently through self-serve tooling, they will do it. But if there's no framework to surface problems in the architecture and engineering practices, the complexity will become too hard to comprehend and more mess will just pile on.
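To make the deployment-orchestration problem described above concrete, here is a minimal, purely illustrative sketch — not Grofers' actual tooling; the service names and helper scripts are assumptions — of the kind of hand-managed deploy ordering a cross-cutting feature forced on us:

```python
# Illustrative sketch only. For each release, someone had to decide which
# service ships first so callers never hit an API that doesn't exist yet.
import subprocess

# Order chosen by hand for this release: the provider of the new API first,
# then its consumers. Reversing this order is what used to cause outages.
RELEASE_ORDER = ["order-service", "cart-service", "consumer-api"]

def deploy(service: str) -> None:
    # Stand-in for the per-service deploy job (e.g. a Jenkins + Ansible run).
    subprocess.run(["./deploy.sh", service], check=True)

for service in RELEASE_ORDER:
    deploy(service)
    # A health gate between steps; without it, a half-rolled-out dependency
    # chain silently breaks behaviors that span multiple services.
    subprocess.run(["./wait_for_healthy.sh", service], check=True)
```

The point of the sketch is that the ordering is knowledge held in someone's head per feature, which is exactly what made deployments non-deterministic and stressful.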
So eventually we decided to slow down and find a way to be fast enough without compromising quality, and to resolve the mess we had created. Since our microservices were not really independent, independently testing them was not enough. We decided to run automated regression tests on the distributed monolith to ensure that a change in any microservice would not break production — which essentially meant running behavior tests on the entire backend for every little change. Our bet was that this would help us increase our deployment frequency again without compromising quality, and at the same time give us the safety net we needed to re-architect. We called this initiative Project SHIPPET. SHIPPET was a multi-phase project to attack our biggest problem: the distributed monolith.

In phase one, we decided to temporarily slow down to improve quality by doing a lot of things we knew would not scale but would work right now. We introduced a manual regression testing process with bi-weekly releases at the end of the sprint — sort of a mini waterfall inside a sprint — and this was on purpose, by design. We first documented all the critical behaviors that must not break: anything that was revenue-impacting or was a basic hygiene customer experience. Then, at the end of the sprint, one day before the sprint ended, we would do a code freeze and get the entire engineering team — not the testers, not a small group, but the entire engineering team — to manually test the app by distributing these documented behaviors and assigning them as tasks. We called this activity "all-team testing". Through this process we were able to manually test the entire app, with all the changes put together, within an hour. Teams would fix any bugs reported through the process before the release, and finally we would make the release.

Given that nobody liked manual testing, this was extremely effective in some ways: it encouraged developers to automate the things they found painful in the testing process. It also brought the team closer to quality issues and gaps in our systems — bugs, missing features, or poorly implemented features of a flaky nature. All of those things started surfacing, and developers got closer to quality issues than ever before. Most importantly, this was not just about testing the client; it was about any change, really: backend changes, configuration changes, infrastructure-level changes, bug fixes. We started treating any change related to our distributed monolith the same way. This definitely slowed us down, but it helped us catch every little change that would otherwise unknowingly break production in some way.

And like I said, we did things that don't scale. We let go of our independent deployment autonomy to make sure we shipped quality software. We ran fundamentally the same process, in different flavors, for a while, optimizing wherever we encountered new bottlenecks that could be addressed in the short term. For example, after a few iterations we had to allow some teams to ship trivial cosmetic changes without waiting for the entire sprint cycle. And while this process worked, it didn't really scale. We knew we could run with it for a while and make it less painful, but we were not yet ready to go back to developing and releasing microservices independently.
So we had to continue to look at our backend as a distributed monolith and test it the same way, irrespective of how small the change was. Phase two was all about making phase one less painful by speeding things up and automating things. We decided to automate all our behavior tests, and we tasked a central team with doing that and with building ephemeral test environments for running tests on demand on every change. So if you created a pull request on any of the microservices, it should spin up the entire backend and run these end-to-end behavior tests.

Building a CI experience that enables a hundred developers to ship simultaneously — we thought it was not that complicated a problem, but it turned out to be a pretty complicated one. The first hard problem was being able to provision and maintain, on demand, a reliable test environment of some 18 microservices with their SQL and NoSQL databases, where tests could be run. Synchronizing state using data fixtures and keeping them up to date was another big challenge that, in many ways, we don't have a clear answer to even today.

So phase two was a really long phase for us. We started experimenting with Docker and Docker Compose to create these ephemeral environments, and in fact, within about a month we could orchestrate a complex backend and run tests over it, so there was some value coming out of it. But it was all too slow and unstable for any real use. We continued to make efforts to stabilize the automation and the tooling that provisioned the entire backend, to get it to some acceptable level of stability, until we realized we had a new problem at hand: dev-prod disparity. We were using Ansible to deploy to production but Docker Compose in our test environments, and this led to tests passing during CI runs while deployments caused outages and bugs in production.

Finally, in September 2018, we accepted that our tooling was not going to work and our strategy was not right. With Docker Compose, we were building on container orchestration that was designed for local development, and we could never use the same thing in production. We needed something that was built for production, and the industry's momentum was towards Kubernetes, so that's where we decided to go as well. By January 2019, we had put a few critical services in production to prove scale and our implementation of Kubernetes. By March 2019, we started migrating away from the Docker Compose-based CI setup and, hand in hand, migrating those services in production, to make sure we had dev-prod parity. This helped us achieve dev-prod parity, and it also helped us streamline the local dev experience, which I'll talk about briefly.

And we got some solid outcomes. We could deploy a lot more regularly without worrying that core functionality would break. Outages due to deployments got reduced significantly. Developers could ship features much faster with much less work. We let go of that mini waterfall within our sprints and moved back to deploying every day. With reduced burnout, the momentum to ship and innovate improved. There was always clear visibility of what changed in production, and if that change was the source of something breaking, we could easily go back and revert. And for our distributed monolith, we could even look at incident management and the quality process in a centralized way.
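As a rough sketch of the shape of such a per-change CI job after the move to Kubernetes — the namespace convention, chart path, and gateway URL are illustrative assumptions, not the actual Grofers tooling — the flow is: provision an isolated copy of the backend, run the end-to-end behavior tests against it, and always tear it down:

```python
# Hypothetical per-pull-request CI runner: one ephemeral namespace per PR,
# the whole backend installed from a single (assumed) umbrella Helm chart,
# behavior tests pointed at it via an environment variable.
import os
import subprocess
import sys

def sh(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

def ci_run(pr_number: str) -> int:
    namespace = f"ci-pr-{pr_number}"
    sh("kubectl", "create", "namespace", namespace)
    try:
        # Deploy all ~18 services plus data stores so the tests see the whole
        # distributed monolith, not a single service in isolation.
        sh("helm", "install", "backend", "./charts/backend",
           "--namespace", namespace, "--wait", "--timeout", "15m")
        env = {**os.environ, "TEST_ENV_URL": f"http://gateway.{namespace}.svc"}
        return subprocess.run(["pytest", "tests/behaviors"], env=env).returncode
    finally:
        # Always reclaim the environment, pass or fail.
        sh("kubectl", "delete", "namespace", namespace, "--wait=false")

if __name__ == "__main__":
    sys.exit(ci_run(sys.argv[1]))
```

Because the same manifests describe production and the ephemeral environment, this is also where the dev-prod parity benefit shows up.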
This setup immediately provided some baseline consistency for all teams working on the entire backend. We were also able to achieve some of our architecture improvement objectives, leading to a simpler and more performant backend. We deprecated some legacy microservices and added some new ones, which helped us reduce overall complexity.

It took us a couple of years, a significant amount of engineering time from product and platform engineering teams, and hundreds of thousands of dollars to get here. And not to forget, the cost of the migration to Kubernetes was not small either: it involved spending money on infrastructure, of course, but also retraining a lot of our engineers as we changed our way of working. We continue to ship multiple times a day with this setup today. We continue to build features and take them to market really fast. But we got all of this after making really big investments, and we continue to invest in supporting this setup.

A big shift during this journey was that we became a lot more explicit about CI/CD than we ever were, and also about our testing practices. Our teams started prioritizing developer experience actively rather than passively. While we are far from ideal, we see independent teams being more thoughtful about change management than before. This also forced our teams to assess their own domains and the boundaries of their services, to be able to move as independently as possible. And this was a big outcome for us: teams are no longer thinking only within their boundaries. We had to go through this pain to get there, but teams are a lot more cognizant about how introducing a microservice could impact their own team's experience and adjacent teams' experience.

Project SHIPPET forced us to invest in Kubernetes. This was a costly shift, and we have mixed feelings about the decision even today, but there's no denying that Kubernetes forced us to adopt cloud-native practices that help us with reproducibility, reliability, and a much better developer experience — it lets us develop in the cloud. Here's what we have done with Kubernetes, at a high level, in the last two years: 75% of our target production services are migrated to Kubernetes. By "target" we mean that we don't intend to migrate everything, like stateful services and some extremely slow-moving legacy services; we only intend to migrate things that are either in the critical path of serving production traffic or are actively developed. And anything new that we build will now be deployed on Kubernetes.

There is a lot of benefit that comes with Kubernetes for development and CI/CD. We practically develop in the cloud, like I said. Developers treat the staging Kubernetes cluster as an extension of their laptop. The same tooling we use to provision the test environments — all 18 microservices, with data stores provisioned on demand — is used by developers to spin up a personal dev environment on demand, where they have Grofers working in a box, dedicated just to them, so there's no overstepping on dev environments. This has really enabled developers to build new features, stay cognizant of the microservices boundaries, and still test those features really fast.
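A small sketch of how the same provisioning tooling can double as a personal "backend in a box" on the staging cluster — again an assumption-laden illustration, reusing the hypothetical namespace convention and chart path from the CI sketch above:

```python
# Hypothetical developer-facing wrapper: one namespace per developer, the same
# umbrella chart as CI, so a dev environment is spun up and torn down on demand.
import getpass
import subprocess
import sys

def up() -> None:
    namespace = f"dev-{getpass.getuser()}"  # one environment per developer
    subprocess.run(["kubectl", "create", "namespace", namespace], check=True)
    subprocess.run(["helm", "install", "backend", "./charts/backend",
                    "--namespace", namespace, "--wait"], check=True)
    print(f"Your backend is up: http://gateway.{namespace}.svc")

def down() -> None:
    namespace = f"dev-{getpass.getuser()}"
    subprocess.run(["kubectl", "delete", "namespace", namespace], check=True)

if __name__ == "__main__":
    up() if sys.argv[1:] == ["up"] else down()
```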
This gives developers a fantastic feedback loop for developing new features and running experiments — product experiments or even engineering experiments. What it means for our developers is that when they develop in the cloud, they have Kubernetes, but we still haven't let go of all our old tooling: we still use some of it, like Ansible, to manage parts of our infrastructure. Mostly, though, everything is built on open standards, and developers can still take decisions where they want to deviate a little from what we have standardized. In the larger scheme of things, the platform and DevOps teams continue to build abstractions that reduce ops overhead, cost, and the security and reliability burden, so that developers can truly own things end to end, and we keep pushing this boundary as much as possible.

We were able to attack our architecture problems by creating platform teams whose sole job is to build the right application platform layer that helps the other product engineering teams move fast. We started by attacking the biggest sources of quality and performance issues in production, and managed to deprecate a few legacy services and replace them with new microservices that enable our product engineering teams to rapidly build new features, with their own roadmap to evolve — containing a lot of complexity while still letting teams quickly try out new features.

However, we wouldn't have been able to re-architect if we did not have end-to-end tests for our product — and very specifically, we were testing behaviors more than anything else, because that's what matters to the business at the end of the day. In fact, we took the call not to spend much time writing any other kinds of tests until we stabilized our architecture. These behavior tests gave our teams the confidence to re-architect faster than they ever could have. And our cloud-native approach to continuous integration forced us to test not just our applications but also our configuration and infrastructure code very actively: every CI run, because it would provision the entire integrated environment of 18 services and tear it down afterwards, would by design test our applications for their ability to operate in a cloud-native environment.

And while we could re-architect some parts of our architecture, allowing some teams to truly leverage the value of microservices, we're still not at the point where we can stop calling our backend a distributed monolith. There is a path to getting there — we are not there yet, we still have a journey to cover — but we feel we have a pretty good system in place to continuously get to a better state. We've created architecture steering committees around the domains of our business that continuously assess whether our teams are slowing down because of architecture. These committees work towards ensuring that we continue to apply the learnings of our past and don't keep sitting on technical debt that will slow us down. At a high level, this involves reviewing ownership of microservices, reviewing whether those microservices fit in the business domain of the team owning them, reviewing which microservices are partially or completely owned by other teams that either don't work in the same domain or have overlapping concerns, and then deciding what kind of new boundaries should be set up.
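Coming back to the behavior tests mentioned above, here is a minimal sketch of the kind of end-to-end check involved. The endpoints, payloads, and the TEST_ENV_URL variable are illustrative assumptions, not the real Grofers API; the point is that the test exercises a customer behavior across several microservices rather than a single service.

```python
# A hypothetical end-to-end behavior test: "a customer can place an order and
# see it in order tracking". It deliberately crosses cart, order and
# fulfillment services running in the ephemeral environment for this CI run.
import os
import requests

BASE_URL = os.environ["TEST_ENV_URL"]

def test_customer_can_place_and_track_an_order():
    cart = requests.post(f"{BASE_URL}/cart",
                         json={"items": [{"sku": "MILK-1L", "qty": 2}]})
    cart.raise_for_status()

    order = requests.post(f"{BASE_URL}/orders", json={"cart_id": cart.json()["id"]})
    order.raise_for_status()
    order_id = order.json()["id"]

    tracking = requests.get(f"{BASE_URL}/orders/{order_id}/tracking")
    tracking.raise_for_status()
    assert tracking.json()["status"] in {"CREATED", "PACKING"}
```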
Depending on the steering committees' assessment, ownership of microservices is changed or boundaries are adjusted. This way of thinking has also enabled us not to be held hostage by our architecture: we start from business and customer value, and then figure out the right team and system architecture to achieve our goals. With better testing and CI/CD practices, these architecture steering committees are actually empowered to take calls on an ongoing basis and continuously re-architect, and this feels like a fantastic system for attacking architecture challenges in an ongoing way.

So this worked for us, even though it was extremely expensive, and not taking this call would have meant years of debt that we would have continued to wade through, and we would not have been able to capture the business opportunities that came our way. Should everyone in a similar state try this? Maybe. If there's one important learning I would like you to take away from my talk today, it is this: be conscious about not being stuck in a distributed monolith. Consciously look for it — are you actually stuck in a distributed monolith? It is a real thing, it happens unknowingly, and it took us quite a lot of cycles of failure to realize that that's exactly what we were going through. It's absolutely possible that you may not even realize it. And I hope there are better solutions out there than what we did — easier solutions with less ongoing cost and less pain of maintaining the tooling and the infrastructure.

While we have a distributed monolith that we can deploy every day, and teams can do this work independently today, the cost of making this happen is borne by other teams. Specifically, in our case, it is supported by three platform teams: the test engineering team, the continuous delivery platform team, and the backend platform team. That's about 12 developers just supporting the 60 or so developers working on this distributed monolith. These teams collaborate continuously to make sure that the end-to-end CI/CD experience never breaks, or else nobody would be able to ship any change to production. This involves automating the right end-to-end behavior tests, improving the CI/CD pipeline setup, making sure pipelines are reliable, modernizing our infrastructure, and of course handling the unique failures that happen in the cloud every day.

An example: we spent, and continue to spend, a large amount of time just creating the right data fixtures for all important features to work reliably. Systemically, it is hard for a central team that is not developing features to support those features with data fixtures — one team builds the features while another maintains the data fixtures for the same features. That is just not going to scale.

Quick time check — we have five more minutes. Perfect. The question is not whether the problem can be solved. The question is how long it will take you to get to your ideal state, and the complexity of this tooling, and of operating it every day, is going to be consistently challenging and is going to take a lot of effort.
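To make the data-fixture problem above concrete, here is a minimal sketch of fixture seeding, assuming a JSON file of canonical test data and service APIs to load it through. The file layout, endpoints, and field names are hypothetical; the real difficulty described above is keeping this data in step with features owned by other teams.

```python
# Hypothetical fixture seeding for a freshly provisioned ephemeral environment.
import json
import os
import requests

BASE_URL = os.environ["TEST_ENV_URL"]

def seed_fixtures(path: str = "fixtures/critical_behaviors.json") -> None:
    with open(path) as f:
        fixtures = json.load(f)

    # Seed the catalog first so that carts and orders created by the behavior
    # tests reference products that actually exist in this fresh environment.
    for product in fixtures["products"]:
        requests.post(f"{BASE_URL}/catalog/products", json=product).raise_for_status()

    for store in fixtures["stores"]:
        requests.post(f"{BASE_URL}/stores", json=store).raise_for_status()

if __name__ == "__main__":
    seed_fixtures()
```

The fragile part is not the script; it is that every new feature quietly changes what "the right fixtures" are, and a central team is the last to know.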
And funnily enough, we wanted developers to own things end to end, but having this kind of setup — where we are dealing with a distributed monolith and trying to enable teams to still work independently, with a central team supporting the tooling — created a wall: every time there's something you want to improve as part of your delivery process, it gets thrown over that wall. So we are actually stuck in a different kind of antipattern that we'll have to break at some point in the future.

Controlling complexity also becomes really hard. An interesting challenge we are dealing with is that this tooling, this way of working, is genuinely useful for us, and in some ways it gets infectious. Every time a new microservice comes up, teams actually want to use the same tooling to deploy it to production. The flip side is that this makes the existing tooling, which is already pretty complex, even more complex. And teams that are not actually impacted by the distributed monolith also want this integrated end-to-end experience. So the kind of solution you build can create back pressure on yourself: other teams may start requesting similar tooling and patterns, which is something you might want to avoid, because it's actually an antipattern.

But yeah, I've said this multiple times and I'll say it one last time: managing this distributed monolith and the tooling around it is extremely complicated operationally. Different services are being developed by different people moving at their own pace, and it's hard for a central team, or a few central teams, to keep up with how the application is changing while continuing to support the delivery process. It starts causing frustration for these teams as well, and sometimes you just have to take a call, according to your context, on how to attack the problems causing that frustration.

So where are we going now? We continue to use this delivery model — it works really well for us — but we know it's not sustainable. As we speak, we are experimenting with different ways of testing so that we can simplify our delivery process, and above all, based on what we have learned, we are careful about not making the same mistakes again. We are cognizant about the microservices we introduce, the way we introduce them, and keeping their boundaries well managed. We are still invested in microservices, and a lot of our strategy for scaling microservices and DevOps practices is to build a good DevOps platform that enables teams to adopt DevOps practices the right way. We call it a DevOps platform built on top of Kubernetes: Kubernetes is the platform of our choice, and we are trying to bake pretty much everything — CI/CD, reliability, resilience, observability — into the platform, so that all of these things become easier for developers to adopt. We already have this platform in place, it continues to evolve, and we are trying to strengthen our capabilities here so that developers don't have to take active decisions to adopt the right practices. We have a multi-year roadmap for our DevOps platform.
Like I said, we're trying to simplify the platform for developers so that they can keep moving fast without being held up by decisions about which practices to adopt, while making the right trade-offs for moving fast in our business context. And yeah, that was our journey. My name is Vaidik, I work at Grofers, and you can find me on Twitter, LinkedIn, and Medium.

Amazing — I think it was a very intriguing session, and it was lovely to hear the story. Thank you all, you were a lovely audience. Thank you so much for these insights, Vaidik.