Hello. Should I say that again? Take two. Sorry. So how did DevOps enable real-time business decisions for us, and also help us break up a monolith BI system? That is what we are going to be talking about. I am Pradeep, Senior BI Engineering Manager with Target. My name is Ravi Kumar; I am a DevOps consultant with Electric Close, working along with Target in the data engineering and analytics space.

So this is where we start off. Why is DevOps so important? Quick story here. I am not very good at telling stories, but I will give it a shot. This is an interesting conversation that happened between a mechanic and a cardiac surgeon. The mechanic tells the surgeon: I also work on engines. I repair them, I keep them running, and so do you. The only difference is that I work on a mechanical engine and you work on a heart, but we essentially do the same job. So why is it that you are regarded so highly and paid so highly, while people think I am just doing a dirty job? Any guesses what the answer could be? [Audience answers.] Oh wow. Okay. So people are awake.

So that is exactly why DevOps is essential. No one is going to let you bring the system down every now and then to implement changes. But at the same time, changes are critical in the world we live in, and if applications are changing, BI teams need to be able to support that. BI teams cannot say, you guys go ahead and change whatever you want, we will catch up once in two or three or maybe six months; it is probably going to be useless by then. That brings out why the data warehouse really needs to make sure that fresh, reliable, and accessible data is available at all times for the business to make decisions. No longer does a business implement a strategy, wait a couple of months, and then see how the strategy worked. You want to implement the strategy, see the impact, and maybe even make changes on the fly. That is where DevOps becomes very important for a BI kind of setup.

Having said that, what are some of the challenges that implementing DevOps in BI comes with? For one, BI systems are usually very tightly integrated. You have multiple sources and multiple consumers. How do you make sure that one change is not impacting a downstream consumer, or that a change you make in a source flows through cleanly? It is a tightly coupled system, and the whole enterprise relies on it; it's very unlikely you would have multiple BI systems within a single enterprise. Those are exactly the challenges that make DevOps quite an overhead in the BI world. At Target, in addition to all this, we have a mix of legacy and modern systems. We have existing systems running on RDBMSes and modern systems running on something new; it could be Hadoop, it could be any new form of NoSQL database, and all of those have to work together seamlessly. In addition, we have vendor packaged tools for ETL and reporting, and we have our own tools that we are developing. With all of these in the mix, there are very few DevOps tools that actually cater to this sort of technology stack. We'll come to that a little later. But these are some of the challenges we face with DevOps in BI, with Target as the focal point. So there is a fundamental thing here, right? Any guesses what the fundamental thing could be?
Coming from a conventional CI/CD implementation in the application space into data, we heard some of these challenges. What could be one of those fundamental things? What's our product here? Sorry? Data. Tools. So our product is fundamentally data. We don't have a UI in the usual sense, where there are certain things you put under test and promote through a pipeline like you normally do in a conventional setup. Our product is data; data catering to all of Target, across multiple business lines. We are talking about terabytes of data, with sources and massaging of the data and ETL all in the mix. We'll talk about some of those challenges as we get into how we implemented things and what we implemented. I just wanted to give you a perspective on what our product fundamentally is, because people who come from the app space tend to look at this through the same pair of lenses, and that does not hold here: treating the data itself as the product is very different. It's a very different beast in some sense.

So what did our journey look like? It is fitted onto one slide, but it is actually a two-year journey. It all started with what we could potentially call a cultural revolution. Where did this originate? It originated with the formation of enterprise data analytics and business intelligence as a pillar by itself at Target. That is where we wanted to reach out to our business and see what their pain points were. Are they getting what they really require? One thing we heard loud and clear from them: we need to be able to turn around faster. That is where the whole agility mandate started. This was way back in 2015. Our leadership came out with an agility mandate, and I think at that time we were still learning what agility means. Most of us understood agility as just making sure you are following all the agile principles, which in hindsight is totally wrong. So that is how we started off with the agility mandate.

Now what did we do next? We had to break down this monolith. We have one enterprise data warehouse which, if you really think about it, is a lot of different products; but how do you break that down? We created agile scrum teams, so each team had clear ownership. Each team owned its data and knew what it was responsible for. That was the first step we took. Next, as we set up these teams, we realized it was not really going the way we thought it would. Just following the agile principles does not change anything fundamentally. That is where we made two of our most significant investments. One was setting up the agile ops team. The agile ops team had a very clear mandate: make sure that teams are thinking agile, not just following the agile principles. How do you do that? We'll come to that on a later slide. If you look at all the clouds on the slide, those are actually the challenges we faced. We won't spend too much time on each of those, but the agile ops team was essentially helping us with the cultural shift: thinking agile, agile planning, agile processes, and how we move from a waterfall to an agile model. Those were really the things the agile ops team started with.
The second most important investment for us was the systems team, the BI systems team. It all started because we could not get off-the-shelf products that we could use. We had to bring in a lot of different pieces, marry them together, and create a pipeline of our own. So what was the systems team doing? They were evaluating some of these tools and making recommendations. We want to draw a clear line here between making recommendations and governing, because we also wanted to democratize the way things are done; that's the whole agile principle. So they made recommendations. The other very important contribution the systems team made was in tailoring the CI/CD pipeline for BI. If I were to just walk up to a team and tell them, guys, you need to use these tools, I'm sure I would get a hundred different versions of the solution, and that becomes difficult to manage because we also have to manage the infrastructure. So the systems team came up with templates, what they call the easy buttons. We created multiple templates: one for legacy systems, one for modern systems, and one for app development. These were templates that other teams could use, customized a little, maybe with a change here or there. Those really drove adoption, because teams were no longer spending time building the pipeline; the pipeline would be up in a couple of hours and they would start seeing the benefits quickly, rather than spending a couple of months trying to build one.

The fundamental thing here, though, is that when you take these principles and processes, breaking into smaller teams and so on, adhering to them in our context becomes hard. From a data product perspective, I cannot have a thin-slice user story that works like an MVP. All of those things were actually roadblocks for us. How do you even cut a thin slice of a story across multiple business lines, so that the data engineering team and the analytics team are both looking at that same slice? We had to refactor some of those practices to apply them in this context and make adoption easy.

Pradeep did mention this took us about two years. We started somewhere around late 2015 with the first avatar of the agile principles: go and do this, get into the training sessions, and so on. As we went through the journey, there were a lot of bottlenecks. It's not just about having these teams do what they do; you also have to be part of them and show them the way to do it. There were decisions made along the way that changed some of the thinking we started with. The journey has not ended yet. That's why, in the topmost corner, we said this journey is still on. We have not reached the stage where we can claim to be at the nirvana stage.
But interestingly, here is where we've come to. Our earlier process is on the top: from a check-in into the code repository, it used to take us about 30 days to move anything into production. Where we are today is about two hours to a day to move anything into production. We have not changed any of the process steps. What we have done is automate all of it, including how the dependencies are handled, what we do, and how we do it. It's all about automating, not about dramatically removing any of the steps, and that has given us this result. As we go through, we'll drill down a little more into how we got from 30 days to two hours.

Obviously, like I said, we went through this learning. It came somewhere in between; we did not have it at the outset in 2015. We wanted management to participate across the teams. First of all, product thinking has to be there in each of the teams. The vision, at least, is that we have product teams across the board, and every team is set up to be autonomous. Every team makes its own decisions in terms of tool choices and in terms of how they do what they do. Yes, there is help from the systems team and the support structures, but it is the teams who decide what they do and how they do it, in line with the results we are looking for. Governance is not policing, not "you have to get this right and that right." It's more about asking: are we doing the right things, and why are we doing this? And velocity is not the metric we chase; we had business data as our metrics, so that it resonates well with the business, and we work backwards from there.

We have the templated versions, which we talked about. The monthly showcase was a big cultural shift. Every team on this journey comes together once a month to showcase what they do, how they do it, and what the challenges are. And if they are not able to embark on something, why is that so? We want to truly learn. In each of these meetings we have senior leadership participating, including the principal engineers and architects, trying to learn: how can we put together a system that simplifies adoption for these teams? That's the intent behind it. And that's also a huge investment, because without it, anything you do is local optimization. Going around to teams and showcasing with PowerPoints is not going to generate the kind of interest we want to see. So what we've said is: all these teams come and showcase on their laptops. No PPTs. In fact, we have a small demo of how we do things coming up. This was the crux. We learned it somewhere along the way, and it was well supported from the topmost management, at the CXO level too. And we drill it down: we have OKRs to support this. At the topmost level we have the OKRs, and each of those is aligned at the team level across the product-thinking teams. We've not gone down to the individual level, but at the team level we are doing it. We are getting better at it. We are not saying we've achieved the qualitative OKRs, but we have done a very decent job on the adoption cycle.
So this, at a high level, is how our CI/CD infrastructure looks. A couple of things are very common: your Git, Jenkins, Slack. I'm not sure how many of you have heard about PeriSert; it is a testing tool that we're using. But one of the most important things you would want to look at is the whole range of technologies and tools we are using. There is a gradual shift from third-party, vendor-provided tools towards more open source, and with open source comes a bigger challenge: how do I make sure all of these things are done well enough?

The other thing we wanted to talk about here is the automated code review. There are a lot of tools that cater to app-style development, like Sonar, but for BI, something that would review, say, my SQL or HiveQL, those kinds of tools are not really available in the market, or we haven't come across them. That is where we ask teams to come up with their own review mechanisms, and a lot of teams have. It could be something as simple as a shell script; it could be something as complex as building a UI. What I would really like to call out: as soon as Jenkins runs, a code review runs. Even though the testing really happens later, the code review runs immediately, and any failure on that part sends a message to Slack and we stop the deployment right there. (There's a small sketch of that idea below.)

I spoke at the beginning about data being our product. Why does that become so critical here? If you look at the stack, we have DataStage and all of this. Fundamentally, we are running through sequences of jobs. In some of the applications, these sequences are thousands of jobs, all running sequentially, pulling from the source data and massaging the data on the fly. If you say, in the construct of CI, that once you check in, results should come back in 15 minutes: you must be kidding. It doesn't happen at high volume. Yes, I can do it just for the sake of saying we are doing it, with only about 100 records; fine, that is a different strategy you apply in certain contexts. But if I have to do it from a business resiliency perspective, asking whether the data is right and whether we are enabling true decisions, I cannot sacrifice pulling from the latest source, or the massaging on the fly, or the ETL itself. You cannot sacrifice it.

And also, we are moving from a traditional stack. It's not that we don't have baggage; maybe baggage is the wrong word. Our business is still running, and a lot of money is being generated out of it. If you are already in a certain kind of ecosystem, transitioning out is equally challenging. I cannot take a slam-dunk approach and say, remove this and go there. That's exactly where we come from. We've stood this up from a pipeline perspective, and we have all of these traditional Teradata and DataStage jobs running. The challenge is: if something breaks in that entire DataStage pipeline, I need to rerun it. Usually the problem is with the data, the qualitative aspect of it. It is not about a technical glitch like "oh, I missed this thing."
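To make that automated code review concrete: here is a minimal sketch of what such a homegrown gate might look like. This is not Target's actual tooling, just an illustration; the rules and the Slack webhook URL are hypothetical. The idea is that Jenkins runs something like this right after checkout, and a non-zero exit stops the deployment.

```python
#!/usr/bin/env python3
"""Hypothetical pre-deployment SQL review gate, run by Jenkins on every build.
A non-zero exit stops the pipeline; failures are also posted to Slack."""
import json
import re
import sys
import urllib.request
from pathlib import Path

SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX"  # placeholder, not a real hook

# Illustrative review rules; a real team would grow its own list over time.
RULES = [
    (re.compile(r"\bselect\s+\*", re.I), "avoid SELECT * in pipeline SQL"),
    (re.compile(r"\bdelete\s+from\s+\w+\s*;", re.I), "DELETE without a WHERE clause"),
    (re.compile(r"\bcross\s+join\b", re.I), "unintentional CROSS JOIN?"),
]

def review(paths):
    """Return a list of human-readable findings across the changed SQL files."""
    findings = []
    for path in paths:
        text = Path(path).read_text()
        for pattern, message in RULES:
            if pattern.search(text):
                findings.append(f"{path}: {message}")
    return findings

def notify_slack(findings):
    """Post the findings to a Slack incoming webhook."""
    payload = {"text": "Automated code review failed:\n" + "\n".join(findings)}
    req = urllib.request.Request(
        SLACK_WEBHOOK,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    problems = review(sys.argv[1:])  # Jenkins passes the changed .sql files
    if problems:
        notify_slack(problems)
        sys.exit(1)  # stop the deployment right there
```

The point is less the specific rules than the placement: the review fires on every Jenkins run, before any testing, so a bad change is flagged immediately.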
Picking up the rerun problem: it could be as simple as a missed index, or using this column instead of that column. The fix is very small, but the time it takes to rerun is the cycle time, and I don't have the luxury of speeding it up. I can speed it up in theory, but in practice I cannot have a Docker container do it, because it cannot take that massive load. So we have to have our own strategies around that, and we are learning as we go through this process.

This is also where we are today: some of this is on the cloud, where the development happens, while a lot of the big data investments in the staging and production environments are still on our physical servers; it's not fully on the cloud yet. But the interesting thing about this particular pipeline is that the testing happens after we deploy. For a BI sort of system, it becomes very difficult to test and then deploy, because you can't create data on the fly. So one of the key things you would want to see here is that we always deploy and then test. (A sketch of what such a post-deploy check can look like follows below.) Rollback might not always be possible; even as we speak, we are testing out how we could roll back, but that is something we have not implemented yet. So testing usually happens after we deploy.

The other thing is deploying to prod. We're not truly continuous deployment, because all deployments into prod are still controlled. We are still coupled; we're not so loosely coupled that I could just go ahead and implement something by myself. There are a couple of other dependencies I need to talk to other teams about and work out. Those are probably the two big differences.

In certain contexts, we do test and then deploy. Like I said, we are product-thinking teams and we are writing tools for ourselves, because a lot of the testing tools are not sufficient for our needs. QuerySurge is there, with license costs, yes; but there are certain tools we are building in-house, on the modern development stack. Even there, to test how a tool is going to cater to the needs of the business, I still put it through the rigorous test of the data behind it. For those tools we can apply the application way of doing CI/CD and the usual toolchain, and we have already applied it: we have Artifactory and Docker and containers, all of those things. But when we talk specifically about data, and the business we cater to, we are limited by certain things. We are moving towards the cloud and all of that; if we shut one thing off and get fully onto big data and Hadoop, perhaps we'll have more options, but that's also work in progress. So far, I think, we are here. Any questions so far?

[Audience question.] If you see there on the top right, that "big data local" is nothing but a dev environment. It's in the cloud, but it's as good as our local environment. We do our semi-unit-testing sort of thing there: are the queries right, are we getting our checks and balances, are we not missing an index; reviews, that progression, happen there. But for the full volume of the data, we stage it and put it into this stack, and only when that passes do we push it here. That's a good question. So some of it is local, but our "local" is again spinning up that virtual environment where we work. Coding happens here, and we spin that up.
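Since deploy-then-test is unusual if you come from app-style CI/CD, here is a minimal sketch of the shape a post-deployment data check can take. This is our illustration, not the actual implementation: sqlite3 stands in for the real warehouse, and the table and column names (sales_daily, load_date) are made up.

```python
"""Sketch of a deploy-then-test smoke check: the new code is already deployed,
so we validate the data it produced rather than testing before release.
sqlite3 is a stand-in for the warehouse; table/column names are illustrative."""
import datetime
import sqlite3
import sys

CHECKS = [
    # (description, query returning one value, predicate over that value)
    ("target table is not empty",
     "SELECT COUNT(*) FROM sales_daily",
     lambda n: n > 0),
    ("latest load date is today",
     "SELECT MAX(load_date) FROM sales_daily",
     lambda d: d == datetime.date.today().isoformat()),
]

def run_checks(conn):
    """Run each smoke query and collect the checks whose predicate fails."""
    failures = []
    for description, query, predicate in CHECKS:
        (value,) = conn.execute(query).fetchone()
        if not predicate(value):
            failures.append(f"{description}: got {value!r}")
    return failures

if __name__ == "__main__":
    conn = sqlite3.connect("warehouse.db")  # stand-in for the real connection
    failed = run_checks(conn)
    for line in failed:
        print("FAIL:", line)
    sys.exit(1 if failed else 0)
```

Because the code is already live when this runs, a failure feeds an alert and a fast follow-up fix rather than a rollback, which matches the constraint that rollback is not implemented yet.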
That's one step that we still have to bring in. So those are things that are more specific to the products you're dealing with, and that is where we are asking teams to come up with solutions. We do use QuerySurge in some of those cases, but largely it is about teams automating those processes themselves. To use a very raw example, you could always take a backup today and run your tests against that data tomorrow. That could also be explored, because we are not in a state where we can say we have achieved complete testing. In fact, on a later slide we talk about how we still have a long way to go in terms of test data management.

One of the steps we have is isolated testing and test data management. Where we are working with teams now is: for the area you are working in, what is the data from a functional standpoint? You can actually containerize that. It is not going to change; it is yesterday's data. You can prepare it in parallel and have it ready, so that when your feature is ready, you can push it against that. (There's a small sketch of this idea below.) But yes, it will not address the volume problem. If you want a real-time kind of thing, there are always trade-offs, and you have to deal with those trade-offs. What is the value, or the opportunity cost, you are going to lose with these trade-offs? These are decision points. It's not just about whether I can technically do this; it is about whether it is worth it for me to spend on this, and whether I can. Do I have the time? If I want it in one hour, but I also have to do it on terabytes of data, we still don't have something that covers all of your regression test suites. So we have to compromise something. But if the change is quicker, we will fail forward: let it go there, and if it fails, we can always funnel in a newer change just as quickly. That's where our two-hour construct helps us, as opposed to doing all of these things in a 30-day construct before anything is ready, because that curtails a lot of other speed. Can we wait for that? As we go through, we'll get to it; maybe that will answer your question. In case we don't answer it, we'll come back.

[Audience question.] I'm going to give you a different answer. In terms of what? In terms of capacity: in local, you would generally run your unit tests. Stage is where you would run not only your unit tests but also the integration tests. From a volume and capacity perspective, our stage environments are usually around 10% of production, so it is not realistic to expect the same volume; you would probably extrapolate the results. Now, with respect to the volume you want to test, it really depends on your application. Is my flow something that needs to return results within, say, 10 minutes? In that case, I would probably do more volume testing in stage, and I definitely have the flexibility of asking for more space. I'm probably getting into too many details here, but while stage is about 10% of the prod environment, that 10% is also shared across different teams. So you could always go back to your infrastructure partners and say: I need to test for this volume, and for that I need this much space, maybe for a month.
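To illustrate the "yesterday's data" idea from a moment ago: a minimal sketch of isolated test data, assuming a hypothetical sales_daily table and sqlite3 as a stand-in. The real version would containerize a functional slice, but the shape is the same: freeze a slice, test against the frozen copy.

```python
"""Sketch of isolated test data: freeze yesterday's functional slice into a
snapshot table so feature tests run against stable, reproducible data.
sqlite3 is a stand-in; table names are illustrative."""
import sqlite3

def snapshot_slice(conn, source="sales_daily", snapshot="sales_daily_snap"):
    """Rebuild the snapshot from the latest loaded slice of the source table."""
    conn.execute(f"DROP TABLE IF EXISTS {snapshot}")
    conn.execute(
        f"""CREATE TABLE {snapshot} AS
            SELECT * FROM {source}
            WHERE load_date = (SELECT MAX(load_date) FROM {source})"""
    )
    conn.commit()

def test_feature_against_snapshot(conn, snapshot="sales_daily_snap"):
    """Feature tests point at the frozen snapshot, so results do not drift
    while the real table keeps loading underneath."""
    (rows,) = conn.execute(f"SELECT COUNT(*) FROM {snapshot}").fetchone()
    assert rows > 0, "snapshot is empty - refresh it before testing"
```

The trade-off is exactly the one described above: stable, fast feedback for feature work, at the cost of not exercising production volume.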
Back on capacity: we don't have a fixed mechanism for calculating how much volume we are going to test there; it really depends on your needs. But there are provisions to test for the entire volume.

On monitoring, there are two kinds. One is the flow itself: your jobs, are they running? Usually we have them in a sequence; we call it a batch. It starts at, say, 5 AM in the morning and might run till 5. I need to make sure those jobs are running, so that is one form of monitoring that continues throughout the week. The second form of monitoring is, once your data is available, is it right or not? That usually depends on the frequency of your data loads and happens immediately after a load. Most of my data is probably a daily refresh, so I will run the flows today and the data will be available by 5 AM CST. At that time, as soon as the data is available, monitors trigger. That is part of the load process itself, I should say.

So here's a quick demo of how we do CI/CD in real life, from the check-in process all the way through the pipeline, just to give you an idea of one of the templates we have used. As you can see, there are multiple steps, more than in conventional development, because our trigger point is also from your branch: when you raise a pull request, on the way into your dev environment, are all your sanity tests done at the lower level? Only then do you trigger the staging steps and everything beyond. So there are a lot more steps, all the way from a pull request to a deployed state. Again, it's not completely automated in the sense that everything is done by Jenkins. At the pull request stage, when the first set of tests are all successful, you get a green bar. Only then do we raise a pull request for the higher environment, and then a further set of flows runs to get you into the stage and prod environments, with a minimal set of steps there, including the change management process. Like we said, our production is still controlled, but what we have automated in some of the cases is the whole change management aspect. What used to take time was raising a change request, getting somebody to approve it, and then pushing the change in manually; we have automated that.

There was one thing that went by quickly in the demo that I want to highlight. We do a test from your source to your target: did we get all the data? Typically, in our kind of work, you would see this as a count(*) check: if you have 10,000 rows there, did 10,000 rows make it? That is important for us, because we cannot be leaving out data; anything could happen along the way. What we are also encouraging teams to do more of is: are these thousand rows representing the right business areas, the key things you are looking at? Not just any thousand rows matching in count, but at the store level: whatever your 10 stores are, the thousand rows should actually match store by store. It's not just a thousand rows across any number of stores.
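A minimal sketch of that source-to-target reconciliation, broken down per store rather than as a single total. The connections and the store_id/sales_daily names are illustrative stand-ins, not the actual pipeline code.

```python
"""Sketch of source-to-target reconciliation by store: a total COUNT(*) can
match while individual stores do not, so we compare the per-store breakdown.
sqlite3 connections stand in for the real source and target systems."""
import sqlite3
import sys

def counts_by_store(conn, table):
    """Row counts keyed by store_id."""
    rows = conn.execute(
        f"SELECT store_id, COUNT(*) FROM {table} GROUP BY store_id"
    ).fetchall()
    return dict(rows)

def reconcile(source_conn, target_conn, table="sales_daily"):
    """Return {store_id: (source_count, target_count)} for every mismatch."""
    src = counts_by_store(source_conn, table)
    tgt = counts_by_store(target_conn, table)
    return {
        store: (src.get(store, 0), tgt.get(store, 0))
        for store in src.keys() | tgt.keys()
        if src.get(store, 0) != tgt.get(store, 0)
    }

if __name__ == "__main__":
    src_conn = sqlite3.connect("source.db")  # stand-ins for the real systems
    tgt_conn = sqlite3.connect("target.db")
    mismatches = reconcile(src_conn, tgt_conn)
    for store, (s, t) in sorted(mismatches.items()):
        print(f"store {store}: source={s} target={t}")
    sys.exit(1 if mismatches else 0)
```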
So I think qualitatively we are improving our tests too, because that is critical for getting our test data management strategy right, along with the isolation of the test data. This is a foundation for that, and that's where we are investing now.

As we spoke about on the journey: fundamentally, we are not like the general developer community, where everybody is on Git and works this way by default. We are data people; some of this Git way of working did not come naturally to us. However, all of the SQL that we write, all of the shell scripts that we write, even the DataStage work, are first-class citizens on our GitHub. There is a check-in process; it is no different from anyone else's. What we wanted to understand was: why are things happening so late? Why are we not checking in as frequently as we could and should be? We built our own monitor internally just to show how we are doing relative to that. (There's a small sketch of the idea below.) This is not a comparison, and again it is not policing. It is for us, as a team, to reflect on how many commits we are doing on a daily basis, how many pull requests are going in, how many master merges we are doing, because that is what seeds your deployments. When we started off, we were at probably about 30 deployments. All of the deployment numbers in the stats you see are only master merges, but a lot of commits keep happening in the lower environments. We've come up to the 250 level, 250 deployments, all facilitated by these check-ins happening in smaller, granular pieces. To back up what Pradeep said earlier: teams have now written their own tests and coverage, which could be a review of your shell scripts and things like that, and that is facilitating this change. Before, it was all a manual process; anything I checked in had to go through a manual review. Now that we have automated it, it is facilitated by the pull request and by those rigorous automated mechanisms. We are able to commit things a lot more easily because we have a safety net. Without this, it was always derailing us.

So, to summarize what we've achieved in the last two years. When we talk about accomplishments there is a whole list, but we thought we'd stick to three, because that's probably what will stick with you as well. First, a focused team for implementation and ops. Earlier we had a team of 40 totally focused on making sure implementations were happening (remember, it was a very manual, intensive process), and that ops team did all the monitoring. Now it is down to four, which means we have 36 additional people who can work on active development and add that much more value to the business. Second, speed of deployments. You've heard it so many times: 30 days to two hours. It is no longer a daunting task; we don't need to dedicate one engineer for a month just to get a deployment done. Third, 30 deployments to 250, just like the previous slide: we had only 30 deployments earlier, and now we are up to 250. Now again, the key call-out: does 250 sound great when you look at it from an Amazon perspective? Maybe not. But these are not small apps.
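(As an aside, on the check-in monitor mentioned above: here is a minimal sketch of the kind of signal such a monitor could collect, assuming plain git repositories on disk. The repository paths are hypothetical, and a fuller version might also pull pull-request and master-merge counts from the GitHub API.)

```python
"""Sketch of an internal 'check-in health' monitor: daily commit counts per
repository from git log, for teams to reflect on - not to police.
Repo paths are illustrative."""
import collections
import subprocess

def daily_commit_counts(repo_path, since="30 days ago"):
    """Count commits per day in the given repo over the trailing window."""
    dates = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}",
         "--pretty=format:%ad", "--date=short"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return collections.Counter(dates)  # {YYYY-MM-DD: number of commits}

if __name__ == "__main__":
    for repo in ["./etl-pipelines", "./reporting-sql"]:  # hypothetical repos
        counts = daily_commit_counts(repo)
        print(repo, dict(sorted(counts.items())))
```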
Coming back to the deployment numbers: we are talking about making these changes at the data volumes we operate at. Can it be 10x more than this? Of course, we will not deny that. But this has been our journey. We used to be big teams working out of our own sets of cubicles, doing those delivery processes once a month or once in two months, with a review and a management layer looking at every step. Now we have come to a point where, even before the business sees certain decision metrics going wrong on their monitors, we have been able to capture it as part of the CI/CD process itself. We have an equivalent monitor that will at least give us an indication that this data is not going to come out right, so maybe we have to correct it before the business comes and asks us for a change. We have become much more proactive in how we attend to the business, as opposed to being reactive: starting from a business complaint, then talking to multiple teams to figure out where it happened and why, and which data set caused the problem. Those were all the pains we had when we started this journey, and today a lot of them have come down significantly. Like we said, it's still in progress. We still have to attack the big challenge of test data management and isolation; I think the crux lies there. If we have to get to 10x, it's a possibility, and we are hoping we will get there. Next year could be a different story.

Yeah, so I think we had this slide, but we already spoke about a lot of these things along the way. It was a small thing. [Recorded testimonials from the fair play.] "The booth that I found most interesting was the enterprise merchandising one. What I liked most about this team is that they are very vertically sliced. They talk to the consumers, understand the requirements, go back to the data sources, come back with a demo, and then interact with the person who raised the request to understand whether it serves the purpose or not. A lot of interaction, a lot of learning. Personally, I've met the supply chain data foundation team, and I want to be sure we connect even beyond the fair to understand certain supply chain metrics that will be crucial for the model my team is building. Definitely a great platform to learn. There's a lot of opportunity to prosper amongst each other, and natural collaborations can come out of this, bottom-up. A lot of great experience; looking forward to more such fairs in the future." "Booths that particularly stood out were the enterprise booth, which demoed all the insights, and how they gather those insights from various means, to put up the most competitive price for Target; and our Guest IQ strategy, which dove deep into insights like: if a shopper goes into Target and buys a pack of diapers, what else are they buying, and how can we build promotions around those things to drive footfall to our stores? Overall, a really, really interesting Friday well spent at the fair."

This is another investment, and it is not a one-time affair. What I want to emphasize here: a lot of people talk about culture shift, but how do you actually make a culture shift? What goes into it? This is an investment we've made; every four months we run this fair across multiple locations.
We said that we've created product teams; each scrum team, each agile team, is a product team. The idea behind these fairs is that people go and mingle with each other and try to learn what product the other teams are building. Even though we are data, we see it as a product, and we have downstream and upstream too. I did mention OKRs: when we funnel down the OKRs, people also talk about which teams are going to be impacted by what, because we are at the crux of it; we could be the recipient of something, or we could be giving something to somebody else that gets impacted. And we are also building our own capabilities, building our own products, and bringing in new open-source solutions. What we just showcased was a CI/CD representation from a Jenkins perspective, but from a tooling standpoint it's completely democratized in how you want to use tools. We use Ansible; there is OpenShift, and VM servers to support us; there is Docker and Kubernetes; we are also using Chef to manage our infrastructure. All of this is in the mix. We don't want to get lost in the infrastructure aspect of it; the focus is on getting the data components right. But these are all the support structures that are there, and every team chooses which tools they use and how. When they apply them, the challenges and the learnings that come out are what we share in these kinds of forums. The interesting bit is that everybody in management is also participating. And it's about selling it to the business: why we are producing this kind of value, and if we have missed something, how we can do better. With that, thank you. If you have any questions, we'll be around. This is Pradeep; my name is Ravi Kumar. Thank you. Do we have time? Done?

[Audience question:] ...and try to do the same thing that has been done by one team across all the teams. Understood, but how did you empower your teams? How did you motivate your team to write tests that were not written in the past, or that were written just for coverage's sake?

That is, again, working with the team closely and asking them those questions: what is relevant here that can support this kind of deployment? Do you have adequate tests? And what's the price you are going to pay if something were to fail? So it's not about the...

[Audience:] You're saying that by creating more awareness you were able to move...

It's also working closely with them. Pradeep and team spent a lot of time working with the individual teams to bring that focus: you need to attack these tests. Yes, democracy is there, but it's also thought through; this is important for this to happen, investment is required, and there is a support structure to enable that for you. And you could also incentivize them. You could always challenge them: okay, write these test cases; as soon as they pass, I'm going to move it to production, and anything that happens in production is your responsibility. That could be one way if you're still seeing resistance. But over a period of time, once your team starts realizing the benefits of all this, I'm sure they're going to start doing it by themselves. The show-and-tell stations also propagated a lot of these things. There was resistance, but how did we break the resistance?
Not by telling them; it's more about people participating in this and seeing the value. And in some ways, the answer is the same.

[Audience:] Did you bring in practices like TDD, BDD as part of your journey?

Yeah. A lot of those: we have dojo sessions being conducted, and people participate in those as well. There is embedded coaching that happens with the teams on a need basis. So it is not that we did this one thing and hence we got this result; it is a combination of things that has been done, and the results have to be read in that context. And I think once you have one of your teams reaping the benefits, they need to evangelize this with the other teams, because that works best. From what we have seen, rather than us going and coaching individual teams, once you have a success story, make sure it is visible.

[Audience:] We have seen a lot of resistance. That's the reason I asked.

Yeah, that resistance is something that will take time to go away. It just needs concerted effort. Thank you. Cool, I think we've already run out of time. Thank you. We are around, so if you have any questions.