All right. Good morning, everybody. I'm going to be talking to you about continuous deployment at scale. A lot of what I'm going to talk about works well for medium to large scale, long-term software projects, and I'm sure you can pick and choose parts of it for projects of all sizes. How many of you have heard of Etsy? Quite a lot of people. I would assume a lot of you who know about Etsy know about it because of its tech blog. So Etsy is a marketplace for buyers and sellers to connect online and offline to make, sell, and buy unique goods. We have around 1.6 million active sellers, around 24 million active buyers, and a little under 2.4 billion dollars in GMS, which is gross merchandise sales; that's the sales that happen in our marketplace. We have a little over 800 employees, and we're based in Brooklyn, New York. My name is Premshree Pillai. I'm a senior engineer at Etsy. Before this I was at Yahoo. I'm really happy to be here, and thanks to Rootconf for inviting me.

I've structured this talk like so: I'll talk a little bit about the principles that guide our continuous deployment tools and stack, then about the tooling and culture we have in place that allow us to deploy continuously, and we'll end with some Q&A.

So let's talk about some principles. First: just ship. It sounds a little obvious, but when you don't explicitly call out something like this as an important part of your process, it kind of gets lost. We always want to have it in the back of our minds that we want to be able to ship, and our continuous deployment tools should allow us to ship quickly. As people who make products, what are we trying to do? We want to build products. We also want to innovate on our products. We want to do this in a manner that's iterative and quick, so that we can get things out to production, to people, quickly. In the end we are here to create products; we're not here to write code. Code is part of what we do in order to create the product, but the product is the end goal, not the code.

Also, as people who craft, whether it's our sellers, our users, designers, or engineers, we want to optimize for certain intrinsic motivators. We want to optimize for purpose: you want to have a sense of purposefulness when you're doing your work. You want to optimize for autonomy. You also want to optimize for mastery. These are intrinsic motivators that drive us as humans, as people, as engineers and designers, so we want to optimize for them. When thinking about our continuous deployment tools, we want to make sure that the tooling we have in place is never a hindrance to these; on the contrary, it should always enable them, and it's important to keep that in the back of our minds. If it does not allow us to have any of these, we're not going to be motivated.

In addition to wanting to release products quickly, we also want to experiment. We don't always have the right answers, so we want to experiment and find out what users actually want. When we think of our continuous deployment tools, we want to think about how we can experiment quickly and iteratively. That's where A/B testing comes into play.
For example, I might want to test two variations of a home page and see which performs better. You can choose whatever metric you decide on, whether it's conversion or a drop in bounce rate, as your indicator of success, but you want to be able to do that. We want to release products quickly, but we will never get a product 100% right. So we want to be able to iterate, and we want to be able to iterate quickly. We're never going to get a product perfect, but that's okay. We want to constantly improve the products and features we're working on. We would also prefer to iterate and fail quickly rather than have stagnant code lying in our stack for a long time; that's not how you innovate on products.

Baked into the idea of continuous deployment is also continuous improvement. So the goal with continuous deployment is that you deploy quickly: you make products, you ship them out, you iterate, you make changes that are useful to users. But you also want your continuous deployment process to have a way of continuously improving the process itself. You can do that by having some kind of feedback loop; we'll talk about that in a bit. You also want to optimize for a low mean time to recovery. When things fail, which they will, you want to be able to recover quickly. I'll talk about some of the tooling we have in place to accomplish this. This is just meant to be an overview, so I'm not going to go into great detail.

Let's look at what a typical continuous deployment or delivery cycle looks like. As a developer, designer, or engineer, you commit code, which leads to a build, which will likely trigger some automated tests. You do some user testing, and then you release. Now, each of these steps also has a feedback cycle, so when something goes wrong, you know whether you can or cannot move to the next step. For example, if a build fails because of one of your commits, you can't move on to the next step.

So one of the things we do is frequent check-ins. We check in all our code to master directly. How many people here use GitHub? I think one of the greatest things about it is branching, but we don't use that, and that's okay, because for us this works really well. We instead branch in code, and we do that using something known as feature flags. Someone here is talking about feature flags later this evening, for those of you who are interested, but feature flags are very simple: they're really dumb, simple ways to do branching without having to go through any kind of merge hell.

At Etsy we use a PHP library for features. A lot of our stack is PHP, so this is a simple PHP configuration for, in this case, a feature. It's telling us that this feature is enabled, it's on, and this is how you would turn a feature off. Now, in addition to having a feature fully on or off, you can have a feature enabled for a small percentage of users. So this is telling us that my feature is enabled for 1% of all cookied users; these are users bucketed by UAID. If you want to bucket by user instead, for example when a logged-in user sees a specific module on their home page and you want that module to be either enabled or not enabled for that user at all times so that they have a consistent experience, you bucket by user. And on your application end, you just check whether a feature is enabled.
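Roughly, the kind of configuration and checks being described here, including the variant form for A/B tests that comes up next, might look like the following sketch, modeled on Etsy's open-source etsy/feature library. The exact keys ('enabled', 'bucketing') and method names are assumptions rather than a verbatim copy of the slides:

```php
<?php
// Sketch of feature-flag configs as described above (modeled on etsy/feature;
// key and method names are assumptions).

$server_config['my_feature']       = array('enabled' => 'on');   // fully on
$server_config['my_old_feature']   = array('enabled' => 'off');  // fully off

// Enabled for 1% of all cookied users (bucketed by UAID).
$server_config['my_small_rollout'] = array('enabled' => 1);

// Bucket by signed-in user instead, so a given user always gets
// a consistent experience for this feature.
$server_config['my_user_feature']  = array(
    'enabled'   => 1,
    'bucketing' => 'user',
);

// An A/B test: three layout variants, enabled for 1%, 3%, and 3% of users.
$server_config['my_homepage_test'] = array(
    'enabled' => array(
        'layout_one'   => 1,
        'layout_two'   => 3,
        'layout_three' => 3,
    ),
);

// In application code you check the flag:
if (Feature::isEnabled('my_feature')) {
    // new code path
}

// ...or check it for a specific user by passing in a user object:
if (Feature::isEnabledFor('my_user_feature', $user)) {
    // per-user code path
}

// For an A/B test you ask which variant this request falls into:
switch (Feature::variant('my_homepage_test')) {
    case 'layout_one':   /* render layout one */   break;
    case 'layout_two':   /* render layout two */   break;
    case 'layout_three': /* render layout three */ break;
    default:             /* control experience */  break;
}
```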
As in that last check, if you're asking whether a feature is enabled for a specific user, you pass in a user object. In the same vein, the feature flag is also what enables us to experiment with different features. So instead of having a config block where a feature is either on or off, you might have something like this. What this is telling us is that we have a feature, my feature, which has three different variations: layout one, enabled for 1% of all users, and layout two and layout three, each enabled for 3% of all users. So in your application code, instead of checking whether a feature is on or off, you check which variant of the feature is available for which user.

All of this ties into continuous integration. We want the build to always be green. We want to be able to release our code at any time, so we're always ready whenever we want to make a production push. Now that we've talked about the process, I'll talk a little bit about the specific tools we use throughout it.

Before a developer commits code, they try. What I mean by "try" is a tool: Try sends a copy of whatever you have in your local VM against whatever's on master, patches it up, and then runs the tests. It's on GitHub. Once a developer tries, they go to the Jenkins page, see if things are okay, and if everything's green, they can move on to the next step. When they're ready to push code, they use Deployinator. Deployinator is our home-grown deploy tool. It does atomic deploys. This is what it looks like; it's very straightforward, and I'll get into it a little more. Once a developer starts the push process, it's time for tests. Automated tests start running for each of the environments. Once all the tests look green, you probably do some manual user testing. Once your tests look good, you release to production. That green button there is what you press, and things are live to everyone.

Of course, it doesn't go to production directly. We have a pre-production environment called princess. Princess is essentially a pool of production boxes that have the same configuration and the same environment, so you're able to test what things are eventually going to look like to users before your change actually starts taking production traffic. And once you think things look good, you go to production.

And anybody can push. Maybe not always. I think I mentioned this in the BoF yesterday, but when you have such easy access, it's like everyone in your organization having root. Everyone could do major damage, but you need to be anxious; you need to have a healthy amount of anxiety. You need to be prepared to test. You need to be aware that what you're pushing has real consequences for millions of users. So you need a healthy dose of anxiety when you're pushing; you shouldn't go in with no fear or anxiety at all. But what you don't need is to be fearful. We have tools that help you build confidence when you're pushing things out to production, so you don't need to be afraid.

Typically, the traditional developer-operations relationship has been one where developers write code, hand it off to operations, and operations push it out. There's not much communication; people in operations don't always know what's being pushed out. There's a major communication gap. But here, as developers, as engineers, whoever's pushing code, you are responsible for pushing out your own code.
Operations and developer operations people are involved in making the tools better for you, in being enablers for your own workflow. So the relationship is more collaborative. We talk to each other. People know what you need; operations people know the pain points, and developers know what it's like to push code that might break things and page someone.

In order to facilitate pushing out code very quickly, we also have the notion of dark changes. Dark changes are code that you have extremely high confidence in. That in itself is very vague, but there are specific ways to understand what a dark change is: a simple template change, a minor CSS tweak, or unreferenced code, a code block that's not turned on. These are all dark changes that you don't have to go through the full push cycle for, because you can just push them out. And by convention, we normally mark dark changes as dark, so someone who's actually pushing the code knows they don't need to worry about them.

In addition, we have a few parallel deploy stacks. The most common is the web deploy stack that we go through, but in addition to that we have config pushes. For example, if I have a feature that's enabled for 1% of all users and I want to ramp it up to 50% of all users, that's a very simple change; I don't need to go through a whole push cycle to get it out. So we have two separate push cycles, and one of them is config, where you can ramp features up or turn features on and off.

The way people actually get to push is by very old-school coordination of human beings through IRC. The channel topic that you see there is the topic of the push. What you see there is basically two push trains: one has five people pushing, and one has a single person, the last name there. When I'm ready to deploy something to production, I go into IRC, into a channel called push, and I say ".join". That puts me in the queue and changes the push topic. When my changes look good, when I'm on one of the environments and I know, okay, my changes look good, I've tested them, I let everyone know that it's good, and you can say that in whatever language you want. An asterisk gets added there to let everyone in the push train know that I'm good. At this point, if someone else wants to join the push queue, since John's and my train is already going, Sally will get her own push train, and she is the driver of that train. So we use the idea of push trains, and we use something we call Pushbot; it manages the topic, and its grammar is written in ANTLR. It's also on GitHub.

So now you've pushed your code out to production, and it's post-deploy time: it's time for you to gain confidence in what you've done. What are the tools we have? First of all, we have Supergrep; it's on GitHub. Supergrep is an aggregation of all your error logs across all your different kinds of systems and all your pools. What this does is that once you push code, if any of the commits you made cause errors to start showing up in the logs, you notice that very quickly. In addition to Supergrep, we have a tool called Supertop, which I don't know if it's open source, but Supertop is basically what top does: instead of giving you every error that pops up, you get an aggregate, a list of the top errors you have, so it's very easy to spot what's going wrong. We also make extensive use of dashboards, which I'll show in a moment.
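First, to make the config-push idea concrete: ramping that feature from 1% to 50% would just be a one-line edit to the same style of config block sketched earlier. Again, this is a rough sketch and the key name is an assumption:

```php
<?php
// Before: the feature is enabled for 1% of cookied users.
$server_config['my_small_rollout'] = array('enabled' => 1);

// After: ramped up to 50% of users. A change like this goes out via a
// config push rather than the full web deploy cycle.
$server_config['my_small_rollout'] = array('enabled' => 50);
```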
Here's an example of the generic system-wide dashboards we have: 404s, random HTTP errors. If you take the 404 graph, for example, those vertical markers, the red ones, are when a push happened. So when a push happened, and you notice a spike or a change or a shift in the pattern of 404s, you can correlate relatively accurately which push may have contributed to that spike or that change. Then you go through the deploy logs, and you can very quickly find out which commit may have caused the problem. The other vertical lines are different kinds of pushes; the blue lines are config pushes. In addition to the system-wide dashboards that track 404s and things that are relevant across Etsy, you have app-specific dashboards. This one is a dashboard for shipping tracking. There are a lot of different features at Etsy, of course, and each feature development group comes up with its own metrics, so they can track when something they push out has caused a change or a shift in their own numbers. We use StatsD and Graphite to track this; StatsD is on GitHub.

To summarize the various steps involved in continuous delivery: you use Git hooks, of course, and we use Try to get ready to push your code out. Once you're ready to push, we use push trains and IRC to coordinate the pushes. We use Deployinator to do our atomic deploys, and then there are automated tests, a bit of user testing, and near the end you use the dashboards and Supergrep to gain confidence. And that's it; you're done, and you move on to whatever else you're doing.

So I assume everyone knows what this is; you're probably all wearing one right now. It's an RJ45 Ethernet cable. How many ways can you plug this in? There's only one way. Whereas a USB cable is often challenging: how many times have you tried to plug it in the wrong way? This is based on the principle of poka-yoke, which is Japanese for mistake-proofing. It's a very useful metaphor to keep in mind when we build our continuous deployment tools, because people make all kinds of mistakes, and we want to make our tools such that it's very hard to make a mistake. You don't want people to have cognitive overload, trying to think, oh, what do I need to do now? It should be fairly obvious. If the build is red, you can't move forward; there's nothing for you to have to think about.

Now I'll talk a little bit about culture, because culture plays just as much of a role in having an environment where you can push relatively frequently. No one creates culture; culture evolves. But there are ways in which you can create or foster an environment where people are happy, empathetic to each other, and productive. There are some guidelines we can follow, and there are tools we can use to achieve this. I think the first and foremost is that you assume best intentions. No one's trying to do anything wrong intentionally; you just have to assume that people have the best of intentions. It's also very important and critical to an organization that you cultivate empathy. If I don't know what it's like to be an operations engineer, if I don't know what it's like to be in support, I'm not going to be empathetic to you; I don't care about you. But if I have empathy, I have a human face to look at; I know what it's like to be in your shoes. You have to be open.
You have to be open to critique. You have to be open to ideas. You have to be open to different perspectives. And I mean, this is day two of the conference, and I feel like this has been drilled in since yesterday morning: failure is inevitable. If you're saying failure is not an option, you're lying to yourself. Failure happens, whether it's systems, humans, technology, whatever; failure is inevitable. But it's still important to keep in mind that we don't intend to fail. It's not our intent to fail; in fact, we would prefer very strongly not to fail. But despite all of that, we will fail. Failure presents an opportunity, though, and failure is the only opportunity where you can actually improve your processes or systems.

We do that through post-mortems. Post-mortems are where we dig into what happened: if a particular push led to a failure, if Etsy is unreachable, that's a failure, and post-mortems are where we try to get to the root cause of what may have gone wrong. These post-mortems are blameless. They're not meant to find fault in people. They may be meant to find fault in systems, but they're not meant to ask who broke things; they're meant to figure out what went wrong. Sometimes there's an instinct to ask, oh, but that's so obvious, why did you not check whether the tests succeeded? Or why did you not look at the dashboards? You could clearly tell 404s were going up. But that's not helpful. If you have one person in that situation, you're going to have many people in that situation. So the point is not to find blame in anyone. The point is: is there something better you could be doing, technology-wise or process-wise, that could have caught this without the need for human intervention?

The consequence of a post-mortem is remediation. You come up with remediation items: okay, this push failed; what are the different things we can do now to prevent this in the future? Can we add more instrumentation, more Nagios alerts? Can we add more metrics that would have caught this sooner? Or, because sometimes the problem with having too many metrics is that you have so much to look at that you don't know what to look at, you ask yourself: are we looking at the right metrics when things get pushed out? Do we have too much to look at, and is that why we didn't catch this other thing? Those lead to remediation items that you then go and implement.

We have a tool called Morgue, which is where we document what happened when things go wrong. If you're part of an organization that has many feature teams, you're not always involved in all of the same failures. Many of the failures that happen, network failures, MySQL going down, I, as someone in product development, may not even be aware of. So Morgue is a great way to go back and learn from what people found out. If there was an interesting failure, it's a great way to learn something. This is just an example of another Morgue entry. Morgue, I think, is also on GitHub.

We also used to give out these awards. Some time back, they were three-armed sweaters; the three-armed sweater is what shows up on Etsy when there's a 404. We started giving these out for the most spectacular failures of the year. And it's not because we want to celebrate failure, because that would be glorification, and that's not what we want. You want to learn from your failures; if you didn't learn from your failures, we're not going to celebrate that.
So this is to celebrate the biggest failures that we had a lot to learn from.

We have a tool at Etsy called Mixer. Every week, or every two weeks, two people within the company get randomly paired up. So I might get paired up with someone in support, someone in operations, someone in product engineering, and it just says, hey, you two go meet. This is a really great way to get to know everyone in the organization. When I started at Etsy, we were a little over 100 people, I think, and now we're 800 people. There was a time when I knew everyone at Etsy, and now I don't. But it's very important that you know your people. If you walk past someone on the way to the bathroom and you don't know them, it's a little weird. But if you get to meet all of these people over time, over a whole year, you have a kind of empathy that you wouldn't otherwise have. It's hard to be mad at someone when you know them. Or at least you will talk.

To summarize the whole talk: a combination of culture plus tooling lets us deploy very efficiently, many times a day. Most times we succeed. Sometimes we fail. And when we fail, we conduct post-mortems. Out of post-mortems come remediation items, and these remediation items make us stronger; they make our systems more resilient to failure. You can get stuck in the same cycle, though; you also need external stimulus to make your systems better. For example, there's a learning and development group at Etsy that conducts workshops on how to approach problems, how to understand failure, how to approach empathy, how to understand your coworkers better, how to listen.

This is a graph of our current deploys. The yellow ones are web deploys; the green ones are, I guess, blog deploys. We have a few deploy stacks, but the most important ones are web and config. We're able to deploy at least 30 times a day, every single day, and we do around 30-plus config deploys a day, so that's 60 deploys a day. Yeah, and that's it. We have a lot of technical articles on our deploy process; if you want to know how atomic deploys work, a lot of the specific tools I talked about are covered on codeascraft.com. That's all I've got. Thanks. I'll take some questions.

Hi. Excuse me, hi. Yeah. That was a great talk. Just had one question. Yeah. Why the name Princess? Sorry? Why the name Princess for the pre-prod environment? I don't know; it was before my time. I'm not sure. This was when Etsy was five or six people. Actually, there's an interesting story, I think it's also on the Etsy blog: in our Deployinator tool, there are two buttons. One, for production, is called "deploy to production", and the pre-prod one is called Princess, and it used to be called "save the princess". It's very interesting because through a lot of iteration people decided that was sexist, that was wrong, and so now that button is called "get saved by the princess". But I don't have an answer to your question. Sorry. Yeah.

So I have a question here. Here. So the feature flags look very interesting, and I've actually had experience trying that. But what happens is that, over time, the code really becomes full of feature flags and conditionals and becomes spaghetti. Okay.
So I mean, when you use feature flags, do you actually have to spend a lot of additional effort to clean up all the flags once you know which variation you're going to keep, or do you keep the flags forever?

That's a great question. We actually had one project where we started distinguishing between these, but we're not at a scale where this is a big problem yet. There was a time, in Etsy's younger days, when Etsy would get overwhelmed during Christmas and things like that, and we used to have to selectively turn features off, but we don't do that anymore. That is the one big problem with feature flags: you need to clean them up. We have sprints for it sometimes, and I think when you're working on a project you realize that cleaning up the feature flags is part of the project; that's the only way I've seen it work, and it works differently across teams. The downside is, there isn't that much of a downside; it's just a lot of cruft here and there, but we haven't experienced it as a problem. We think of it as something we try to clean up as we go. It would be ideal to have a more automated or cleaner process to get rid of them, but we haven't experienced it as a problem, except that there are a lot of switches.

Yeah, so you think keeping that kind of thing is okay? Is that what you're saying? Sorry, say that again? Keeping those kinds of switches is okay for you? I mean, it's okay; it just looks a little ugly, and that's fine. Okay, so what happens, at least in my experience, maybe from not designing the switches very well, is that the switches become overwhelming and then you really don't even know why a switch was there in the first place. Yeah, so we've had that problem where a lot of people who left Etsy have their names on config flags, so what we do is we just scrub those off. If they're fully on, we just delete those flags. There are some features that we specifically want to be able to turn off, so we leave those; there's a different naming convention for those kinds of flags, so they're called feature underscore blah, blah, blah. That way we know these are not just feature flags; these are features or products that we may want to turn off sometimes. But yeah, we just scrub them off, and it's ad hoc.

Hi, down here, yeah, great talk. You mentioned the deployment buttons, right? To push to production and to push to princess? So which tool do you use, and what is your rollback strategy? The tool is called Deployinator. It's a tool we built at Etsy, and it's on GitHub too. Our rollback strategy is very simple: if a commit may have caused a problem, we just revert it. Because we're deploying code continually, our code is very fresh; whatever is on your VM is pretty much never more than a day away from master. So if there's a problem, you just roll back, then you work on your changes, try to fix them, and then you push again. So yeah, that's the whole rollback strategy. Okay, thanks.

Hi, here. So how do you handle deployments with schema changes, especially in the case of rollbacks? For schema changes, we actually have a different tool. Schema changes are one thing that developers won't handle on their own. We do the schema changes on our dev environments, but we use MySQL, so it's not always straightforward for us to do schema changes, so operations does the schema changes.
We have a tool, I don't know if it's open source or not, but we have a tool called Scheminator, which allows us to define a schema; we then run that on our dev boxes as developers, and operations people run it on their own schedule. When we do schema changes, for example if we're adding a new column, our models may not reflect that column yet, but we will start opening it up. So we may have it behind a feature flag that's turned off, and when the schema change goes out to production, we may turn it on. In the case of a rollback, what you might want to do is turn it off in the model and then roll back the schema later. Perfect. And do you have any canary sort of deployments as well? I do not think so; I'm not sure. So all those thousands of servers that you have, you deploy to them all together at once? You mean our production servers? Yes, the production servers. I'm not entirely sure of the mechanics of the underlying system, but I know there's a blog post with the atomic deploy details on the blog if you want to look. Okay, thank you. Sure.

I had a question around culture. You were talking about things like blameless post-mortems and assuming best intentions. So how did Etsy actually inculcate those values in itself as a company? And how do you think we, as programmers, could encourage people in our companies to adopt those kinds of values? Because eventually it helps you to be a better company overall and a better workplace. Yeah, that's a great question. I don't know how many of you have experienced this in the past, but I started programming at a time when it was very common for male engineers, people with seniority, to be arrogant; it was very normal to put down others. And I know things have changed everywhere. At Etsy, I think the way you create culture is to have a certain base of people who have that in them, who are open, who are empathetic. And I think one great way to do that is to have diversity, to actively seek out people of all backgrounds. We have a lot of liberal arts people who are programmers. There would have been a time, ten years ago, when if you said, oh, I want to use TextMate, people would laugh at you. No one does that anymore. You can use whatever you want. You have your own ideas, your own opinions. So it's a very hard question, but I think having diversity is an incredibly powerful way to allow for that: when someone from a completely different background, who has never programmed before, comes and asks you a question that seems dumb, you're less inclined to say, that sounds stupid. You're inclined to think, oh, why is this person trying to use TextMate? Maybe there's something to it. So we actively try to encourage and foster diversity, whether it's women in technology or, you know, all kinds of underrepresented groups. We also constantly run intro to programming classes for people who don't have any background in programming. Our product managers push code quite often; if there's a template change, anyone can do that, so a product engineer or a product manager will go and do that. Designers all commit code. So I think there's a lot that needs to be done. There's no one way, but I think encouraging this and being open is a good start. Thank you.

Hello. Yeah. In our organization, we have an automated build process.
We have automation scripts for user testing. But after the build, there are a lot of steps: testing, user testing, performance testing. So how can you manage 30 builds a day with that many steps to follow? I get this question a lot. It seems overwhelming, but it really is not, because you're pushing code so frequently that whatever's on master is fresh. So what happens is, say you're at master right now and three people have commits to go out. These three people make their commits, and those three commits are not that far from master already. Your changes are very small, which means the scope of what could go wrong is very small. And then, in addition to that, we have those generic system-wide dashboards, and we monitor performance; we have performance graphs for all the important pages. So you can quickly look at what's going wrong. If you're not looking at those metrics, someone in operations is looking at them, for example if performance appears to be going down. And you, as an application developer, as a feature developer, are focused on, perhaps, your own specific metrics, so you're looking at those. So you have awareness of what's going wrong. I think it's a combination of all these different things that allows you to do that.

Okay, so would the better approach be to divide your user testing by feature, so that you only test the things that will be released? Correct. Rather than going through complete testing of the whole product. I think a function and a consequence of having very small changes is that the things you need to test are also limited. Automated tests, unit tests, are a separate thing, and they have to happen; they should never break. But if I'm working on, say, pushing out shipping labels, I just need to test that, because I know my code is affecting just that. There are cases where something in one part of your application affects somewhere else, and there are specific integration tests we have to tackle those, but those are few and far between and don't happen that often. And also, in our case, reverts don't happen that often; they're very rare, an exception, not the rule. Thank you.

Yeah, hi, yeah. Yeah, I'm from Intuit, and we're trying to learn from Etsy's CI/CD journey. The one question I have is: how did you get the whole company, all the different projects, on the same train? Do you have a lot of variation in your projects? I'm sure you have mobile, web, different languages, right? How do you get everyone aligned? Yeah, so I think one of the recent things we're having to deal with is that we have mobile apps, and mobile apps have their own deploy schedule. But whenever an engineer joins Etsy, they do a bootcamp of one month, or however long, in a team that's not their own. So if you're joining big data, you may do a bootcamp with infrastructure or ops. Recently we had a big data guy do some JavaScript; he had never written JavaScript in his life. I think having that kind of cross-team collaboration really helps people understand how things work. But there's also autonomy within each team, so you can come up with your own coding style or whatever you want, while still following the same overarching processes in order to deploy code. I feel like it also helps that we have senior rotations, so if you're an engineer, you can spend time on another team doing a kind of work you've never done.
If you're interested in mobile development, you can go spend a month or something working with that team, learning from them. So I don't know, that's how we do it, and I think the bootcamps really help. Also, most people deploy code to production on their first day. It's probably a simple change, like adding their name to the about page or something, but it's the first time a lot of people have pushed code to production that quickly when starting at a company. It puts a lot of anxiety in them, but it also makes them realize that it doesn't have to be that hard; it's really simple.

Hello, so you talked about rolling back a commit if some failure has occurred. What if multiple commits have happened before somebody identifies that there was a failure in an earlier commit? What should the strategy be in that case? Would you suggest rolling back all of the commits made after that one, or committing a new push that corrects the failure? We would just roll back that one commit. So if there are some dependencies, let's say of later commits on top of that one, rolling back may have some side effects. How do you handle that? Chances are that if a commit you made needs to be reverted, the dependency was introduced by you or by someone on your team. In our case, it's rare that a commit depends so strongly on someone else's commit that you normally have nothing to do with. So the simple answer is: if I'm pushing something that breaks, and it depends on something pushed earlier or something coming later, it's probably me that did that, so I would roll both of those back.

Okay, I have one more question about multiple commits happening on the same day. Are you talking about a monolithic product, or are there multiple services involved that are totally separate from each other? It's a mix. Yeah, it's a mix. It's very hard to define; it depends on your domain what a microservice is for you. But in our case it's a mix of both: there are some monoliths and there are a lot of microservices. Okay, thanks.

Hi, down here. Generally, regression tests slow us down in the pipeline because they are large in number. So how do you choose your regression tests so that they execute faster and you can get to production quicker? We try to mostly write unit tests, and we try to avoid integration tests; integration tests are, of course, incredibly slow. So we have a lot of unit tests, and those are almost always really fast. For critical paths in our code base we have a couple of integration tests, but our QA test suite is the biggest, and I'm fairly sure it runs in under 10 minutes. So it works fine for us. Okay. Do you use any tool for code coverage and code analysis? We're working on a tool based on PHP 7, I think, for static analysis, and we have a bunch of Git hooks that rely on it. Yep. So what's the size of the team that handles the tooling? I mean, you have a pretty comprehensive set of tools out there, so what sort of team is behind it? I would be making a guess, so I will say I don't know. We have a substantial DevOps team, but I don't know an exact number.

All right, thank you guys. Thank you, Premshree. Please take all your other questions.