So, this talk is about Microsoft Visual Studio's journey to continuous delivery. It's not about what Visual Studio does, what capabilities it has, or what tooling it provides to help you be agile and manage your ALM life cycle, and it's not a remedy for all problems. It's really a talk about the development team that builds Visual Studio and Team Foundation Server: their experience of moving from a traditional waterfall style of development to adopting modern practices today, so they can build a first-class engineering system and great products. I've personally been part of this journey to some extent. I was on the Windows Phone platform team, then I built Windows Phone application development tools. I've spent time in Redmond and in India, and in my current role I'm the GPM for the Visual Studio testing tools. So as a product owner it's super important for me to understand how the internal teams within Microsoft are progressing and adopting modern practices, so we can learn from that and provide great tools to my customers. That's my personal interest in Visual Studio's journey here. So, how many of you know what Visual Studio is and what it does? All right, pretty much everyone, great. For those of you who don't, the historical context is that Visual Studio is a very large product which, coupled with Team Foundation Server, helps you build applications for various platforms: Windows, phone, Azure, websites. It provides the bulk of the developer and testing capabilities, source control, build systems, you name it. It's the full end-to-end toolset for developers, testers, and stakeholders. So it's a large and very complex product, and because of that it's built by a fairly large organization. Traditionally we had two- to three-year product cycles: Visual Studio 2005, then 2008, 2010, 2012, and now 2013. We've come down from those two- to three-year cycles to a one-year cycle for shipping an RTM product, and of course we go through various CTPs, betas, and so on in between; we'll talk about that. It's a large product and a large organization, but I would say the organization is pretty adaptive. Every year, at every stage, we've been making fairly disruptive changes to our processes, to the themes we operate on, to what our focus areas are. With any change there is always resistance, but I would still classify it as a fairly adaptive organization trying to cope with the changes that are coming. The teams themselves are distributed across geographies: Redmond, India (Hyderabad and Bangalore), China, Israel, fairly spread out. When you're working on a large product and multiple components are being developed across various teams, the common challenge is how to get a holistic picture of where you really are in your development cycle. There's one team building debugging for Windows Phone apps, there's another team building profiling, and there's someone building the project system for the whole thing.
But what does an end-to-end scenario actually look like, and how do I figure out what state I'm really in? And with geographic distribution come its own challenges around collaboration and efficiency, and around keeping the quality of the product high, because there are several dependencies. There's always one team breaking another team, and there's always finger pointing going on: I was on track, but the other team broke me, and that's why I fell flat on the floor. Those kinds of things happen. And with all this going on, the overall goal is still to deliver customer value, and to make sure there is a continuous flow of that value. In situations like this, people look for a lot of excuses. One common thing back then, which I did myself as a product owner, was this: people would ask where I was with my features, and I'd say we're doing fairly well. Then somebody would ask, but you seem to have a large number of bugs here. Well, on a relative scale it seems fine; look at that other team, they have 400 bugs and mine is only 100. So we're really competing over who is less inefficient, and there's that kind of schedule chicken going on between the teams. And since there was no common criterion for what done really means, or what the truth about the quality of the product really is, everyone was playing with their numbers. That used to happen quite a bit back in those days. Just to give you an idea of how big the Visual Studio product team is, this is data from around July 2012. It gives you a flavor: a large number of developers, a large number of source files being touched, and terabytes of data being generated. Pre-2005, our traditional planning cycle looked like this: there would be an M0, where we'd do the planning for the two- or three-year release, and then milestones in which you'd develop code. Once milestone M3 was over, you'd start with mini-milestones, 3.1, 3.2, because you'd realize you hadn't got anything right yet and everything was chaotic, so a lot of mini-milestones got created just to clean up the mess. And when we did a beta, rather than using it to get customer feedback on the experiences themselves, we were really using it for people to tell us what the bugs were. We weren't iterating on whether the product we'd built was the right product; it was about I found this bug versus I found that bug. So there were a lot of quality issues that we only discovered during those beta timelines. Then there'd be a release candidate, you'd do the RTM, and any product we ship has roughly a ten-year servicing SLA. So you end up doing a lot of QFEs, hot fixes, or what we used to call service packs: fine, I've shipped it, now there's another hundred bugs, here's a service pack for that, and you install it. So once you're "done" there's a constant phase of fixing all the issues. And I would think most companies, most organizations, most products have been through these cycles.
But while all this was happening and the industry was changing, we really wanted to look at how to adopt more modern practices and set some engineering principles behind them. When we started the journey of setting those engineering principles, we didn't tell anyone to use agile practices or anything specific; we just said there are some fundamental changes that need to be brought in. There was a realization that you can't keep carrying long bug tails and fix product issues at the end, so we wanted to move quality upstream. A lot of emphasis was put on developers to do unit testing, write testable code, and write a lot of automated tests themselves, rather than relying on testers to do the white-box testing for them. That was the effort to drive quality upstream. We brought developers and testers together as part of feature crews, so testers know what is being developed and stakeholders know too; there's more collaboration between the teams, and the concept of feature crews evolved from there. For the main branch, we said there needs to be a certain sanctity. If the main branch has fallen over, or it isn't building, or some basic things aren't working there, you need to feel bad about it. It's not that you can't fix it, but you really need to feel bad about why such a thing is happening. So we created a culture where the mainline is something we could ship if a customer were ready tomorrow: I can't ship to that customer because you broke this, because your change caused this. The culture being created was that it's not about shipping at the end of two years; your mainline build should be ready to give to a customer at any point. To that end, people created feature branches where they would do their feature development, get it to a certain level of stability, and then integrate it into the main branch. We did try to define a common definition of what done really means: when you say this feature is done, what are the criteria? Certain teams followed it, certain teams didn't. And then there was a lot more focus on automated testing. This also met a lot of resistance, because automated testing means you need to invest upfront in automation infrastructure, and there's that initial period where people don't see the value of it. So certain teams did a lot of automation and certain teams did less, but at least everyone automated whatever new thing they were building. There were issues with the legacy system and how much of it could be automated, but the new features being built got a lot more automated testing. So we laid out these engineering principles, and different teams adopted them with different methodologies and different process methodologies. For example, Agile was gaining momentum at the time, and the teams in Team Foundation Server who were building these tools for external customers certainly adopted agile practices and started doing Scrum and so on.
There were other teams who were mindful of these things and brought them into their processes, but didn't follow the practices to the same extent. Largely, though, these engineering principles started evolving the mindset of what a tester needs to do in this new world and what the responsibility of a product owner is. With the feature crew model coming into the picture, there was a notion of shared responsibility that people started buying into. If I were to break the road to Visual Studio 2013 into phases: the first wave was really about addressing technical debt. We simply had lots of bugs in the system, and we needed to get that down to a controllable state so we could operate in a more predictable model. The second wave, once we'd done that, was making sure the learnings from the first wave actually stuck, because it's very easy for people to slip back: sure, the bugs have come down from 2,000 to 200, so let's go back to the previous mode, 50 more bugs isn't going to make much difference. It's about maintaining that hygiene and discipline for another cycle, so you institutionalize what you learned in the first wave. That really resulted in an amplified flow of value to the end customers. And today our focus is really on reducing the cycle time: you have an idea, you want to implement it and get it to customers as fast as possible. Coupled with that, we're no longer building only an on-prem product. We have the Visual Studio Online service, the load service, the build service; there are several services we're building in the cloud where we're doing continuous delivery on a sprintly basis, a three-week cadence. So the same teams are now building products for on-prem as well as services, and we need some kind of uniform process for them to deliver both. If you look at our 2005 and 2008 debt, these charts give you a feel for what that first set of principles produced: what our 2005 bug backlog looked like versus 2008. It was almost a 15x reduction in the debt. There was a lot of schedule improvement; we could get the bits out to customers faster. And in general there was a big rise in customer satisfaction when we ran our surveys: a lot of improvement in performance and reliability, fewer crashes, those kinds of things. Earlier, people wouldn't focus on bugs; everyone took pride in building new feature after new feature, but shipped them not very finished. So 2005 to 2008 was definitely a marked improvement. Since then, as I said, our focus is really on enabling continuous delivery and reducing the cycle time to get a product out. We're following the modern application life cycle: during planning you have a hypothesis, you code and test during construction, then you release to customers, measure, and incorporate that feedback.
So we're following that typical build-measure-learn cycle in this phase. For our Team Foundation Service, which is the service offering, we ship to the cloud every three weeks; every three weeks there's a delivery of that product going out. For Visual Studio itself, which is the on-prem product, we've enabled an update cycle where we typically do an update every quarter, plus some CTPs that go out in between. One of the fundamental learnings here was that if you really want to practice agile, get continuous feedback, and do continuous delivery, you need to make sure your product actually has a ship vehicle. If you're still going to ship the product at the end of one or two years, you're not enabling yourself to get feedback from customers. For our on-prem product, for example, the whole process of installing Visual Studio and then installing updates on top of it wasn't smooth. If you want your customers to give you feedback, you need to make the path easy: remove the hindrances so people can acquire your product easily, which means investing in a smooth acquisition experience, and make it easy for them to provide feedback on top of it. So you have to invest a lot in that loop of getting customer feedback; that's another thing we invested heavily in so we could use agile practices successfully. Any questions so far? Yes: no, there was no extra budget for that. We just instilled that this is the focus. You don't necessarily have to develop new features; you use your existing bandwidth to knock down those issues. So we didn't create new budgets for it. But there was a debt, and we were paying that debt anyway: as you saw on the previous slide, we'd finish the product and then do QFEs and service packs, so a large number of bugs were getting fixed and we were carrying that debt along. And frankly, for a lot of that debt we took hard calls and punted the bugs: I don't want them in the system. A tester has this notion, yes, this is a bug; we said it's a P2, and there were a lot of P2 and P3 bugs lying in the system that customers may never hit, and in that case it isn't necessary to fix them at all. So there was this notion of making hard calls when you don't think the bugs are worth fixing, because if something has been sitting for three years and you haven't fixed it, what's the point? It means your customers aren't really hitting those issues and they aren't important. That's how we dealt with it. The Team Foundation Service gets deployed every three weeks, and it's used for source control, build, and work and backlog management. The teams building Visual Studio consume it themselves, so we're talking about internal consumption: they're consuming it on a daily basis.
So every three weeks new bits are coming, we're consuming them, and we're providing feedback on them. The product we're building may get released a couple of months later, but as development teams we're consuming the builds, the service, everything, and providing constant feedback on that. That's true for the people who are using only the service: they can give feedback continuously. For the people using the on-prem product, it goes out every quarter and we get feedback from them only once a quarter. But remember, since we're deploying the same bits to the service as well as on-prem, the fact that there's a customer base consuming the service means it's already been dogfooded quite a bit, so the chances of additional feedback coming in that will derail your plans for the on-prem product are relatively low. Further on I'll also show that although we say we ship the on-prem product roughly every quarter as a go-live release, for that too we're doing customer technology previews that are aligned to these sprintly cycles; I don't know if I have a slide on that. As for metrics, I'll touch on that when I get to the slide on how we do our sprint reviews and what metrics we use there, okay? So let's move forward. Once we started doing this, we really started with Scrum. A lot of teams adopted Scrum when we laid out those engineering principles, and they were the ones who then started training the other teams: hey, you're trying to do something similar, let's try to unify your processes if possible. But we never mandated those processes, because there's always resistance when you mandate something. You really want to show them: every sprint I'm doing this, and I'm getting this feedback. People started taking it on board on their own, and it became a viral effect. Now most of the teams, still not 100%, but 80 to 90% of them, are actually following this. The things to consider were: what should the sprint length be, two weeks or three? What ramp-up time does it take to get teams following these cycles? What should the team organization be? One thing that was very relevant for team organization was to make the teams cross-functional but with shared responsibilities and shared goals. So if there are three or four teams working together on something, at the top level we want to make sure the goals and priorities for those teams are well aligned, and that's how we enable them to work together. Because if all four teams have agreed that these are the common priorities, then everyone works towards that goal; and if a team is working on something else that another team depends on, as long as everyone knows the other team's priorities, and that it therefore may not deliver this, it's easier to understand delays from other teams and things like that. Dependency management became a little easier.
It's not straightforward, but it did become easier once you had five teams talking to each other: these are my priorities, these are my priorities, here's where we're aligned, and here are certain things we're doing on our own. People understood. So that's how we started with Scrum. This was very important for setting the cadence of how often you iterate on your plans. Often, when people read books on agile, or papers, or go through training materials, there's a chance of misinterpreting things. They'd say: since I'm working with agile practices, don't ask me what's going to happen a year from now; I'm going to learn and evolve. But when you're building such a large product, with such a complex organizational structure, you really need a common vision you're working towards. So we set a 12-to-18-month vision. The vision is something that sticks around for a while; there will be iterative changes to the vision as well as you learn more, but it's at least the destination you want to reach, even if the route you take to that destination may differ. So we set out that 12-to-18-month vision. Then, for the next six months, we'd draw a line in the sand based on the velocity of the team and whatever we'd learned: if at the end of six months we can light up these end-to-end scenarios for our customers, that'll be great. And then we'd have a somewhat better understanding of the experiences we'll deliver within roughly three sprints, with every sprint being three weeks. So that's how we used the work breakdown: there's a vision, there are scenarios, there are experiences, there are user stories, and we roughly try to align them to how often you think you'll need feedback on each of them. Effectively, that's why we do three-week sprint execution: the pace at which we can deliver and get customer feedback made that seem like the right level. If I had a mechanism where my product was being used every day by a large number of customers and every check-in was something I'd want feedback on, I could probably evolve to that system, which has its own benefits. So that's how we traded off these cadences, if you will, for stories, experiences, and scenarios. And this gave people a fair picture: okay, I know what Brian Harry, our distinguished engineer for application life cycle tools, wants us to do over the next 12 to 18 months, and here are the reasons and the customer data he's given me to go and build towards that vision. As part of my group, which builds the testing tools, let me see how the testing tools fit into his overall vision of being a leader in the DevOps life cycle, the DevOps value prop. Then I build the scenarios that are required for the testing tools to get there.
And then the team within would say: okay, here are the experiences and here are the user stories that get me there. While we were doing this, there were really two metrics we were using. One is idea to working software: you know what you want to build, and here's the working software for it. And then, once you have the working software in production, what's the mean time to repair: how fast can you fix issues if you find any? Those were the two metrics we wanted to drive here. In terms of how we involved the stakeholders and the different teams: there was an envisioning exercise the leaders did, where they presented a vision. Then we did experience reviews: every two or three months we'd review experiences with our different stakeholders and teams, and that's where you'd hear, okay, you're building this experience, but there's another team building a similar experience, why aren't you incorporating that? That was a good discussion for seeing how the cross-team work was aligning towards a uniform experience. At the end of every three weeks we do a feature team chat, where we say: here's what we've built, and here's what we're trying to build in the next two or three weeks. That gives everyone an idea of what the overall backlog looks like. So this is mostly about how we handle independent teams building their features: we trust them, but we involve management and the stakeholders in the overall process, because it's very important for them to see how it all aligns towards the broad vision. Then, at the end of the sprint, we send a sprint mail, and we record a sprint video and share it with everyone. That's a super effective way of letting everyone know what you actually did and what the product really looks like, and people can then go play with it. Whether it's across teams or across the organization, we just broadcast it everywhere. Sometimes we also use these sprint videos to talk to our early adopters and customers and take feedback from them. All right, so that's how we do the planning and how we execute. For cloud delivery, when we're actually shipping, it was very important to look at the shipping cycle. As I said, we use three-week cycles. At the end of the third week we have the feature team chat, and the fourth week is when we do the deployment; that's the cycle we use. Now, this is where we are. Certainly we'd love to be in a shape where, at the end of week three, you've done enough that you can just deploy with confidence. We're not there today; we use up some time to actually do the deployment. We're constantly shrinking that deployment week and improving what we can do there, so that eventually there are smooth deployments right at the end of the three weeks.
Because the smoother you make this process, the higher the velocity of your team will be. Yes, all the exploratory testing and so on gets done in week three. The deployment is predominantly about taking it to the pre-production environment and running certain kinds of tests that you might not have run yet. For example, today we're not doing load testing in a continuous integration way; we're not doing that on a weekly basis, so that's one of the activities in that deployment window. Also, there are certain feature flags that people have turned on, and we've found issues with feature flags: once five different feature flags are turned on together, we've seen things fall off the cliff. Those are the kinds of things we're doing during deployment, and all of them we think can be optimized for sure; that's where we are and what we're trying to improve. If you look at the overall cycle, I'd say everyone today storyboards their ideas. That's a uniform practice now, it's institutionalized: you have an idea, go storyboard it, validate it with your customers. Once you've built the product and given it to customers, you enable stakeholders to give you feedback on it, and from a product backlog item you can link that request so people can see it. We've set processes in place so all of this becomes smoother and easier. You capture the results in the backlog: if there's a feedback response, say Brian Harry gave feedback on certain things, you capture it in the backlog and prioritize it there. You use the iteration backlog on a task board and move work through the cycle from one state to another. Then, once you have your backlog, there's the coding and testing. Even from a developer perspective, developers are doing continuous coding, a lot of teams have started using TDD, and more teams are being encouraged to do test-driven development. So they're writing tests first and then coding, and that's a complete paradigm shift; I'll show a tiny sketch of what that test-first flow looks like in a moment. The people who are doing it successfully just love it; the people who are not doing it are allergic to it. So we're seeing two extremes. But overall, the engineering principle we state is: you don't need to do TDD; what you need to do is build in quality upfront, and that's what we're going to hold you accountable for. You can do it whichever way you want, and if teams see that other teams are producing features at a higher velocity, they'll get motivated to adopt those practices. For testing, once you've done your automated testing and the rest of your functional testing, we're now focusing a lot on exploratory testing. The testers are doing a lot more of what we call customer experience validations: they're sitting in the shoes of the customer and asking, if the customer were going to use it like this, what happens? The user story or experience may not have been defined that way, but what if the customer uses it like this? They'll explore different areas, find issues, and generate more backlog items.
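Here's that test-first sketch. This is just a minimal illustration and not the Visual Studio team's actual code: Python and pytest are used purely for brevity, and the price_for function and its discount rule are made up. In the test-first flow, the two tests are written before any production code exists, they fail on the first run, and only then is the smallest implementation added to make them pass; what's shown below is the end state.

```python
# test_pricing.py -- run with: pytest test_pricing.py
import pytest

# These two tests are written FIRST; on the initial run, price_for below
# does not exist yet, so the tests fail and define the behavior we want.
def test_bulk_discount_applies_over_ten_items():
    # 11 items at 10.0 each, with a 10% bulk discount -> 99.0
    assert price_for(quantity=11, unit_price=10.0) == pytest.approx(99.0)

def test_no_discount_for_small_orders():
    assert price_for(quantity=2, unit_price=10.0) == pytest.approx(20.0)

# Production code, added only after watching the tests fail.
def price_for(quantity: int, unit_price: float) -> float:
    total = quantity * unit_price
    if quantity > 10:      # bulk-discount threshold the tests pinned down
        total *= 0.9
    return total
```

The pricing logic itself isn't the point; the point is that the behavior is pinned down by a failing test before the production code is written, which is one way of getting the "upfront quality" the engineering principle asks for, whether or not a team formally calls it TDD.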
Then there's performance under load. Traditionally this was an activity done at the end of six months; now, with our service and everything else, we're at least doing it at a sprintly cadence, and we want to see how some of it can be done even earlier. Because what we've seen is that when we find performance issues, they're hard to debug. You can easily hide them behind your bug counts, but the fact is that ten UX bugs will probably take two days to fix, while one perf bug can take a week or two and can force changes to your design. So this is something we still want to explore: how can I get some amount of performance testing, maybe some micro-benchmark style testing, more upfront in the cycle, so there's less chance of getting hit by these performance issues later? I'll show a tiny sketch of that idea in a moment. Those are things we're exploring in that area, but we recognize this is an area where we sometimes get into trouble. And then once we get into production, it's about the live site. I'll talk a little more about this: we have a live-site culture where everyone is monitoring the live site, and we've put processes in place to make sure we can recover from live-site issues faster and to learn from what other people are experiencing on the live site. This is something we've been doing; I was completely new to it, and I can see we're really evolving as an org there and learning from each other's experiences. And yes, this is not for the on-prem product; it's for the service we're building. Let me go a little faster on this one. In the end, what it says is that for the service we do all of this, but what really matters is how the real users are using it. And no, there are certain classes of issues that ops can fix, but when we're talking about user experience issues, they get translated back into a product backlog item for the team. We're also investing heavily in things like analytics and usage analytics, where more of that can be automated. What this is really showing is that we have a dashboard culture. For example, my team owns the load service, and we have dashboards all over our offices, TV screens and monitors where you can see what's going on on your site. Of course there are ops people and DRIs who are constantly monitoring it and are responsible for it, but it creates the awareness that you need to see how your customers are using the product and keep watching it. So there are small tools like that which we've enabled in the system for you to do those things. From an engineering system perspective, these are roughly the areas: making sure your deployment processes are automated, and that you're using tools for deployment.
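And here's that micro-benchmark sketch. It's a hedged illustration only, not our actual tooling: the parse_work_items function and the 50-millisecond budget are invented for the example. The idea is simply a timing assertion that runs alongside the regular unit tests, so a performance regression on a hot path fails the build in the same sprint instead of surfacing weeks later as a hard-to-debug perf bug.

```python
import time

def parse_work_items(count: int) -> list:
    """Hypothetical hot path we want to keep fast."""
    return [{"id": i, "title": f"work item {i}"} for i in range(count)]

def test_parse_work_items_stays_within_budget():
    # Budget agreed upfront: generous enough to avoid flaky failures on
    # slower build machines, tight enough to catch a real regression.
    budget_seconds = 0.05
    start = time.perf_counter()
    parse_work_items(10_000)
    elapsed = time.perf_counter() - start
    assert elapsed < budget_seconds, (
        f"perf regression: took {elapsed:.3f}s, budget is {budget_seconds}s"
    )
```

In practice you'd average several runs and keep the budgets loose, since machines vary; the benefit is that some performance check exists at check-in time rather than only in the load tests during the deployment week.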
I'll talk in a little more detail about managing quality and code flow. Exposure control is important: you want to do A/B testing, and you want to do incremental deployments where you deploy to a smaller set of people first so you can incorporate feedback from there. Dependency management and engineering backlogs are things we talked about earlier. For managing quality, there's a whole set of things we do. One is gated check-ins, where the developers themselves write unit tests and those run as part of the check-in. Then there are larger integration tests that run as part of the nightly builds, and then you have the sprint sign-offs, which is where we do the customer experience validations. And we really define the sprint criteria, the done criteria. For example, today we say the done criteria for a user story is that there should be no P1 bugs and no issues impacting the customer experience; of all the things we have around performance, reliability, acquisition, and so on, we've defined certain criteria which must be met, and that's when you actually close out the user story or the experience. For the P2 bugs and other bugs in the system, ideally there should be zero, but what we've done is define limits: there shouldn't be more than a certain number, or they should be fixed within a certain age. Those criteria keep them under control. The sprint sign-off criteria is predominantly: is this build ready for customers to go consume? And what we do is take our leaders, our managers, our stakeholders, set up a system for them, and say: go play with this product. That's what we call experience walkthroughs. A bunch of people come together, we give them a five-minute overview of what the feature looks like, and we ask them to go play with it and provide feedback. It's just a way of making sure that your stakeholders, your management, everyone is involved. You're able to ship your product to the customers and get feedback, and then you have tons of feedback you can analyze against certain criteria and prioritize. We talked about exposure control, and we briefly covered the live-site focus. Our operations team and the developer team sit side by side, and we work with them together all the time. They come and participate in the live-site reviews: if there was a live-site incident, they'll explain what the mean time to detect was, what the ops people did, and how it was handed to the developers. We've established those practices, and we do them on a weekly basis to make sure live-site issues are kept under control and we're constantly learning from them. For example, we've had cases where the live-site availability shows 100%, but a large number of customers are complaining about the service. What does that really mean? Are we really measuring our availability correctly? We had a certain way of looking at availability, and we said: the way we should look at availability is really what the customer experience actually is.
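To illustrate that availability point with a toy example (the numbers and field names here are made up, not our real telemetry): a pure server-uptime view can report 100% while a customer-experience view, which only counts a request as good if it both succeeded and came back within an acceptable time, tells a very different story.

```python
# One record per real customer request, as seen at the service boundary.
requests = [
    {"succeeded": True,  "latency_ms": 120},
    {"succeeded": True,  "latency_ms": 3400},  # succeeded, but painfully slow
    {"succeeded": False, "latency_ms": 95},    # outright failure
    {"succeeded": True,  "latency_ms": 150},
]

# Server-uptime view: the machines never went down, so this says 100%.
uptime_availability = 1.0

# Customer-experience view: a request counts only if it succeeded AND was fast enough.
LATENCY_THRESHOLD_MS = 1000
good = sum(1 for r in requests
           if r["succeeded"] and r["latency_ms"] <= LATENCY_THRESHOLD_MS)
experience_availability = good / len(requests)

print(f"uptime view:              {uptime_availability:.0%}")      # 100%
print(f"customer-experience view: {experience_availability:.0%}")  # 50%
```

Measured that way, the slow and failed requests drag the number down even though nothing was ever "down", which is much closer to what the complaining customers were actually experiencing.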
If you have some kind of failure today, are you looking at your system in its entirety and making sure you won't have that kind of failure again? There's a lot of scrutiny and a lot of discussion around that. Then there's the weekly live-site review, and there's a monthly service review at our VP level, where he goes and looks at what the issues were: okay, this is an issue the Azure team needs to fix, where is it on your backlog? So there's a whole culture of making sure the live site is up and running all the time. When we talk about the box product, I think it's much harder, as some of us discussed earlier, because you don't get constant feedback. But what we've done is decided to align our CTPs, I'd say to every sprint; I don't think it happens every sprint. Even though we're shipping at roughly four-month intervals, we're giving some early bits to customers there as well. And we had to go through a lot of NDA, licensing, and disclosure hassle to deal with that; it wasn't easy. There are a lot of disclosure issues, especially with Windows and Windows Phone, our big platforms: they don't want to disclose what's coming in the next release, and whenever you give the tools out, there's a chance of those things leaking. So it's about maintaining some of that confidentiality, making sure the features you really want to protect for a year or so are hidden in such a way that people can't jailbreak them and find out. You need to put that kind of system in place to be able to do it. So it's much harder, and that's why, when we talk to our teams today, everyone wants to build a service, because they think: if I build a service, I can get feedback early, I'm really in control, I really know what's getting used. Of course, that's not the way to prioritize whether to build a service or an on-prem product, but it is harder on-prem. So I'm towards the end of the talk; let me summarize the key takeaways. From a culture perspective, every change will meet resistance. It's about managing that change, believing in it, showing the value to people, and getting there incrementally. As you can see, for example, we're not at a point where every check-in goes directly to production. We'd like to get there, and we're moving slowly towards it; teams like Bing are doing that already, but not Visual Studio yet. You really want to automate your recurring processes, be it deployment, testing, or your dashboards. We want to invest so that when we talk about live-site issues, you can see those metrics automatically. For the on-prem product we used to have something called SQM, where people had to enable a flag and only then would you learn which features people are using; you want to make some of that easier and automate it, even the analysis. Eliminate the friction of collaboration: when you're doing sprints, send sprint emails, do sprint reviews, get a larger number of people into the room to dogfood your product. That's one way to get rid of that friction.
Make sure that at the management level and the leads level you have shared goals and priorities; that's what you want to do. Make operations really part of the process: ops is not a separate team, so make them part of the process and think about what operations will have to do up front. And really use the cloud cadence to respond rapidly to customer feedback. That's critical: if you're giving bits out early and customers are providing feedback, you also need to be empathetic to their needs and actually respond to that feedback in time. That's something I think we do a fairly good job of: striking the balance between what I had planned earlier versus the customer feedback I just got and therefore need to change. It's something the management and the leaders all pay attention to. Even though you might have said you'd build this experience in the coming sprint, if you got a lot of feedback on the way you've built it, go change it, and the management is going to be completely fine with that. The management needs to be supportive of the changes that come with a feedback-driven approach. That's pretty much what I wanted to share in this talk. If you have any questions, I can take them now.