So, I'm very happy to be here. Thanks to Pramod Sadalage, Naresh and Ravi for inviting me for this talk today. My name is Raj Anantaraman; I work for Intel and I'm based in Bangalore. I'm an agile coach, and I also currently lead a team in terms of managing its deliverables, but my primary goal is to make sure we are following agile and Scrum the right way, versus just ScrumBut. That's the goal here. I'll briefly share our journey of how we adopted this on a specific Windows platform that we delivered, the Bay Trail tablet. The aim is to showcase how we have done distributed agile along with a good combination of tools. You can teach people agile and Scrum pretty well and they will get it; you can pretty much do it on a whiteboard. But when you get a cross-geolocated team together, you are dependent on tools, and the tools had better be good, so they can integrate and deliver value at the end of the day. For distributed agile to work, we need two essential components: the method itself and the tools. What I'm here to share is how we delivered on both the methods and the tools, and how we delivered a product quite successfully by reducing time to market while improving the quality of the product significantly. I'll give you some brief context first. Is anybody here doing hardware and software? Okay, a few of you. For those of you doing software and hardware, this may be of particular interest; for the others, I'm sure this will be interesting as well.
So, this is the Bay Trail tablet that we just deployed. In fact, it's at the Mobile World Congress in Barcelona right now; we've just deployed another version of this tablet. Pretty good stuff in terms of a tablet user experience, on the Windows platform. What we have actually developed at Intel is this penny-sized SoC, the system on chip, which means that pretty much all of the logic that goes into calculation, computing, video and graphics delivery, audio delivery, everything, is in the size of that small chip. Can you believe that? We call it the Atom processor, and that's what is contained within this SoC that we deliver. From an agile journey perspective: for those of you from a hardware and software background, you probably understand the complexities involved in merging the two worlds. You have hardware, which tends to be predominantly waterfall-based, focused only on milestones, broad deliverables, upfront requirements and whatnot. And in a big company like Intel, software tends to mimic what the hardware teams do, because software supports the hardware. When we started this program a couple of years ago in September, agile and Scrum were being practiced in very few teams. By February 2013 we had a few more teams participate, and what I'm going to share with you here is our first experience of adopting Scrum at a platform level, and how that went. The specific version of this program is called Bay Trail, and this is the small penny-sized chipset I showed you, on which we've delivered this whole platform. So when I say platform, what does it mean?
I'm setting the context here, by the way, so you understand where I'm coming from. You have this system on chip, which is basically all of the hardware components going into a small device. And if you look at the lines of code Intel delivers, somebody was saying Intel would be the second largest company after only Oracle in terms of lines of code delivered. We deliver a ton of code through our drivers, firmware, installers, DSP, third-party IP and middleware. There's a lot we do in terms of software, and driver code is not small by any means; it's pretty large. This is what constitutes what we are delivering. And especially on a platform like a tablet, multiple teams, in fact 20-plus teams, collaborate to deliver the software on this product. You have things as basic as your BIOS: when you boot the machine, the BIOS takes over, knows the IO devices, decides where to boot from, and starts to boot the operating system. Your audio drivers: listening to audio on a headset is actually pretty interesting, because there's a lot happening in the back end to make that experience happen, all sorts of DSP and firmware. Things like graphics: how do you make sure that when you turn the tablet, the screen actually rotates? Rotating by 180 degrees takes a lot of software. There's a gyroscope at the bottom that knows the turn angle, and based on that it sends signals to your graphics drivers so the display rotates.
So you have a lot of these components: sensors, storage, power, IO, camera, CW (which stands for wireless), security, and power and performance. Again, anybody here from a mobile software development company? Which company are you from? McAfee, okay, our sister organization; glad to see you guys here. Anybody else from mobile devices? Target, okay, you design and develop devices, great. For those working in the hardware-software space, especially at companies like Samsung or Nokia: when you put all these pieces together it's a fairly complex activity, and the margins are pretty brutal. Samsung delivered their S5, like, yesterday or the day before, and Samsung is a company that has figured out how to deliver at least two big products a year. At Intel, historically, our comfort zone has been the server side and the client side, where we can afford to deliver products once a year, or perhaps once in two years; that's our refresh cycle. But the moment we decided to compete in the mobile market, we decided that our regular development methods would absolutely not work, because it's a completely different mindset altogether. You have competitors like Samsung and Nokia who are extremely aggressive. Has anybody here worked with Samsung before? It's not a pleasant experience, because they are so results-oriented that they want things yesterday, not today. They won't ask, can you deliver this a week from now? They'll ask, can you deliver it yesterday? It's that kind of cutting-edge focus, and we could not compete in that kind of market. So we had to change our method.
So that's what I'll talk about today: how did we collaborate? The story is just starting out; I want to share our experience, but there's a lot more to it as we continue this journey. The good thing was that all these teams shared a common goal: how do we deliver high-quality, on-time platform software that really works and that we can ship on time? The challenges we faced: of course, a lot of distributed teams worldwide with very few overlapping working hours. It's not fun to be in meetings at 6 a.m. or 11 p.m. in India; you're very tired at night and just can't be productive in those meetings. But that's the nature of the globalized environment we live in. Late software integration was another big challenge for us; I'll talk about some of these briefly. Complex requirements: anybody heard of the V model? Some of you have. It's a term popularized by the Europeans, and they've done a really nice job; I love that model, it's really good. But in an agile environment, how do you make the V model work? It's a big challenge, actually. Then, planning and tracking dependencies across multiple ingredients. I told you about the 20-plus ingredients earlier, and there are lots of dependencies. For example, for an audio driver to play music on the device, it depends on IO: IO has what we call an I2C driver, through which we have to take the audio data and stream it out. So I'm dependent on the IO guy delivering his part before I can start integrating and testing my audio. It's a mind-boggling integration activity that we perform, so planning and tracking dependencies is a big deal.
I told you about the waterfall mindset: in big companies like ours, hardware tends to have a very waterfall mindset, and software tends to mimic it. The last big challenge we had was the multiple-tools landscape. If you have ever worked in a multinational company, every team operates like its own individual business unit. They have their own processes and their own tools, and that makes it so hard to integrate. I was at the booth earlier talking to folks from Siemens and other companies, and especially with JP Morgan. They were telling me that rolling out Scrum is not the difficult part; getting people onto the same tools is actually the most difficult thing. There is still a pretty big debate about whether you should standardize on a single tool set or, because we are agile and let teams self-organize, let them choose their own tools. You can choose your own tools provided you are a single unit by yourself and not collaborating with a thousand other people. But the moment you talk about collaboration and getting people together, aligning on a single tool set is very important. So that is one of the challenges we faced, which I'll talk about. On the low-overlapping-hours point: everybody in this conference so far has put up a world map, so I thought I'd do that as well. We are also located in multiple geographies: we have a presence in Bangalore, one of our biggest design centers, and in PRC, Israel, Poland, and of course the US. Fairly distributed in terms of where the teams are located. And if an IO guy sitting in Bangalore is working with an audio team in Santa Clara, it's pretty difficult. How do you get those two to talk?
And you want the engineer talking to the engineer, not the Scrum Master talking to the Scrum Master, because maybe not much will come out of that. So again, this was a very big challenge. I talked about software integration. In a predominantly hardware company like Intel, everything revolves around the hardware: we are delivering a piece of silicon, a system on chip, and that is the main focus. So our processes are absolutely waterfall-based at the system level. The way it happens is that the hardware designers do the designs in what is known as an RTL language. Anybody heard of RTL? Some of you know it. RTL is basically code as well: you describe the registers and specify what should go in and come out. That's how the hardware development goes. The design guys develop it and send it to a manufacturing unit, which takes the designs and produces sample chips that we can then use for integration and testing. The way we used to do it before was to wait until the first hardware sample arrived, and only then start writing the software, integrating, and testing. That absolutely won't work anymore. The most difficult part is that the validation window for software becomes so narrow that by the time you develop your software and validate it, there are a lot of bugs, and that pushes out your milestones and delivery dates. This is the problem we were facing before we adopted agile. And this is the V model we've been using, whether waterfall or agile; it doesn't matter.
The whole idea is that it all starts with use cases. I'll give you an example of a use case: if I click a photo using my camera, I want the haptic feedback or the sound produced when I do the click. There are four components that need to come together. First there's the IO driver. Then the audio guys, who take the sample and beam it through the IO. Then the camera itself, which takes the pixels from the image and stores them on the device. So there are lots of things that need to come together, and a V model is of absolute importance to us. We start with the use cases defined at the platform level: what are all the 300-plus use cases that we need to deliver on a platform? Like this tablet I showed you earlier: we don't create these end products ourselves. We give our reference designs to the OEMs, the Dells and ASUSes of the world; they put them together, put their label on top, and sell them. That's how it happens, so we understand the use cases really well. Use cases are then broken down into components and component-level requirements and so on. But how do you fit an agile model into that V model? That was the challenge as we adopted this. I spoke a little about the dependencies: at the top level you have the platform schedule itself, and then the schedules for development teams one, two, three, four, and all of these run as independent business units. Collaboration becomes a very big challenge when that happens. So whenever one team hits an impediment, shown in red there, how do you communicate that the team has hit an impediment? This was a big challenge.
We used to do a lot of PowerPoint presentations and whatever other offline communication methods. When a team hits a roadblock on a schedule, how do you communicate it? The old way was a PowerPoint: you bring the leads together in one meeting and report upwards. We call this the traffic-light problem, and here is what it means. Dev team one says they're stuck; they have an impediment. They put together a nice PowerPoint, with red, green, whatever color code they use for reporting status. It goes to the second line of management, who say, ah, this is all right, and it becomes orange at the next level. Then it goes to our senior VP, who says, ah, it's actually green, and pipes it upwards saying this is wonderful. And what this team reported as red, as we-are-struggling, gets translated upwards and loses its meaning. Earlier we played that Chinese whispers game here; Ravi was leading us through it, and it's amazing that only 50% of the people here got the right answers. Very similarly, we play Chinese whispers in big corporations: the presentation goes upwards and upwards, and by the time it reaches the CEO it has lost its meaning altogether. So how do you prevent that problem, so that reported status is actually what shows up as the real state of the deliverable? This is another problem. I've already spoken quite a lot about the waterfall mindset, too much upfront design, bulk requirements and late integration, so I'm not going to belabor that. The last challenge we had was the multiple-tools landscape issue. When I started this job, I became the coach for this platform team worldwide, coaching them on how to do things better from an agile standpoint and deliver well.
When I did an evaluation, there were eight SCM tools: Perforce, Git, TFS, and so many more. Eight SCM tools, so people were checking in code to various SCM databases. Then you had about three or four project management tools, and guess what the most popular project management tool was? Excel, there you go. Excel never goes away. I love Excel too, but do not use Excel for your global collaborative project management. The last one was defect management, and there were a ton of those as well: there was Rec Pro from one company, then JIRA, and so on. There are so many of them out there that it becomes hard to know where you stand. When I'm deploying platform software, I want to know the count of high and critical defects I'm carrying at any given point in time. To do that in the world we were in before, I would have to ask three or four program managers: where are you guys, what is the defect data? And one sends an Excel spreadsheet, another sends a PowerPoint, everybody sends whatever format they have. My job as a program manager becomes pulling together data from so many different sources. Anybody been in that boat before? I have some empathizers in the room, excellent, so you understand what I'm talking about. If you're talking lean and eliminating waste, the first thing we should eliminate is all the PowerPoint and Excel for status updates and rolling up information. So in a collaborative distributed agile world, we made it a goal not to allow so many tools: standardize on one tool for SCM, one for project management, one for defect management, and that's it. No more bridges, no more piping of information.
And this was the most difficult battle, more than agile or Scrum itself. Because tools are like a religion for people. They say, my tool is better than your tool; Rally is better than VersionOne; this tool is better than that tool. Every tool has its pros and cons, and everybody has done a fine job on the tools themselves. But the key for distributed agile is standardizing on the one or two tools that really make sense for us, and the job is to convince people: guys, focus on the development, the hardware, the work. Do not focus on the tools and what the tools offer. Yes, there will be some issues here and there, but live with it; our bigger goal is to deliver a product, not to debate whether this tool has a nicer UI than that one. You get what I'm saying? So, what solutions did we provide? The first and foremost, which was actually one of the easier things to solve, was the shift left. Earlier I showed you a picture where the green software band only intersected at the development phase. Now, because of the simulation environments available to us, we can take the RTL code the hardware guys are developing and put it into a simulation environment like SLE or FPGA. For those of you from the pure software world, don't worry about these terms; it's just a simulation environment onto which you can load the software and do your testing as if it were your hardware environment. It simulates that system on chip I talked about earlier. We also have generation N-minus-1 hardware: before Bay Trail there was Clover Trail, so you can use the previous generation of the product and do testing on that as well.
It may not be exactly the same, but at least you can test some part of your code on that machine. So those are the three things we did, and we adopted an iterative life cycle, of course, with Scrum. We standardized on Scrum across all of these teams. Going out and coaching teams and educating them was the easy part compared to the tools part of it, but we got good buy-in from the teams. So this was the first solution we put together, and with it came true vertical slicing. How many of you know vertical slicing? You've heard of it, right? Vertical slicing means that if I'm going to deliver this use case where I can listen to audio through my headset, I break it down to a thin slice and make sure I can deliver at least that slice of the feature, end to end, by the end of a sprint. Versus saying I delivered just one portion of the IO driver or the audio driver. That's not the goal; I need a thin slice that is working and verifiable at the end of the sprint. This was a hard part as well: educating teams on how to thin-slice and deliver. But with coaching and training, we got over it. Then we rolled out the Scrum of Scrums. We had a Scrum of Scrums meeting which focuses on use cases, impediments and dependencies. Use cases, I'm sure you all understand: what the customer experiences when they get the product in their hands. The Scrum of Scrums is very focused on the use cases we are delivering. Is it verifiable? That is the done criterion. Is it verifiably working, such that if I decided to stop my Scrum process, I could ship the product right now? That's the Scrum of Scrums process with the use cases.
Then each of the ingredient teams focused on what we call features, user stories and sprints. And we standardized on a one-week sprint. That was pretty drastic, because people were all used to three- or four-week sprints, and standardizing on one week across everybody was a very difficult activity. But it forced people to ask what they could absolutely commit to for that week. This was essential. There are pros and cons to any sprint length, but we just picked one week; that was my decision. I said one week, we'll just do it, and we rolled it out. The next important thing that worked for us was management support. We had a wonderful VP who really understood this methodology, and he helped us a lot in sponsoring and championing the Scrum of Scrums rollout and the adoption of a single tool for each purpose: SCM, Scrum, defects, requirements, etc. He was a big force behind that. Whenever I reported an impediment to him, saying, hey, this team says they only want to use JIRA, he would go and say, okay, guys, at least for this platform, can you migrate? He would talk to them and get their buy-in, which was pretty useful. And he helped remove roadblocks. For every impediment that any team raised, we created a dashboard to which it was escalated, and anybody working on the program could view the impediments; the VP reviewed them on a weekly basis. There were some teams who said, we don't follow Scrum, we follow this rapid prototyping model, whatever they called it. We had to seek his influence to get the non-participating teams to participate. And again, tools integration was a big deal; he funded that.
And the last thing I want to highlight here is that in a distributed model, especially when you're working with thin timeline margins and high quality goals, you cannot afford to miss commits, but it's very easy to miss them, and you need a mechanism. Even though Scrum talks about self-organization, where you take your own goals, the model we implemented was called managed personal accountability. We said: if I commit to something, I'd better make sure I deliver it by the end of that week; if not, I explain why I didn't deliver. This may seem a little contrary to self-organization, but it's actually very simple: if I'm committing to something, I'd better make sure I'm committing to the right things before I go and deliver them that week. It helps each individual in the organization focus very clearly on what they are going to deliver. So this was another good thing that happened. And we were able to get to a single unified tool. We are using Rational Team Concert (RTC) from IBM as our Scrum tool. The UI is not that great, but the beautiful thing about RTC is that it integrates well with almost any tool in the world: you can go to pretty much any tool that supports a RESTful API and integrate with it very nicely. So we integrated SCM tools like Perforce and Git onto it. For builds we are using TeamCity right now, and that's working out quite well for us. For test execution we are using HP Quality Center (HPQC), which is working out nicely. For defects we have our own homegrown tools: because we come from a hardware-software background, our hardware guys use a very focused tool there, and since we have to collaborate with them, we use the database they use. The one big change we introduced after this V model was the platform requirements tool.
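With RTC as the hub, each satellite tool mainly has to push its events through a small REST-style bridge. Here is a minimal sketch of what such a bridge looks like in Python; the endpoint URL, payload fields, and the `feature_id` convention are all hypothetical illustrations, not the actual RTC or Perforce APIs.

```python
import json
import urllib.request

# Hypothetical sketch of an SCM-to-tracker bridge: every check-in names
# the feature it belongs to, and we turn the changelist into a work-item
# update payload. Field names and the endpoint are made up.

def changelist_to_update(changelist):
    """Map an SCM changelist dict to a tracker update payload."""
    if "feature_id" not in changelist:
        raise ValueError("every check-in must name a feature")
    return {
        "workitem": changelist["feature_id"],
        "comment": "Change %d by %s" % (changelist["number"],
                                        changelist["author"]),
        "files": changelist["files"],
    }

def post_update(payload, send=None):
    """POST the payload; `send` can be injected for offline testing."""
    url = "https://tracker.example.com/workitems/" + payload["workitem"]
    if send is not None:
        return send(url, payload)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```

With a fake `send`, `post_update(changelist_to_update(cl), send=...)` can be exercised without a live server, which is how you would sanity-check the mapping before wiring it to a real endpoint.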
We are using a tool from Jama called Contour. Anybody heard of it? It's a very small company, but they've done a nice job of supporting this breakdown model. So we've integrated all of this, and it all rolls up to a business intelligence dashboard, where I can track where things are, what the status is, what's working well and what's not, and so on. The biggest part of the journey was getting the tools integrated and getting people onto the same set of tools; that was the interesting part. From a work item hierarchy perspective: in any lean or Scrum practice, you need a good work item hierarchy, a work breakdown structure. We start with the use cases, which are at the platform level. Getting the click sound when I press the camera button is a use case. Use cases are supported by test cases: automated test cases through which I can verify whether it is working or not at the platform level. We break use cases down into features. Each feature is like an epic: it cannot be delivered within one sprint, but perhaps takes two or three sprints. A use case is like a larger epic. The reason we didn't call them epics was that use case and feature were terms people already understood; if we had introduced epics and whatnot, we felt it was too much to educate people on. So we decided to reuse the old terminology: use cases you're familiar with, features you're familiar with, we'll stick to those. We only introduced the concept of a story, and a story to us was something you must deliver within a sprint, within that one week. So it forced them to think about what they could actually deliver. Then of course there's the task level.
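That use-case → feature → story → task breakdown can be sketched as a simple data model. The class and field names here are my own illustration, not the schema of any of the tools mentioned; the one rule it encodes is that a story is pinned to a single one-week sprint (an Intel-style workweek).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str
    done: bool = False

@dataclass
class Story:
    name: str
    sprint_week: int                      # workweek it must ship in, e.g. WW10
    tasks: List[Task] = field(default_factory=list)

@dataclass
class Feature:                            # epic-sized: spans 2-3 sprints
    name: str
    stories: List[Story] = field(default_factory=list)

@dataclass
class UseCase:                            # platform-level, verified by test cases
    name: str
    features: List[Feature] = field(default_factory=list)

    def stories_for_week(self, week: int) -> List[Story]:
        """All stories, across features, committed to a given workweek."""
        return [s for f in self.features for s in f.stories
                if s.sprint_week == week]
```

A query like `stories_for_week(10)` is essentially what the Scrum of Scrums dashboard answers: what did everyone commit to this workweek?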
For a story to be claimed done, the team has to demonstrate that its test cases actually pass. Automated or manual tests executed in HPQC report status back into the Scrum tool and say whether things are working or not; I'll give you a quick demo of this. We call this "reported status should equal verifiable status". It is not okay to say something is done if I cannot verify it. What we found when we first rolled out these Scrum processes was that it was all trust-based: I trust that you will deliver it well. But we have seen the mentality where people just claim they are done even though they are not really done; they'll just mark something. Anybody here from a validation or QA team? You probably understand what I mean. The development team says, yeah, we are absolutely done, but when you open that code and start testing it, it is not done at all: it doesn't work as designed, and there are lots of issues with it. So we said that for the QA team, the validation team, to accept it, the developers have to demonstrate that they have run the tests. In fact, we tied the unit test system to the RTC tool, which I'll show you in a quick demo, or a slide, here. So we have requirements, which cover use cases and features, and we have test cases underneath each one of those. Within the test case definitions in one of the tools, we have marked certain test cases as must-pass: these must pass before we can call something done. What happens then? It syncs with the Scrum tool: we bring the use cases and features over, the team members break them down into stories and tasks, and they say, I'm going to deliver something in workweek 10. A workweek is basically the calendar week you're in; at Intel we communicate everything in workweeks. Now, then what happens?
A developer starts to work on code and checks in. It goes to the SCM tool and creates a changelist, and when they create that changelist, they must select the feature for which they are checking in the code. If I'm checking in code for an audio driver, I say it's for this Dolby DS1 driver. That triggers a build using our back-end system, and the build executes the specific tests we defined in the requirements engine. The first time, none of the tests pass: the developer checked in code on workweek 10.1, the Monday of that week. On 10.2 she checks in code again, and now one test passes. On workweek 10.3 she checks in some more code, it builds, and now all three tests pass. Only then can she claim that it is code complete, done. Until then, marking something as code complete is not in the developer's hands. Frustrating for developers? Any developers here? Do you like a test system telling you whether you're done or not? Probably not, if you're honest. So it was a difficult transition, but it worked beautifully, because reported status should equal verifiable status. In terms of metrics to track, we had what we call an S curve. Anybody heard of S curves? Some of you. An S curve is fairly standard terminology in the industry: you say, these are all the features I think we are going to deliver by this week. Yes, that's upfront planning again, but there has to be some amount of high-level plan; if you go in blind, designing as you go, it becomes hard. So this is one way of seeing where we are trending right now, and whether we are trending the right way.
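The done-gate in that flow reduces to a small predicate: a story is code-complete only when every must-pass test has a passing result, and the flag is never set by hand. A sketch, where the test-result record format is made up for illustration:

```python
def is_code_complete(test_results):
    """Reported status equals verifiable status: the flag flips only when
    every must-pass test passes, never on a developer's claim alone."""
    must_pass = [t for t in test_results if t["must_pass"]]
    return bool(must_pass) and all(t["status"] == "pass" for t in must_pass)

# The three check-ins from the example week:
ww10_1 = [{"must_pass": True, "status": "fail"} for _ in range(3)]  # Monday
ww10_2 = ww10_1[:2] + [{"must_pass": True, "status": "pass"}]       # one passes
ww10_3 = [{"must_pass": True, "status": "pass"} for _ in range(3)]  # all pass
```

Only `ww10_3` yields code-complete. Note the deliberate edge case: with no must-pass tests defined at all, the gate also refuses, so a story cannot dodge verification by simply having no tests attached.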
So you look at how many of the features or use cases you committed have really been delivered. The blue line on top shows the plan, and the blue bars show the actuals that were delivered. So we get to know: are we behind or ahead? Our tracking and planning meeting structure was: we start our sprints on a Monday, 7:30 to 9:00 PM. The whole program management for this was done out of India, so we met 7:30 to 9:00 PM India time for our retrospective and sprint planning. This worked out quite well for the other geographies too. The only geography we couldn't get hold of was the PRC team, because they were three hours ahead of us. But we sorted that out through the daily stand-up meetings: they had to get their plans into the tool so that we could have a quick retrospective at the scrum-of-scrums level. Then we would have a daily stand-up with all the scrum masters participating, 9:30 to 10 o'clock in the morning. So that's how we did it. Not the most perfect setup; when you're working with so many geographies, you can't solve the time zone problem, it's impossible. But with tools that integrate, at least all the dashboards are clear enough that you can get information from them quickly without everyone having to participate in these meetings. The other thing I talked about was holding people accountable for what they're delivering. We had these work items: you commit a feature or a use case or a story to a particular week, and we ask, OK, you've committed 10 work items; at the end of the week, how many did you deliver? It's not a perfect science. Counted by items rather than story points, complexity can differ: I could deliver five stories of really low complexity, while another team delivers one story of very high complexity.
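The S-curve tracking above boils down to comparing two cumulative counts per workweek. Here is a small sketch with invented numbers, just to make the mechanics concrete:

```python
# S-curve sketch: cumulative planned features per workweek vs. cumulative
# actuals, so you can see whether the program is trending ahead or behind.
# The per-week counts below are illustrative, not real program data.
planned = {10: 4, 11: 6, 12: 5}   # features planned per workweek
actual  = {10: 3, 11: 6, 12: 4}   # features actually delivered

def s_curve(per_week: dict[int, int]) -> dict[int, int]:
    """Turn per-week counts into a cumulative curve."""
    total, curve = 0, {}
    for week in sorted(per_week):
        total += per_week[week]
        curve[week] = total
    return curve

plan_curve, actual_curve = s_curve(planned), s_curve(actual)
for week in sorted(plan_curve):
    gap = actual_curve.get(week, 0) - plan_curve[week]
    status = "on/ahead of plan" if gap >= 0 else f"behind by {-gap}"
    print(f"WW{week}: plan {plan_curve[week]}, "
          f"actual {actual_curve.get(week, 0)} ({status})")
```

Plotting `plan_curve` as the line and `actual_curve` as the bars gives exactly the chart described: a widening gap means the program is drifting behind plan.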
It's not a perfect science, but at least it told us something: if some teams are consistently lagging, we know there is a problem. It's simply an indication of how many work items you committed versus how many you actually delivered, and we used that in retrospectives to understand how to correct it for the next time. We also tracked this as quality of plan, and we said it has to be within a certain band. You can go anywhere between 75% and 100%. But if you consistently land below 75% of your commits, you need to explain in the scrum meeting why you couldn't deliver. Because without a band, it's very hard; you cannot just say everybody's self-organized, do whatever you want. You have to set some kind of band: guys, at least land between 75% and 100%. I want you to take risks, but consistently under-delivering on whatever you commit is not good. So that was it. From a business results perspective, we pulled this product in. We have this terminology of spring refresh and holiday refresh. Spring refresh is for the school season, January through March. We were originally targeting delivering this product in the spring refresh, but we delivered it in the holiday refresh, which was Christmas 2013, the time when people buy all these products. So it was actually a three-month pull-in, which was pretty fantastic for a giant company like ours, which has always delivered products on two-year timelines. A three-month, one-quarter pull-in was pretty good. At beta time, when we were testing all the use cases, we had attempted 100% of the test cases with 99% passing, which is unheard of.
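The quality-of-plan band is a simple ratio check. A hedged sketch, with the 75% threshold from the talk and invented team data:

```python
# Quality-of-plan sketch: a team commits N work items for a week and
# delivers D of them. Landing inside the 75%-100% band is fine; falling
# below the band in every recent week triggers a retrospective discussion.
LOWER_BAND = 0.75  # below this, the team explains in the scrum meeting

def quality_of_plan(committed: int, delivered: int) -> float:
    """Fraction of committed work items actually delivered."""
    return delivered / committed if committed else 1.0

def needs_explanation(history: list[tuple[int, int]]) -> bool:
    """True only if the team fell below the band in every recent week;
    a single bad week is tolerated as risk-taking."""
    return all(quality_of_plan(c, d) < LOWER_BAND for c, d in history)

assert quality_of_plan(10, 8) == 0.8             # inside the band
assert needs_explanation([(10, 6), (8, 5)])      # consistently under 75%
assert not needs_explanation([(10, 6), (8, 7)])  # one good week resets it
```

The point is not the arithmetic but the policy: the band tolerates ambition and occasional misses while making chronic over-commitment visible.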
We always had 80-85% of test cases passing, but this time 99% passed, because of that discipline: reported status is equal to verifiable status. And we eliminated pretty much all PowerPoint- and Excel-based roll-ups. Everything went through the dashboards in the tool, right from the scrum level, where the individual developer is working, all the way up to the use cases. You had full traceability: I can drill down from a use case all the way to the last task performed for that use case, through the whole system that we designed. And we had pretty good design wins from OEMs. Here is our CEO, Brian Krzanich, who went up on stage and demonstrated the first Bay Trail tablet in San Francisco in 2013. We had good design wins from ASUS, Dell, and others. In fact, at the Mobile World Congress going on right now, we just had another good design win. It's pretty important for Intel that we deliver good products on the tablet side. We're working on phones as well: we deployed a phone in India with Lava, the Xolo phone, and we learned quite a lot from that experience. We didn't do a big marketing push or anything. Anybody heard of the Xolo phone Intel launched? Yeah, some of you have. It's a good phone, but obviously we couldn't compete with the likes of Samsung at the really high end, right? But we are learning a lot about how to do this, and we have really improved through this agile mindset: how do you deploy products in a good way? We do have areas for improvement. Our test automation rigor is not that high. People tend to hide behind the excuse that it's hardware, so it's very hard to test, you cannot test hardware. So we had to teach them how to do it.
Since each of the software development teams supports multiple products or platforms at the same time, they get a lot of interrupts. We've been working with Samsung and some other companies, and they'd say, oh, this is absolutely high priority, so the team drops everything and starts working on that defect. We need to get better at handling interrupts across multiple platforms and setting priorities. We also haven't yet fully embraced the need for a good product owner at every team level; we have one single product owner for the whole platform. But the change we are driving, at the HR level, is to create formal roles called product owner and scrum master who can drive these things internally in the organization. That is not there yet to that extent. We also want to drive business-value-based decision making. For example, when I deliver a use case, if I can associate a business value with it, even as simple as low, medium, high, that's pretty good, actually. Then I can ask: is this product more valuable than some other product, purely based on the value we are delivering? We are working on ideas like that right now; hopefully we will introduce them in the next generation of the platform we are working on. So I want to wrap up. I think for distributed agile to really work, management buy-in and sponsorship is absolutely essential. If you try to do distributed agile without management support, it won't work. Shift left delivers immense benefits for us. Having a clear work breakdown structure is absolutely important: if you don't have a good work breakdown structure that is common across all the teams, you'll all be speaking different languages. Some teams will say features, some will say stories and epics and whatnot. You have to standardize on the same terminology.
Integrated tools, eliminating all offline updates, are critical for distributed agile. And the last thing, which I cannot stress enough: agile is all about trust, and I absolutely agree, we need a trusting mindset. But let's also make sure that reported status is actually verifiable status. Without that, you can do a nice scrum of scrums and all the rest, but you won't really get anywhere, because it just doesn't work at the end of the day. So that's it. Can I answer any questions? Yeah. Excellent, great question, actually. Until now, Intel was a company where hardware was doing fantastic and software was the laggard. Our systems were all buggy; drivers would not install when you put them on an OEM machine, and so on. After this experience, the hardware teams have come to us and said, Raj, we really want to do agile on hardware now. So we are just starting to coach some hardware teams, and it is an unbelievable mindset change they have to go through. We just started the coaching, so on the next platform we are delivering, we are actually coaching a lot of hardware teams in adopting this. The concepts I've talked about are nothing new, nothing foreign to hardware: features, use cases, everything is absolutely applicable. It's just that they had settled into a mindset that worked for them, and we didn't force them. So I'm glad you asked that question. Thank you. Yeah, they are really open to change right now. The tables have turned: software is leading and hardware is lagging. So we get to say, I told you so, guys. Thank you. So, shift left: in a waterfall paradigm, you have requirements, design, development, validation, right?
That was the hardware lifecycle. On the software side, you typically wait until the hardware is delivered and then start writing software. What we did was shift left, so that even as the design is being done, in RTL code, you start integrating. Shift left on software means you push your software integration early in the cycle instead of waiting till late. That's what shift left means. Any other questions? Yeah. Yeah, excellent question. This is that work breakdown hierarchy I talked about. The V-Model talks about use cases; it doesn't say much about what you should do beyond that. It covers the use cases and maybe some component-level requirements. But we found that the thin slicing of features I talked about works quite well with it. How many of you know the V-Model? I'm making that assumption. Some of you, right? For those who didn't raise your hands: in the V-Model, what needs to be delivered gets broken down into smaller and smaller definitions down one side of the V, and at each level the other side of the V asks, is it working? One side is the breakdown; the other side is verification. The beauty of the V-Model, and why I think it works quite well here, is that because of this "does it work?" side, you have the ability to slice into smaller features: a feature or use case being delivered at the higher level, then stories, which are thin slices I can fit into a sprint, and then test cases that verify whether what I have delivered really works or not. So I think it is possible, yes. Great question, actually. So, we defined what we call integration camps.
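The two sides of the V just described, breakdown on the left and "is it working?" on the right, can be sketched as a small tree. This is a hypothetical illustration with invented names, not the actual Intel hierarchy:

```python
# Thin-slice V-Model sketch: a use case breaks into features and
# sprint-sized stories; each story carries its test verdict (the right
# side of the V). A level is "working" only if everything beneath it is.
breakdown = {
    "use case: Play Dolby audio": {
        "feature: DS1 driver": {"story: init path": True,
                                "story: playback": True},
        "feature: Volume control": {"story: HW keys": False},
    },
}

def working(node) -> bool:
    """Walk the breakdown: a bool leaf is a test verdict; a dict level
    is working only when every child under it is working."""
    if isinstance(node, bool):
        return node
    return all(working(child) for child in node.values())

assert not working(breakdown)  # one failing thin slice blocks the use case
breakdown["use case: Play Dolby audio"] \
         ["feature: Volume control"]["story: HW keys"] = True
assert working(breakdown)
```

This is what gives the drill-down traceability mentioned earlier: the status of a use case is always computed from the leaves, never asserted at the top.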
So once a month or once a quarter, depending on where we are in the life cycle, we would co-locate some of the developers: the audio developer, BIOS developer, IO developer, firmware folks, whoever is needed to make a feature work. And we would say, at the end of this integration camp, I want four features working that are verifiable. We bring everybody to the same place, have them test the whole thing and make sure it is working, and then we let them go. Till then, we lock them in a room and say, guys, get it all done. In a big company, or in organizations delivering complex products, I feel you should really have only cross-functional teams. Don't break teams down expertise-wise into audio teams and so on; instead say something like, I want Skype, the Microsoft product, working by the end of this camp: up and running, I'm able to chat with somebody. Until then, we need to keep working on it. This integration-camp mindset works pretty well, and we are still learning some things. But it's expensive, right? Co-locating so many people from different areas in one place is pretty expensive, so that's why we are running with this model right now. Any other questions? I have time for one more. Yeah. First of all, we deployed this only on the software side; we didn't do this on the hardware. The hardware teams we didn't even involve; they basically carried on with waterfall. At the software level, it was a matter of teaching teams how to break down work so that it fits within a one-week sprint. This was a very foreign concept for teams that didn't know how to break down work.
The initial part of all this was just coaching them on how to do the work breakdown. Once we taught them, they understood it. They would always resist at first and say, oh, firmware, we can only deliver it in four weeks for it to be meaningful. But then you give an example, say a calculator application doing plus, minus, division, multiplication: I can deliver multiplication as a series of additions, right? When you teach them that, ah, I see, OK, you can actually do that. The difficult part was helping them understand how to break down requirements; once you teach them, they figure it out. All right, folks, thank you so much. I really enjoyed talking to you today. If there are any questions you want me to answer, I'll be here.
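The calculator example can be taken literally. A tiny sketch of the point being made: multiplication delivered as a series of additions, so each slice (deliver addition first, then build multiplication on top of it) is small enough to fit a one-week sprint yet still demonstrably works on its own.

```python
# Multiplication as repeated addition -- the work-breakdown teaching
# example from the talk. "Addition" ships as its own verifiable slice;
# "multiplication" is a later slice built entirely on top of it.
def multiply(a: int, n: int) -> int:
    """Multiply a by n using only addition."""
    total = 0
    for _ in range(abs(n)):
        total += a
    return total if n >= 0 else -total

assert multiply(7, 3) == 21
assert multiply(7, -3) == -21
assert multiply(7, 0) == 0
```

The code itself is trivial; the lesson is that a feature which "only makes sense in four weeks" can usually be decomposed into increments that are each independently testable.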