Hi, everyone. Thanks for being here. I know there are equally attractive sessions going on, so it's really wonderful to have you here. I'm Harish. I work for Software AG, which is the second largest software vendor in Germany. We are about a billion-euro company, primarily dealing with enterprise products and application integration, and lately moving into cloud and big data. Today I'm going to tell the story of the webMethods R&D division, of which I'm a part, in Software AG. Three years ago, we embarked on the adoption of agile from the traditional waterfall model that we had been practicing for more than a decade. So these are some stories on why we embarked on that journey, what happened, some of the lean principles, why we chose lean, how we implemented lean in our own context, the lessons we learned along the way, and a bit of a scorecard of what we've achieved in these three years. As you see, we are a distributed organization, about nine locations across the world. The time zones range from India to the US West Coast. There are about 30 to 35 products in the webMethods suite, 20-odd teams distributed across locations, plus a lot of individuals working from their own homes. We are also a very diverse culture. Many of these 30-odd products have come into Software AG webMethods through acquisitions. The enterprise suite generally has a lot of small products, so every time we acquire, we not only acquire a new product, we get a new team with its own culture, a new location, a new time zone. Hence it's pretty diverse. And when we go sell the solution as a product company, it's very difficult to integrate all these products that have been developed in different time periods, even though everything is mostly Java based. The code differs, the kinds of interfaces differ, there are a lot of standards around, so it's pretty difficult to integrate all of them. 
In fact, around 2010, one of our major releases was planned originally for 18 months, and we slipped big time, by 50%: we shipped nine months late on that 18-month project. And when we shipped, our product had 11,000 known defects. Now, that's startling, but in the kind of space we operate in, at this magnitude, it's quite common. We are an industry leader; just imagine the rest of the companies. But we also realized this was not a sustainable model. We were looking at an era where there would be more acquisitions, we would be moving to cloud and big data, and the time to market had to be compressed. For these reasons, we decided to embark on the agile journey. It was pretty easy. Scrum was the most popular, most practical thing to adopt, and at the team level, things were pretty OK. You can find a product owner; for 30-odd products, we had a product owner for each of them. You find the team, somebody is always ready to be the scrum master, and Scrum is done. But how do you aggregate these individual teams at a bigger level? How do they add up? In fact, it could become even more difficult. In Lean, they talk about the assembly-line model: when streams of various operations feed into each other, all of them have to work at a particular rhythm, coupled to what comes earlier in the stream and what comes downstream. Otherwise, everybody starts jumping into agile, developing faster, and at the end nothing gets better. So we needed a metaphor to look at it from a suite point of view, and Lean was the answer, the most appealing model in which we could see the entire webMethods suite as a single value stream. The one single agenda for each of the teams and products would be to accelerate the flow of value to our customers. So in a sense, each of these people had to move away from "my product, my team." 
We had to make them look at one suite: what is the overall solution we are developing for the future, and how fast can we actually deliver it? That's what took us to Lean. There are several interlocking things we went about that are really necessary to make this transition to a Lean and Agile culture. Some are the soft aspects, the people aspects: mindset, how to create the right culture and so on. And there are the hard process aspects: how do you implement a value stream, how do you build the cadence, the rhythm in which we deliver software, how do you do quality control, and how do you implement Kaizen, the continuous improvement that comes from the Lean model? For this talk, I'm going to focus on the process aspects, not the people aspects. So this is a visualization, a construct we came up with as a single value stream, so that the entire division can look at it as one single assembly line. We have ideas coming in from customers, the field, the sales, the architects, which are shaped into features. Then, based on the guidance from the business, which says these are the investment areas and this is the amount of effort or money you can invest in a particular feature or product, these features are broken into buckets. We create a single suite backlog for all the 30-odd products. Then we come down to the team level, where each team picks the portion that's relevant to it from the suite backlog into its own sprint backlog. They have their own iterations, and they create a build set out of their product. Daily, they test to ensure it's always working, which is core to agile. And then we have a separate process called the promotion process, where daily there is integration of all the products that go into the suite. So it's continuous, and we have an integrated suite build set every two or three days. 
So we do it more or less continuously. This is fed to a suite test sandbox, the next stage, where we do the non-functional tests. The idea is to develop a potentially shippable build set, very stable and tested for regression. We try to produce one full build set that could potentially be shipped every four to six weeks, and that is the cadence, the rhythm we try to build on. Once we are ready for a major release, which is a timed release, we do the release-readiness activities: internationalization, porting, testing on some rare platforms and so forth, and some system testing, which we do once per major release. And then it goes out. Coming back to the conceptual model: we borrowed the backlog from Scrum. At the suite level, we have a single backlog, but we don't have a single product owner; each of the 35-odd products has its own product owner. What we have them do is create the features and the product backlog, ranking their features according to priority. Then, when they feed it into the suite backlog, there is an algorithm. We really can't have one person mediating between 30 conflicting product owners, so technology and math come to the rescue. We have an algorithm that takes weights based on the business guidance about how much investment should go where; we know the capacity, and based on that, the algorithm ranks the suite backlog. Teams then have their own views of the features relevant to them in this suite backlog. From there, they pull their features into the value stream and do the stage-wise promotion I talked about in the previous slide. One of the most important things we do here is maintain a continuous visible flow. This concept of Jidoka, or the Andon cord, comes from the Toyota lean model. 
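The weighted ranking idea described above could look something like this minimal sketch. The function name, the weighting scheme (dividing each product owner's local rank by a business-guidance weight), and the product names are all illustrative assumptions, not the actual Software AG implementation:

```python
# Hypothetical sketch: merge per-product backlogs, ranked locally by
# each product owner, into one suite backlog using business-guidance
# weights. All names and the scoring rule are assumptions.

def rank_suite_backlog(product_backlogs, guidance_weights):
    """Merge per-product backlogs into one ranked suite backlog.

    product_backlogs: {product: [feature, ...]} in local priority order
    guidance_weights: {product: weight}, higher = more investment
    """
    scored = []
    for product, features in product_backlogs.items():
        weight = guidance_weights.get(product, 1.0)
        for local_rank, feature in enumerate(features, start=1):
            # Lower score = higher suite priority; a heavier weight
            # pulls a product's features up the global list.
            scored.append((local_rank / weight, product, feature))
    scored.sort(key=lambda item: item[0])
    return [(product, feature) for _, product, feature in scored]

suite = rank_suite_backlog(
    {"Integration Server": ["f1", "f2"], "Broker": ["g1"]},
    {"Integration Server": 2.0, "Broker": 1.0},
)
```

With these invented weights, the heavier-weighted product's top feature lands first in the suite backlog; any real algorithm would also have to fold in capacity, as the talk mentions.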
Basically, this means that at every stage, when we take our deliverable and hand it over to the next person, the handoff happens with 100% quality, or at least known quality. We don't knowingly ship something slightly defective to the next step. This stage-wise promotion is time-boxed, and this rule is what creates the interrupt in that time box. Even if you're going to miss the deadline, if you know that whatever you're shipping to the next stage has a bug or a defect, you really stop the line. In the Toyota lean model, every single worker, not just a manager, can take the decision to stop production. We implement the same thing here. Any engineer, any test engineer, or any developer who finds a problem can say, "I know there is a problem here, so I will stop it, so that this build never leaves until this issue is fixed." It doesn't go to the next stage. We implement this as the most critical process step: we literally stop the build, and everyone involved who can correct that bug makes it their highest priority. They stop all other activity, ensure the bug gets fixed and the build is refreshed, and then the cycle resumes as soon as possible. The iteration lengths differ from two weeks to four weeks, depending on the team; those details are completely at the team's discretion. We don't enforce it, which means somebody who does two-week iterations can ship two builds, and somebody on a four-week iteration just stays with the old build in their promotion. In this way, we establish a cadence: every four to six weeks, we have a shippable build set. One of the ways we do it is to seriously limit the work in progress. We do multiple things for that. First, people only work on the right features, because there is enough work just getting the right features done at the right quality. 
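The stop-the-line rule above can be sketched as a simple promotion gate. This is a toy illustration of the behavior described in the talk, not the real tooling; the class, method names, and IDs are invented:

```python
# Illustrative sketch of the Andon-style rule: any engineer can raise
# a blocker, and a build with open blockers never advances to the
# next stage until every blocker is resolved.

class PromotionGate:
    def __init__(self):
        self.blockers = set()  # ids of open blockers

    def raise_blocker(self, blocker_id):
        # Any engineer may stop the line, not just a manager.
        self.blockers.add(blocker_id)

    def resolve_blocker(self, blocker_id):
        self.blockers.discard(blocker_id)

    def promote(self, build_id, next_stage):
        if self.blockers:
            # Line stopped: the build stays put until fixes land
            # and the build is refreshed.
            return f"{build_id} held: {len(self.blockers)} open blocker(s)"
        return f"{build_id} promoted to {next_stage}"

gate = PromotionGate()
gate.raise_blocker("BLK-101")
held = gate.promote("build-42", "suite-test")
gate.resolve_blocker("BLK-101")
promoted = gate.promote("build-42", "suite-test")
```

The key design point mirrored here is that the gate is binary and global to the stage: there is no override path that lets a known-defective build slip through on a deadline.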
So you don't do anything else. That also means the tooling itself hides most of the other stuff: if you can't work on it, you can't see it. The tooling automatically collects metrics on capacity, on the sustainable velocity with which a team can push features or builds to the next stage. If that capacity is full, the team can't see the rest of the pipeline. That way, teams are under no pressure: you just work on what is on your plate and try to get it done. You don't worry about whether you're lagging behind or what is going to come later. All of this works in a continuous-improvement model that the teams themselves shape. There's a lot of autonomy for individual teams to try out new practices, do their own things, experiment with how to do things, and then share it with the rest of the teams. If it's a popular model, if it's working, it goes global. So a lot of it is bottom-up. Some of the principles and constraints are driven by the global vision of how we want to do the entire thing, but how it is implemented is left to the teams, and they experiment with that a lot. Now, what are the key lessons we learned along the way? The first interesting lesson was that slow is faster. In the first year, when we moved to Lean and Kaizen, after six months there was a lot of dissatisfaction in the teams. Everybody felt we were going slow. So many automated tests were being written, and people were really trying to resolve these problems. Most of all, as I said earlier, you have this practice of pulling the Andon cord. If there is a blocker in one product, everybody stops working. You either help the person who's trying to solve the problem, or you just sit idle, because according to Lean, if you continue your own production, you are just increasing the inventory. 
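The tooling-enforced WIP cap described above could be sketched like this: expose only as many backlog items as the team has free capacity for. This is a hypothetical sketch under assumed names, not the actual tool:

```python
# Hypothetical sketch of the WIP-limiting view: the tool shows a team
# only as many backlog items as it has free capacity for, so pulling
# ahead of the cadence is simply not possible.

def visible_backlog(team_backlog, in_progress, wip_limit):
    """Return the slice of the backlog a team is allowed to see.

    team_backlog: features relevant to this team, in priority order
    in_progress: features the team is currently working on
    wip_limit: sustainable capacity measured from past velocity
    """
    free_slots = max(0, wip_limit - len(in_progress))
    remaining = [f for f in team_backlog if f not in in_progress]
    # Everything beyond the free capacity stays hidden.
    return remaining[:free_slots]

backlog = ["feat-A", "feat-B", "feat-C", "feat-D"]
view = visible_backlog(backlog, in_progress=["feat-A", "feat-B"], wip_limit=3)
```

Hiding the tail of the queue, rather than merely forbidding pulls from it, is what removes the temptation the speaker describes later: you cannot start what you cannot see.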
So even though it is not optimal for you at your team level, we are trying to optimize the global whole, which understandably frustrates teams. Especially across 30 products, if teams A and B in some other location have a blocker they cannot resolve for three days, your source code is locked and you can't push in your changes. Either you do something else that doesn't affect the code, or you just wait. Such things can be really unpopular, so the teams went through a period of dissatisfaction. But in the end, as we came close to the one-year mark, something sudden happened: we realized we didn't need the old stabilization period anymore. In the earlier example, I mentioned we were nine months late on an 18-month project. A lot of that time was spent on stabilization: everybody said the code was done, but when we finally did the integration, new issues were discovered, or things turned out a little unstable and needed stabilizing, and that itself took more time than the original implementation. That phase suddenly disappeared. We had in fact planned for a two- or three-month stabilization period after the feature freeze, and we suddenly realized we didn't need it. After one week, there was nothing left to do. We had a stabilized build, we had our tests, and everything said things were working fine. So what we thought was a very slow process turned out to be really fast. We also learned the value of tests. Initially, tests were like second-class citizens, but we soon learned (I have five minutes, I have to rush) that they are a very good investment, because these tests give us the confidence to go and make changes. 
Earlier, we used to be very conservative about taking big bang changes, changes to interfaces and such. Now, with a safety net of automated tests, people were more willing to go and change things, because within a day or two you will know if you're breaking somebody else's product. It's not a black box anymore. In fact, once we finished one major release and moved it to sustaining, those teams also saw enormous improvements, because now we could give small fixes; we didn't have to deliver a big bang service pack or anything like that. So the tests really paid off. The next lesson: the process keeps getting simpler. One of the other things we learned, as teams experimented and tried out new processes, was that getting lean and being more agile is not about doing more things. Eliminating waste means doing fewer things, and when you do fewer things, you have fewer failures. Just keep it simple, keep it incremental; it's actually that simple. It's usually greed that makes things complicated. And the change initially comes with a lot of resistance. People really don't know what they're getting into at the start, because so many things are bandied about and the management is also trying new things. But once people are given autonomy, try out certain things, and after one release really see what they have gained, they become believers. After that, it's pretty easy. You can't even go to the team and say, hey, no more lean, let's go back to the old system, because they have now tasted success on their own and they self-sustain the process altogether. I'll go through this scorecard very quickly. From one-and-a-half-year releases, we have gone to six-monthly releases now for the entire suite. 
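The safety-net idea above, tests that make interface changes safe because breakage surfaces within a daily build, can be illustrated with a toy contract test. The interface, keys, and values here are invented purely for illustration; the real suite uses JUnit-style tests in Java:

```python
# Toy illustration of a contract test: it pins down the shape of an
# interface another product consumes, so a breaking change shows up
# in the next daily build instead of during late integration.
# The function and its fields are invented for this example.

def lookup_order(order_id):
    # Pretend this is a public interface a downstream product calls.
    return {"id": order_id, "status": "OPEN"}

def test_lookup_order_contract():
    order = lookup_order(7)
    # Downstream products rely on exactly these keys being present.
    assert set(order) == {"id", "status"}
    assert order["id"] == 7

test_lookup_order_contract()
```

In the model the talk describes, the consuming team, not the producing one, typically owns a test like this, which is why a breakage is answered by adding the missing test first and then fixing the code.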
We shipped the November 2013 release, which at this point, after a few more acquisitions, covered around 40 to 42 products, I guess, and the overall count was about 1,200 defects. As for testing, we now have nearly 50,000 automated tests running daily, and the total number of tests, including regression, is around 200,000. This has really paid off. Initially it doesn't look like a good investment, but in hindsight, I think that's the best investment you can make: automation. We have now completely moved to six-monthly releases, and for the last two releases, six months in advance we could say, this is the date we are going to ship, and we shipped exactly on that date. Against the 10- to 12-year record of the company, that's stupendous. Even in sustaining, we have a fixed calendar for delivering fixes: the moment a new issue enters, you can predict the date on which the fix will be delivered. Those were the quality and predictability parameters. On productivity, we are seeing our processes become more and more lightweight. Earlier, in the waterfall model, we had a lot of checklists, compliance, people checking and auditing and all that. Nowadays, most of the metrics are collected by tools, and we don't prepare status reports anymore. It's all there in Jira, or in a few custom tools we have for certain products, and anybody who wants to know the current status can just pull the reports. Teams do their own self-policing: in some teams, they define their own metrics and figure out the next thing they want to do. Mapping Software AG onto the agile fluency model that Martin talked about yesterday, I think we are somewhere between a one-star and a two-star organization. Our teams now have the agile culture. 
People have really moved, in their minds, to a sympathetic attitude about why we are doing these things, and they really want to do it on their own. And our features are driven from a business perspective; it's not just that we are developing products anymore, we really understand why we are doing this as a business. The next step, our growing edge, would be to make the skill-set shift to where we can deliver at the market's cadence. If the market could pull features from us, if somebody wants us to ship in three months, we should be able to develop the feature and deliver it in three months. That capability would be our next step in leaning out. Thanks, that was my talk. If you have any questions, I'll be glad to answer. [Question:] The working software development part appears to be agile, and the whole process appears to be Lean. Is that true? Yes. At the micro level, we practice Scrum, and there were a few teams practicing Kanban, mostly the teams doing sustaining work. But at the higher level of scaling, we did Lean. [Question:] Was that possible? Do you actually, literally, stop the line? Yes. [Question:] Instead of locking the whole code base for all the centers, wouldn't it be possible to lock less? Yes, we have various levels. The real system works like this: we have a hierarchy of locks, local locks and global locks. We've gone into global locks only once or twice in the three years; that's the extreme case, like after 10 days when nobody knows how to get out of a set of blockers. Usually they are local locks. Teams are not very ready to pull back their changes. 
Because usually, the way the integration suite works, when I make a change and something breaks, the right way of solving it is not to withdraw the code, but to have somebody else fix it, because we have exposed a problem in somebody else's code, and they don't yet have a test that catches it. So the first thing is, you lock it and raise a blocker against that team. The change could have occurred in product A, but product B has the problem that needs to be fixed. They would probably respond by adding the test that was missing in the first place, one that exposes the same problem within their interface, and then they'll go ahead and fix it. That's the reason we don't recommend just blindly backing out the change; this is more optimal. And it has to be okay for us to ship. That's why, for the shippable build set, we don't have a strict cadence of releasing every month; it's four to six weeks. We make that flexible, but we don't compromise on the zero-defect mentality. A particular product cluster has about 50,000 automated test cases at this moment, and all of them have to pass. There's just no way around it. And people have to keep adding test cases; that's one of the areas where we keep raising the bar. So at a macro level, it really looks good. At the team level, it's all blood, sweat, and toil. It's the teams; they do their own self-policing. We don't have managers or VPs checking and testing it; the teams have been given the autonomy to build that mindset. [Question, partly inaudible.] It's a good question. I would like to answer for our... That's the part I left out when I talked about the people and process aspects. We had a fairly well-designed people aspect. We had change-agent teams, and we had experienced programmers who coached some of our QA engineers. Initially, when we moved to agile, many of our test engineers had not been hired for coding. They were excellent testers. 
They would do manual testing, but they were just not comfortable writing JUnit tests and such. So after office hours, we had these self-help groups where some of our managers and experienced engineers did Java trainings, and they really helped out. The teams basically helped each other build that capability. [Question:] About the backlog not being visible to the developers, so they only see a window of it. How do you do that? It's not that the entire backlog is invisible. What I meant was, they have a view of the suite backlog. The suite backlog would have, say, 200 to 300 features, but only 10 or 15 of them would be relevant for a particular team, and at any time in a sprint, only two or three of them would be visible. This is in order to control the work in progress. It's very tempting, when there is a feature I'm developing and I'm 50% done but cannot move forward because of a dependency on somebody else, to pull the next feature and start working on it while I'm sitting idle. We wanted to curb that, because at the system level we wanted to keep a cap on WIP. One of the ways of achieving it is that you can't even see an item until you have the capacity to start pulling it; only then can you see it and pull it in the tool. So they see a portion of it. It's not that they don't see it at all, only the things they can work on at that moment, and only after you finish can you start the next. It varies from team to team. Some critical teams, the core platform teams, would work at around 70 to 80% capacity. Others would vary between 40 and 50%, but as an organization, we had a huge history of technical debt, so the teams with spare capacity would start addressing it: writing test harnesses, making the framework better, and those kinds of technical-debt activities. 
So the utilization was also there, but not in the conventional sense. [Question:] They plan for two weeks, coordinated? Yes, they would plan for the two weeks, and they would have their own retrospective. That's where they do their own experiments on how to do things. It's done by the teams themselves, and they post their reports on the wiki. That way, everybody in the company, from the top person down, even somebody in a different location working on a different product, can just go and look at how a particular team is doing, and use that as an input to their own planning. It's all there on a common wiki on the intranet, so everything is transparent. You post your plans, your commitments, and your retrospectives. [Question:] You're talking about the demo or the retrospective? The demo would be done by the team, for the product owner. [Question:] Multiple stakeholders? Yes, multiple stakeholders. For example, if it's a core team, the downstream teams that consume its output will attend, because they will have to start writing their own test cases against it. Reviews are generally called retrospectives here, but we do both; there's a technical part and there's a process aspect. [Question about the slides:] Yes, I'll upload them on Slideshare right away. In fact, there's also an experience report published along with this talk, and I thought I'd probably also write a couple of blog pieces on the people aspect, to explain how we went about that. [Question:] There's a lot of tooling behind the scenes here. Did you build that yourself? We built some of it, we bought some tools, and we had to standardize on certain tools. But there's a lot of homegrown tooling for various aspects. Thank you very much. Thank you.