This is Balaji Ganesh and I work with Wipro Technologies. Let me welcome you to this session on lean for competitive advantage and customer delight. Without much ado, let me get into the topic.

Let me first illustrate why the words "competitive advantage" and "customer delight" are used here. Industry data shows that only 40 percent of projects meet their schedule, budget and quality goals, and 50 percent of projects do not give any return on investment. In such a scenario, what we do to meet the schedule, budget and quality goals, and to optimize cost, can itself create a competitive advantage or a differentiator in the marketplace. That is one of the reasons I use the phrase "competitive advantage". Getting into customer delight: I remember reading a book a couple of years back by Sam Walton, the founder of Walmart. He made a statement to the effect that the customer pays your salary and can fire you any day. We are all here because of the customer and because of what we do for the customer. Lean, being a process that has maximization of customer value as its focus, is probably one of the things that can help you get there, making your customer successful and making yourself successful in the process.

With that introduction, let me get into the agenda. Today I will be sharing an experience report from one of our application development projects. It was not application development in the strict sense of development from scratch; rather, it was an add-on development to an existing code base. I will start with the project context, the challenges and the background, and then describe the lean tenets and techniques we used in this project. I will talk about four lean tenets and techniques, namely visual controls, mistake proofing, the design structure matrix and orthogonal arrays, and how we tied these four things together so that we could meet the project goals in a better way. Then I will move on to the benefits we achieved and, of course, the conclusion and closing thoughts.

With respect to the project: the team size was 40, it was an add-on development project as I described earlier, and it was in the insurance domain. The functionality was planned for rollout to five states spanning two iterations, and the scope was complete, right from requirements gathering to user acceptance testing and warranty support. The approach we used was this: before we got into applying the tenets we applied, we did a brainstorming and identified the challenges, then mapped them to the relevant tools and techniques. Of course, nothing works without a plan of action and without owners to get the accountability right, so we identified owners for the actions we listed. We then executed the plan and followed the classic dictum of plan, do, check, act, checking whether these tools and techniques were really helping us achieve what we wanted to achieve. With that, let me move on to the challenges.
I have tried to categorize the multiple challenges we faced into a quadrant. The reason I put quality and people together is partly just to make it a quadrant, but also because people do have an impact on the quality of the deliverables.

Let me take a deeper dive into the challenges from the quality and people perspective. This was an ongoing project, and in the previous releases the customer had not had a good experience from the quality perspective. There was a lot of defect slippage into the system testing and user acceptance testing phases. By in-process defects I mean defects found right from requirements through unit testing: we were finding many defects within the process, and a lot of defects were also leaking into the system testing and user acceptance testing phases, meaning that quality at source was bad and we needed to do something about it. The team for this particular release was doing requirements gathering and business analysis for the first time. With respect to the team, we had a typical challenge faced in any services company: a lot of churn from the previous release, plus certain rotations that had happened, so we had two sets of people, those who were fresh and doing development for the first time, and those who were new to the team and had to get familiarized with the processes. And the collaboration between the team and the customer was not exactly something to be proud of, so we wanted to do something about that as well.

From a technical perspective, a lot of code had to be reused from the previous releases, which was one of the challenges, and of course there was change management. With a large code base it is rather like a stack of cards arranged as a pyramid: if you add one or two cards and you are not careful, you run the risk of collapsing the entire pyramid. That was the kind of situation we had. We had to prioritize the feature sets. This project had been done using a waterfall model earlier, and the change here was moving from waterfall to an iterative mode of development, so what needed to go into each iteration is what we wanted to prioritize, and there was a high degree of dependency between the modules being impacted as part of the project.

Looking at the contractual obligations: because of the lack of confidence from the previous releases, the customer had set certain stringent parameters. We had to deliver with zero critical and high defects and fewer than 10 medium and low defects, and customer satisfaction was at its lowest. This was more of a risk-reward model, with certain penalties to be paid if we did not meet the quality and schedule criteria.
From a scope and schedule perspective, the pressure on the schedule came from two things. One was regression testing for the states which were already in production, which we had to do. The other was that, because of the poor upstream quality in the previous releases, there had been a last-minute scramble, and people had started creating more and more errors because of the time pressure and because of the way change was managed towards the end. The customer also had a new expectation that a certain percentage of change requests had to be absorbed within the schedule.

Moving on, I will be talking about mistake proofing, which is about detecting and preventing defects at the source; about visual controls and the design structure matrix; about competency management, from the perspective of how we allot work as per competency, how we develop competency, taking a structured approach towards that, and how it works in tandem with mistake proofing and visual controls; and about orthogonal arrays, from an optimization as well as a design perspective. We will look more into these as we go along. The term I use is "sharpening the axe". As the saying goes, if I had eight hours to do a job, I would spend six hours sharpening the axe. Most of the techniques we will talk about are techniques to sharpen the axe, so that we can do a better job in terms of quality, deliverables and schedule.

I will not really get into the details of what a visual control is, since we all know, but let me start with the challenges. One of the things we saw from a challenge perspective was the student syndrome. Can anybody take a shot at what the student syndrome is? We have all been students in the past, and even with three weeks of study holiday, you would probably study only in the last few days. There is a tendency to procrastinate, to postpone things that we know take less time to complete than what is actually given. When we make the workflow visual, the Hawthorne effect comes into play: when the work starts getting noticed, you start seeing signs of the student syndrome getting eliminated. Otherwise it is only the delays that get passed on; the early finish never gets passed on, which is one of the standard project management observations. The other thing, from a work prioritization perspective, was workload leveling and reducing the work in progress per person; for these too we wanted to make the workflow visual. The benefit of a visual control is that what hits the eye also hits the mind, so it creates better collaboration between the teams, which is one of the significant benefits. It also creates ownership, because I know that I am accountable for moving the task on the board and I cannot afford to show the same status every day, and there is transparency in terms of who gets allotted what. The other important thing is that when you visualize the workflow, you also need to focus on the handoffs. Handoffs are among the main reasons for delay in software development, and handoffs happening across teams are something we want either to eliminate or to optimize from a work perspective.

So what we did here was to visualize the workflow by putting the various phases in the form of swim lanes. We had a definition of done and daily team huddles, and since this was a distributed team we followed something similar to a scrum of scrums: we had a single point of contact at each location who kept the status synchronized with the rest of the teams across the globe. We also had process and workflow diagrams pasted at individuals' desks, so that they acted as a read-do aid and people did not miss what they needed to be doing. We have already talked about the benefits in terms of ownership, transparency and real-time status, and there is also reduced management effort in collating the status and checking with people on how they are doing. I am sorry the diagram is not clearly visible, but these are just a couple of artifacts I have shared from the project. A minimal sketch of how such a board could be modeled follows.
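To make the mechanics concrete, here is a minimal sketch of how a swim-lane board with per-person work-in-progress limits might be modeled. The class, the lane names and the WIP limit of 2 are illustrative assumptions, not artifacts from the project.

```java
import java.util.*;

// Illustrative sketch: a board whose lanes mirror the project phases, with a
// per-person WIP cap so that workload leveling is enforced, not just hoped for.
public class SwimLaneBoard {
    private static final int WIP_LIMIT_PER_PERSON = 2; // assumed value

    private final List<String> lanes =
            Arrays.asList("Requirements", "Design", "Build", "System Test", "UAT");
    // lane -> (task -> owner)
    private final Map<String, Map<String, String>> board = new LinkedHashMap<>();

    public SwimLaneBoard() {
        for (String lane : lanes) board.put(lane, new LinkedHashMap<>());
    }

    /** Pull a task into a lane; refuse if the owner is already at the WIP cap. */
    public void pull(String task, String lane, String owner) {
        long inProgress = board.values().stream()
                .flatMap(m -> m.values().stream())
                .filter(owner::equals)
                .count();
        if (inProgress >= WIP_LIMIT_PER_PERSON) {
            throw new IllegalStateException(owner + " is at the WIP limit");
        }
        // Remove the task from whichever lane currently holds it (the handoff).
        board.values().forEach(m -> m.remove(task));
        board.get(lane).put(task, owner);
    }

    /** The visual part: everyone sees the same real-time status. */
    public void print() {
        board.forEach((lane, tasks) -> System.out.println(lane + ": " + tasks));
    }
}
```

The point of the cap is exactly the student-syndrome and leveling discussion above: the board refuses new work for an overloaded person instead of silently queueing it.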
I have put three names here: Ariane 5, the Mars Climate Orbiter and the Gripen. Can anybody tell me the connection between these three? If you look them up, all three are examples of simple software bugs that had extreme consequences. Ariane 5 reused code from Ariane 4, and this led to the destruction of the rocket soon after launch. The Mars Climate Orbiter is another example, caused by the use of English (imperial) units instead of metric units, and in that case too code was reused. The Gripen is a fighter plane where, as you can see, a particular exception was not handled in the flight control system, which led to the loss of the aircraft in certain scenarios. The key message I wanted to pass on is that reusing software modules does not guarantee safety in the system into which they are transferred. This is something we need to take into account when we look at projects with a huge legacy code base. The other thing is that for systems like flight control software or medical equipment, even Six Sigma-like quality is not enough; we probably need to go for zero defects. And one last point to ponder: bugs cost a lot. Per data from a couple of years back, they cost about 0.6 percent of the gross domestic product of the United States. The other data point I wanted to project is that rework is about 60 to 80 percent of the cost of software development. In this project too we wanted to focus on rework, which I would define as the cost of poor quality; bringing the cost of poor quality down was one of the things we focused on.

The things that we did: we held a problem-solving workshop where we looked at past data on the defects that had come in the previous releases. We did a Pareto analysis and a root cause analysis based on the Pareto. The other tool we used was FMEA, failure mode and effects analysis. We took the entire process end to end and asked, at each step: in what different ways can this process fail, do I have the capability to detect the failure in the process, and do I have a mechanism to address the impact that would come because of this failure? It was a structured approach.
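As a hedged illustration of the FMEA mechanics, the sketch below scores each failure mode on severity, occurrence and detectability and ranks by the resulting risk priority number. The failure modes and the scores are invented for illustration; they are not the project's actual FMEA entries.

```java
import java.util.*;

// Illustrative FMEA sketch: risk priority number (RPN) =
// severity x occurrence x detection, each rated 1..10
// (for detection, a higher score means the failure is harder to detect).
public class FmeaSketch {
    record FailureMode(String step, String mode,
                       int severity, int occurrence, int detection) {
        int rpn() { return severity * occurrence * detection; }
    }

    public static void main(String[] args) {
        List<FailureMode> modes = new ArrayList<>(List.of(
            new FailureMode("Requirements", "Ambiguous rule for a state", 8, 6, 7),
            new FailureMode("Unit test", "Environment differs from integration", 7, 8, 5),
            new FailureMode("Build", "Stale module picked up in build", 6, 4, 3)));

        // Highest-risk failure modes first; these get countermeasures first.
        modes.sort(Comparator.comparingInt(FailureMode::rpn).reversed());
        for (FailureMode m : modes) {
            System.out.printf("RPN %3d  %-12s %s%n", m.rpn(), m.step, m.mode);
        }
    }
}
```

Ranking by RPN is what turns the workshop output into an ordered to-do list: the countermeasures described next follow directly from the highest-scoring rows.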
The different things that we did: one was standardization. Here I would like to draw a line between expertise and standardization. Standardization can be used predominantly for the repetitive activities we do, in the form of templates, checklists or automation. To give an example, we focused on the low-hanging fruit from an automation perspective. The first thing we did was to automate the build. We also found that there were a lot of differences between the unit test and integration test environments, so we developed simulators that would simulate certain messages during unit testing, so that those defects could be detected early; a sketch of this idea appears after this section. We also used pre-filled design templates, and we had gating checklists at the end of the unit testing, integration testing and user acceptance testing phases, just to check whether we had done everything we needed to do. We also had a special focus on code quality. We saw that unit test coverage is one of the things that impacts code quality, so we set a norm for what the unit test coverage should be, and it became part of the gating criteria checklist. The other things: we did competency management, we started using the design structure matrix to handle the dependencies better, we built in feedback points from both the customer and the team at the end of the design and unit testing phases, and we ran certain user acceptance cases as part of integration testing, so that we could get early feedback on how the code was behaving.

The benefit was that we were able to reduce rework significantly. We completed system testing one week ahead of schedule, we had zero critical and high defects, and we reduced defects by 69 percent compared with similar releases in the past. Last but not least, expertise was used to validate the code reuse and to build the competency of the team wherever we found significant gaps, and the experts also played a role in reviews of areas such as exception handling. From a code quality perspective we also followed a tooling approach: we started using tools like JUnit and Checkstyle to automate things, and memory profiling of the code was done with the standard memory profiling tools. Implementation productivity improved by 33 percent. The other thing is prevention cost: there is a pyramid of prevention cost, correction cost and failure cost, and the failure cost can be perhaps a hundred times the prevention cost, so it is better to focus on prevention rather than allowing the failure to happen.
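Here is a minimal sketch of the simulator idea, assuming JUnit 4, which the team mentioned using. The message format, the handler and the class names are hypothetical, invented only to show the shape of the technique: a test double stands in for the integration environment's message feed so that defects are detected at unit test time.

```java
import org.junit.Test;
import static org.junit.Assert.*;

public class ClaimMessageHandlerTest {

    /** Simulates the upstream system that normally supplies messages
        only in the integration environment. */
    static class MessageSimulator {
        String nextMessage() {
            // Canned payload resembling what the real feed would send.
            return "CLAIM|STATE=OH|AMOUNT=1500";
        }
    }

    /** The unit under test (hypothetical). */
    static class ClaimMessageHandler {
        String extractState(String message) {
            for (String field : message.split("\\|")) {
                if (field.startsWith("STATE=")) {
                    return field.substring("STATE=".length());
                }
            }
            throw new IllegalArgumentException("STATE field missing: " + message);
        }
    }

    @Test
    public void extractsStateFromSimulatedMessage() {
        MessageSimulator simulator = new MessageSimulator();
        ClaimMessageHandler handler = new ClaimMessageHandler();
        assertEquals("OH", handler.extractState(simulator.nextMessage()));
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsMessageWithoutState() {
        new ClaimMessageHandler().extractState("CLAIM|AMOUNT=1500");
    }
}
```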
Let me talk a little about the design structure matrix. A design structure matrix (DSM) is a visual representation of the forward and reverse dependencies between the various elements of a system. These elements could be features, modules, use cases or user stories. We mark each dependency with a 0 or a 1 to indicate its presence. Let me show you an example. There are three types of relationships we can model using a DSM. For a parallel relationship we put no mark, since there is no dependency. For a sequential relationship, in this case B depends on A, you put an X mark to indicate that B depends on A. And where there are coupled dependencies between A and B, you put an X mark against each of them. The key thing to note is that we only indicate the direct dependencies in the design structure matrix, not the transitive dependencies. For example, if A depends on B and B depends on C, I only capture the dependencies between A and B and between B and C, not between A and C. That is what it is all about. There is a site you can go to for the standard DSM macro, which is currently maintained by the Massachusetts Institute of Technology (MIT), and there are other tools as well, both commercial and non-commercial, if you want to do DSM with proper tooling.

Here is a sample DSM with seven elements and 11 dependencies. Here A depends on B and F, B depends on D, and so on. The usefulness of this is, first, that it helps you manage the dependencies in a better way. Suppose you get a change request that changes module A: if you want to know all the other modules that are impacted by the change to A, you can see at a glance how things are affected. The DSM output also tells you the different levels at which you need to develop the modules; for modules at the same level I would rather do concurrent development than sequential development, so that I can compress the timelines. It also gives you something called a value thread. Value threads are paths right from the start to the finish; here the value thread is indicated in red. When I am planning a particular iteration or sprint, I can look at completing one of these value threads, and that is how I can do the planning. The DSM can also be used as an input for reviews, and it can help you improve your upstream quality. There is also a complexity factor that comes out of the DSM, which can be used for work allocation: as I mentioned earlier, if you allocate work according to complexity, you prevent both overburden and slack, so it is an optimal work allocation. For planning decisions I think we should focus on three Cs: complexity, criticality and customer value, and that is what we did here. The next slide is just a rehash of what we have already discussed, so let me skip it in the interest of time.
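A minimal sketch of the two computations just described, the impact of a change and the levels for concurrent development, might look like this. The matrix encodes only the dependencies named from the sample (A on B and F, B on D); the rest of the seven-element example is not reproduced here, so the remaining cells are left empty.

```java
import java.util.*;

// Minimal DSM sketch. dependsOn[i][j] == true means element i depends
// directly on element j; transitive impact is derived, not stored,
// exactly as described above.
public class DsmSketch {
    private final String[] names;
    private final boolean[][] dependsOn;

    DsmSketch(String[] names, boolean[][] dependsOn) {
        this.names = names;
        this.dependsOn = dependsOn;
    }

    /** All elements transitively impacted if element `changed` is modified. */
    Set<String> impactOf(int changed) {
        Set<String> impacted = new LinkedHashSet<>();
        Deque<Integer> work = new ArrayDeque<>(List.of(changed));
        while (!work.isEmpty()) {
            int current = work.pop();
            for (int i = 0; i < names.length; i++) {
                if (dependsOn[i][current] && impacted.add(names[i])) work.push(i);
            }
        }
        return impacted;
    }

    /** Elements grouped into levels; members of a level can be built concurrently. */
    List<List<String>> levels() {
        List<List<String>> levels = new ArrayList<>();
        Set<Integer> placed = new HashSet<>();
        while (placed.size() < names.length) {
            List<Integer> ready = new ArrayList<>();
            for (int i = 0; i < names.length; i++) {
                if (placed.contains(i)) continue;
                boolean ok = true;
                for (int j = 0; j < names.length && ok; j++) {
                    if (dependsOn[i][j] && !placed.contains(j)) ok = false;
                }
                if (ok) ready.add(i);
            }
            if (ready.isEmpty()) throw new IllegalStateException("cycle: needs tearing");
            List<String> level = new ArrayList<>();
            for (int i : ready) { placed.add(i); level.add(names[i]); }
            levels.add(level);
        }
        return levels;
    }

    public static void main(String[] args) {
        String[] names = {"A", "B", "C", "D", "E", "F", "G"};
        boolean[][] m = new boolean[7][7];
        m[0][1] = true; // A depends on B (from the sample)
        m[0][5] = true; // A depends on F
        m[1][3] = true; // B depends on D
        DsmSketch dsm = new DsmSketch(names, m);
        System.out.println("Impacted if D changes: " + dsm.impactOf(3)); // [B, A]
        System.out.println("Build levels: " + dsm.levels());
        // -> [[C, D, E, F, G], [B], [A]]: each inner list can be built concurrently.
    }
}
```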
The other thing that we did was orthogonal arrays. Orthogonal arrays are more of a system-modeling technique. We all know that anything can be modeled as a system with an input as well as an output. In an orthogonal array the inputs are known as factors, and the values each factor can take are known as levels. Based on these factors and levels, and I am giving examples here from the insurance domain as well as the banking domain, you can generate a set of combinations, multiple rows, and each of these rows or combinations is a test case in itself. There are multiple ways to optimize; I am not getting into the details here, but you could use equivalence class partitioning or boundary value analysis to decide the levels each factor can take. A simple example: in a flight reservation system, age takes three levels, zero to two, two to eight, and greater than eight. I will not get into the details of orthogonal arrays; rather, let me focus on the benefit. We were able to reduce test cases by 64 percent without impact to quality, and the regression test execution effort also reduced by 50 percent. If you are interested in more on orthogonal arrays, I would point you to an article on iSixSigma written by Phadke; he reports a similar experience from running orthogonal arrays on a regression test suite, so we are seeing more or less the same kind of data here.

Where do I use orthogonal arrays? This is the decision quadrant we used for the project. Wherever the risk impact is less, and there are certain modules where the risk impact is less from the perspective of this particular release, those are the cases where we use orthogonal arrays to optimize the existing suite. Wherever there is a high risk impact and things are more stable, we would use orthogonal arrays more as a design approach, to look at all the possible combinations of test cases we would want to design. A small sketch of an orthogonal array used this way follows.
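To show the shape of the technique, here is a minimal sketch built on the standard L4(2^3) orthogonal array. The three insurance-flavored factors and their levels are invented for illustration, not taken from the project; the point is that four rows cover every pairwise combination of levels that would otherwise need all 2^3 = 8 cases.

```java
// Illustrative orthogonal-array sketch using the standard L4(2^3) array:
// any two columns, taken together, contain every pair of levels exactly once.
public class OrthogonalArrayDemo {
    public static void main(String[] args) {
        String[][] levels = {
            {"auto", "home"},             // factor 1: policy type (assumed)
            {"new", "renewal"},           // factor 2: transaction (assumed)
            {"in-state", "out-of-state"}  // factor 3: residency (assumed)
        };
        // Each row is a test case; entries index into the level lists above.
        int[][] l4 = { {0, 0, 0}, {0, 1, 1}, {1, 0, 1}, {1, 1, 0} };
        for (int[] row : l4) {
            StringBuilder testCase = new StringBuilder("Test:");
            for (int f = 0; f < row.length; f++) {
                testCase.append(' ').append(levels[f][row[f]]);
            }
            System.out.println(testCase);
        }
    }
}
```

Four test cases instead of eight is a 50 percent reduction even in this toy setting; with more factors and levels the savings grow, which is consistent with the 64 percent reduction reported above.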
In a nutshell, this is what was achieved in this project. There was a 10 percent effort underrun, meaning we spent 10 percent less effort than what was planned. We were able to absorb 9 percent additional effort, which means the effective gain was 19 percent. We delivered the product one week before schedule, we had zero critical and high defects, and the defect density reduced by 69 percent. We received a bonus payment from the customer, because this was a risk-reward model. And last but not least, productivity increased by 33 percent, which is reflected in all the numbers we have been looking at.

To summarize: from an execution excellence perspective, one should be looking at maximizing flow and value, removing the impediments to flow and the non-value-adds. Visualizing the workflow is a powerful technique that brings greater collaboration, ownership and transparency across the team. We should also look for early feedback points, and manage the three Cs, criticality, complexity and customer value, which we already talked about. And whether you win or lose, don't lose the lessons learned. So, for the lessons learned: standardize all the repetitive tasks, simplify and streamline, and make the tacit knowledge explicit. What we did here, for both the OA and the DSM, was to run a workshop in which the entire team was involved, so that the experts' knowledge got shared across the team and we were able to make most of the tacit knowledge explicit. That is one way to do it. The other way is to standardize the repetitive activities, where you can create something like a standard operating procedure. Alignment between the teams also gets created: when you use a DSM kind of approach, all the teams talk the same language and are aligned around the value threads.

I want to leave you with one closing thought. There is a book by Atul Gawande, The Checklist Manifesto: How to Get Things Right. He did a lot of research into how high-complexity industries work, and the one thing he found in common, surprisingly, was the use of checklists. This is what The New York Times had to say about it: something as primitive as writing down a to-do list to get the stupid stuff right can make a profound difference. He describes two types of checklists: read-do and do-confirm. Read-do is more like a recipe, where the steps are listed down and you do them as you read; with do-confirm you do the work and then put a check mark to confirm that it has been done. He also recommends not having more than 8 to 10 items in a checklist, which is what was found to be most effective. So having a to-do list at the end of the day makes a lot of difference. I am open for questions.