Welcome to session 8 of our lecture on Quality Control and Improvement Using Minitab. I am Professor Mukherjee from the Shailesh J. Mehta School of Management. In the last session we discussed quality of design: we covered design failure mode and effects analysis (DFMEA) and some visual tools that are quite popular for representing data in quality control and improvement. We also talked about the basic concepts of robust design, where we want to manufacture products close to the target with minimum variability, and where experimentation is done in the design phase itself. All of that falls under quality of design. Once the design is finalized, the product goes for mass production: first a pilot run, then full production, and at that point the process becomes important. The process has to deliver the defined CTQs, the quality characteristics whose deviation would dissatisfy customers, so those characteristics need to be monitored. That phase is known as quality of conformance. There we concentrate on the quality of the outputs being generated and how close they are to the design that was developed. Any item will have many characteristics with target values, what designers call nominal dimensions, and each CTQ will carry some tolerance, because no product can be manufactured exactly to the target every time.
So, there will be variation, and for that the designer specifies a tolerance. We have to abide by that tolerance and manufacture within that goalpost. But if we can hit the target precisely every time, we get quality products and quality improves; minimum variability and hitting the target is our objective in quality, and that also affects cost. Now, when a product is being manufactured in a process, there are some basics we need to understand. First there are the inputs that go into the process: outcomes of the previous process stage, new raw materials, subassemblies, and so on. This could be any nth-stage process; I have represented just one stage here. Then there are variables that can be changed, known as controllable variables, and other variables, some known and some unknown, that are uneconomical or impossible to control, which we call noise variables. They may affect the process, but we cannot control them. Supplier-to-supplier variation is an example of noise: I do not have the capacity to rely on a single supplier, so I need several, and I know that influences the outcome but I cannot do anything about it. There will also be variables we do not even know about, since innumerable factors can influence the process.
So, these uncontrollable noise variables are the Z variables I have written here. With these inputs, setting conditions, and noise conditions, the process delivers some quality characteristics, the CTQs. There may be a single Y characteristic or multiple Y characteristics, say p of them. We inspect all p characteristics: we measure and evaluate them, and if everything is fine we continue the process without changing the setting conditions. Since p is used for the outputs, we can write the setting conditions as x 1 to x n. Every time we get outputs we measure them, and based on the condition of the CTQs we either change the setting conditions or keep them as they are. This measurement and evaluation is done by a specific tool known as the statistical process control tool, SPC for short; it is called statistical because statistical distribution assumptions are used to develop the monitoring tool. Within SPC there are control chart techniques used for monitoring the CTQs, because if I can monitor, I can visualize, and visualization is essential here.
So, if I can visualize, I can immediately take corrective action when something goes wrong with the CTQs, when the mean deviates from the target or the variability increases. Monitoring and visualization take care of that. Now, we could take data from the process and draw histograms, but a histogram is not suitable here because there is a time dimension: process characteristics change over time, for example when the shift changes, person-to-person variation occurs and setting conditions change. Time is an important dimension, and the histogram does not consider it. So we need something else to monitor the CTQs with respect to time, and statisticians came up with a solution known as the statistical process control chart. Here let us assume the CTQ coming out of the process is a continuous, measurable variable, such as thickness or surface finish from a manufacturing process, or diameter, ovality, and taper if you are considering engine cylinder bores. These are the characteristics we may be interested in, and if anything goes wrong with them, everything goes wrong: the customer gets dissatisfied, oil consumption changes, and ultimately the final customer is affected.
So, if it is a continuous variable that we need to monitor, statistical process control gives us an approach to do that. I have also shown you a diagram indicating that inspection by itself does little to reduce variability or bring the process close to target. With inspection, some amount of variability can be eliminated because we have a check and can separate the bad products from the good ones, but often we are doing sampling inspection rather than 100 percent, so we may not be able to reduce the variability much. Inspection is the starting point of quality initiatives: before full-fledged quality initiatives, inputs and outputs are measured, sometimes 100 percent, sometimes on a sampling basis where only a few samples are taken, and if they are acceptable the products are allowed to go to the next assembly; otherwise they are stopped. When taking products from outside, we follow the same principle of acceptance sampling, which is inspection: based on the criterion that is set, we either accept or reject the lot. But this is a post-mortem activity; the product is already produced and we cannot do anything about it, so we are not being proactive. Then statisticians gave us statistical process control charts. Shewhart developed the process control chart and emphasized that it should be proactive: whenever the process deviates and gives a signal, we act immediately instead of waiting until it goes out of specification.
So, we can take corrective action well before the process reaches the upper or lower specification limit, and process variability can be reduced using this kind of monitoring tool. Shewhart proposed a control charting technique that visualizes the data; whenever there is a signal we take action in the process proactively, and if there is no signal we let it run and the setting conditions need not be changed. Some reduction in variability will happen because we differentiate between normal and abnormal signals and stop the recurrence of the abnormal ones, so in the long run the process will have less variability than if you implement only inspection. This is proactive action rather than a post-mortem activity: some amount of variation is removed before it reaches the specification limits, and products do not get rejected as they were earlier. That is the advantage of statistical process control. Finally, once some variability is reduced and some abnormal causes are blocked, as with the cause-and-effect diagrams we discussed last time, there can still be common-cause variation and interactions between factors affecting the variability of the process. Those things need to be taken care of, and for that we intentionally change the conditions, monitor the process, develop a function, and then optimize that function; that is what we do in design of experiments.
So, design of experiments tells us, out of many factors, what the factor settings should be so that the variability seen in statistical process control can be reduced further. But whenever we go for design of experiments, we have to remember that the process should be in statistical control first; that is the primary condition. If there is any assignable cause, any abnormality, it has to be eliminated first, and only then do we work on reducing the common-cause variability, for which design of experiments is generally recommended. We will cover that as well. Let us start with statistical process control, where CTQs are being generated and we are talking about quality of conformance: whenever a CTQ is generated it needs to be monitored, and there are charts that visualize the complete monitoring process; we want to understand how that is done in Minitab. First, a brief word about inspection, which is itself a vast area of study. This is known as sampling inspection, and a widely followed plan is the US military standard MIL-STD-105E, built on the Dodge-Romig sampling plans, which gives guidelines on how to inspect products coming from suppliers. It is an attribute sampling plan. A supplier delivers products in lots or batches; you select a number of samples from the batch at random, analyze the required CTQs, check whether they conform, and see whether there are any rejections.
So, some products will go out of specification; out of 10, say, one is rejected. Based on the criteria defined in MIL-STD-105E, for a given lot size, say 1000, the standard tells you how many items to inspect and how many defective components are allowed. If the number of rejections exceeds the quantity defined by the standard, we reject the complete lot of 1000 products; otherwise we accept it. For example, if the criterion says accept with fewer than 5 defects, then we accept the lot even with 4 defects. In mass production you do not want to inspect 100 percent; you want to reduce inspection cost, and if you have confidence in the process or system, you take a portion of the samples, accepting some risk, and based on them you either accept or reject the lot. If the lot is accepted, it goes to production; if it is rejected, it goes back to the supplier, or it is segregated and sent for scrutiny again to separate the good products from the bad ones. So this is what we do in sampling inspection: the products are already manufactured, they come to you, and you do a screening activity, which may be 100 percent or only a part of the lot to reduce cost, and based on a criterion you either accept or reject the complete lot.
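The accept/reject logic described above can be sketched in a few lines of code. This is a minimal illustration of a single-sampling attribute plan, not Minitab and not the actual MIL-STD-105E tables; the sample size and acceptance number below are assumed values for the example (accept with 4 or fewer defects, matching the "fewer than 5" criterion mentioned above).

```python
# Illustrative single-sampling acceptance decision (hypothetical plan values,
# not taken from the real MIL-STD-105E tables).

def accept_lot(defects_found: int, acceptance_number: int) -> bool:
    """Accept the lot if the defects in the sample do not exceed c."""
    return defects_found <= acceptance_number

# Assumed plan: from a lot of 1000, inspect n = 80 pieces; accept if c = 4 or fewer defects.
n, c = 80, 4
print(accept_lot(4, c))  # True  -> accept the lot even with 4 defects
print(accept_lot(5, c))  # False -> reject (or screen) the complete lot
```

The whole plan reduces to comparing the defect count in the random sample against the acceptance number c given by the standard for that lot size.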
But in statistical process control, as soon as the product comes out of the process we plot it on a chart known as a control chart, and it works like a signaling system: if it is green you can run the process, but if a point goes beyond certain limits you have to stop the process, diagnose, investigate, and adjust it so that everything goes back to green. This is proactive action. Whenever the control chart gives an alarm, these charting techniques help because I can visualize the data, see the variation, see any shift of the mean from the target value, and ask whether there is any abnormality in the process. The control chart is basically there to segregate normal from abnormal scenarios; that is the only segregation it does. It does not do anything else, no design of experiments, nothing. For a running process, it tells you whether there is any movement in the mean values, whether the setting condition that generates the process mean or CTQ mean is shifting, and whether the variance is changing. That is what can be identified using control charting techniques. It works like a traffic signaling system: green when everything is right, yellow for caution, and red when the process is going into an out-of-control scenario, meaning there is some abnormal cause and you take precautionary action. That is what we do in statistical process control.
So, basically what I showed is that there are inputs, a running process, and an output CTQ that we monitor, and then we verify it with control chart techniques. We try to detect whether there is an assignable cause; if there is, we identify the root cause of the issue and eliminate it so that it does not recur in future. We implement the corrective action, follow up, and monitor the process again. So: monitoring, diagnosis, and corrective action or adjustment of the process whenever an abnormality is observed, so that this type of abnormality does not occur in future; if that cause is completely eliminated, some amount of variation will go down. This cannot be done if only raw data is given; I need a visualization tool, and fantastic visual tools were developed and proposed as control charts for a given set of data. Here you can see we are assuming the process follows a normal distribution, and this is the centering that is observed. Starting from here, the average value moves here, then there, and again shifts; the average keeps moving. The point you see is the mean of the distribution, and the spread around it is the sigma variation. So data is collected and assumed to follow the normal distribution, which is the basic distribution in most kinds of statistical analysis.
So, that is the primary assumption in many statistical analyses, but it has to be verified. Our basic assumption for the control chart is that the data follows a normal distribution; how to verify that, we will see later. What we are seeing here is the centering: the process starts at the mu 0 condition, keeps moving, and suddenly goes to an extreme value, which means it has shifted; that may be mu 1, then mu 2, and so on, all with respect to time. The mean shifts and cannot stay steady at a single location; this always happens in a process. Although you have kept the centering as it is, because of the input conditions and noise conditions the mean of the CTQ outcomes keeps changing location; the average value keeps shifting from one location to another. If I can minimize this shift, we can satisfy whatever target condition is given. Here the normal distribution is an important distribution whose parameters mu and sigma define it. I am not going into the details of this distribution, since that belongs to a statistics course which you may have done; you already understand that any CTQ can follow a probability distribution, which could be exponential or normal.
So, we are assuming a normal distribution, and the normality assumption is very important because this is a unique, well-behaved distribution: a bell-shaped curve, symmetric, so what you see on the right of the mean is a mirror image of what you see on the left. If your data set follows the normal distribution, then within the average value plus or minus one standard deviation, that is, one sigma on each side, you expect 68 percent of the observations to fall inside that zone. If you mark that demarcation zone on both sides, you can expect 68 percent of the observations within it. Similarly, if you go one more sigma out, two sigma on each side, you can expect 95.46 percent of the observations. Say this is data set number one with 100 observations, it follows the normal distribution, these are the CTQs you are measuring, and the values are y 1 to y n. The data will have a mean, so I can calculate the mean and the standard deviation, take the mean as the center point, and demarcate one sigma on one side and minus one sigma on the other.
If this is a representative sample rather than the population, we use x bar for the center, with minus s on one side and plus s on the other, one standard deviation each way. Draw those demarcation lines, then count how many observations fall within plus or minus one standard deviation; if the data is perfectly normal you will find about 68 out of 100 observations inside. Draw two sigma on each side and count again: you will find about 95 percent of the observations. Go to plus or minus 3 standard deviations and, if the data is perfectly normal, 99.73 percent of the observations, almost all of them, will fall within. This concept was used to develop the control chart. One assumption is normality; another is that the mean keeps shifting; and the third is that if the process follows a perfectly normal distribution with no abnormal observations, then an observation at a given time point t is expected to lie within plus or minus 3 standard deviations of the data. So if I take mu 0 as the central line and draw minus 3 sigma on one side and plus 3 sigma on the other, then at a given time point, say t 0, we expect the observation to fall between those lines.
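The 68 / 95.46 / 99.73 percent figures above can be checked numerically. This is a quick sketch (plain Python, not Minitab) on simulated normal data; the mean, sigma, sample size, and seed are arbitrary choices for the illustration.

```python
# Verify the empirical rule (68-95-99.73) on simulated normal data.
import random

random.seed(0)
mu, sigma, n = 50.0, 2.0, 100_000
data = [random.gauss(mu, sigma) for _ in range(n)]

for k in (1, 2, 3):
    # fraction of observations falling within mu +/- k*sigma
    inside = sum(mu - k * sigma <= y <= mu + k * sigma for y in data) / n
    print(f"within +/-{k} sigma: {inside:.4f}")
# with a large sample these come out close to 0.6827, 0.9545, 0.9973
```

So out of 100 observations from a truly normal process, fewer than 1 is expected beyond plus or minus 3 standard deviations, which is exactly why a point outside those lines is treated as a signal.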
So, having demarcated these lines, what is expected is that any average value at a given time point, say I take 5 observations and calculate their average, should lie within plus or minus 3 standard deviations; that is the assumption we are making from the normal distribution. Note that we take the average value; we do not rely on one observation. At a given time point we take multiple observations. In statistical process control this concept is used: we take several samples at a given time point so that I can compute an average. That is also important because individual observations are sometimes seen to follow different types of distributions, but when you take the average it tends to follow the normal distribution; that is the central limit theorem, and it is why people prefer to use the average. The average follows the normal distribution, and if so, almost all average values should fall within plus or minus 3 standard deviations. There is a chance a point goes outside even when the process is normal, but the chance is very rare, less than 1 in 100; so if everything is going right we expect the average CTQ value at a given time point to fall within the plus or minus 3 standard deviation lines. This concept of plus or minus 3 standard deviations was used by Shewhart to develop the control chart principles.
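The central limit theorem argument for subgroup averages can be sketched as follows: even when the individual readings are strongly skewed (here I simulate an exponential CTQ, an assumed example), the means of subgroups of size 5 cluster symmetrically around the true process mean.

```python
# Sketch of why subgroup averages are preferred: means of size-5 subgroups
# of a skewed (exponential) variable behave approximately normally (CLT).
import random
import statistics

random.seed(1)
true_mean = 10.0
subgroup_means = [
    statistics.mean(random.expovariate(1 / true_mean) for _ in range(5))
    for _ in range(2000)
]
# the grand average of the subgroup means sits close to the true mean
print(round(statistics.mean(subgroup_means), 2))
```

The spread of these averages is also smaller than that of individual readings (by a factor of the square root of the subgroup size), which is part of what makes the x-bar chart sensitive to mean shifts.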
But the normal distribution assumption is the primary assumption, so we will see later what you can do when it fails and how to implement control charts in certain scenarios; we will take some examples in Minitab and try to solve them. First, the theory: we can monitor the mean using a control chart, and we can also monitor the variance using a control chart. What the chart does is exactly what I have been saying: it takes the mean plus 3 standard deviations on one side, known as the upper control limit, and the mean minus 3 standard deviations on the other side, known as the lower control limit. It calculates the average of the process, symbolically noted as x double bar, and uses the information in the sample standard deviation or the range. You know how these are calculated: the range is maximum minus minimum, and the standard deviation is the square root of the mean squared deviation from the mean; we square the deviations and then take the square root, so the scale becomes the same as that of x double bar. That is the idea of considering x double bar together with R or S. If x double bar is the average of the process, which we assume is very close to mu 0, and S is the estimate of sigma, then drawing limit lines on both sides of x double bar, we expect that at any given time point, if nothing is changing, all the observations should fall within these limits, and indeed here one observation after another falls inside.
So, there is no abnormality in the process. Say these are time points t 1, t 2, t 3, t 4, and t 5 at which I have taken observations, or average observations. At t 1 the point is within limits, so no abnormality; at t 2 also, because it falls within plus or minus 3 standard deviations. Here you see one standard deviation, then two, and finally 3 standard deviations, 3 s, assuming s is a good estimate of sigma. But one observation, say x 3 at t 3, has gone outside the limit lines. If I sketch a normal distribution here (my drawing is not so good), the expectation was that everything should fall within the zone defined on both sides, but one observation has gone outside. This is an abnormal scenario. In real time we are monitoring the process, so whenever a point goes outside the limit line a signal is given, an alarm rings immediately, and the operator understands that something has gone wrong at this time point: a setting condition or an input condition has abruptly changed. So immediately we will look for an assignable cause.
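The chart logic just described, estimate the center line and 3-sigma limits from in-control data, then flag any new average outside them, can be sketched as below. This is a simplification of what Minitab automates: the data is made up, and I estimate sigma directly from the baseline subgroup averages, whereas a proper x-bar chart estimates it from R-bar or S-bar using control chart constants.

```python
# Minimal x-bar chart sketch: 3-sigma limits from a baseline, then monitoring.
import statistics

# assumed in-control baseline subgroup averages (illustrative data)
baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.9]

center = statistics.mean(baseline)          # x double bar (center line)
s = statistics.stdev(baseline)              # rough estimate of sigma of the averages
ucl = center + 3 * s                        # upper control limit
lcl = center - 3 * s                        # lower control limit

# monitor new subgroup averages as they arrive
new_points = [10.0, 10.1, 11.5]
for t, xbar in enumerate(new_points, start=1):
    if lcl <= xbar <= ucl:
        print(f"t={t}: x-bar={xbar} in control")
    else:
        print(f"t={t}: x-bar={xbar} OUT OF CONTROL -> look for an assignable cause")
```

Running this, the first two points plot inside the limits and the third (11.5) falls above the upper control limit, which is exactly the alarm condition described above.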
For the assignable cause, we will figure out why this happened at this particular time point. The chart gives an immediate visualization signal; we can then see what the cause was and try to eliminate it, so that this type of abnormal scenario does not happen in future, and take some precautionary action so the condition does not arise again. It is not easy: you have to go to the process, identify and isolate the cause, take action, and then let the process run, which is difficult in a mass manufacturing environment where you cannot take certain actions immediately. So you ask the operator: whenever something goes wrong, please write down the abnormal conditions. Later, offline, when the process is not running, we brainstorm, figure out the root cause, take corrective action, and then monitor again to see whether this type of abnormality recurs for that specific root cause identified by brainstorming. So it is difficult to identify an assignable cause, but if we can stop the process whenever one appears, brainstorm, and use quality circles to identify the cause and take corrective action immediately, that works; quality circles are small initiatives where people come together to solve small problems.
That way corrective action can be taken on the assignable cause so that it does not recur. We will stop session 8 here; in session 9 we will continue from where we have left off and see how it is done in Minitab. Thank you for listening.