Welcome to session 28 on Quality Control and Improvement with Minitab. I am Professor Indrajit Mukherjee from the Shailesh J. Mehta School of Management, IIT Bombay. We are discussing the primary aspects of design of experiments, which is a stepping stone of improvement. Whenever the process is in statistical control, we can adopt statistical experimentation to further reduce the variability and bring the process mean to the target value. In our last session we mentioned that there are some controllable variables which are of primary focus for us, and we want to determine the settings of these controllable factors in the presence of noise, or uncontrollable, variables. The inputs will always change, because sample-to-sample variation is always there, and that contributes part of the error component when we develop a function between x1 to xp and the y characteristic, the CTQ. So, we are interested in developing the relationship between the factors and the CTQ. This can be used for screening purposes before full-fledged experimentation, or during full-fledged experimentation we may want to develop the response surface, the functional relationship between y and x. So, either we use experimentation to screen the variables, or we try to develop the full function and then optimize it. At present our discussion will be confined to controllable factors: how to experiment with them and how to set them so that the output, the CTQ, is optimized. So, what we have to understand is: how do we experiment with these controllable factors?
There are different ways we can do experimentation. Here we can see some controllable factors: speed, feed and depth of cut, which we can think of as x1, x2 and x3, and surface finish is the CTQ coming out of the process that we are monitoring. Experimentation can be done by trial and error: we arbitrarily change one factor, say x1, keeping the others fixed, and see what happens to the surface finish. If we reach an optimal speed, we fix that and move to feed, find the value of feed that maximizes the surface finish, and finally keep speed and feed at their best levels and vary the depth of cut. So, we are moving linearly from one dimension to the next. This type of experimentation is known as one-factor-at-a-time experimentation, and this is the trend you will see most of the time: if you do not know design of experiments, you change one factor while keeping the others fixed, then fix it and change the next one, and so on. Here there are three factors, and the first combination we started with is speed 55, feed 85 and depth of cut 30, and the observation yij we are getting is 23.
So, let us say we want to maximize the surface finish, which is shown here in coded units. Keeping feed at 85 and depth of cut at 30, I change only the speed, from 55 to 60. When I change this one factor, the response improves to 29, so this is an improved position. Let us say we enumerate only two levels per factor: speed can be set at 55 or 60, and I do not want to go beyond two levels. Since 60 gave an improvement over 55, speed is now fixed at 60, and the feed is changed from 85 to 90, its second level, with depth of cut retained at 30. When this second variable is changed, the surface finish deteriorates to 23, so the previous condition was better. We therefore revert the feed from 90 back to 85, keep speed at 60, and change the third variable, depth of cut, from 30 to its other level, 35. So, we have changed speed, we have changed feed, and now we are changing the depth of cut.
When we change the depth of cut to 35, we observe 24, which is inferior to the best value we have obtained. So, out of these four experimental trials done as one-factor-at-a-time experimentation, the best position is speed 60, feed 85 and depth of cut 30, giving a surface finish of 29. Now, there are three factors, and four combinations are shown here, but there are another four combinations which we have not tried. With three factors, each at two levels, the total number of possible combinations is 2 to the power 3, that is, 8. When we did one factor at a time, moving linearly from one location to the next, we stopped after 4 combinations. Although the number of trials is less, if we had run the other 4 trials, some other combination of speed, feed and depth of cut might have given a value of 32 or so, improving the surface finish. So, one factor at a time does not enumerate all combinations, which means some combination that could have given a higher surface finish has been missed.
What is required is that we check all possible combinations, and that is what we do in design of experiments: all possible combinations of the levels of all the factors are enumerated, and that is known as a factorial design. We are not moving linearly; we are covering the whole surface, and we can think of that as the right way of doing experimentation. I am exploring the entire region, not just specific regions reached by moving along lines as in one-factor-at-a-time experimentation. There are other advantages of statistically sound experimentation over one factor at a time, which we will discuss afterwards. But here we can see that one factor at a time does not explore the complete region, the complete enumeration, which could have given us a better surface finish and a better combination of speed, feed and depth of cut. So if I run all eight combinations, I am basically doing design of experiments. There are different types of statistical experimentation that we will discuss in this course: some are very popular for screening, and some are popular for optimization, where we want to optimize the response surface; for that we have response surface designs and multiple response optimization.
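The contrast between the one-factor-at-a-time path and full enumeration can be sketched in a few lines of Python. The four OFAT trials and their responses (23, 29, 23, 24) are taken from the machining example above; the full factorial simply lists all 2^3 = 8 level combinations.

```python
from itertools import product

# Two levels for each factor, from the machining example
speed = [55, 60]
feed = [85, 90]
depth_of_cut = [30, 35]

# Full factorial: every combination of levels -> 2**3 = 8 runs
full_factorial = list(product(speed, feed, depth_of_cut))
print(len(full_factorial))  # 8

# The OFAT path from the lecture visits only 4 of those 8 runs
ofat_path = [
    (55, 85, 30),  # y = 23 (starting point)
    (60, 85, 30),  # y = 29 (speed changed -> improvement, fix speed at 60)
    (60, 90, 30),  # y = 23 (feed changed -> worse, revert feed to 85)
    (60, 85, 35),  # y = 24 (depth changed -> worse, keep depth at 30)
]

missed = [run for run in full_factorial if run not in ofat_path]
print(len(missed))  # 4 combinations OFAT never tried
```

The four combinations in `missed` are exactly the untried settings where a better surface finish could be hiding.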
There are procedures for all of these, but the first topic written here is two-factor experimentation. We are moving from one-factor experimentation, that is, one-way analysis, which we have already seen, to two factors, and the complexity will increase as we go ahead with more than two factors. After two-factor experimentation we will cover general factorial design, that is, how to experiment and how to analyze with more than two factors. Then there is an important concept known as blocking in factorial design, sometimes called local control, which is an important technique in design of experiments. These are the pillars of experimentation. Then there will be fractional factorial designs for the screening experimentation I mentioned. Certain scenarios arise where we do not want to run all the trials: if there are n variables, each at two levels, 2 to the power n combinations are required, and if n is very large the number of trials becomes infeasible. In that case I need to reduce to the factors that are actually significant, the ones whose change impacts the expected value or the variance of the CTQ. For that we go for fractional factorial design.
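As a minimal sketch of the fractional-factorial idea mentioned above: one common way to build a half fraction of a 2^3 design is to code the levels as -1 and +1 and keep only the runs satisfying a defining relation, here I = ABC, so that the product of the coded levels is +1. The factor names A, B, C are placeholders, not from the lecture's example.

```python
from itertools import product

# Coded levels: -1 = low, +1 = high, for three factors A, B, C
full = list(product([-1, 1], repeat=3))          # 2**3 = 8 runs

# Half fraction 2^(3-1) with defining relation I = ABC:
# keep only the runs where A*B*C = +1
half = [(a, b, c) for (a, b, c) in full if a * b * c == 1]

print(len(full), len(half))  # 8 4
for run in half:
    print(run)
```

So instead of 8 trials we run only 4, at the cost of confounding some effects with each other, which is exactly the trade-off fractional factorial designs manage.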
Finally, when we do full-scale experimentation with the potential factors identified by the screening experiments, we develop a response surface, optimize the response function, and try to determine the global optimal solution. We will address that at a certain point in this course. Then there will be scenarios with multiple responses: more than one CTQ, interrelated with each other and related to x1 to xp. If we can develop functions for y1, y2, y3 with their interrelationships, we can optimize those as well. So, initially we will discuss single-response optimization, and then multiple-response optimization: the theory behind it and how to implement it in Minitab. Everything is related to how we will use Minitab: we will go through some theory and demonstrate the analysis with examples in Minitab. We will start with the two-factor design, which is the extension of one-way analysis of variance, where one factor, say x1, is changed and has more than two levels; if it had only two levels we would use a two-sample t-test. The point of analysis of variance is that it controls the type I error at a fixed level, say 0.05, as we defined earlier. So, we will go for two-factor experimentation.
Before we move into two-factor experimentation, there is another technique which became popular around 1980, a concept given by Taguchi, known as robust design, or robust experimentation. The concept he emphasized is the simultaneous optimization of the mean to the target value and the reduction of variability towards zero. I want a process setting that not only brings the mean to target and the variability near zero, but also requires a minimum number of experimental trials, fewer than full-fledged experimentation: instead of exploring all combinations of the factors, I run only a fraction of the trials, and he showed that the results are very close, provided certain assumptions hold. So we do not have to run the full set of combinations. He suggested using a signal-to-noise ratio index, which takes care of both mean and variance, to find a setting which is robust, bringing the mean to target while reducing the variability. A typical scenario arose when this idea was implemented.
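The signal-to-noise indices mentioned above have standard textbook forms; two of them are sketched below with made-up replicate data, just to show how the index rewards a setting whose replicates are tight around the mean.

```python
import math

def sn_larger_the_better(y):
    # Taguchi larger-the-better: SN = -10*log10(mean of 1/y_i^2)
    n = len(y)
    return -10 * math.log10(sum(1 / v**2 for v in y) / n)

def sn_nominal_the_best(y):
    # Taguchi nominal-the-best: SN = 10*log10(ybar^2 / s^2);
    # high when the mean is large relative to the spread
    n = len(y)
    ybar = sum(y) / n
    s2 = sum((v - ybar) ** 2 for v in y) / (n - 1)
    return 10 * math.log10(ybar**2 / s2)

# Hypothetical replicate observations at two candidate settings
setting_1 = [29.0, 28.5, 29.5]   # tight replicates -> low variance
setting_2 = [25.0, 33.0, 29.0]   # same mean, much more spread

print(sn_nominal_the_best(setting_1) > sn_nominal_the_best(setting_2))  # True
```

Both settings have the same mean, 29, but the first has a far higher signal-to-noise ratio, which is why Taguchi's method would prefer it: it is the robust setting.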
A well-known case illustrates this. Sony US and Sony Japan were both manufacturing TV sets, and color density was a parameter of concern, with a given target value. When samples were taken, the distribution of the CTQ coming from Sony Japan looked like a normal curve centered on the target, whereas Sony US was producing within specification, between the LSL and USL, the lower and upper specification limits defined by the customer, but consuming the whole tolerance zone. This is known as the goal-post mentality of earlier days: because of monopoly, they produce anywhere within the specification, but the variability of that distribution is much more than the variability of the Sony Japan distribution. Customers were more satisfied with the voice of the process having less variability than with the one delivered by Sony US. And how was Sony Japan delivering that? Based on the concept of robust design: exploiting the nonlinear region of the response. If X is varied over a given region, here region A, and Y is the characteristic monitored, this is the variability in Y that comes out when I control X in that region.
But if I move further along the X axis, to region B instead of region A, then due to the curvature of the response the variation reflected into Y is much less: the variability generated in Y when X is fixed in region B is smaller than when X is fixed in region A, even though the spread of X is the same. That is the idea Taguchi emphasized: there is a nonlinear region, and if I can shift the X setting into that region, the variability in Y decreases; the mean can then be brought to the target value using other factors. So first the reduction of variability is important, then bringing the mean to the target value, and both are done with one single experiment: from it we identify which factors influence the variability and which influence the mean, and based on that we make the settings, so that we produce characteristics very close to the target value with minimum variability. This concept is also controversial: statisticians have contradicted the idea of the signal-to-noise ratio and have theoretically challenged these techniques.
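The variance-transmission idea above can be simulated directly. The curved response f(x) below is hypothetical (a saturating curve, steep at small x and flat at large x); the same uniform spread in X is pushed through the curve in a steep "region A" and a flat "region B", and the resulting Y variability is compared.

```python
import math
import random

random.seed(0)

def f(x):
    # Hypothetical nonlinear response: steep near x = 0, flat for large x
    return 10 * (1 - math.exp(-x))

def var_of_y(x_center, x_spread, n=10000):
    # Transmit the same amount of X variation through the curve
    ys = [f(x_center + random.uniform(-x_spread, x_spread)) for _ in range(n)]
    mean = sum(ys) / n
    return sum((y - mean) ** 2 for y in ys) / n

var_A = var_of_y(x_center=0.5, x_spread=0.2)  # region A: steep part
var_B = var_of_y(x_center=3.0, x_spread=0.2)  # region B: flat part

print(var_A > var_B)  # True: same X spread, far less Y variability in B
```

This matches the first-order approximation Var(Y) ≈ [f'(x)]² Var(X): the flatter the curve at the chosen setting, the less input variation is transmitted to the output.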
But these techniques give results, and that has been proved in many industrial scenarios; they also require fewer trials. We will try to understand the assumptions adopted in Taguchi's method, and in most design scenarios, at least when we are manufacturing products, those assumptions have been seen to hold. That is why industry started adopting this technique: fewer trials, the mean brought to target, and the variability going down. This is known as the robust design concept given by Taguchi, and we will cover it in the final phase of our design-of-experiments lectures. First, we want to emphasize how two-factor experimentation is done based on statistical principles, how to interpret the results, and what possibilities Minitab gives us; that is the overall objective. Now, we have identified the factors and shown the process view of the characteristic we are generating, the CTQ. In this process view there are certain inputs, or covariates, which influence the y characteristic, but we are not interested in those; we want to set the controllable factors, x1, x2 up to xp, which are our main interest, and there will also be some uncontrollable factors or variables. We will use the words "factor" and "variable" interchangeably.
The controllable factors will take different levels during experimentation: we intentionally make the change, observe the change in the CTQ, and see whether the factor influences the mean value or the standard deviation; based on that we will make the settings of the controllable factors. There will also be some factors which are rarely changed in a process; we hold those constant, assuming they minimally impact y. Then there are the uncontrollable factors, an important area we also explore in design of experiments: the z factors written here, which are uncontrollable or uneconomical to control, and we will see how to exploit this information. When we develop a response surface model in x1 to xp, the error in the model is contributed partly by these uncontrollable factors and partly by sample-to-sample variation in the input conditions; that is the unexplained variability. We want an efficient model: if the error is minimal, that functional relationship can be adopted to optimize the process.
So, we are interested in the controllable factors: how we change them, how we interpret the results of the designed experiment, and how we fix the levels of these controllable factors. In design of experiments there are three important principles to note. The primary importance is given to randomization of the experimental trials. We will explain randomization; you may already know about generating random numbers from a given distribution, and that concept is used here: we run the trials in a randomized order, with no visible pattern, and we will explain how to do that with an example. The second principle is replication. When I run a trial at a given combination of speed, feed and depth of cut, I can repeat the trial at that same combination with a different sample; that is known as replication. For example, the first combination we fixed, speed 55, feed 85 and depth of cut 30, is run again with a different sample. Here n = 1 means no replicate, and n = 2 means two replications at a given combination of speed, feed and depth of cut.
So, at a given combination the trial will be run two times: the first sample is run, the second sample is run, and the observations of the CTQ are recorded as y11 and y12 for that trial. Why is replication important? Because with replicates we work with average values, relying on the central limit theorem, and we are more sure of our results: the more replications we do, the more precise our estimates, just as in control charts, where increasing the subgroup size improves the precision of the chart. But there is always a cost associated with taking samples, so the cost constraint must be considered before deciding how many replicates to run. The more you replicate, the more precise the estimation and the more accurate the results and interpretation; and if you cannot replicate, we will also see how to interpret the results in that case. So in some scenarios there will be replicates and in others there will be none. The third important aspect is known as blocking, or local control. Sometimes there is a factor which is uncontrollable in the actual process, but known.
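The first two principles, randomization and replication, can be sketched together as a run sheet for the machining example: the full 2^3 design is replicated twice (n = 2, so 16 runs) and the actual execution order is shuffled. The fixed seed is only so the sheet is reproducible.

```python
import random
from itertools import product

random.seed(42)  # fixed seed so the run sheet is reproducible

speed = [55, 60]
feed = [85, 90]
depth_of_cut = [30, 35]
n_replicates = 2

# Each of the 8 combinations appears once per replicate -> 16 runs total
runs = [combo for combo in product(speed, feed, depth_of_cut)
        for _ in range(n_replicates)]

random.shuffle(runs)  # randomize the run order to spread hidden-variable effects

for run_no, (s, f, d) in enumerate(runs, start=1):
    print(f"run {run_no:2d}: speed={s} feed={f} depth={d}")
```

Running the trials in this shuffled order, rather than combination by combination, is what distributes the influence of unidentified nuisance variables uniformly across the data.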
We have to deal with those kinds of scenarios: our main focus is whether the controllable variables impact the mean or the sigma of Y, the CTQ, but some other, nuisance variables may also be influencing the results. In that case we adopt certain means to deal with these nuisance or uncontrollable variables which I cannot control in the real process. If the noise or nuisance factor is known, and controllable for the sake of experimentation, we adopt the blocking principle. We will see how to block a particular factor and estimate the variance component due to blocking when we cover blocking in experimentation and the interpretation of its results. So the key condition is that the noise variable is known and can be controlled for the sake of experimentation; in that case we may use blocking. One of the classical techniques used for blocking is the Latin square design, and we will see some examples of blocking experimentation. Sometimes the scenario is that the noise is known but uncontrollable in the process: I already have some information about the noise, but cannot control it in production; for the sake of experimentation, however, it can still be controlled, and there are alternatives for handling this.
In such scenarios we can also adopt Taguchi's experimental methods: whenever noise is present but we have some knowledge of it, one option is the classical approach of blocking, and the other is Taguchi's way of dealing with noise variables. Taguchi says that for the sake of experimentation the noise can be controlled, and we will see how that is done in his method. In both scenarios we assume that the noise, or uncontrollable, variable is known and can be controlled for the sake of experimentation; the classical approach and Taguchi simply deal with it in different ways. But if the noise is unknown and uncontrollable, if I have no information at all, then its influence can be minimized by randomization, which we mentioned at the beginning. Why? Because we cannot identify every factor during experimentation: sometimes, even after brainstorming, some important factor is missed which is also creating variation in the process, but we are not able to identify it immediately. It is unknown, yet it can influence the results. To minimize the influence of such hidden variables, we randomize, which distributes their influence uniformly throughout the experimental data so that they have minimum effect. That is why we should randomize the trials.
So, three important aspects of experimentation: randomization, replication, and local control. Wherever blocking is possible, block that factor, see the effect of the blocks, and base your interpretation on that; if you do not, you may get distorted results. Always try to randomize and always try to replicate: replication gives you more precision, and randomization helps minimize the effect of hidden or unidentified variables. These three aspects are very important, and we should be careful about them while experimenting. Next we will see how experimentation is actually done, with an example that we will discuss in our next session: a paint adhesion strength problem. Now we are dealing with two factors: one is primer type and the other is application method, and the adhesion force, the adhesion strength, is the CTQ that is measured; the values are given on the slide. The application method has two levels, dipping and spraying, and the primer type has three levels. Every combination is run, we observe the adhesion strength, and we want to maximize it: what combination of factor A and factor B will maximize the CTQ? We will start with this in our next session. Thank you for listening.
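As a preview of that two-factor layout: the 3 × 2 design (primer type × application method) forms six cells, each with replicate observations, and the first thing one inspects is the cell means. The adhesion values below are illustrative placeholders, since the actual data are on the lecture slide and not reproduced in the transcript.

```python
# Factor levels from the paint adhesion example
primer_types = [1, 2, 3]               # factor A: three levels
methods = ["dipping", "spraying"]      # factor B: two levels

# Hypothetical adhesion-force replicates per cell (illustrative values only;
# the lecture's actual data appear on the slide)
data = {
    (1, "dipping"):  [4.0, 4.5, 4.3],
    (1, "spraying"): [5.4, 4.9, 5.6],
    (2, "dipping"):  [5.6, 4.9, 5.4],
    (2, "spraying"): [5.8, 6.1, 6.3],
    (3, "dipping"):  [3.8, 3.7, 4.0],
    (3, "spraying"): [5.5, 5.0, 5.0],
}

# Cell means: the starting point for two-factor analysis of variance
cell_means = {cell: sum(y) / len(y) for cell, y in data.items()}
best = max(cell_means, key=cell_means.get)
print("best cell:", best, "mean adhesion:", round(cell_means[best], 2))
```

With these illustrative numbers, the best cell is primer 2 applied by spraying; the formal two-way ANOVA, which tests whether the primer, method, and their interaction effects are significant, is what the next session develops.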