As Salaamu Alaikum, welcome to lecture number 41 of the course on statistics and probability. Students, you will recall that in the last lecture, I discussed with you interval estimation and hypothesis testing regarding mu 1 minus mu 2 based on the t statistic and the t distribution. That situation was valid when the populations were normal, the standard deviations of the populations were unknown but equal, and the sample sizes were small. Towards the end of the last lecture, I began the discussion of the application of the t distribution in the case of paired observations. Let us continue that discussion and go back to the example that I quoted at the end of the last lecture. As you now see on the screen, 10 young recruits were put through a strenuous physical training program by the army, and their weights were recorded before and after the training with the following results. For recruit number 1, the weight before the training was 125 pounds, whereas after the training it was 136 pounds. Similarly, we have the figures for all the other recruits. Using a level of significance of 5 percent, would you say that the program affects the average weight of the recruits? Assume the distribution of weights before and after the training to be approximately normal. Students, as I mentioned last time, this is a case of pairing, and we can say that this is natural pairing. We are interested in finding out whether the training makes any difference in the weight or not. As I was saying last time, students, the statistic in this particular case is t equal to d bar minus mu d over s d over square root of n. So, we are going to compute the differences in the weights before and after the training.
So, as you now see on the screen, if we regard the weights after the training as x1 and the weights before the training as x2, and we define d as x1 minus x2, then the 10 differences come out to be 136 minus 125 equal to 11, 201 minus 195 equal to 6, 158 minus 160 equal to minus 2, and so on. Students, you should regard these differences as a sample of differences out of a population of differences. If we did not have just 10 recruits but thousands and thousands of recruits, the entire population would have been there. Theoretically, we can say that if we had put every one of them through this physical training and recorded the weights before the training and after the training for all of them, then we could have found differences of this type for all of them, and that would be a large population of differences. These 10 differences can be regarded as a random sample out of that population. Another very important point to note is that if both x1 and x2 are normally distributed, then the difference d, which is x1 minus x2, will also be normally distributed. This is because of a basic mathematical property of the normal distribution: if x and y are normally distributed, then x plus y or x minus y will also be normally distributed. So, the point is that if those two variables x1 and x2 are normally distributed, then their difference, the new variable, is also normally distributed. In this problem, x1 is the weight after the training and x2 is the weight before the training, and both are approximately normally distributed. Therefore, we can say that this population of differences x1 minus x2 is also normally distributed. This is very important because if the parent population is normal, its standard deviation is unknown and the sample size is small, then we apply the t statistic. This is exactly what is happening here.
We are assuming that we have a random sample of differences from a normally distributed population of differences; of course, the standard deviation of that population, which I may call sigma d, is unknown, and our sample size is small, since we have only a sample of 10 recruits. So, all those assumptions and conditions are being fulfilled, and this is why in this particular case we will be applying the t statistic which you now see on the screen: t is equal to d bar minus mu d over s d over square root of n. As I have said two or three times, this is similar to the statistic that we had earlier, which was t is equal to x bar minus mu over s over square root of n. If I had wanted, I could have read that earlier statistic this way: t is equal to x bar minus mu x over s x over square root of n. If you replace x by d, then you get the one that we have now; students, they are one and the same. There is nothing wrong with that, that is absolutely correct, because when we were talking about the x variable, we used to compute mu and s. After all, mu was the mean of the population of the x values, so there was nothing wrong with saying mu x. Similarly, s x would have been the sample standard deviation of the x values that we had taken in the form of a sample from that population. So, the point, students, is the same again: look at the basic pattern, and you will see that the same thing is being repeated as before. Now that we have the statistic in mind, I think we should go through the six steps that we have in any hypothesis testing procedure, step by step, according to a methodical approach. The first step, of course, is the formulation of the null hypothesis and the alternative. And what is the null hypothesis in this particular example? Students, we want to know whether this physical training program has any effect on the weight; so what will be the null hypothesis?
It should be that there is no difference in the weight before the training and after the training; in other words, the program makes no difference as far as the weight is concerned. And what will be the alternative? That it does make a difference. So, the alternative says that mu 1 is not equal to mu 2. In this case, 1 is standing for the weight after the training and 2 is standing for the weight before the training, and what we are saying is that they are not equal; this means that some difference does occur. The weight may decrease, or it may increase, but there is a difference. So, students, we can also express the same thing this way: the null hypothesis is that mu 1 minus mu 2 is equal to 0, and the alternative is that mu 1 minus mu 2 is not equal to 0. As you now see on the screen, mu 1 minus mu 2 is equal to expected value of x 1 minus expected value of x 2, because expected value means nothing but the mean. And this can be written as expected value of x 1 minus x 2, because of the basic algebraic property that E of x minus y is equal to E of x minus E of y. So, mu 1 minus mu 2 equal to 0 means expected value of x 1 minus x 2 equal to 0, or in other words expected value of d equal to 0, or in other words mu d equal to 0, and students, this is exactly our null hypothesis in terms of the variable d: H naught, mu d is equal to 0, and H 1, mu d is not equal to 0. The next step is the level of significance, and as you now see on the screen, in this problem we set it at the usual level, 0.05. The test statistic under H naught, as mentioned earlier, is t is equal to d bar minus mu d over s d over square root of n, but since our null hypothesis says that mu d is equal to 0, t can be written simply as d bar over s d over square root of n.
Now, it can be mathematically proved that this particular statistic follows the t distribution with n minus 1 degrees of freedom, exactly as we had when we were considering the variable x and testing for the mean of that variable. The fourth step is the computation of the statistic, and as you now see on the slide, in order to find t we first have to find s d, and to find s d we need a column of d square, which you see in the last column of the table in front of you; the d square values are 121, 36, 4, 169 and so on. Substituting the sum of this column and the sum of the earlier column in the formula for s d square, we obtain s d square equal to 50.23, so that s d comes out to be 7.09. Of course, d bar is very easily found by simply dividing the sum of the d column, 47, by 10, and that comes out to be 4.7. Hence, the computed value of our test statistic in this problem comes out to be 2.09, students. Obviously, we would like to compare this value with the critical value, and what is the situation now? Is it a one-tailed or a two-tailed test? Of course, it is a two-tailed test, because the alternative is that mu d is not equal to 0: either mu d is less than 0 or mu d is greater than 0, so both signs are involved. Both tails of our t distribution are therefore involved, and we will have to divide our level of significance of 5 percent by 2, half of it in the left tail and half in the right. So, we will have 2.5 percent area to the right of the critical value on the right tail and 2.5 percent area to the left of the critical value on the left tail. Now, how do you find the critical value? The degrees of freedom are n minus 1; we have 10 recruits, so 10 minus 1 means 9, and so when we look in the t table against 9 degrees of freedom under 0.025, what do we get? As you now see on the slide, the critical values come out to be 2.262 and minus 2.262. An interesting point.
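Before taking up that point, the computation just described can be sketched in a few lines of Python. This is a minimal sketch, using only the summary figures quoted above (n = 10, d bar = 4.7, s d = 7.09) rather than the full data, and the critical value 2.262 is read from the t table, exactly as in the lecture:

```python
import math

# Summary figures for the 10 before/after weight differences (from the lecture)
n = 10
d_bar = 4.7   # mean difference: 47 / 10
s_d = 7.09    # sample standard deviation of the differences

# Test statistic under H0: mu_d = 0
t_stat = d_bar / (s_d / math.sqrt(n))

# Two-tailed critical value read from the t table: t_{0.025, 9} = 2.262
t_crit = 2.262
decision = "accept H0" if abs(t_stat) <= t_crit else "reject H0"
print(round(t_stat, 2), decision)   # prints: 2.1 accept H0
```

The computed value 2.10 (the lecture rounds it to 2.09) is less than 2.262, so it falls inside the acceptance region.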
In order to test mu 1 equal to mu 2 with two independent samples, the degrees of freedom were n 1 plus n 2 minus 2: 10 plus 10 minus 2, with 10 values before the training and 10 after the training, that is 20 minus 2 equal to 18. Why did I not use 18 degrees of freedom, students? The reason is, as I explained earlier: think of a population of these differences, and understand that from this population you have drawn one sample of size 10, a single sample of differences. These are not two separate samples, for which we would say n 1 plus n 2 minus 2; that is not the situation here. This is the case of paired observations, where we treat the problem in this other way, as I have been discussing with you. That brings us to the last step, the conclusion. Our computed value of t came out to be 2.09, and the critical values are plus or minus 2.262. It is therefore quite clear that our value falls inside the acceptance region, and so we can accept H naught. This means that this data does not provide evidence to conclude that this particular physical training program affects the weights of those recruits. Of course, we are not talking about any particular individual; we are talking about the average. Let us consider one more example on paired observations. As you now see on the slide, the following data give paired yields of two varieties of wheat. Each pair was planted in a different locality. The values are: for locality number 1, the yield of variety 1 is 45, whereas the yield of variety 2 is 47. Similarly, for locality number 2, the yield of variety 1 is 32 and the yield of variety 2 is 34. In the same way, we have the remaining figures. Now, we would like to test the hypothesis that on the average the yield of variety 1 is less than the yield of variety 2. Also in this problem, we are required to state the assumptions that are necessary to carry out the test that we are going to carry out. In addition, there is a very interesting question.
How can the experimenter make a type 1 error, and what are the consequences of his doing so? Next, in the same way, we would also like to know: how can the experimenter make a type 2 error, and what are the consequences of his doing so? In addition, we would like to compute the 90 percent confidence interval for the difference in the mean yields of the two varieties. Students, first of all, note that this is again a case of paired observations. We have ten different farms, and on each farm we have sown variety 1 on one half and variety 2 on the other half. Hence the soil is common, and if any difference in yield appears, it is not because of the soil or any other extraneous factors which could affect the yield. It is simply the difference in the varieties that is producing the difference in the yield, and this is exactly what we want to measure. So, if we are satisfied that this is the way we should treat this problem, then what are the assumptions needed in order to conduct the test? As you now see on the screen, we will be assuming that the differences in the yields of the two varieties in the 10 farms that we have are a random sample from the population of differences, and that the population of differences is normally distributed. This means that we can imagine that if we had thousands and thousands of farms instead of 10, and on every farm we gave half to variety 1 and half to variety 2, then we would have a population of yields for variety 1 as well as for variety 2, and if we then computed the differences, we would have a population of differences. From this population, we may say, we are drawing a random sample of size 10. All right, students, what is it that we want to test? The statement was that the yield of variety 1 is, on the average, less than that of variety 2; this is what we want to test. Notice that this statement involves the 'less than' sign; the 'equal' sign does not come into it, which means that this will go into the alternative hypothesis.
If we use subscript 1 for variety 1 and subscript 2 for variety 2, then what are we saying? We are saying that mu 1 is less than mu 2. This goes into the alternative hypothesis, and what is the null hypothesis then? That mu 1 is greater than or equal to mu 2. But if we translate these hypotheses into ones in terms of d, then, as you now see on the screen, the null hypothesis is mu d is greater than or equal to 0 and the alternative is mu d is less than 0. The second step is the level of significance, and let us keep it at 0.05, the usual level that we take. The third step is the test statistic, t is equal to d bar minus mu d over s d over square root of n, and the fourth step is to compute this statistic. Carrying out the computations just as in the last example, our d bar comes out to be minus 2.8 and s d square is equal to 1.7333, so that t comes out to be minus 6.71. Now, the critical region lies on the left tail of the distribution, and looking under 0.05 in the t table against 9 degrees of freedom, the value is 1.833; but because we have the left tail in mind, we say that the critical value is minus 1.833. The last step, of course, is the conclusion. Our value is minus 6.71, which is far out in the left tail, and therefore we reject the null hypothesis and conclude that there is sufficient evidence for us to say that mu 1 is less than mu 2. That means that variety 1's yield, on the average, is less than the yield of variety 2. Students, you will recall that there was a very interesting question in this problem. How can the experimenter make a type 1 error, and what will be the consequences of his doing so? You remember, a type 1 error means that H naught is actually true but you reject it. In this problem, what was H naught? That mu 1 is greater than or equal to mu 2; that is, the mean yield of the first variety is either more than or at least equal to the mean yield of variety 2.
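As an aside, the one-tailed computation just carried out can be sketched in the same way as before. This is a minimal sketch using the summary figures from the lecture (s d is taken as 1.32, the rounded value the lecture quotes for this data set) and the tabulated critical value:

```python
import math

# Summary figures for the 10 paired yield differences d = (variety 1) - (variety 2)
n = 10
d_bar = -2.8   # mean difference (from the lecture)
s_d = 1.32     # sample standard deviation of the differences

t_stat = d_bar / (s_d / math.sqrt(n))

# Left-tailed critical value from the t table: -t_{0.05, 9} = -1.833
t_crit = -1.833
decision = "reject H0" if t_stat < t_crit else "accept H0"
print(round(t_stat, 2), decision)   # prints: -6.71 reject H0
```

The only changes from the two-tailed version are the sign of the critical value and the one-sided comparison.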
So, if this were true and we rejected it, saying that variety 1's yield is less than that of variety 2, what would be the consequences of making such an error? Students, suppose that variety 1 is less expensive than variety 2, and we say: yes, it is cheaper, but its yield is also lower, so we should not take that one; we should be buying and sowing the other one. But then, students, would we not be wasting money? Because in reality its yield is not less; in reality H naught is true, and if variety 1 is inexpensive, then that is the one we should have chosen. I would ask you to reflect on this example. This is how you link the mathematical theory with real life, and if you pay attention to it, it can be very interesting. The other question was: how can the experimenter make a type 2 error? What is a type 2 error? It is that the null hypothesis is actually false, but you accept it because your data happens to come out that way. In this problem, the null hypothesis is that mu 1 is greater than or equal to mu 2. Suppose that this is actually wrong: in reality, mu 1 is less than mu 2; the yield of variety 1 is less than that of variety 2 on the average. But if we accept H naught, which is actually false, then we will say: no, the yield of variety 1 is not less than that of variety 2. What will be the loss from doing so? The loss, students, will be that the gain in production that we could have achieved by using variety 2 we will not be able to achieve, because of this error that we have committed. So, this is the kind of situation that we deal with in hypothesis testing. Now, the last part of the question was to construct a confidence interval for mu 1 minus mu 2, and how do we proceed with this problem? You remember that this is a problem of paired observations. This means that our confidence interval also has to be constructed in that manner, as you now see on the screen.
In the case of paired observations, the confidence interval for mu 1 minus mu 2, in other words the confidence interval for mu d, is given by d bar plus or minus t alpha by 2 at n minus 1 degrees of freedom multiplied by s d over square root of n. Once again, note that this is exactly the same formula that we had in the first instance, when we were trying to construct a confidence interval for mu, the mean of our x variable. What was that formula? x bar plus or minus t alpha by 2 at n minus 1 degrees of freedom into s over square root of n; this formula is exactly like that one. The only difference is that our variable now is not x, but d. So, let us apply this in this problem. As you now see on the screen, d bar is equal to minus 2.8, s d is 1.32 and n is equal to 10, and therefore our confidence interval is minus 2.8 plus or minus t alpha by 2 at 9 degrees of freedom into 1.32 over square root of 10. Regarding this t alpha by 2, students, note that we want to find a 90 percent confidence interval. This means 90 percent area in the middle, 5 percent to the left, and 5 percent to the right. So, having a look at the area table of the t distribution, the value of t alpha by 2, that is t 0.05 at 9 degrees of freedom, comes out to be 1.833, just as in the hypothesis test that we did a short while ago. Therefore, putting the value 1.833 in the formula, our confidence interval comes out to be minus 3.565 to minus 2.035, and rounding these figures, the 90 percent confidence interval for mu 1 minus mu 2 is minus 3.6 to minus 2.0. As you can see, the values are negative, and this means that mu 1 is less than mu 2. Obviously, only if mu 1 is less than mu 2 will mu 1 minus mu 2 be negative. All right, students, we have conducted a number of tests and constructed a number of confidence intervals based on the t distribution. We have talked about mu, about mu 1 and mu 2, and about paired differences.
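The interval computation above can be sketched as follows; a minimal sketch, using the table value 1.833 exactly as in the lecture:

```python
import math

n = 10
d_bar = -2.8   # mean paired difference (from the lecture)
s_d = 1.32     # sample standard deviation of the differences
t_val = 1.833  # t_{0.05, 9}: 90% confidence leaves 5% in each tail

half_width = t_val * s_d / math.sqrt(n)
lower, upper = d_bar - half_width, d_bar + half_width
print(round(lower, 3), round(upper, 3))   # prints: -3.565 -2.035
```

Both limits come out negative, which is the numerical reflection of mu 1 being less than mu 2.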
Before that, you will recall that we were testing about mu on the basis of the z statistic, and with the z statistic we were also testing about p, p 1 and p 2, and constructing confidence intervals regarding proportions. Another very important quantity that we might wish to estimate is the variance, or the standard deviation, of our population. If we want to carry out statistical inference about sigma square or sigma, then what will be the procedure? Students, you will be interested to know that there is another distribution, called the chi-square distribution, and this is the one that will enable us to do statistical inference for sigma square, the variance of the population. So, before I apply the chi-square distribution to real-life situations, let me first define this distribution for you in a formal manner. As you now see on the slide, the mathematical equation of the chi-square distribution is f of x is equal to 1 over 2 raised to nu by 2 into the gamma function, gamma of nu by 2, and this whole quantity multiplied by x raised to nu by 2 minus 1, multiplied by e raised to minus x by 2, and this equation is valid for x lying between 0 and infinity. All I would like you to do is to note that it has only one parameter, represented by nu, and also note that x ranges from 0 to infinity. Now, for its properties, let us have a look at the equation one more time: f of x is equal to 1 over 2 raised to nu by 2, gamma of nu by 2, into x raised to nu by 2 minus 1, into e raised to minus x by 2. You can see that nu is the only unknown quantity, and it is the one lone parameter of the chi-square distribution. Now, coming to the properties of the chi-square distribution: first and foremost, we can say that it is a continuous distribution ranging from 0 to infinity, as is evident from its equation.
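The density just written down can be coded directly. This is a minimal sketch (the choice nu = 6 and the integration range from 0 to 60 are illustrative assumptions, not from the lecture) that numerically checks that the density integrates to 1 and has mean nu:

```python
import math

def chi2_pdf(x, nu):
    # f(x) = x^(nu/2 - 1) * e^(-x/2) / (2^(nu/2) * Gamma(nu/2)), for x > 0
    return x ** (nu / 2 - 1) * math.exp(-x / 2) / (2 ** (nu / 2) * math.gamma(nu / 2))

# Crude midpoint-rule integration over [0, 60]; the tail beyond 60 is negligible here
nu, dx = 6, 0.01
xs = [(i + 0.5) * dx for i in range(6000)]
total_mass = sum(chi2_pdf(x, nu) * dx for x in xs)
mean = sum(x * chi2_pdf(x, nu) * dx for x in xs)
print(round(total_mass, 3), round(mean, 3))   # prints: 1.0 6.0
```

The mean coming out equal to the degrees of freedom is one of the properties discussed for this distribution.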
As far as the shape of the distribution is concerned, the parameter nu determines the shape; nu is known as the degrees of freedom of the chi-square distribution, and there is a different chi-square distribution for each number of degrees of freedom. As such, it is a whole family of distributions, just as we had in the case of the t distribution or the normal distribution. The curve of the chi-square distribution is positively skewed, but the skewness decreases as nu increases. Students, this point means that as you keep increasing the degrees of freedom, the skewness becomes less and less, and the curve looks more and more like a normal distribution. For the next property, as you now see on the slide, in the diagram in front of you we have drawn three chi-square distributions: one with two degrees of freedom, one with six degrees of freedom and one with ten degrees of freedom, and you can see that whereas the chi-square distribution with two degrees of freedom is extremely skewed, the one with ten degrees of freedom is much less skewed and looks much more like a normal distribution. The next property pertains to the mean of the chi-square distribution, and it has been mathematically proved that the mean of the chi-square distribution is equal to nu, the number of degrees of freedom. As far as its variance is concerned, it is equal to two times nu. This means that if we talk of the chi-square distribution with ten degrees of freedom, its mean value is equal to ten and its variance is equal to two times ten, that is, twenty; and if we take the square root of twenty, which is four point something, then the standard deviation, the spread of this distribution, is the square root of twenty.
The next property is also very interesting. As you now see on the slide, the moments about the origin of the chi-square distribution are as follows: mu one dash is equal to nu; mu one dash, of course, is the same thing as the mean, because, do you not remember, the first moment about the origin, in other words the first moment about zero, is exactly the same thing as the mean. Also, mu two dash is equal to nu into nu plus two, mu three dash is equal to nu into nu plus two into nu plus four, and mu four dash is equal to nu into nu plus two into nu plus four into nu plus six. See what beautiful formulae; they just go on in a systematic pattern. But students, we are more interested in the moment ratios, and as you now see on the slide, the first moment ratio, beta one, comes out to be eight over nu, and the second one, beta two, comes out to be three plus twelve over nu. Students, just a short while ago I told you that as the degrees of freedom increase, the skewness of the chi-square distribution decreases and it becomes more and more like the normal distribution. Let us relate this statement to the moment ratios that have just been presented. Since beta one is equal to eight over nu, is it not obvious that if nu tends to infinity, beta one will tend to zero? And also, since beta two is equal to three plus twelve over nu, as nu tends to infinity, beta two will tend to three plus zero, and that is three. And students, do you not remember that for the normal distribution, beta one is equal to zero and beta two is equal to three? Hence, what we said earlier regarding the shape of the chi-square distribution is validated by the moment ratios. Students, having discussed the basic properties of the chi-square distribution, let us now begin the discussion of its role in estimation and hypothesis testing. As I said earlier, we can do hypothesis testing or estimation regarding the variance of our population on the basis of the chi-square distribution.
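The link between the raw moments listed above and the moment ratios can be checked numerically. A minimal sketch: convert the moments about the origin into central moments with the standard identities, then form beta one and beta two:

```python
def moment_ratios(nu):
    # Raw moments (about the origin) of the chi-square distribution, as stated above
    m1 = nu
    m2 = nu * (nu + 2)
    m3 = nu * (nu + 2) * (nu + 4)
    m4 = nu * (nu + 2) * (nu + 4) * (nu + 6)
    # Standard conversion from raw moments to central moments
    mu2 = m2 - m1 ** 2                                    # variance, equals 2*nu
    mu3 = m3 - 3 * m2 * m1 + 2 * m1 ** 3                  # equals 8*nu
    mu4 = m4 - 4 * m3 * m1 + 6 * m2 * m1 ** 2 - 3 * m1 ** 4
    return mu3 ** 2 / mu2 ** 3, mu4 / mu2 ** 2            # beta1, beta2

for nu in (2, 10, 1000):
    b1, b2 = moment_ratios(nu)
    print(nu, b1, b2)   # beta1 = 8/nu, beta2 = 3 + 12/nu -> (0, 3) as nu grows
```

Running this shows beta one shrinking towards zero and beta two approaching three as nu grows, which is exactly the approach to normality described above.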
Let me explain this point with the help of an example. Suppose that an aptitude test has been devised in such a way that it carries a total of 20 marks, and suppose that this test is administered to a large population of students, and when the result comes out, we find that the marks of the students are normally distributed. Now, a random sample of size eight is drawn from this normal population, and the sample values are nine, fourteen, ten, twelve and so on. We are required to find the 90 percent confidence interval for the population variance sigma square, representing the variability in the marks of the students. Let us have a look at the data once again. The marks of the students in our sample are 9, 14, 10, 12, 7, 13, 11 and 12. You can see that these marks are not all the same; obviously, there is some variation. Now, if we talk subjectively, some will say that there is quite a bit of variation. So, obviously, we need a proper mathematical way of measuring it, and as you now see on the screen, if we are interested in constructing a 90 percent confidence interval for sigma square, the lower limit is summation x minus x bar whole square over chi square 0.05 at n minus 1 degrees of freedom, and the upper limit is summation x minus x bar whole square over chi square 0.95 at n minus 1 degrees of freedom. The next slide shows what we mean by chi square 0.05 and chi square 0.95. The value chi square 0.95 indicates that value on the x axis such that the area to its right is 95 percent of the total area under the chi-square distribution. Similarly, the value chi square 0.05 is that point on the x axis such that the area to its right is 5 percent of the total area. As indicated already, the degrees of freedom that we have to use are n minus 1, and in this problem that is 8 minus 1, which is 7.
Now, in order to find the values of chi square, we need to consult the chi-square table, which you now see on the screen. The very first column indicates the degrees of freedom, and the very top row indicates the areas that we would like to have to the right of our chi-square values. Hence, in this particular problem, if we want 90 percent confidence, we will be looking against 7 degrees of freedom: once under 0.95, where we find the value to be 2.17, and the other time under 0.05, where we find that chi square is equal to 14.07. Substituting these values in the lower and upper limits, we obtain summation x minus x bar whole square over 14.07 as the lower limit and summation x minus x bar whole square over 2.17 as the upper limit. Now, the question is, what is the value of summation x minus x bar whole square? That is very straightforward. We have the data, and we can find this value either directly or by the shortcut formula. So, as you now see on the slide, in this problem x bar is equal to 88 over 8, because the sum of the values is 88, and therefore x bar is equal to 11. Substituting this in the formula sum of x minus x bar whole square, we obtain 9 minus 11 whole square plus 14 minus 11 whole square plus and so on, so that the sum of the squares of the deviations of the values from their mean comes out to be 36. Substituting this number 36 in the lower and upper limits, we obtain 2.56 as the lower limit and 16.61 as the upper limit of our 90 percent confidence interval for sigma square. Students, we have found the 90 percent confidence interval for sigma square: lower limit 2.56 and upper limit 16.61. Now, this seems to be a bit wide. But if we wish to talk about sigma, the standard deviation, instead of sigma square, then we can take square roots, and that will give us the confidence interval for sigma. As you now see on the slide, if we do so, then the lower limit for sigma comes out to be 1.6 and the upper limit 4.1.
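The whole computation can be sketched as follows; a minimal sketch, with the table values carried to three decimals (2.167 and 14.067, which the lecture rounds to 2.17 and 14.07) so that the limits reproduce the lecture's figures:

```python
data = [9, 14, 10, 12, 7, 13, 11, 12]     # the sample of marks
n = len(data)
x_bar = sum(data) / n                     # 88 / 8 = 11
ss = sum((x - x_bar) ** 2 for x in data)  # sum of squared deviations = 36

# Chi-square table values for n - 1 = 7 degrees of freedom (90% confidence)
chi2_05 = 14.067   # area 0.05 to the right
chi2_95 = 2.167    # area 0.95 to the right

lo, hi = ss / chi2_05, ss / chi2_95
print(round(lo, 2), round(hi, 2))                # prints: 2.56 16.61
print(round(lo ** 0.5, 1), round(hi ** 0.5, 1))  # interval for sigma: 1.6 4.1
```

Note that the larger table value gives the lower limit and the smaller one the upper limit, since the sum of squares is being divided by them.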
And you can see that this, of course, is much narrower than what we had for the variance. But then, do keep in mind that the variance was in square units, whereas the standard deviation is in the original units. We adopt this method quite often: the basic formula is for sigma square, but later we may take the square root and regard the result as a confidence interval for sigma. Students, we have solved the problem, but the question is, how did we arrive at this particular formula? Well, it has a derivation similar to the derivation that I conveyed to you for the confidence interval for mu based on the z statistic, but I will not be doing the detailed derivation of this particular formula in this course. All I would like you to note is that the basic requirement for this formula to be valid is that the population from which the sample is drawn should be normally distributed. Having done interval estimation regarding sigma square, let us now proceed to hypothesis testing regarding the variance of the population. Let me explain this with the help of an example. As you now see on the slide, the variability in the tensile strength of a type of steel wire must be controlled carefully. A sample of the wire is subjected to a test, and it is found that the sample variance is 31.5. The sample size was n equal to 16 observations. Test the hypothesis that the population variance is 25 against the alternative that the variance is greater than 25. Use a 5 percent level of significance. Students, you will agree that in any production process we want to control the product according to the specifications, and the variability is what we want to control; that is exactly what we have in this particular problem. So, we want that the variability, in terms of the variance of the tensile strength, should not be greater than 25. Now, how do we solve this problem?
As you see on the slide, the null hypothesis is that sigma square is equal to 25, whereas the alternative is that sigma square is greater than 25. The level of significance is 5 percent, and students, the test statistic in this particular situation is summation x minus x bar whole square over sigma square, which under H naught has a chi-square distribution with n minus 1 degrees of freedom, assuming that the population is normal. In order to calculate the value of chi square, we note that summation x minus x bar whole square is equal to n times s square, and substituting these values in the formula, with 25 in place of sigma square, we obtain chi square is equal to 20.16. The next step is the critical region, and since this is a right-tailed test, students, we will look against 16 minus 1, that is, 15 degrees of freedom, under 0.05, and doing so, we obtain the critical value as 25.00. Since our computed value 20.16 is less than 25.00, we accept the null hypothesis, and we do not find evidence to conclude that the variability in the tensile strength is greater than 25. Students, in today's lecture, we discussed the difference between means in the case of paired observations, and after that, we began the discussion of the chi-square distribution. I would like to encourage you to study these concepts in detail. Until next time, Allah hafiz.
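As a closing footnote to the variance test above, the arithmetic can be sketched in a few lines. This is a minimal sketch, taking the sample variance with divisor n (so that the sum of squares equals n times s square, as stated in the lecture) and the tabulated critical value:

```python
n = 16
s_sq = 31.5       # sample variance (divisor n, so sum of squares = n * s^2)
sigma0_sq = 25.0  # hypothesized population variance under H0

chi2_stat = n * s_sq / sigma0_sq   # 16 * 31.5 / 25 = 20.16

# Right-tail critical value from the chi-square table: chi^2_{0.05, 15} = 25.00
chi2_crit = 25.00
decision = "reject H0" if chi2_stat > chi2_crit else "accept H0"
print(round(chi2_stat, 2), decision)   # prints: 20.16 accept H0
```

The computed value falls short of the critical value, so the hypothesis that the variance is 25 stands.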