Discussing reliability in your study requires that you discuss the effect, or expected effect, of random measurement error on your results. So it's worth understanding, in general terms, what kind of influence unreliability has on study results. Let's take a look at an example. Here we have perfect measurement: X and Y are measured without measurement error, so observed X equals the X that we're interested in and observed Y equals the Y that we're interested in. They have a correlation of 0.5 in the population, our sample size is 300 in this example, and when we take one sample, the sample correlation is 0.5 to two digits of precision. So with perfectly reliable measurement and a large enough sample, we get very close to the correct correlation.

What happens if we have unreliable measurement? If there is random noise in our measurements, the estimated correlation will be too small. In this case the reliability of X and Y is 50%: half of the variance in observed X is measurement error and half is the latent X that we're interested in, and the same holds for Y. We can see that the observed correlation is now a bit less than 0.3, which means it's underestimated by about 40%. The real value that we would like to recover, the population correlation between latent X and latent Y, is 0.5, but we observe a correlation of 0.29. This is called attenuation: when you have random noise in your data, all bivariate statistical relationships estimated from that data will be smaller. The impact on regression analysis with multiple independent variables is more complicated, but generally, when all variables are unreliable, all regression coefficients will on average be biased downward.
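The attenuation effect described above can be reproduced with a short simulation. This is a minimal sketch, not the lecture's actual setup: the seed, normal distributions, and error variances are assumptions. With both reliabilities at 50%, the classical attenuation formula predicts an expected observed correlation of 0.5 × √(0.5 × 0.5) = 0.25, and any single sample of 300 will scatter around that value.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
true_r = 0.5

# Draw latent X and Y with population correlation 0.5.
latent = rng.multivariate_normal([0.0, 0.0],
                                 [[1.0, true_r], [true_r, 1.0]],
                                 size=n)

# 50% reliability: add error with the same variance as the latent score,
# so half of each observed variable's variance is measurement error.
observed = latent + rng.normal(scale=1.0, size=(n, 2))

r_latent = np.corrcoef(latent[:, 0], latent[:, 1])[0, 1]
r_observed = np.corrcoef(observed[:, 0], observed[:, 1])[0, 1]

# Classical attenuation formula: E[r_observed] ~ true_r * sqrt(rel_x * rel_y).
r_expected = true_r * np.sqrt(0.5 * 0.5)   # = 0.25
```

The observed correlation falls well below the latent one, illustrating that the bias comes from the noise itself, not from sampling error.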
If you have two variables, for example one interesting variable that is perfectly measured and one control variable that is a bit unreliable, then the unreliability of the control variable can cause the coefficient for the interesting variable to be overestimated. So that can also happen, but generally, when all variables are unreliable, attenuation means that all regression coefficients will be slightly underestimated. You have to understand that and then discuss what it means for your particular research result.

A second important question, beyond the impact of reliability, is how we can improve reliability. You can of course do so by improving the reliability of individual measures and developing your scales better, but there are also other alternatives. The idea is that when we have multiple different indicators of the same thing, multiple repeated measures, we can take correlations between those measures and use them to assess reliability. We can also take a mean of the measurements. For example, if you have a wiggling child and you can't weigh the child precisely because she wiggles, then if you weigh her five times and take the mean, that mean is more reliable than any of the individual measurements. That is why we take sums of scale scores, and it's a second reason why we take multiple indicators: we triangulate by measuring the same thing over and over, and the mean of those measurements is more reliable than any single one.

The same principle can be understood through the target practice diagram. Here we have highly reliable but invalid measures; here we have somewhat unreliable but valid measures. The idea of taking a sum or mean of these measures is that we find the center of mass of the five shots, which this time lands right in the middle. So the mean of the individual shots is a lot more precise than any individual shot.
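The wiggling-child example can be sketched numerically. The true weight, the noise level, and the number of weighings here are hypothetical values chosen only for illustration; the point is that the standard deviation of a mean of five independent measurements shrinks by a factor of √5.

```python
import numpy as np

rng = np.random.default_rng(1)
true_weight = 20.0   # hypothetical child's weight in kg (assumed value)
error_sd = 1.0       # assumed noise of a single weighing

# 10,000 replications of "weigh the child five times".
weighings = true_weight + rng.normal(scale=error_sd, size=(10_000, 5))
means_of_five = weighings.mean(axis=1)

sd_single = weighings[:, 0].std()   # close to error_sd
sd_mean = means_of_five.std()       # close to error_sd / sqrt(5)
```

The mean of five weighings is thus a more reliable measure of the same latent quantity than any single weighing, which is exactly the logic behind summed scale scores.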
So that's the reason why we use composite measures. Instead of using the items of this innovativeness scale as separate indicators, or just picking the one that is most reliable, we take a sum, and that sum is our innovativeness score or scale score, and it becomes our measure in the regression analysis. It's theoretically possible to increase the reliability of a sum by weighting the indicators, so that you weight one indicator a bit more than the others based on its reliability, but the advantages of that approach are trivial, and therefore we simply take the mean or the sum. That's the standard practice, and the reliability indices that you would normally apply also assume that your scale score is an unweighted sum.
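An unweighted composite and its reliability can be computed as follows. This is a sketch under assumed data: the five "innovativeness indicators" are simulated as a latent trait plus item noise, and the reliability index shown is Cronbach's alpha, which, as noted above, assumes an unweighted sum.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_obs, n_items) array, assuming the
    scale score is the unweighted sum of the items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(2)
n = 200
latent = rng.normal(size=n)

# Five hypothetical innovativeness indicators: latent trait + item noise.
items = latent[:, None] + rng.normal(scale=1.0, size=(n, 5))

score = items.mean(axis=1)      # unweighted composite scale score
alpha = cronbach_alpha(items)   # reliability of the unweighted sum
```

Because each item here carries the same latent signal, the five-item composite is substantially more reliable than any single item, which the alpha value reflects.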