When we report reliability statistics, there are two parts to the task: first you report the statistic itself, and then you need to interpret it. If your coefficient alpha is 0.75, what does that number actually mean? That is what this video is about. There are some concerns in journals that authors who submit articles are using reliability statistics rather mechanically. This is an example from the Guide and Ketokivi editorial in the Journal of Operations Management, where the co-editors explain that quite commonly people interpret reliability coefficients by simply checking whether the coefficient exceeds 0.7, and they cite the Nunnally book from 1994, or perhaps 1978, which is a bit more common. So what is the problem with this strategy? There are two problems. First, the Nunnally citation typically does not have a page number, and if you actually go and look at what the book says, it does not recommend a specific cut-off that works for every possible scenario. Second, reliability is not a yes-or-no question but a matter of degree, and you need to evaluate what the specific number means in your context. Sometimes high reliability is required, sometimes not so much. What does Nunnally actually say? He does not say that 0.7 is always acceptable. Instead, he recommends that for very early research 0.7 may be okay, whereas if you are testing a more refined and developed theory, 0.90 may be required. Typically, when authors submit papers to journals, they just pick the lowest number out of convenience. Many have probably not read that section of the book, or even seen the book; they cite it out of habit. So that is one thing: Nunnally does not say that 0.7 is always okay; it depends on the context. But there is also a more general problem.
The more general problem is: why would we care what a psychologist said 40 years ago? This is Nunnally's personal, subjective opinion, given 40 years ago in the context of psychology. Why would what one person considered okay for psychologists back then be relevant for other disciplines now? And maybe there are better benchmarks than this one that we could apply. This is something that the editorial in the Journal of Operations Management tries to address. Instead of relying on this rule of thumb, they actually point to the page number where readers can see for themselves what Nunnally recommends, and that is a very good idea. They say that you should contextualize, and this is not the only editorial I have seen that calls for contextualizing the reliability coefficient, or statistics generally. The problem is that none of the editorials I have read actually explain how you should contextualize. How do you actually make a contextual assessment of a reliability coefficient? There do not seem to be many guidelines on that. Nunnally's book actually has a chapter on the assessment of reliability, and he discusses some strategies for interpreting the reliability coefficient in context, but it is not a very modern source anymore, and the field has advanced beyond this book. There are basically two ways of assessing reliability. The first strategy is to estimate how much bias the degree of unreliability is expected to introduce into your results. To do so, you can use errors-in-variables regression analysis; if your statistical software does not contain an errors-in-variables procedure, you can do the same with structural equation modeling software by fixing the error variances based on the reliability estimates, or you can correct correlations for attenuation using the classic formula.
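As a concrete sketch of the correction for attenuation mentioned above, the classic Spearman formula divides the observed correlation by the square root of the product of the two reliabilities. The numbers below are hypothetical, chosen only for illustration:

```python
import math

def disattenuate(r_xy, rel_x, rel_y):
    """Correct an observed correlation for attenuation due to measurement
    error: r_true = r_observed / sqrt(rel_x * rel_y)."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Hypothetical example: observed correlation 0.30,
# reliability estimates 0.75 and 0.80 for the two measures.
r_corrected = disattenuate(0.30, 0.75, 0.80)
print(round(r_corrected, 3))  # 0.387
```

The corrected value estimates what the correlation between the true scores would be if the reliability estimates were exactly right.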
One problem with these strategies is that the reliabilities are themselves estimates, so there is uncertainty, and taking that uncertainty into account in the correction is actually quite difficult to do. Nevertheless, even if the standard errors for a correlation corrected for attenuation were slightly too small, the corrected correlation, or the errors-in-variables model, is useful as a kind of what-if scenario: what if my reliability estimates are correct, and what would the result of the analysis be in that case? You should also understand what the reliability index quantifies. Beyond coefficient alpha, if you use one of the multidimensional reliability indices, then depending on which one you apply, you will get a different result for the corrected correlations or corrected regressions. You need to understand what the various reliability indices quantify to know which one is applicable: should you use a test-retest reliability index, one of the internal consistency indices, and so on. One thing is clear: you should never use a lower-bound estimate for this purpose. For example, the greatest lower bound is going to underestimate reliability, and if you underestimate reliability, then you are overcorrecting the regression estimates for unreliability. The same goes for coefficient alpha: it tends to underestimate reliability unless the tau-equivalence assumption holds, and if it underestimates, then you are overcorrecting, which means that you are overestimating the correlations. So that is the first strategy: fit a model that takes the reliability estimates into account and corrects the regression estimates based on them. The second strategy is to use contextual benchmarks.
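The what-if reading described above can be made explicit by sweeping a range of plausible reliability values and seeing how much the corrected correlation moves. All numbers here are hypothetical placeholders:

```python
import math

def disattenuate(r_xy, rel_x, rel_y):
    """Spearman's correction for attenuation."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Sensitivity sketch: hypothetical observed correlation of 0.30, fixed
# reliability of 0.80 for y, and varying assumptions about x's reliability.
for rel_x in (0.60, 0.70, 0.80, 0.90):
    print(f"rel_x = {rel_x}: corrected r = {disattenuate(0.30, rel_x, 0.80):.3f}")
```

The lower the assumed reliability, the larger the correction, which is exactly why using a lower-bound reliability estimate overcorrects.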
For example, you can compare your reliability estimates against reliability estimates calculated from prior applications of the same measure, against prior, different measures of the same construct, against similar measures of similar constructs in your domain, or against reliability levels that are typical for research at the same level of maturity as your study. So instead of looking at what Nunnally's opinion was 40 years ago, you look at what the typical reliability levels in your field are now, and then you make a comparison: is your reliability better or worse? After that, you need to explain the difference. If reliability for a measure is typically, let's say, 0.75 and you get a reliability of 0.95, then you need to explain why there is a difference. You obviously also need to explain differences in the other direction: if others get a reliability of 0.85 and you get 0.65, then you need to explain the reason. And remember that reliability depends on two different things: the sample variance of the true scores and the error variance. So if you have low reliability, it does not necessarily mean that your measurements are imprecise. It can also mean that the total variation in the sample is very small, which means that the relative precision, which is what reliability quantifies, is small. It does not mean that there is more error variance in the absolute sense; reliability is about the relationship between the true score variance and the error variance, and both of course affect the reliability statistic. My recommendation is that you should always apply both of these strategies. So even if you do not use an errors-in-variables regression or a structural regression model as your main analysis technique, it is a useful way of doing a kind of robustness check or what-if analysis.
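The point above, that low reliability can reflect restricted true-score variance rather than imprecise measurement, follows directly from the classical test theory definition of reliability. A minimal sketch with hypothetical variance values:

```python
def reliability(var_true, var_error):
    """Classical test theory: reliability is the share of observed-score
    variance that is due to true-score variance."""
    return var_true / (var_true + var_error)

# Same absolute error variance (0.25) in both samples, but the second
# sample has restricted true-score variance (hypothetical numbers):
print(round(reliability(1.00, 0.25), 3))  # broad sample      -> 0.8
print(round(reliability(0.25, 0.25), 3))  # restricted sample -> 0.5
```

The measurement instrument is equally precise in both cases; only the sample's true-score spread differs, and that alone drops reliability from 0.8 to 0.5.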
So what would the results be if the estimates were corrected for unreliability? If the corrected results are very different from your original results, then you have some explaining to do in your study. And comparing against benchmarks from prior research is always a good idea.