Dear students, in the last module we talked about stability reliability, and now we are going to study consistency reliability. As I discussed with you, there are three types of consistency reliability: inter-rater reliability, inter-item reliability, and split-half reliability.

Let's first look at inter-rater reliability. Inter-rater reliability is a measure of consistency used to evaluate the extent to which different judges agree in their assessment decisions. What happens here? To check the phenomenon you are measuring, you have different observers. This can be done for an object or for a phenomenon, and the observers give their scores on that phenomenon.

For example, suppose there is a performance with five contestants, and two judges score each contestant from 0 to 10. For contestant 1, judge 1 scored 6 and judge 2 also scored 6. For contestant 2, judge 1 scored 7 and judge 2 scored 6. For contestant 3, judge 1 scored 4 and judge 2 scored 4. For contestant 4, judge 1 scored 3 and judge 2 scored 2. For contestant 5, judge 1 scored 4 and judge 2 scored 4.

After obtaining the judges' scores, we check the agreement between the judges. For contestant 1, both judges agree, so we give it a 1. For contestant 2, the judges do not agree, so we give it a 0. For contestant 3 there is again agreement, for contestant 4 there is not, and for contestant 5 the judges again agree. So out of the five contestants, the judges agree on 3. We can say that this is more than average agreement between the observers, which is why the inter-rater reliability here is comparatively high. If the agreement were 1, we would say the inter-rater reliability is low, and if the agreement were 5, we would say the inter-rater reliability is very high.
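The agreement count just described can be sketched in a few lines of Python. The scores below are illustrative, matching the two-judge, five-contestant example with three agreements:

```python
# Inter-rater agreement sketch: two judges, five contestants (illustrative scores).
judge1 = [6, 7, 4, 3, 4]
judge2 = [6, 6, 4, 2, 4]

# Mark 1 where the two judges give the same score, 0 where they differ.
agreements = [1 if a == b else 0 for a, b in zip(judge1, judge2)]
agreement_count = sum(agreements)
percent_agreement = agreement_count / len(agreements)

print(agreements)        # [1, 0, 1, 0, 1]
print(agreement_count)   # 3 out of 5 -> more than average agreement
```

The same tally works for any number of contestants; with continuous or near-miss scores, researchers usually move from exact agreement to a correlation-based index instead.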
Now, what is split-half reliability? Split-half reliability is determined by dividing the total set of items relating to a construct of interest into two halves, for example into odd-numbered and even-numbered items, and comparing the results obtained from these two subsets of items. So what happens here? You build an assessment tool and divide its items into two sections. Then you take the responses of each respondent on both sections and see how similar those responses are.

For example, suppose there are three respondents and the tool has 100 items: items 1, 3, ..., 99 form the odd-numbered half and items 2, 4, ..., 100 form the even-numbered half. Each respondent's score is computed on half-1 and on half-2, and we then check how consistent the scores are between half-1 and half-2.

The last type is inter-item consistency reliability. This is the most widely used form of reliability; when we measure reliability in social science research, we are mostly looking at this form. Here we check the consistency between multiple items measuring the same construct, and there are two methods to do so. The first is inter-item correlation. Suppose you have made a scale for a construct, for instance the CV19S scale with seven items measuring the fear of coronavirus. You check the inter-item correlations among those seven items. If the inter-item correlations are significant and high, say above 0.5, we can say that the inter-item consistency reliability is good. The other measure is Cronbach's alpha, which we will discuss in detail in the next module.

In this module, we have talked about the types of consistency reliability. In the next module, we will talk about Cronbach's alpha.
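The split-half procedure described above can be sketched in Python. The respondent data here are hypothetical (a short 6-item tool rather than 100 items), and the correlation is a plain Pearson r written out for clarity:

```python
# Split-half reliability sketch: divide the items into an odd-numbered half
# and an even-numbered half, then correlate the two half-scores across
# respondents. Hypothetical data: three respondents, 6 items, 1-5 scale.
responses = [
    [4, 5, 4, 4, 5, 4],   # respondent 1
    [2, 2, 3, 2, 2, 3],   # respondent 2
    [5, 4, 5, 5, 4, 4],   # respondent 3
]

def pearson_r(x, y):
    """Plain Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Half-1 = odd-numbered items (1, 3, 5); half-2 = even-numbered items (2, 4, 6).
half1 = [sum(r[0::2]) for r in responses]  # Python index 0, 2, 4
half2 = [sum(r[1::2]) for r in responses]  # Python index 1, 3, 5

r = pearson_r(half1, half2)  # close to 1 -> the two halves agree
```

In practice the half-test correlation is often adjusted upward with the Spearman-Brown formula, since each half is only half as long as the full tool.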
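The inter-item correlation check can be sketched the same way. The three items and five respondents below are hypothetical, not the actual CV19S data:

```python
# Inter-item consistency sketch: pairwise correlations between items that are
# meant to measure the same construct. Hypothetical data, 1-5 scale.
items = [
    [4, 2, 5, 3, 4],   # item 1, one score per respondent
    [5, 2, 4, 3, 5],   # item 2
    [4, 1, 5, 2, 4],   # item 3
]

def pearson_r(x, y):
    """Plain Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# One correlation per pair of items.
corrs = {}
for i in range(len(items)):
    for j in range(i + 1, len(items)):
        corrs[(i + 1, j + 1)] = pearson_r(items[i], items[j])

avg_r = sum(corrs.values()) / len(corrs)

# All pairwise correlations above 0.5 -> good inter-item consistency.
print(all(v > 0.5 for v in corrs.values()))  # True
```

With seven items, as in the CV19S example, the same loop would produce 21 pairwise correlations; Cronbach's alpha, covered next module, summarizes this consistency in a single coefficient.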