Hello everyone, welcome back to a new session on dentistry and more. Today's topic is a small one: training and calibration of the examiner. Before we begin a research process or conduct a survey, we need to train and calibrate the examiners. Let's take an example. We are doing an oral health survey using two indices, the OHI-S and the DMFT, to check the oral hygiene status and the dental caries status of a group of people. We have two examiners, Mr. A and Mr. B. They are expected to be thoroughly versed in these indices so that they can produce accurate results. What if Mr. A and Mr. B have not studied or prepared well with the indices, the criteria and the methodology, and they produce wrong, inaccurate results? The entire research study would be a total loss. So training and calibration is a procedure to make sure that the examiners are well versed with the procedures, criteria and other protocols they are going to apply in the study.

This is commonly done via a pilot study. Before any research process, if you are doing a study on 1000 people, we first conduct it on 30 or 50 people; that is known as a pilot study. We have seen pilot surveys earlier. Why do we conduct a pilot survey? To ensure that the examiners are well versed, accurate and properly trained, and that they are good at exploring and identifying the diseases the way they are supposed to be identified. So training and calibration is nothing but ensuring that the examiners are uniform in interpretation and good at identifying and classifying the disease based on the indices or whatever criteria they are supposed to use. Formally, it is to ensure uniform interpretation, understanding and application by all examiners of the codes and criteria for the various diseases and conditions to be observed.
Just as I said, if you are using the OHI-S and the DMFT, the examiners should be very thorough with them; they should know these criteria, the methodology, the instruments and where to use them inside out. There should not be any confusion between the examiners.

While doing training and calibration, we need to establish intra-examiner and inter-examiner reliability. Reliability means the person is able to reproduce the same results on repeated performance. Mr. A has to produce consistent results repeatedly, and likewise Mr. B. Checking the consistency of Mr. A (or of Mr. B) with himself is known as intra-examiner reliability, and checking the agreement between Mr. A and Mr. B is known as inter-examiner reliability. I repeat: intra-examiner reliability is within the same person. If we have three examiners, we need to find the intra-examiner reliability of A, B and C, and at the same time we need to establish inter-examiner reliability, that is, agreement between the examiners.

How do we calculate the reliability? By a statistical test known as the kappa statistic. A mathematical calculation is involved. We allot a few subjects or patients to each examiner and they record the data. Each examiner examines the same patients twice to establish intra-examiner reliability, and all the examiners examine the same patients to establish inter-examiner reliability. So if we have 10 patients, Mr. A examines the 10 patients, and after one hour he examines them again; that establishes intra-examiner reliability. The same 10 patients are also examined by Mr. B and even Mr. C; that establishes inter-examiner reliability. Kappa ranges from minus one to plus one. A kappa of plus one means there is perfect agreement, whether intra- or inter-examiner; that means we have 100% reliable examiners.
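As a rough illustration of the kappa calculation described above, here is a minimal Python sketch of Cohen's kappa, κ = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement and p_e is the agreement expected by chance. The patient scores and examiner names are hypothetical, chosen only to mirror the 10-patient example in this lecture:

```python
from collections import Counter

def cohen_kappa(ratings_1, ratings_2):
    """Cohen's kappa: agreement between two sets of ratings, corrected for chance."""
    n = len(ratings_1)
    # Observed agreement: fraction of patients scored identically in both sets.
    p_o = sum(a == b for a, b in zip(ratings_1, ratings_2)) / n
    # Chance agreement: from each set's marginal frequency of every code.
    f1, f2 = Counter(ratings_1), Counter(ratings_2)
    p_e = sum(f1[code] * f2[code] for code in f1) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical caries codes (1 = carious, 0 = sound) on the same 10 patients.
a_first_round  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # Mr. A, first examination
a_second_round = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # Mr. A, one hour later
b_round        = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]  # Mr. B, same patients

intra_a  = cohen_kappa(a_first_round, a_second_round)  # intra-examiner reliability
inter_ab = cohen_kappa(a_first_round, b_round)         # inter-examiner reliability
for name, kappa in [("intra A", intra_a), ("inter A-B", inter_ab)]:
    verdict = "fit for the survey" if kappa >= 0.8 else "needs retraining"
    print(f"{name}: kappa = {kappa:.2f} -> {verdict}")
```

In this made-up run, Mr. A agrees perfectly with himself (kappa 1.00) but differs from Mr. B on two patients, and the chance correction pulls that 80% raw agreement down to a kappa of about 0.58, below the 0.8 cut-off, so the pair would be sent back for retraining.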
A kappa of minus one means complete disagreement between the examiners, and a kappa of zero means the agreement is no better than chance. We should have at least 80% agreement, that is, a kappa of at least 0.8, before starting a study with these examiners. That is the idea of training and calibration. If the examiners do not reach a kappa of 0.8, they need to be trained and calibrated again: they need to study the indices properly, understand the methodology properly, repeat the examination in the pilot study and prove themselves again. Once they have proven both intra- and inter-examiner reliability at 0.8 or above, they are fit for the examination or the epidemiological survey. So that is the training and calibration method; it is nothing but proving the reliability of the examiners. That is all about training and calibration. I'll come up with a new session soon. Thank you.