Inter-observer agreement, or IOA, is the degree to which two or more people indicate that they saw the same thing at the same time in the same context. So you get one person observing something and a second person observing the same thing, and the question is: how well do they agree? Note that agreement isn't quite the same as accuracy. High agreement doesn't prove the observed values match what actually happened, but it does make the data more believable. We break the observation period up into intervals to make scoring nice and easy, and then there's a whole bunch of procedures for calculating IOA. Please don't use the term inter-rater reliability, because inter-observer agreement speaks to believability, not reliability. Oftentimes in the field of psychology you'll hear those two terms used interchangeably, but we prefer inter-observer agreement because it's about how well two people agree, not how reliable your data is. An example here would be: how many times do I pick my nose during the videos? Grab a partner, watch 10 or 15 of my videos, and each of you count how many times I pick my nose. If you end up with high agreement, then your data is probably pretty believable.
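As a minimal sketch of what those calculation procedures look like, here are two common IOA formulas from the behavior-analysis literature: total count IOA (smaller count divided by larger count) and interval-by-interval IOA (intervals where both observers agree, divided by total intervals). The function names and the nose-picking data are my own illustration, not from the lecture.

```python
def total_count_ioa(count_a: int, count_b: int) -> float:
    """Total count IOA: smaller count divided by larger count, times 100."""
    if count_a == count_b:
        return 100.0  # also covers the case where both observers counted zero
    return min(count_a, count_b) / max(count_a, count_b) * 100

def interval_by_interval_ioa(obs_a: list[bool], obs_b: list[bool]) -> float:
    """Interval-by-interval IOA: intervals where both observers agree
    (both scored the behavior, or both scored its absence), divided by
    the total number of intervals, times 100."""
    if len(obs_a) != len(obs_b):
        raise ValueError("Observers must score the same number of intervals")
    agreements = sum(a == b for a, b in zip(obs_a, obs_b))
    return agreements / len(obs_a) * 100

# Hypothetical counts from one video: observer A saw 9 nose picks, observer B saw 10
print(total_count_ioa(9, 10))          # 9/10 * 100 -> 90.0

# Ten intervals; True means the behavior was scored in that interval
a = [True, False, True, True, False, False, True, False, True, False]
b = [True, False, True, False, False, False, True, False, True, True]
print(interval_by_interval_ioa(a, b))  # 8 of 10 intervals agree -> 80.0
```

Interval-by-interval IOA is usually preferred over total count IOA because two observers can arrive at the same total while disagreeing about when the behavior actually occurred.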