Mean count per interval — inter-observer agreement, reliability. This is a tricky little procedure, especially when you read about it, because you end up saying the word agreement a lot, and then mean and averaging, all of that. All right, so listen carefully. Two people, they're each doing observations of the same thing — the same person, the same behavior. So they're going to end up with a count for interval one for each person. Say this person got five and this person got four. We're going to create a ratio of those: take the smaller divided by the larger. So four divided by five, 80%, right? We're going to do that for each and every interval that we have. So you can have 0% — they didn't agree at all — all the way up to 100%, they agreed perfectly. It's just the ratio between those two numbers. Two divided by three, 66%. One divided by three, 33%. You get the idea. So we get a measure of how well they agreed in each interval, and then we average those ratios, those percentages, at the end. Let's work through three intervals. First interval, both of our observers saw the exact same thing: three and three. Divide the lowest by the highest — 100%. Second interval, one observer saw nothing and the other saw it once. Divide zero by one: you end up with 0%. So now we've got two measures. In our third interval, we'll do another 100%: this person saw the behavior twice and this person saw it twice. Divide those — 100%. So we have the first interval, 100% agreement; second interval, 0% agreement; third interval, 100% agreement. Then we average all of those.
So we add all those up: 100 plus 0 plus 100 is 200, and then divide that by three. So you end up with what? About 67% agreement overall. And it doesn't have to be 100 or zero — an interval could come out 22%, 25%, whatever. You get the idea. You're just going to average each one of those ratios, those percentage agreements, across every interval and get one number overall. It's not as stringent as the exact count-per-interval method, but it's more conservative than a simple total count, and it's better than some of the other methods.
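The procedure above can be sketched in a few lines of Python. This is just an illustration — the function name `mean_count_per_interval_ioa` is made up for this example, and treating an interval where both observers recorded zero as 100% agreement is an assumption (the lecture's examples never hit the zero-and-zero case).

```python
def mean_count_per_interval_ioa(counts_a, counts_b):
    """Mean count-per-interval inter-observer agreement.

    For each interval: agreement = smaller count / larger count.
    (Assumed convention: 0 and 0 counts as perfect agreement.)
    Overall IOA = mean of the per-interval ratios, as a percentage.
    """
    if len(counts_a) != len(counts_b):
        raise ValueError("both observers must score the same number of intervals")
    ratios = []
    for a, b in zip(counts_a, counts_b):
        if a == b:  # identical counts, including 0 and 0: perfect agreement
            ratios.append(1.0)
        else:
            ratios.append(min(a, b) / max(a, b))
    return 100 * sum(ratios) / len(ratios)

# The three intervals from the lecture: (3, 3), (0, 1), (2, 2)
print(round(mean_count_per_interval_ioa([3, 0, 2], [3, 1, 2]), 1))  # → 66.7
```

Running it on the lecture's intervals gives the per-interval ratios 100%, 0%, 100%, which average to about 66.7%.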