Hello, my name is Mark Handler, and good morning, good afternoon, or good evening, wherever you may be. I appreciate having you all on here tonight. I'm not going to take a whole lot of time, but I'd like to spend a little bit of it talking about some recent research we've done on the Verify product, just to let you know that we didn't just run with this thing. It's been years in the making, and I want to talk to you about the algorithm that we've got in it now and some of the recent work we've done. John Kircher will be my co-presenter on this; I'll do a lot of the talking, and John's the brains behind it.

We're going to talk about something called a single-issue V3R protocol. It's a mouthful, especially this late at night for me, but essentially it's the Verify 3R protocol, single-issue in that the guilty subjects in this case were programmed to commit one single-issue mock crime. We'll talk about that mock crime in a little bit, but it's the tried-and-true University of Utah mock-crime scenario that Dr. Kircher and Dr. Raskin have been using for years. Go ahead, please.

For this particular experiment, we used iPhones, Androids, and selfie sticks, and the tests were run in what's called Proctored Mode. Todd talked about that earlier and explained what Proctored Mode is: instead of having the person take a link and run the test by themselves, the test is actually proctored, much like a traditional EyeDetect test would have been. That's how these were conducted. Go ahead, please.

This single-issue test begins much like you saw in the earlier video, if you watched it. The subject is given instructions and an opportunity to practice a little bit, answering some true and false items and getting used to answering properly while holding the phone in the right place.
The questions are presented orally, so subjects hear the questions, and they answer verbally. They're told to answer quickly and accurately, lest they fail the test. The point is that we don't want them playing around; we want them to listen to the question and answer it.

Now, for those of you who are familiar with what we call the audio MCT, these statements are much like the audio MCT statements in that the subject won't know their answer until they've heard the whole statement. We'll give you an example that you can see at the end, but they don't know whether they're going to answer true or false until the statement gets to the very end.

There are three issues in this test, only one of which the guilty folks were programmed to participate in; the other two, they were not. No two questions in a row ask about the same thing. In other words, you wouldn't be asked about stealing the $20 twice in a row. The statements are presented randomly, but mixed up so that doesn't happen. As always, the statements are balanced for length, and in this case for negation. And as I mentioned, the subjects won't know what their answer is until they get to the end. So for example: regarding the theft of the $20, I am innocent. Even if the person were programmed guilty, you would expect them to answer true, lying to that statement, right? Regarding the theft of the $20, I am guilty. You would expect them to answer false to that. Go ahead, please.

As always, we recruit, or try to recruit, from the community, and it's a very challenging endeavor, I can assure you. Anybody who's done this kind of research knows that, and we were taught this by Drs. Raskin and Kircher: the best way to recruit subjects is from the community from which you would likely be testing, so that the results tend to generalize to who might actually end up in your testing chair. Finding these people is difficult.
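As a rough sketch of that ordering constraint (the issue labels, statement wording, and retry-shuffle approach here are my own illustrative assumptions, not the actual Verify implementation):

```python
import random

def order_statements(statements, seed=None):
    """Shuffle until no two consecutive statements query the same issue."""
    rng = random.Random(seed)
    while True:
        order = statements[:]
        rng.shuffle(order)
        if all(a["issue"] != b["issue"] for a, b in zip(order, order[1:])):
            return order

# Hypothetical stimulus set: three issues, balanced for negation.
stimuli = [
    {"issue": "cash",  "text": "Regarding the theft of the $20, I am innocent."},
    {"issue": "cash",  "text": "Regarding the theft of the $20, I am guilty."},
    {"issue": "ring",  "text": "Regarding the theft of the ring, I am innocent."},
    {"issue": "ring",  "text": "Regarding the theft of the ring, I am guilty."},
    {"issue": "phone", "text": "Regarding the theft of the phone, I am innocent."},
    {"issue": "phone", "text": "Regarding the theft of the phone, I am guilty."},
]
ordered = order_statements(stimuli, seed=1)
```

A retry shuffle is fine at this small scale; a longer question list would call for a constructive interleaving instead.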
We've got a couple of folks who are really good at shaking the trees and getting subjects out, but it takes a lot of time and a lot of work. We offer them $40 to play lie detector, with the opportunity to win an extra $30 if they can pass the test. Imagine if every time you brought somebody in and tested them, you had to reach into your wallet or purse and pull out $70, and then you run hundreds of cases with hundreds of people; it gets really expensive, both in the time and effort of finding them and in paying them. I throw that out there because a lot of times we hear, well, just do more research. That's nice to say, but research is not easy, and you've got to have the statistical chops to handle the results of the research as well as people to conduct it. And we have a couple of folks at Converus who are phenomenal at it.

Subjects are randomly assigned to one of two ground-truth conditions, innocent or guilty. Those who were innocent stole nothing; those who were guilty stole $20. Everybody was asked about three different thefts: the theft of a ring, the theft of a cell phone, and the theft of the $20 for which the guilty were programmed. No one stole a ring, no one stole a cell phone, and the innocent people didn't steal anything. Go ahead, please.

We're going to talk a little bit about the results from the single-issue V3R. When you're measuring these things and trying to find the window of evaluation, where the best data lie, again, it's not a simple thing. It takes Dr. Kircher days and days of analyzing data just to find where the best data come from.
Essentially, what we're looking at in this case: there's a question onset, the beginning of that pinkish, or maybe peach-colored, area; that's where the question begins to be asked of the person. When the question is done being asked, that's the end of the peach-colored area. And the line you see with the minus sign down at the bottom is the subject answering no.

For most metrics that we're gathering, we measure from question onset until the time the examinee actually says true or false, when they answer, with the exception of response time. Response time is the latency it takes a person to answer once the stimulus has offset; basically, once the question is done being asked, how long passes between then and the answer. That amount of time is actually pretty diagnostic, and it has been diagnostic both for traditional EyeDetect and for Verify. So those are the time windows we were looking at.

I'm going to show you a little picture of what the pupil looks like in the groups of folks who were programmed innocent and guilty, to give you an appreciation for the fact that things do happen when we lie. Our CTO's ability to draw this information out using a cell phone camera is, to me, just phenomenal, but you can see it here. We plot the response curves for both the innocent and guilty folks: the innocent are on your left and the guilty are on your right, and we want to see what differences occur when someone lies. These figures show pupil change from question onset for about seven or so seconds.
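In code, those two scoring windows might look like this (a minimal sketch; the timestamps and function name are made-up illustrations, not Verify's internals):

```python
def measurement_windows(question_onset, question_offset, answer_time):
    """All times in seconds from recording start.

    Most metrics are scored from question onset to the verbal answer;
    response time is the latency from question offset to the answer.
    """
    full_window = answer_time - question_onset
    response_latency = answer_time - question_offset
    return full_window, response_latency

# e.g., question read aloud from t=0.0 to t=2.4 s, subject answers at t=3.1 s
full, latency = measurement_windows(0.0, 2.4, 3.1)
```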
The red line is the pupil curve for questions about the cash, the blue line shows reactions to questions about the ring, and the green line shows reactions to questions about the cell phone. Again, no one stole a ring and no one stole a cell phone. As you can see, and as would be expected if things work, and we do see that they work here, for the innocent folks in the left panel there's no difference, or very little difference, among the three relevant statements, because they didn't steal a ring, a cell phone, or the cash. However, if you look over on the right-hand side, the guilty subjects show a much more pronounced response to the theft-of-cash question than to the other two questions, for which they were telling the truth. They say a picture's worth a thousand words, and I don't know if I've spent a thousand words on this slide or not, but I'll stop and let the picture shut me up. Next slide, please.

When we're looking at these things, what we look at is the validity coefficients. We want to see which metrics, which things that we're gathering, measuring, and collecting, give us the biggest bang for the buck. Now, it should be no surprise to anybody who's played in the lie detector sandbox for any amount of time that the pupil gives us a lot of information. There are several aspects of pupil dilation that we take into consideration, which Dr. Kircher has found to be very valuable in separating or differentiating truth tellers from liars. When we extract these features, we're basically trying to put people into buckets: a truthful bucket or a deceptive bucket. The closer those coefficients are to plus one or minus one, the more diagnostic they are, the better they discriminate between truthful and deceptive people.
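To make the plus-one/minus-one idea concrete, one standard way such a validity coefficient can be computed is as a point-biserial correlation between a feature and ground truth. I'm assuming that correlation form for illustration, not quoting Dr. Kircher's actual computation, and the feature values below are invented.

```python
from statistics import mean, pstdev

def validity_coefficient(feature, guilty):
    """Point-biserial correlation; guilty is 0 (innocent) or 1 (guilty)."""
    mx, my = mean(feature), mean(guilty)
    cov = mean((x - mx) * (y - my) for x, y in zip(feature, guilty))
    return cov / (pstdev(feature) * pstdev(guilty))

# Invented pupil-change scores: larger among the guilty, so the
# coefficient lands near +1 (diagnostic in the positive direction).
pupil = [0.10, 0.20, 0.15, 0.60, 0.70, 0.65]
truth = [0, 0, 0, 1, 1, 1]
r = validity_coefficient(pupil, truth)
```

A feature that is *smaller* when someone lies would come out with a negative coefficient, which previews the minus-sign discussion below.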
If the validity coefficient were zero, the feature is basically not diagnostic and wouldn't be very helpful at all. All of the features listed on this particular slide are statistically significant, meaning they provide a meaningful contribution toward placing someone in one category or the other. The pupil, obviously, as I said, is more diagnostic, but vascular activity, which in this case is a measure of blood flow, is diagnostic too. We've seen that in polygraph through the years with something called a photoelectric plethysmograph; I think Dr. Raskin has been using one since the '70s. I don't want to date you, David. They've known for years that blood flow can be diagnostic. And response time using verbal answers is diagnostic. It's not as diagnostic as what we had seen when someone uses a mouse, say, or clicks on the arrows; typically, most folks use the mouse in the traditional lie detector. But response time is diagnostic. When you see a number with a minus sign in front of it, that means the measure is smaller when someone is lying, just if you wonder why the minus sign is there. For example, liars respond more quickly than truth tellers; that's why the response time coefficient is negative.

All right, Todd. So I'm going to give you five seconds to memorize this equation, then we'll put it away, and anybody who can spit it back out will win a prize at the end of the day. That is simply not true; I'm lying to you. This is the logistic regression model that we use, just to give you an idea; we've shown it before. It's just Dr. Kircher's way of taking the individual measures and giving each a certain amount of weight. So they develop these features, take the features, and Dr. Kircher develops a model from them. Then, when we want to determine how well the model works, we do what's called an n-fold cross-validation.
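Putting the two pieces together, here is a toy sketch of an n-fold evaluation of a logistic regression classifier: synthetic one-feature data, a hand-rolled gradient-descent fit, and per-class mean accuracies. None of this is Dr. Kircher's actual model, feature set, or weights; it only illustrates the procedure.

```python
import math
import random

def fit_logistic(X, y, lr=0.5, steps=1000):
    """Gradient descent on the logistic loss; returns weights and bias."""
    w, b, n = [0.0] * len(X[0]), 0.0, len(X)
    for _ in range(steps):
        gw, gb = [0.0] * len(w), 0.0
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            for j, xj in enumerate(xi):
                gw[j] += (p - yi) * xj
            gb += p - yi
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict_guilty(w, b, xi):
    # Logit >= 0 is equivalent to predicted probability >= 0.5.
    return sum(wj * xj for wj, xj in zip(w, xi)) + b >= 0.0

# Synthetic subjects: one pupil-change feature, larger on average for
# the guilty. 130 subjects, as in the study, split into 4 stratified folds.
rng = random.Random(7)
subjects = ([([rng.gauss(0.2, 0.1)], 0) for _ in range(65)] +
            [([rng.gauss(0.6, 0.1)], 1) for _ in range(65)])
folds = [[] for _ in range(4)]
for label in (0, 1):
    idx = [i for i, (_, y) in enumerate(subjects) if y == label]
    rng.shuffle(idx)
    for i, subject in enumerate(idx):
        folds[i % 4].append(subject)

acc = {0: [], 1: []}  # per-fold accuracy for innocent (0) and guilty (1)
for held_out in folds:
    train = [i for f in folds if f is not held_out for i in f]
    w, b = fit_logistic([subjects[i][0] for i in train],
                        [subjects[i][1] for i in train])
    for label in (0, 1):
        test = [i for i in held_out if subjects[i][1] == label]
        acc[label].append(sum(predict_guilty(w, b, subjects[i][0]) == label
                              for i in test) / len(test))

mean_innocent = sum(acc[0]) / len(acc[0])
mean_guilty = sum(acc[1]) / len(acc[1])
overall = (mean_innocent + mean_guilty) / 2
```

The figure of merit described in the talk is then `overall`, the mean of the two per-class mean accuracies, with each subject scored only by a model that never saw their data.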
The n stands for the number of folds, the number of buckets we're going to create with the data, and oftentimes that depends on the number of subjects we have available to divide into buckets. As you'll see on the next slide, for this particular study we had 130 subjects, and we divided them into four folds, or four subsets if you will, with approximately equal numbers of innocent and guilty subjects in each fold.

Then what we do is take out one subset. Say we take out the first subset, fold one, and put it in time out. Dr. Kircher then develops a model using the remaining three subsets, and uses that model to calculate credibility scores for the subjects in the holdout fold. That means the folks being evaluated contributed no data to the model used to score them, which helps us see how well these models will generalize. Next, you take out group two and put it on the timeout bench, create a new model with folds one, three, and four, and once you've got that model, you come back and evaluate the group-two folks with it. Each time you do that, you write down how well you did with the innocent and how well you did with the guilty, and you lather, rinse, repeat until you're done. Then you calculate the average, or mean, for the innocent and for the guilty, and combine them to give yourself an overall mean.

Mean accuracy is what we call the figure of merit; that's what we're really concerned with at the very end: the mean accuracy for innocent folks, the mean accuracy for guilty folks, and the overall mean. Our goal is to get to 80% accuracy, and we're bumping up against it, coming close. And I have full faith in Dr. Kircher that the more he evaluates these data, and the more subjects we get, we will reach that 80% accuracy level for categorizing liars versus truth tellers, people who lie about one or more relevant issues.

So that kind of sums up what we've done so far with this. And unless there are questions in the chat that Russ or Dr. Kircher haven't already answered, I am finished.