My name is Todd Mickelson. I'm the president and CEO of Converis, and I'm really excited to share with you a major new product that we are launching today. If you've visited our website, you may already know what we're going to talk about, because we issued a press release and turned on the capabilities on our website to expose what we announced in the first session this morning. In summary, it's a new and exciting product, something we've been working on in research and development for almost three years now. And as you'll see as we get into the session, it's nothing short of a significant technological breakthrough in the area of truth verification. So let's jump in. Exactly 10 years ago yesterday, on March 7th of 2013, we founded Converis with the goal of being the leading technology provider for credibility assessment, or what some refer to as truth verification. At the time we formed Converis, most solutions in the market, if not all, had at least one of three issues: they were unreliable, costly, or laborious. Many of the solutions at the time, and some continuing today, have issues in all three areas. Some solutions were unreliable because their accuracy wasn't much better than flipping a coin; the testing process wasn't automated, and the results were left too much to human interpretation, opening the door for bias. Some of these solutions were, and continue to be, very costly because they all require an experienced examiner and specialized equipment to run these kinds of tests. Other solutions were, and continue to be, laborious because of the length of the tests, the complexity of the testing protocols, the scheduling of participants, et cetera.
So when we founded Converis, we did so with the belief that we could really make a difference in this area because of the work that John Kircher, David Raskin, and the other members of our science team at the University of Utah had done in the previous 10 years on ocular-based deception detection. In summary, if you're not familiar, lying requires more cognitive effort. In other words, it takes more effort to lie than to tell the truth. This increase in cognitive load, as it's referred to scientifically, causes involuntary changes in the eyes. By developing technology that can monitor these involuntary diagnostic changes, we believed at the time we launched the company that we could deliver more automated and less labor-intensive solutions: solutions delivered via a computer that people could run without significant training and experience in this area. So a little over a year later, in June of 2014, we released our first commercial product, iDetect. It was really the first accurate, automated, non-contact, and fast solution in the market in terms of being able to take a test very, very quickly: iDetect tests take between 15 and 30 minutes depending on the number of issues you're testing for, and test scores and reports are automatically generated within about five minutes. The accuracy of iDetect is between 86 and 88%, depending on which protocol you're running. And test proctors, as we call them instead of examiners because they can really be less experienced individuals who are trained to proctor a test and ensure that the test is going well, can be trained in under a day. Since that time, we've seen great adoption of the product. We have more than 600 customers in more than 60 countries running tests in more than 50 languages. Many of these customers run thousands of iDetect tests every year.
And if you are one of our current iDetect customers, we thank you for your embrace and use of the technology, and we look forward to continuing to support you in the use of iDetect and these new products we're announcing today. In 2021, in the middle of COVID, we launched our second major technological advancement, iDetect Plus. It's really the first automated polygraph. With iDetect Plus, you get all the capabilities and benefits of iDetect combined with the traditional channels, sensors, and capabilities of polygraph, but in a faster, less intrusive way. The uncomfortable blood pressure cuff used in polygraph, whose pressure must be relieved periodically during the test, causes polygraph exams to run longer. And if you talk to people who've been polygraphed, they'll say it's the most uncomfortable aspect of a polygraph. One of the major technological advancements in iDetect Plus was the ability to replace the traditional blood pressure cuff with pulse transit time. Pulse transit time enabled us to automate a polygraph exam. You can still run it like a traditional polygraph in terms of advancing the questions, but the questions are presented by the computer, and we capture not only the traditional polygraph channels but also the ocular measures that exist in iDetect. That enabled us to get even better accuracy, which increased to 91%. So those were our two significant technological advancements of the last 10 years: the idea of using ocular measures, and then the coupling of polygraph sensors with ocular measures in a less intrusive way. Despite the difference we believe we've made in advancing accuracy and in automating tests to make them shorter and usable by people with minimal training,
iDetect, iDetect Plus, and other existing solutions in the market still have some of the limitations I mentioned when we entered the space and formed the company 10 years ago. Specifically, specialized equipment is still required for all of these tests, which significantly limits who can be tested. In other words, test takers must go to a location with specialized equipment, or an experienced examiner or proctor must take that specialized equipment to them. Furthermore, testing is limited and costly because with most of these tests, including iDetect and iDetect Plus, a proctor or examiner must be present during the test. And although single-issue investigative iDetect tests are as short as 15 minutes from beginning to end, many customers and interested potential customers have said that, depending on the circumstance and what they want to do, that's still too long. They're looking for something that will enable them to verify truth, for example about the identity of an individual, in a five-minute test. Another issue with iDetect is the protection of personally identifiable information, or PII, in situations like verifying identity, where maybe you want to do so before you grant someone a new account in an online banking system. You're not going to take an iDetect station to them or have them come to an iDetect station. There needs to be some way for that organization to verify something like your social security number, your identity, or your name without having to share what they're asking with us, without us seeing it, and to do so in an online environment. In summary, a combination of these limitations means people can't take a test anywhere or anytime. And that's something we've been trying to address in our research and development efforts for the last three years.
So without further ado, I'm happy to say that we believe we've solved all of these challenges, and today I'm pleased to announce the release of the world's first accurate mobile app to verify truth. The product is called Verify, and it is, as I said, a mobile app for iOS or Android phones. Out of the box it has two testing modes. It has a mode where tests can be self-administered, so you can open the app and take a test yourself without a proctor or examiner present. Or you can flip the app into a more traditional proctored mode, where you use the device much like you'd use an iDetect station to control and give the test, while you sit there watching to ensure that the person is being compliant during the test. In terms of accuracy, we are still tuning the scoring algorithms. In the third session of this conference, Dr. Kircher and Mark Handler are going to walk through a lab study that we just completed for one of the two protocols we're releasing and share the details of that accuracy. You'll see in the details of that study that we are at about 80%, 79.2% when using a four-fold validation process, which shows that the model is in fact generalizable and not biased toward the data we collected. And we're seeing this, in these initial stages, on tests that take less than five minutes to run. We're confident that in the very near term, perhaps in the next release, we'll be able to share even greater accuracy as we continue to collect more data and tune the way we capture it, so that it's higher quality and gives us better measurements, thus increasing the accuracy. Because Verify is a mobile app, tests can be taken by yourself anywhere, anytime, and really in any language, right from your own mobile phone.
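The four-fold validation mentioned above is a standard evaluation scheme, and it can be illustrated generically. The sketch below uses synthetic one-dimensional data and a trivial threshold "classifier"; it is not Converis's actual model, features, or data, just an illustration of why averaging accuracy over held-out folds speaks to generalizability.

```python
# Illustration of k-fold cross-validation (here k=4): split the data into 4
# folds, fit on 3 folds, score on the held-out fold, and average the 4
# accuracies. The data and threshold classifier are synthetic placeholders.
import random

def four_fold_accuracy(samples, k=4, seed=7):
    """samples: list of (feature_value, label) pairs; label is 0 or 1."""
    rng = random.Random(seed)
    data = samples[:]
    rng.shuffle(data)
    folds = [data[i::k] for i in range(k)]  # round-robin split into k folds
    accuracies = []
    for i in range(k):
        test = folds[i]
        train = [s for j, f in enumerate(folds) if j != i for s in f]
        # "Train": set the threshold at the midpoint between class means.
        mean0 = sum(x for x, y in train if y == 0) / max(1, sum(1 for _, y in train if y == 0))
        mean1 = sum(x for x, y in train if y == 1) / max(1, sum(1 for _, y in train if y == 1))
        threshold = (mean0 + mean1) / 2
        correct = sum((x > threshold) == (y == 1) for x, y in test)
        accuracies.append(correct / len(test))
    return sum(accuracies) / k

# Synthetic data: class 1 values cluster higher than class 0.
data = [(random.Random(i).gauss(0, 1), 0) for i in range(40)] + \
       [(random.Random(100 + i).gauss(2, 1), 1) for i in range(40)]
print(round(four_fold_accuracy(data), 3))
```

Because every sample is scored exactly once while held out of training, a high averaged accuracy indicates the model is not simply memorizing the collected data.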
We have iDetect tests running in 50 different languages today, but only languages for which there is a text-to-speech voice available to run in a standalone Windows-based environment. By moving to a mobile phone, where everyone uses text-to-speech and speech-to-text capabilities every day simply to text and communicate, pretty much every language in the world is supported, and the voices are excellent; they sound like real humans. So let's watch a short one-minute video to give you a first look at Verify and how we envision organizations and individuals using it to verify truth, whether it be about a person's background prior to hiring, verifying their identity, or verifying whether they're a credit risk, that is, testing whether they lied about having past loans from non-traditional lenders that they defaulted on, or lied about their income. And for various other claims, whether insurance claims that may be fraudulent or really any claim, such as a parolee who claims they didn't violate their parole and you want to verify that quickly. So let me jump over to the video. Some apps can make your smartphone smarter, but Verify by Converis will make your smartphone the smartest of them all. With Verify, you now have the power in the palm of your hand to accurately verify truth in just five minutes. Verify is the world's first app that can accurately and quickly verify the truth about a person's background, identity, creditworthiness, claims, and much more. It validates the truth by measuring involuntary changes in eye behavior.
Tests can be self-administered anywhere, anytime, and in almost any language, or organizations can proctor tests. Verify is a free app for Apple and Android phones. Use Verify to screen new hires or evaluate current employees, confirm a person's identity or online profile, validate a credit history, or validate documents or claims made about events, activities, accidents, compliance, and more. The way the world verifies truth has just significantly changed forever. When truth matters, use Verify. Okay, so hopefully that gives you a feel for what the product looks like. The apps are actually available right now in the Google Play Store and the Apple App Store, where you can download them free of charge to your device and start playing with them. Let's shift now to the technical aspects behind the product and how it works. In summary, to make use of the app itself, all you need is a test link to run a test. Tests are associated with test links that can be created either in the Converis dashboard or via the Verify API. Test links contain pertinent information about the test itself: the exam ID, the test ID, the test creator ID, and information about the server from which, when the user clicks the link, the app will retrieve the test template, as well as the server from which it will retrieve the test questions. These test questions could be personally identifiable information that you don't want us to see, so you serve them from your own server, and they're simply combined with the test template that comes from our server. These links also contain information about the server to which the data will be uploaded, specifically the measurements we will use to score the test, and things like the language and locale in which you want the test to run. So I can create a test in Spanish, for example, and send the link to you in Spanish.
And even though your phone is configured to a default English setting, it will run the test in Spanish, because I know you're a native Spanish speaker and want to give you the test in Spanish instead. So the idea is that I can text or email you a link, and when you click on it, the app will check whether you already have Verify installed. If you don't, it will direct you to the app store, where you can download it for free, and then it will launch the test automatically and you can take it from there. As I mentioned, apps for both Android and Apple phones are available today. The Android app will run on phones with API level 21, also known as Lollipop, Android version 5, provided that the phone, which would really be any more recent phone, has a screen resolution of at least 750 pixels. Lollipop was first released on Android phones in 2014. The idea is that we recognize there will be situations where people are running older phones and you still want to provide a link where they can take a test, so we need to run on kind of the lowest common denominator. The Apple app requires iOS 12. iOS 12 was first released by Apple in 2018, but the iPhone 5S and above is technically capable of running iOS 12. There may be some other limitations on certain older iPhone versions where that may not be the case, but for the most part these apps will run on older phones, and that was something we really focused on to make the product broadly usable. As I mentioned, there are two Verify testing modes: tests can be self-administered or proctored. There is an audio presentation of the questions, so the test taker hears the questions read to them, and the test taker responds verbally; you're not clicking with your fingers, and you'll see why in a moment. You hear the questions presented in an auditory way, and you verbally respond to them.
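To make the earlier description of test links concrete, here is a hypothetical sketch of the information such a link could carry: exam, test, and creator IDs, the template and question servers, the upload server, and the language and locale. The field names, the link format, the encoding, and every URL below are invented for illustration; the real Verify link encoding is not documented here.

```python
# Hypothetical sketch of a Verify-style test link: the test parameters are
# packed into a URL-safe token on a deep link. All names/URLs are placeholders.
import json, base64

def build_test_link(payload):
    # Encode the test parameters as a URL-safe token appended to a deep link.
    token = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    return f"https://example.com/verify/t/{token}"  # placeholder domain

payload = {
    "exam_id": "EX-1001",          # hypothetical IDs
    "test_id": "T-42",
    "creator_id": "U-7",
    "template_server": "https://templates.example.com",      # test template source
    "question_server": "https://questions.customer.example", # customer-hosted PII questions
    "upload_server": "https://scores.example.com",           # measurement upload target
    "language": "es",              # overrides the phone's default language
    "locale": "es-MX",
}
link = build_test_link(payload)
print(link.startswith("https://example.com/verify/t/"))
```

Keeping the question server separate from the template server in the payload is what lets a customer serve PII questions from their own infrastructure while the rest of the test comes from the vendor.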
And the microphone captures and records those answers. We currently offer both single-issue and multi-issue test protocols. If you're familiar with our iDetect product, we had our RCT protocol, which was single-issue; then we introduced the Directed Lie protocol, which is single-issue; and then we introduced our MCT protocol. In this case, we have a single-issue protocol and a multi-issue protocol that allows you to test on up to four relevant topics, where one of those topics is used as a comparison topic, just like on our standard audio MCT, regular MCT, and hybrid MCT protocols. The single-issue protocol, which we refer to as V3R, V for Verify, is in a final beta stage. As I mentioned, we'll share the results from the lab study we just completed, and we anticipate that before the end of March we will announce that it has officially moved to a production-ready stage. So between now and the end of March you can use it; we think it's solid, but we want to collect some additional data in the field and monitor how it goes before we flag it as production-ready. The multi-issue protocol is in what we call an early beta stage. It works end to end, you can score a test and so forth, but we still have some additional data to collect in that lab study before we finalize the algorithm and the model. So the official word is that it will be production-ready sometime in April; if we finish collecting the data between now and then and have scientifically validated it, we'll make it production-ready as soon as we meet that internal threshold and requirement. In summary, both protocols are fully functional and ready for beta testing immediately. We'll show some sample tests in the live demos in a moment. Finally, as I mentioned, the actual testing time is five minutes or less. That does not count the instructions at the beginning.
If you have a longer set of instructions describing the topics you're going to test the individual on, that could make the test a little longer. It also does not count the tutorial, which we'll show you in a moment and which is about a minute and 10 seconds long. The tutorial would be required for someone who's self-administering, taking the test in what we call solo mode, where you just send a link to someone and say, hey, I want you to answer some questions; click on the link and follow the instructions. Okay, let's go look at a demo. I'm going to show you a recording of a self-administered test, a scenario with no proctor involved, where someone has sent a link to the person who is going to take this test. The video starts after the individual has already clicked on the link and downloaded the app, and the test has started. Early in the test, it goes into a tutorial with animations that describes the ideal circumstances under which you should take the test. Obviously you wouldn't want to take it in a noisy room: since it relies on verbal responses, noise could affect the test. In the video, we'll transition from seeing the test taker hold the phone and start the tutorial to the tutorial itself, so you can see it larger on your screen and hear it better, and then it will transition back. But it is an end-to-end test that we'll take a look at right now, and then we'll show a live version in the demos at the end. Okay, here we go. This test monitors your eyes to determine truthfulness. Follow these instructions for best results. Take the test indoors and avoid direct light sources such as windows, TVs, and computer screens, which can affect test results. Take the test in a quiet place without interruption. Sit comfortably and rest your elbow on a table.
You need to hold your phone steady for about five minutes. Listen to instructions and answer verbally. Do not use earbuds or headphones. When instructed, remove eyeglasses; contacts are okay. Reading is not required. When instructed, flip the phone upside down with the back camera at the bottom, facing you. Position the middle of the phone level with your eyes. Hold the phone about a phone length away. You will be alerted if the phone is out of position. Don't cover the camera with your hand or fingers. The phone's light will be enabled during the test. Do not look directly at it; look halfway up the phone, right above the camera. This is a truth verification test. It will ask about three crimes. It will ask if you stole a cell phone from an office. It will ask if you stole $20 from a secretary's backpack. And it will ask if you stole a ring from a desk. You will now take the practice test. To begin, say start. Start. Remember to answer verbally. Regarding cell phones, I can see one right now. True. In regard to large screen TVs, I am looking at one right now. False. You have finished the practice test. To begin the actual test, say start. Start. This test is based on the idea that a deceptive person will have a difficult time answering the questions quickly and accurately. Some deceptive people answer too slowly; other deceptive people make lots of mistakes. So if you take too long to answer or make too many mistakes, you will fail the test. If you do not hear a tone after you answer, or if you hear "time limit expired," speak up or you will fail the test. You will now take the truth verification test. To begin, say start. Start. Regarding the theft of the $20, I am innocent. True. On the topic of stealing the diamond ring, I did not do it. True. As to the theft of the cell phone, I am responsible. False. With regard to stealing the $20, I am guilty. False. On the topic of stealing the cell phone, I did not do it. True.
With regard to stealing the diamond ring, I am guilty. False. Concerning the theft of the cell phone, I am guilty. False. Test completed. You can look at the screen now. However, do not close the app until the test is uploaded. Okay, so hopefully that gives you a feel for how the app works. Let's take a look at the actual Verify test report that was generated from the demo test we just watched. This is most of the report; I'm trying to get it to fit on one screen. You can see that the report not only gives an indication of whether the person was credible or deceptive, it also indicates the quality of the data we collected. If the test taker had taken the test in less optimal conditions, such as a scenario with bad lighting or a loud environment that interfered with the microphone's ability to capture the answers, then that data quality score would be lower than the 95% you see on this report. Obviously we want to know about the data quality because, if it's really poor, that could affect our ability to measure whether the person was being deceptive, and we take that into consideration in the score as well. Notice that we randomly take pictures of the test taker. This is on purpose: on the phone, we digitally compare whether the individual who started the test is the same individual who finished it. Later in this presentation, I'll show you options for not sharing this information with us, because it would be considered personally identifiable information, and obviously a bank, for example, that's doing identity verification wouldn't want to share it with us. We'll talk about how this is supported in a way where you can separate all of this and still get the same benefit of knowing it's the same person. Notice here in the report that, just like in iDetect, we give a credibility score on a scale of 1 to 100 for each topic.
And we make a definitive decision, where 50 and above indicates the person was truthful and 49 and below indicates the person was deceptive, or at least that we detected signals that they were being deceptive. We also provide a summary of the unexpected answers. An unexpected answer is someone answering the opposite of what we would expect. We would expect them to say no, I didn't do it; but if they're continually saying yes, we want to know that, because they may actually be confessing through the way they're answering. We also capture the number of unanswered questions. If they just sit there, don't answer, and the test proceeds to the next question, we need to know that, because it could be malingering or a countermeasure, thinking that if I just don't answer, I won't fail. And we may actually fail you if you let too many of those expire without answering. Finally, you'll notice a category at the very bottom left-hand corner that says unrecognized answers. We have implemented in the product a way of allowing for variations of the same answer. So in this demo, if Daniel had actually said yes instead of true, we would have counted that as a recognized answer for true; if he had said no in place of false, that would have worked too. And we accommodate situations where the microphone just doesn't pick up the answer, but we know they're answering. We noticed early on, as we ran tests, that when you say false without accentuating the S at the end, the microphone may hear something other than false. We have the ability to expand the list of acceptable answers for each category based on the quality of the microphone and whether it can pick that up, and this applies to any language as well, so it can get smarter in terms of what it's hearing. Finally, you'll see that we include the pre-test instructions in the report.
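The report fields just described, a per-topic credibility score cut at 50, plus tallies of unexpected, unanswered, and unrecognized answers, can be sketched as simple logic. The structure and names below are illustrative only, not the actual Verify report schema, and the variant lists are invented examples of the expandable answer lists mentioned above.

```python
# Sketch of interpreting Verify-style report fields: score scale 1-100 with a
# cut point at 50, and tallies of answer categories. Illustrative only.
TRUE_VARIANTS = {"true", "yes"}    # accepted variants; expandable per language/mic
FALSE_VARIANTS = {"false", "no"}

def classify_topic(score):
    # 50 and above indicates truthful; 49 and below indicates deception signals.
    return "credible" if score >= 50 else "deceptive"

def tally_answers(answers):
    """answers: raw transcribed responses; None means no answer before timeout."""
    counts = {"unanswered": 0, "unrecognized": 0, "recognized": 0}
    for a in answers:
        if a is None:
            counts["unanswered"] += 1    # possible malingering/countermeasure
        elif a.lower() in TRUE_VARIANTS | FALSE_VARIANTS:
            counts["recognized"] += 1
        else:
            counts["unrecognized"] += 1  # mic heard something outside the lists
    return counts

print(classify_topic(72), classify_topic(41))
print(tally_answers(["True", "yes", None, "fal"]))
```

Expanding `TRUE_VARIANTS`/`FALSE_VARIANTS` is the same idea as the talk's expandable answer lists: a mumbled "false" can be rescued by adding the strings the recognizer actually produces.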
So you know exactly what the test was about, and we provide at least one example of a question for each of the relevant topics being tested. Mark and John will talk about the formatting of these questions, because they do seem a bit odd, but there's a reason for that, and I'll let them cover it when we share the results of the study we just completed. Okay, so what are we capturing? How are we detecting ocular changes in order to generate a score, much like we do with iDetect and an eye tracker? During the test, we capture and record essentially the same or similar data to those we capture and process in a standard iDetect test. Specifically, these features, or predictors, are used in the algorithm to detect deception and generate the credibility score: pupil diameter, x and y gaze, blink rate, and some other proprietary ocular measures. The first four that you see here are the same predictors or features we use in our scoring model in iDetect; obviously we have to detect them in a different way, because we're not using an eye tracker, and I'll talk about that in a moment. New to Verify, something that does not exist in iDetect unless you're running iDetect Plus with the polygraph channels through our Physiotracker, is a vascular, or blood flow, measurement. This is something we capture through the camera on the phone, and you'll see in the study results that Mark and John share that it is actually one of the diagnostic features. It's not as diagnostic as pupil, but it is an additional feature or predictor that helps us know the person was being deceptive. Okay, so what happens during the test? As you saw in the demo, we have the person flip their phone to take full advantage of the better camera on the back; the camera on the front of the phone is designed more for selfies.
And if you get the phone close enough for us to detect sub-tenth-of-a-millimeter changes in pupil diameter, the front camera image is blurry; you can't control the focus on that lens. You also don't have as easy a way of turning on a light, like the flashlight on the back, to illuminate the face and, in some cases, prevent the reflections that show up in the eyes, where the eye functions more like a mirror and could interfere with our ability to find and measure the pupil. So during a Verify test, the Verify app video-records eye behavior and movements at 10 to 30 frames per second, depending on the phone. Even the most sophisticated, fastest phones don't allow recording at more than 30 frames per second, so depending on the quality of the phone, we capture as many frames per second as we can. We then take that video and extract each frame, and from each frame we extract both a left and a right eye image. From there, these images are processed to identify and measure pupil diameter. The images you see here on the slide show a four-step process in which we process and filter, in this case, a dark brown eye. We specifically chose a dark brown eye because it's really difficult to distinguish between the pupil and the iris, and you can see that as we apply these various filters, we ultimately get to a state where there's a clear delineation between the two, and we can basically count the number of pixels to derive the pupil diameter. If you look closely at these slides, these particular images taken from the video have reflections to the left and right of the pupil in the iris itself. So one of the filtering steps we go through is to identify reflections that may make it difficult to identify and measure the pupil, and we adjust the coloring of those reflections to the color of the iris, which still allows us to identify the pupil and measure it.
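The filter-then-count-pixels idea just described can be shown in miniature. This sketch thresholds a grayscale eye image so the dark pupil separates out, counts pupil pixels, and converts the area to an equivalent circle diameter, with a toy version of the reflection repair step. The real pipeline is far more involved (and the thresholds here are arbitrary assumptions); this is illustrative only.

```python
# Toy pupil measurement: repaint near-white glints at iris intensity, count
# pixels darker than a threshold, and report the equivalent circle diameter.
import math

def suppress_reflections(gray, iris_level=120, glint_level=230):
    # Corneal glints appear as near-white pixels; repaint them at iris
    # intensity so they don't distort the dark pupil region.
    return [[iris_level if px > glint_level else px for px in row] for row in gray]

def pupil_diameter_px(gray, threshold=40):
    """gray: 2D list of 0-255 intensities; pupil pixels are the darkest."""
    area = sum(1 for row in gray for px in row if px < threshold)
    return 2 * math.sqrt(area / math.pi)   # diameter of a circle with that area

# Synthetic 8x8 "eye": a dark 4x4 pupil block inside a mid-gray iris, one glint.
eye = [[120] * 8 for _ in range(8)]
for r in range(2, 6):
    for c in range(2, 6):
        eye[r][c] = 10
eye[1][1] = 255  # reflection
print(round(pupil_diameter_px(suppress_reflections(eye)), 2))
```

The area-to-diameter conversion is why a clean pupil/iris delineation matters: a single reflection that splits the dark region would bias the pixel count and hence the diameter.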
Okay, from there we identify the area of the sclera. The sclera is the white area of our eyes outside the iris, and it's what enables us to derive the x and y gaze, blinks, and other eye-based measures. If you think about it, if I'm looking to the right, more sclera is visible to the left of my iris; and if I'm looking to the left, more sclera is visible to the right. So despite the fact that your mobile phone may have three cameras on the back, we only use one of them; the phone chooses the lens that's best for the circumstances. Unlike an eye tracker, which has two cameras and can more easily calculate gaze, we're calculating gaze by identifying and measuring the sclera around the iris. Okay, so let's move to use cases. Verify, as you saw in the intro video, helps organizations or individuals verify the truth about a person's background, identity, creditworthiness, claims, and pretty much anything else you want to test for. iDetect is well known for its value in pre-employment screening and ongoing employee evaluations; in fact, probably 95% of the 500,000 iDetect tests that we know have been run are pre-employment or ongoing employment screening. But Verify enables organizations to streamline the hiring process even more by having applicants self-administer tests right after they apply online. So rather than having to invest time and money in scheduling an applicant to come to a physical location and having a proctor give that test, although that would be a more accurate test if you're using iDetect, this allows kind of a pre-pre-screen scenario, where you could send a link to someone immediately after they apply and do an initial screening that they run in a self-administered fashion.
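The sclera-asymmetry cue behind the single-camera gaze estimate can be reduced to a toy formula: more visible sclera to the left of the iris means the eye is looking right, and vice versa. The real system derives calibrated x and y gaze; this sketch only illustrates the asymmetry signal, and the pixel counts are invented inputs.

```python
# Toy horizontal gaze estimate from sclera asymmetry around the iris.
def horizontal_gaze(sclera_left_px, sclera_right_px):
    """Positive -> looking right, negative -> looking left, ~0 -> center."""
    total = sclera_left_px + sclera_right_px
    if total == 0:           # e.g., a blink: no sclera visible at all
        return 0.0
    return (sclera_left_px - sclera_right_px) / total

print(horizontal_gaze(300, 100))   # more sclera on the left: looking right
print(horizontal_gaze(100, 300))   # more sclera on the right: looking left
```

A vertical (y) estimate would use the sclera above versus below the iris in the same way; a near-total loss of sclera pixels across a run of frames is also how blinks can be detected from the same segmentation.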
And for identity verification, Verify could confirm a person's identity before tying it to a biometric, granting them a new account, allowing a transaction to go forward, or giving them credit or access to something. The beauty here is that there's nothing worse than associating someone with an identity that's fake or fraudulent and then tying to it a biometric that they use going forward; the biometric is not really them, and now you're going to perpetuate something invalid. It's especially bad if you were to hash this into a blockchain, where it's immutable and can't be changed going forward. The benefit is that you can verify that identity on the front end, know that it is who they purport to be, tie it to a biometric, and then use that biometric with confidence for allowing access, future transactions, or whatever it may be. For credit use cases, regardless of where the person is, you can validate their credit history, verify their application details, or verify whether the documents they submitted were fraudulent. We have a large client in Peru that has an iDetect station in 50 different car dealerships. It used to be that it took two weeks to get a loan if you went in to buy a car from them, because they went through a lengthy verification process, including sending someone to your house and everything else, since they don't have a central credit bureau and people get loans from other non-traditional sources. They switched to iDetect, saved millions of dollars, and, more importantly, shortened that time to about four hours. So you could go in and look at a car, fill out the application in the dealership, then walk down the hall, sit down in front of an iDetect station, and be tested on whether you had intentionally defaulted on any loans and whether you lied about your income in your application.
And the third issue in this MCT test that they run is whether or not you submitted any doctored or fraudulent documents as part of your application process. Imagine micro-loans now, where you could go to a retail store, apply for a loan right in the store to get a flat-screen TV or a couch or whatever it may be, and they could do a loan verification there. Finally, with Verify, you can verify statements about what we call claims, but those claims could be an insurance claim or an online profile. Imagine in a marketplace being able to see that someone has been verified as to whether they are insured, licensed, or bonded, or whether or not they're a registered sex offender. If they're gonna come into your home to perform work, you may wanna know that. It could be verifying parole compliance: they claim that they didn't violate their parole, and you could quickly and easily confirm that with Verify. So let's look quickly at two examples here. The first is the hiring process that I mentioned, where you could really streamline hiring by pre-screening someone right after they apply for a job. Russ and I are gonna demonstrate the actual implementation of this on a real site here at the end. But the idea is the applicant visits the employer's website and fills out the application. The information goes to the employer's system, where on the back end they have a set of processes that, in some cases, programmatically check whether your address is correct, whether you have a police record, whether you meet the minimum requirements for education, whatever it may be. And if you do, they can programmatically generate a Verify test and a Verify test link, which is automatically texted or emailed to you, depending on the implementation. And when you click on that link, it prompts you to download the app, because you wouldn't have it at this point; you download it for free from the app store.
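The programmatic step just described, generating a test link for an applicant who passes the automated checks and delivering it, might look something like this sketch. The endpoint domain, payload fields, and link format are all my own assumptions, not the documented Verify API:

```python
# Hypothetical sketch of the employer's back-end step: after an applicant
# passes the automated minimum-requirements checks, build a Verify test
# request and the link they will receive by text or email. All names here
# (fields, template id, domain) are illustrative assumptions.
import json
import uuid

def build_test_request(applicant_email: str, test_template: str) -> dict:
    """Assemble an (assumed) request body for creating a self-administered test."""
    return {
        "test_id": str(uuid.uuid4()),        # unique per applicant
        "template": test_template,           # e.g. a pre-employment screen
        "delivery": {"method": "email", "to": applicant_email},
    }

def build_test_link(test_id: str) -> str:
    """Build the link the applicant taps; it opens (or prompts install of) the app."""
    return f"https://verify.example.com/t/{test_id}"  # placeholder domain

req = build_test_request("applicant@example.com", "pre_employment_screen")
link = build_test_link(req["test_id"])
print(json.dumps(req, indent=2))
print(link)
```

In a real deployment this request would be sent to the Verify service over HTTPS and the resulting link handed to the employer's SMS or email system.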
You take the test, and the data is processed and uploaded, so processed on the phone, then uploaded to the Converis scoring server, where it's scored and a report is generated, from which the employer can now make a decision. They could potentially just extend an offer, or they could use this to decide whether to fast-track the applicant because they have all the right capabilities and they passed. That way you spend your limited time and effort on the more expensive steps, bringing applicants in, doing face-to-face interviews, and so forth, only with the most promising candidates. Here's just a quick example from Mexico: three different organizations that all have a turnover of 15%. You can see we're showing the estimated losses and costs due to turnover, the estimated cost in Mexico of implementing Verify, the number of people you'd have to test based on that turnover, and an estimate of how many applicants you'd test to fill each job. And you can see that there's a huge ROI, 850% to 1,700%, on the money that would be saved by hiring better people, by reducing some of the losses that occur from not properly vetting people, and by not spending time and money on applicants in the more expensive steps of the process. Okay, second scenario, and we will also demonstrate this one using an actual implementation that's live. The scenario is a FinTech that implements Verify for identity verification in a way where they can do so without sharing any personally identifiable information, or PII, with Converis. This implementation uses the Verify API. So rather than going to our dashboard, creating a link manually, and sending the link to someone, this uses our application programming interfaces to generate the test link and send it, and in this case to manage what we call data sites. Data sites are variable strings of information in your test that you don't want us to see and that need to be dynamic.
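One way to picture data sites is as named placeholders in the question text that the customer fills in with values Converis never sees. The `{{...}}` placeholder syntax and the function below are my own illustration of the concept, not Converis's actual format:

```python
# Illustrative sketch of the "data sites" idea: placeholders in the test
# text that are filled in later with customer-held values, so the dynamic
# PII never reaches Converis. The {{name}} syntax is an assumption.
import re

def fill_data_sites(question: str, data_sites: dict) -> str:
    """Replace each {{name}} placeholder with the customer-held value."""
    def sub(match: re.Match) -> str:
        return data_sites[match.group(1)]
    return re.sub(r"\{\{(\w+)\}\}", sub, question)

q = "Did you intentionally default on the loan from {{lender_name}}?"
print(fill_data_sites(q, {"lender_name": "Banco Ejemplo"}))
# -> Did you intentionally default on the loan from Banco Ejemplo?
```

The key property is that the template with placeholders can live on one server and the values on another, and they only meet on the applicant's device.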
So we'll show you a demo of that here in a moment, but the scenario is this: someone goes to fill out an application. In this case, they're applying for a new account, something like a new bank account. They go to the FinTech's website and put their information into the FinTech's form. That information would include some PII, like their name, their address, perhaps their birth date, their social security number, information that they definitely don't want to share with us or anybody else. That PII is captured by the FinTech and stored internally on their server. But at the same time, it's replicated, or shared, with an internal question server. That's what we call a server that a customer manages, where they put the information that they want presented in a test. In most cases it's questions, but it could also be part of the preamble, the instructions at the beginning of the test itself. Anyway, the information supplied to the FinTech is also used by the FinTech to generate the Verify test link. Separate from the FinTech's architecture that I showed you before is the Verify server. This server has a couple of functions. One is that it serves up the test template: what's expected when someone answers the first question, are we expecting them to say true or false? We don't know what's being asked, but we need to at least know whether we're expecting a true or a false. This server also scores the data after the test is taken. So when the applicant clicks on the link that's been provided to them after they fill out this form, the Verify app is launched if they already have it on their device. If not, they're prompted to download it, and then it launches. At the time it launches, it requests the questions, including the PII, from the FinTech's question server, and simultaneously requests the test template from the Converis server. Both the questions and the test template information are encrypted.
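Once both payloads reach the phone and are decrypted, the on-device stitching step could be sketched roughly like this. The field names and positional pairing are assumptions made for illustration:

```python
# Minimal sketch of the on-device merge described above: the test template
# (from the Verify server, which knows expected answer types but not the
# question text) is stitched together with the questions (from the
# customer's question server, which holds the PII). Field names are
# illustrative assumptions; real payloads arrive encrypted in transit.

def merge_test(template: list, questions: list) -> list:
    """Pair each template slot with its question text by position."""
    assert len(template) == len(questions), "template/question count mismatch"
    return [
        {"text": q, "expected_type": slot["expected_type"]}
        for slot, q in zip(template, questions)
    ]

template = [{"expected_type": "true_false"}, {"expected_type": "true_false"}]
questions = [
    "Is your name the one shown on this application?",
    "Is the stated income on your application accurate?",
]
for item in merge_test(template, questions):
    print(item["expected_type"], "-", item["text"])
```

The point of the split is that neither server alone holds a complete, readable test: the personalized test only ever exists on the applicant's device.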
So it's a requirement that when this data is served up from the respective servers, it comes to the user's phone in encrypted form, and then it's merged together, the test template with the questions, stitched back together on the phone to present a unique, personalized test to the applicant, which they take on the phone. So the applicant takes the test as you saw before, and then at the conclusion of the test the data, specifically the measurements and how they answered, is sent to the Converis server for scoring. At the same time, the applicant's photos that you saw in the test report, and any other sensitive information, which could include a customized consent acceptance that's digitally logged the moment they click on the consent in the Verify app, all of that gets sent back to the FinTech's question server. And then at that point, the FinTech uses the Verify API to request the scores and the information on how the person answered the questions from the Converis server, and assembles it with the questions and any other information that was sent up to the phone from the question server. It's all assembled together to generate a report, an interim report, for the FinTech. And anyway, that's how it works. So we will demonstrate this functioning end to end here at the end, using this exact scenario, with data sites and the API. So in summary, Verify is our third and latest technological advancement, like iDetect and iDetect Plus. Verify validates truth by measuring involuntary eye behavior. It's really the first accurate, and that's the key, accurate, mobile app for truth verification that exists today in the world. Verify enables organizations or individuals to verify the truth, as you've seen here, about a person's background, identity, creditworthiness, claims, and much more. The best news in all of this is that it is available today. So it's officially released.
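The final assembly step on the FinTech side, pulling scores from the Converis server via the API and joining them with the locally held question text, could be sketched as follows. The API call is stubbed, and all names and the score scale are my assumptions:

```python
# Hedged sketch of the report-assembly step: the FinTech requests the
# scores and raw answers from the Converis server via the Verify API
# (stubbed below, since the real endpoint and schema aren't shown here),
# then joins them with the question text it kept on its own question
# server. Score scale and field names are illustrative assumptions.

def fetch_scores_stub(test_id: str) -> dict:
    """Stand-in for the Verify API call; returns scores and raw answers."""
    return {"test_id": test_id, "credibility_score": 72,
            "answers": ["true", "true", "false"]}

def assemble_report(test_id: str, questions: list) -> dict:
    """Join Converis-held results with customer-held question text."""
    scored = fetch_scores_stub(test_id)
    return {
        "test_id": scored["test_id"],
        "credibility_score": scored["credibility_score"],
        "items": list(zip(questions, scored["answers"])),
    }

report = assemble_report("abc-123", [
    "Is this application in your own name?",
    "Is your stated income accurate?",
    "Have you intentionally defaulted on a loan?",
])
print(report["credibility_score"], report["items"][2])
```

Because the join happens on the FinTech's side, Converis only ever sees anonymized answer data and measurements, never the PII behind the questions.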
The protocols that I talked about are in beta form, but they are functioning, and the application and the technology behind it are production-ready and available. So in conclusion, after 10 years and with the addition of Verify to the Converis product suite, we believe we're addressing all the challenges that I teed up at the beginning, the ones that existed when we first founded Converis and went to work on iDetect. Whereas 10 years ago the available solutions were unreliable, costly, or laborious (not every solution had problems in all three areas), the Converis product suite now offers a complete set of accurate, cost-effective, and simple solutions. And anyway, if you're not already a customer, we would welcome the opportunity to support you in deploying any of these Converis solutions that are now available. So with that, that's it for me.