This is Senate Judiciary on Thursday, March 25th. Today, we're honored to have Gina Vincent, who's an expert on risk assessment. And we're taking up S-20, H-20 and S-127, which have to do with risk assessment. And our main topic this morning is bias in risk assessments. And we thank Gina for joining us this morning. And I believe David D'Amora from the Council of State Governments Justice Center will be joining us shortly. And I want to say we were a little late at the start this morning due to technical difficulties. Believe it or not, we still have internet issues in Vermont. So thank you, Gina. And welcome. I should introduce, we should introduce ourselves. I'm Dick Sears, State Senator from Bennington County in the southern part of Vermont. Senator White. I'm Jeanette White. I'm from Windham County, which is right next door to Bennington County, where Senator Sears is from. Senator Nitka. I'm Alice Nitka from Windsor County and a few other towns outside of the county, which is in the center of the state, so to speak. Senator Baruth. Good morning. I'm Phil Baruth. I'm from Chittenden County. And I'm Joe Benning, Gina. Good morning from the wild, wonderful and actual real Vermont called the Northeast Kingdom. Technically the Caledonia District. Good morning. It's nice to meet you all. I just heard of the Northeast Kingdom for the first time yesterday, actually. I'm going to have to come check it out. Yes, it's really beautiful up there. Although you don't want to go until June. It's still snowy. We're still skiing; we still have skiing going on here, so yeah. Anyway. We started discussing risk assessment, Gina, in two bills, and you don't need to talk about the specific bills. But then we heard a lot of data and a lot of concern about racial bias in particular, and other biases that are built into some of the risk assessments, particularly those that are pretrial. H-20 makes pretrial risk assessments optional.
And S-127 tries to follow some recommendations from the working group on justice reinvestment, which would be pre-sentence risk assessment. Okay. Okay, great. Thank you. It's nice to meet you all, and it is an honor to be invited. I'm always happy to speak with any state or state agency who's interested in hearing more about the research to help you make an informed decision. And I just think it's commendable and wonderful that you're all doing that. If it's all right with you all, I'm going to share my slides. Okay. All right, thank you. And I am in Massachusetts at the University of Massachusetts Medical School. And we don't have any snow anymore. But I am originally from Alaska, where I believe the landscape is much more similar to Vermont. The politics are nothing like Vermont, but the landscape is very similar. What part of Alaska? Well, I'm generally embarrassed to admit that, but Wasilla. And you can see Russia from there. Yes, I knew it was coming. Oh God. I have to say that I spent a good five hours in Wasilla having the oil changed on my motorcycle. It's a nice little town. My son spent some time in Alaska in Wasilla, and he said that they say "All I saw," which is Wasilla backwards. It is. Wow. I've hardly ever met anyone outside of Wasilla that knows that. So I'm very embarrassed. Can I ask you all, are you seeing my presentation view, or are you seeing just a slide? Just a slide right now. Okay, perfect. There's the smaller slide, which is difficult to read, with benefits of proper use of risk. Okay, then you are seeing the wrong version. Okay, I'm going to switch. And how is that? That's good. That's better. Yeah. Okay. All right, great. So the question about whether risk assessments are racially biased: the short answer is it depends, which I know is an obnoxious answer because it doesn't answer the question. The short answer is it depends.
And the other point about racial bias and risk assessments is that bias within the risk assessment itself is not the only issue to take a look at or address here. There's also the way the risk assessment is used, and that is actually the bigger concern in many ways. So I'm going to walk you through three points to give you the short version of all the information that we have on this topic to date and then open it up for questions. But of course, feel free to interrupt me as I go, if you prefer. So, as you all are probably aware, because I understand that you do use a number of risk assessments in Vermont, the reason people created these and the reason that they have really caught on is that there is very strong evidence that risk assessments are more accurate and more reliable than an unstructured assessment, where you're going based on the gut, even for individuals who are very experienced. They're also objective and transparent, so it's fairly clear how someone is making their decision. And so these were many of the reasons that risk assessments were put into place. I can also say that the original intention of these was to reduce incarceration. Most individuals who come into contact with the system are not high risk. And if you have an objective tool to help you discern who is high risk and who's not, statistically speaking, you are going to incarcerate fewer people. Okay, that was before these tools started. I understand that you all have already weeded out a lot of the low-risk people from the system, which is amazing. So there's very good reason to use these. The other benefit that has been shown by research, on balance, in meta-analytic studies, which is our highest level of evidence in science: on average, across all well-done studies, there is evidence that agencies that are implementing these assessments are seeing some benefits in terms of decreased incarceration.
Okay, so on balance, incarceration rates are going down. They're going down equally for whites and people of color, so it's not getting rid of disparity that's already in the system. Okay, it's not exacerbating it, but it's not eliminating it either. Second, there's some good data coming out showing that these tools, if implemented properly, are leading to increased diversion. And within one particular county that was published about, that increased diversion happened at higher rates for African American youth, meaning that in this particular county it is actually reducing disparity when used in diversion decisions. So this is good evidence. There's also evidence that if you put a pretrial risk tool in place, which I understand is one of the questions here, it can have equivalent accuracy for people of color and white defendants, but not be followed by the judiciary or whoever the decision maker is in that case. In this particular study, there was still significant disparity in lengths of stay in jail and in terms of who got jail, even though the risk assessment was not biased, and it was operating in the same way and getting the same average scores for black and white defendants. What that means is that you can put a good tool in place, and it's not going to get rid of your disparity if nobody's using it. All right, so these are some of the issues to grapple with. Many individuals have this notion, or there is this notion out there now, that risk assessments are biased. That came largely from the piece by ProPublica where they analyzed data from Florida, and they determined that there was software used around the country to predict future criminals and that it is biased against blacks, a very attention-grabbing sort of title to this piece.
This piece has since, I mean, I should say the value of this was that it brought the issue to the forefront. We're now all talking about it; I'm here talking with you about it. We're really taking a close look at scrutinizing our risk assessment instruments, and that's good, because we certainly don't want instruments to exacerbate the racial disparity that already exists in many justice settings. The downfall of this is that the piece was not peer reviewed, and the way that they analyzed their data did not adhere to psychometric standards. And they actually conducted the study with the wrong population, meaning the population they conducted it with were individuals that the tool wasn't even designed to be used with. There has since been peer-reviewed research that has come out discrediting the study and their methods, reanalyzing the data, and finding that the COMPAS is not in fact biased against black defendants. The problem here is that peer-reviewed research doesn't have the same voice as research that comes out in the media and has not been peer reviewed; that's one of the issues. And that's why I'm pleased to be talking in front of you. On the other hand, I do want to credit ProPublica for the fact that we're all talking about this now. So, three quick points that I think are important to know when you're making these decisions. First of all, risk assessments are not all created equal; they're not all for the same purpose, and as Senator Sears already mentioned, you've got more than one type of risk assessment instrument that you all are using or considering. Second, there's a lot of confusion about what it means for an instrument to actually be racially biased. And lastly is this issue of the difference between a racially biased instrument and disparate impact, which is what we're really most concerned about. And disparate impact comes down to how you use the instrument in your system, if you use it at all.
And so there's the issue of biased instruments, which we don't want; we don't want to be using biased instruments at all. And then there's the issue that the instrument you use is not going to eliminate any differences or disparities that you may have in your system already if it's not used properly. Okay. All right, so risk assessments are not all created equal. You all have a pretrial instrument, it sounds like, or you're making decisions about whether it's a choice whether people can use it or not, I believe is what you just said. Yeah. Yeah, what H-20 basically does is make it optional to use the risk assessment, which in and of itself could lead to some bias if we're afraid to use it. I remember hearing about Hawaii doing the risk assessments and using the tool completely inappropriately and getting bad results, and then when they were taught how to use the tool, they got good results. So somebody decided, in the House, that they'd pass a bill that made risk assessment voluntary pretrial. So interesting. Yes, if it's misused, that's when the disparity occurs. And if it's not misused, as I mentioned, you won't necessarily get rid of disparity if there is already disparity in your system, but you will see good outcomes in terms of incarceration. So, as you all probably know, these pretrial tools, and this is an example of the one that's the most commonly used in our country, the PSA, these tools are designed for short-term decisions. The majority of the items on these tools are based on one's official records of criminal history. The reason for that is that in this country we worry a lot about people's rights not to incriminate themselves at the pretrial stage. We worry a lot about protection of information. And in most of our states we tend not to allow anyone to interview a defendant pretrial.
There's a lot of good reasons that we do that. The downside is that when we're trying to estimate someone's risk, we're sort of stuck with what we have in their file, and that tends to be just their criminal history. So these tools tend to be largely based on criminal history, and I'll get to the issues with that in a minute. That's very different from whatever instruments you're using to help you with dispositional planning or pre-sentence planning, or with decisions about conditions or case planning or treatment and so on. Those instruments are much more comprehensive. They involve interviewing the individual. We get a lot more information about risk factors that we call dynamic. These are things that translate into needs, so that if we address them we're going to be more likely to reduce the person's risk. So these would be things like substance abuse, problems in the home, problems with employment, criminal thinking. Those kinds of items you don't often see in pretrial tools because, again, in most jurisdictions we're not allowed to interview people and get that information. Okay. So different tools are designed for different purposes, and that affects the likelihood that they're going to have racial bias. Right. There are other differences between instruments, too. Some instruments are formulaic, which most pretrial instruments are; that means that the decision about whether someone's low, moderate or high risk is based on a score, based on some kind of formula. I understand the ORAS is an actuarial tool; it does the same thing. Okay, it's formula based. Not all instruments are formula based. All right. There's something known as structured professional judgment, and I won't get into that unless people are particularly interested in it. The methods that are used to construct an instrument are also different. Some of them were constructed entirely statistically.
So the items that end up on the tool are simply things that predict recidivism within your jurisdiction. That means the tool is going to be comprised of risk factors that are strongly related to the way that your police practices are in your system. Other tools are based on research. Like the ORAS: they picked out a lot of things that we know are risk factors for individuals to continue offending, and that's what's in their tool. Okay, so that was research-informed, which is different from entirely statistically constructed. Some instruments have been validated across race and some haven't. The last thing is the composition of items, which I alluded to earlier. Some tools are almost entirely based on one's history. Those are static risk factors; we can't change them. They're things like the age they first offended, the number of convictions they've had in the past, whether they've ever committed a violent offense, whether they've ever failed to appear. We can't change that stuff. It happened. Then there are risk factors in these instruments that help us determine if someone's low, moderate or high risk but that are also things that are changeable. Like I mentioned, those are the need areas: substance abuse, criminal thinking, impulsivity, and so on. Tools vary in the extent to which they contain both types of items. And tools vary in the extent to which they rely on official records of criminal history. These are all of the areas that are likely to lead to a potentially more biased instrument. If it's formula-based, and I'm not saying formula-based instruments are bad, but the big outcry has been against formula-based instruments, it has a higher potential for having some bias; many of them still don't. If it was statistically created only, which a lot of the pretrial tools were, but not all of them. If it hasn't been validated by race, that's a problem. And if it's overly reliant on criminal history.
Okay, the problem with being overly reliant on criminal history is that in most jurisdictions in our country, we know from U.S. data, there's large disparity in who gets referred to court. There's large disparity in who gets charged and who gets convicted. So if you look at official records, you tend to see a lot of racial differences in our country. If you ask people what behaviors they've actually engaged in, and this research has been done, you see much less disparity in what offending behavior or violent behavior they actually report. So if you were to interview a lot of African American boys and a lot of white boys about their behavior, what they've done that's illegal, what they've done that's violent, we find very few racial differences there. Very small. And so anytime that we include criminal history within a tool, and we're using that to predict who gets re-arrested, if our re-arrest rates are also disparate, it's got this sort of inherent bias problem that we can't even detect statistically, and this is one of the things people are concerned with. All right, what's it mean for a tool to be racially biased? So that was the difference between instruments, and I'm going to summarize for you what's good and what will minimize bias in just a moment. But first I want to define what racial bias means. A lot of people think racial bias means that one particular group scores higher on an instrument than another group. So there's this perception that if people of color score higher on your instrument than white people, that means it's biased against people of color. That's not necessarily indicative of bias. Okay, men on average score higher than women on risk assessment instruments. Men almost always score higher than women on a risk assessment instrument. That's because men are more likely to offend and be arrested than women. The tool is doing its job.
It's scoring men as higher risk. If you have a woman who's presenting with the same kind of risk factors that a man is, she's going to score high on that instrument too. It's just that women tend not to have risk factors to as great an extent as men. They just don't offend as much. Okay, that does not mean the tool is biased. That means the tool is doing its job. If I could just ask something there: in the example that you showed of the instrument, it was based entirely on conviction records, charges, etc. These are kinds of data that might themselves have been infused with bias. So in other words, someone's record, whether they're facing current charges, how serious, all of that could carry systemic bias. So then I see what you're saying: the risk tool is not per se a biased instrument, but it is designed to make use of data, and any bias that exists in that input data carries through. Exactly. Exactly that. We deal with that by trying not to use instruments that are overly reliant on criminal history, because when we start looking at other risk factors, we see way fewer black and white differences. And are we coming to those kinds of tools? Yeah, well, we're coming to some suggestions about what you might do, but you've got it. That is exactly one of the issues right there. If you have a tool that's heavily weighted towards criminal history, and you're in a system where people of color are more likely to be charged or convicted, they're going to score higher on your instrument. The instrument's going to be doing its job, because it's going to be predicting who gets re-arrested, because they're the ones that get re-arrested. And then you've got this inherent, circular bias problem. Okay, if you go for tools that have less reliance on criminal history (we don't want to get rid of it entirely, because it's an important factor), you have much less to be concerned about. Okay, thank you.
Okay, yeah. So here's the definition of a biased instrument, based on our standards of assessment. It's a little challenging to understand, so I'm going to also show a graph. If an instrument is biased, that means the risk scores on your instrument are differentially related to recidivism based on group membership, in this case racial group. So if the scores mean something different for one race than they mean for another race, that would mean your tool is biased. Let me show you a hypothetical example. Let's say you've got an instrument, and these are the scores; it's scoring people as low, moderate or high risk, and we've got a number of individuals who got involved in the system: white, black and Latinx. We looked at their scores on our risk assessment and then we tracked their recidivism over the course of a year, their eventual recidivism. This graph shows what an unbiased tool would look like. It's showing that for each group, regardless of what group you're in, higher scores are related to a higher likelihood of reoffending. That's how a risk assessment is supposed to work. This is what you want to see. The groups may not reoffend at the same rate. They may not have recidivism at the same rate. And I'm totally making this up, but it may be that black individuals do not get arrested as much or do not get convicted as much as Latinx or white individuals. The rates of offending are going to be different for each of these groups at different levels of risk. But the tool is not biased. It's doing what it's supposed to do. And the differences in rates have a lot more to do with the community and arrests and how many of these individuals live in urban areas versus rural areas and a lot of other factors that don't have to do with the assessment instrument.
This assessment instrument is doing what it's supposed to. Okay. This next assessment instrument would be bad. This is what we don't want. This is when risk level interacts with race in its ability to predict recidivism. This sort of instrument is showing that for black individuals, when they're labeled as high risk, they're no more likely to reoffend than when they're labeled as moderate risk. The outcome of this would be that we may have some kind of harsher response or more intervention with black people who are labeled high risk, and inappropriately, because they actually weren't any more likely to reoffend. This would be considered a biased instrument. Does that make sense to you? So when we're talking about bias, it's about whether the tool operates in the same way regardless of group membership. If this was men and women, same thing: the majority of risk assessments that have been tested work for both men and women. We've found that over and over again. But women's rates of reoffending are going to be really low and men's are going to be high, right? So there are differences in base rates, but the pattern is what matters. On balance, among the instruments that have been tested appropriately in the way that I just described, there have been 12 studies of different risk assessment instruments in the last 20 years that were tested in this way. The majority of these studies showed that there was no racial bias on these instruments, and two of those were pretrial risk instruments. The majority of them were those instruments that I talked about that are more comprehensive, more like your ORAS, so they have dynamic risk factors and so on. Okay, so instruments have been found to be not biased, whether they're pretrial or longer-term or what have you. There have been a few studies that have found some instruments have bias. These are different instruments.
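[Editor's note: the distinction Gina draws between the two graphs can be sketched numerically. This is a minimal illustration with entirely hypothetical rates, not data from any of the studies she cites: a tool is unbiased when higher scores mean higher recidivism within every group, even if the groups' absolute rates differ, and biased when that pattern breaks down for one group.]

```python
# Hypothetical one-year recidivism rates at each risk level, per group.
# Illustrative numbers only; not from any real instrument or study.

unbiased = {
    # Rates rise with risk level for every group, even though
    # the absolute rates differ by group (different base rates).
    "white":  {"low": 0.10, "moderate": 0.25, "high": 0.50},
    "black":  {"low": 0.08, "moderate": 0.22, "high": 0.45},
    "latinx": {"low": 0.12, "moderate": 0.28, "high": 0.55},
}

biased = {
    # For one group, "high risk" reoffends no more than "moderate":
    # the label invites a harsher response without any added risk.
    "white": {"low": 0.10, "moderate": 0.25, "high": 0.50},
    "black": {"low": 0.10, "moderate": 0.30, "high": 0.30},
}

def score_predicts_for_all_groups(rates):
    """True if recidivism strictly increases low -> moderate -> high
    within every group (the pattern an unbiased tool shows)."""
    return all(
        g["low"] < g["moderate"] < g["high"]
        for g in rates.values()
    )

print(score_predicts_for_all_groups(unbiased))  # True
print(score_predicts_for_all_groups(biased))    # False
```

The point of the sketch: bias is about the pattern within each group, not about whether one group's average rate or score is higher than another's.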
And interestingly enough, the bias for two of these tools was in favor of the individuals of color. What that means is that individuals of color who actually reoffended scored lower on these tools. It favored them. So it was biased, but in the opposite direction of what people tend to be most concerned about when we're talking about racial bias. So this is why I say, when you ask whether our instruments are racially biased, that it depends. Some have been found to be biased, some have been found to be biased because of their overall reliance on criminal history items, and more have been found not to be. So it comes down to: has your instrument been studied and examined for racial bias, either by you or by the initial developers of the tool? It doesn't necessarily need to be by you. That would be the first question. Where bias has been found, it has more to do with the particular tool assessed and with the particular race or culture. I do have to admit we know a lot more about these instruments with respect to white versus black individuals than we do Latinx or Asian or other groups that may get into our system. Okay, we don't have as much research on other groups, and I don't know the demographics in Vermont as well. And this is a little bit complicated, but if you calibrate your instrument really well to the recidivism rates in your state, and your recidivism rates have racial disparity,
Then it is mathematically impossible for you to have a highly predictive, highly accurate instrument that doesn't misclassify more individuals of color as high risk. Mathematically speaking, if more individuals of color get re-arrested in your state, and you have a very well calibrated instrument, you're also going to get a higher false positive rate for that group. That's just math. And that's one of the reasons this is not considered to be bias in an instrument: it has a lot more to do with what is going on in the jurisdiction than it has to do with the instrument itself. So what do you do about this? Well, there's concern. First of all, you check your instruments, which you may have done, to see if you're getting mean differences by racial group. The concern is that those mean differences will lead to one group being treated more harshly than the other. Currently, there is actually no evidence, based on the studies available today, that judges and decision makers, even probation officers, are treating people more harshly, and therefore no evidence that any of these instruments, even the biased ones, and I'm not suggesting anyone use a biased instrument, is leading to people being treated more harshly within our system. There's no evidence right now, and that is really one of the big questions: how are they being used? There is evidence that when instruments that were not racially biased, that worked the same for different groups of people, were not followed, the system ended up being biased. That's happened. There have been two studies now that have shown this; one was at the pretrial level, when the pretrial tool wasn't followed.
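[Editor's note: the "that's just math" point above, that a well-calibrated tool applied to groups with different re-arrest base rates must produce different false positive rates, can be shown with a small worked example. The probabilities below are made up for illustration; the only assumption is that the tool is identically calibrated for both groups and that the groups differ in the share of people labeled high risk.]

```python
def false_positive_rate(p_high, p_reoffend_high=0.6, p_reoffend_low=0.2):
    """False positive rate among non-recidivists: the share of people
    who did NOT reoffend but were still labeled high risk.
    The tool is identically calibrated for every group (the same
    p_reoffend_high and p_reoffend_low apply regardless of group);
    only p_high, the fraction labeled high risk, differs by group,
    driven by the group's re-arrest base rate."""
    non_reoffenders_high = p_high * (1 - p_reoffend_high)
    non_reoffenders_low = (1 - p_high) * (1 - p_reoffend_low)
    return non_reoffenders_high / (non_reoffenders_high + non_reoffenders_low)

# Group with the higher re-arrest base rate: half labeled high risk.
fpr_a = false_positive_rate(p_high=0.5)
# Group with the lower base rate: a fifth labeled high risk.
fpr_b = false_positive_rate(p_high=0.2)

print(round(fpr_a, 3))  # 0.333
print(round(fpr_b, 3))  # 0.111
```

Both groups get the same meaning from a high-risk label (a 60 percent chance of reoffending), yet a third of the non-reoffenders in the higher-base-rate group are flagged high risk versus about a ninth in the other group. The difference comes from the base rates in the jurisdiction, not from anything the instrument is doing differently by group.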
Black individuals actually got longer stays in jail, and they were less likely to get diverted from jail. There was another study with black youth in Florida, where they used a risk assessment instrument to help make decisions about sentencing. And they found that, despite the guidelines, overrides were happening more commonly for black youth, by whoever was making the disposition decision. So there's a difference between tool bias and what's happening in the system. So, suggestions. First of all, I highly recommend you only use an instrument that has been appropriately validated, in the way that I described, by race and ethnicity, whatever races and ethnicities are most common within your state. If that hasn't been done with your instruments yet, then there could be a remediation plan to do that. I think many states are in that situation right now. And what I've shown you is that many of the tools are turning out to have good data. Okay. Another suggestion: use tools that include dynamic risk factors as much as possible, and don't overly rely on criminal history, because, as we just talked about, Senator Baruth, if they are highly weighted towards criminal history and people of color are the ones getting recharged, you're not necessarily going to pick up the bias in the tool; it's sort of inherently there, and statistically there's really no way to identify it. Second, every decision that involves a risk assessment instrument also involves a decision maker. Just putting a tool in place doesn't mean that it's going to dictate decisions, and in fact, this is one thing I want to be really clear about: it would not be good practice, from a scientific standpoint or an evidence-based practice standpoint, to ever have a policy of saying if a person scores this way on an instrument, we should do this.
We have a standard error of measurement in every one of our instruments, so it would not be appropriate to say that if someone scores a 10 they should be incarcerated and if they score a nine they shouldn't be. That's never a good thing. Or that if someone's high risk, this should happen, and if they're moderate risk, this should happen. That's never a good thing either. The tools are there to guide decisions. The decision maker should consider and weigh all of the factors, and we know that higher risk people can be treated effectively in the community if we're putting the right stuff in place. So the idea is to prevent them from hurting anyone, prevent them from reoffending, while giving them the best services to help make them successful. Our goal is to prevent, not to predict. And I think the best way to do this is to get a non-racially-biased instrument, train the decision makers in the cultural relevance of different risk factors so they're weighing them appropriately in their decisions, and also hope that, if the tool has been shown not to be racially biased, they're using it to some extent in their decision making. And then lastly, track outcomes by race to evaluate how this is working. Are individuals of color more likely to be incarcerated? If so, why is that happening? Are individuals of color having longer lengths of stay? If so, why is that happening? It's just always good practice to track. So those are the recommendations, and it is a slightly complicated issue, and I hope that I haven't overcomplicated it. So, any questions? I guess, where does the ORAS stand in that? You know, I got sold on pretrial risk assessment and needs screening early on by listening to Ed Latessa from the University of Cincinnati, who talked about it this way: if I had a heart problem, would I go to the doctor and just say, okay, open me up and put in the mechanical heart, or would I want to have tests before we go to the worst possible scenario?
And that's how I understood the risk assessment. So now I'm wondering, you know, where have we gone wrong, or if we have gone wrong. I want to be clear that I am a proponent of risk assessment, and I'm a proponent of the whole risk-need-responsivity approach, which I'm sure Ed Latessa talked to you all about. We've seen a lot of evidence that these have done good. We haven't seen, as I mentioned, any strong evidence they've done bad. And so I'm a strong proponent of doing that. The good thing about the ORAS is that it's an instrument that has a lot of dynamic factors. It's also an instrument that wasn't created statistically in the way that's most likely to lead to biased decision making and/or inherent bias. It was created responsibly, based on the research and what we know predicts or is associated with offending in adults. So the ORAS is a good instrument. I would ask Latessa and his crew if they have evidence on racial bias, and if they've looked at that with the ORAS. That would be my first question. And if they haven't, ask them when they're going to do that. Or you can do it within your state. Yeah. What about the juveniles? You mentioned juveniles at one point, and it's not clear to me which we use for the juveniles; it's the YASI or something like that. Is it the YASI? I believe that's what they use for juveniles. Is that validated as well? The YASI is similar to the ORAS in the way that it was constructed. And it's also got a lot of protective factors and strengths in it, which is something people really like. The juvenile scene is where I spend most of my time, and I can say the YASI to date has not published data on racial bias in the way that I've described. We haven't seen these analyses for the YASI yet.
I can say that there have been two studies of racial bias on the YASI that didn't look at it in the way that I just described, but did look at differences in predictive accuracy, and they found that the YASI was not biased, that it was operating very well for Black and white youth. That study was done in Virginia. So there is promising data there. I would ask them again to do one of these studies looking at it in the way that I just described. You've got two tools that are very promising, that have a low likelihood of being biased. They've got all the right ingredients. So I would ask the developers for their evidence. Or, I don't know if you all have a research department that you work with, but I know there's a great university and there are some good professors that can do this kind of work, if you wanted to have them take a look at it.

Our biggest problem, I think, is getting data to use to do any research. When the Justice Center came in to look at our data, to have a data-driven response to criminal justice issues, they couldn't find much data. So it was hard; they had to really dig. David D'Amora is with us for this right now. David may want to comment on the difficulty of getting data; we're trying to upgrade our research right now and trying to have that available. David, did you want to comment on these issues, as well as on the difficulty of getting data here?

Thank you, Senator Sears. I would just say a few things. The first is that indeed there are difficulties getting all of the necessary data that we would want to be able to accurately look at all of this, and Vermont's very aware of this and is indeed working on improving that, both at the state legislative level, but also the court system is looking at this, as is the Department of Corrections and the Agency of Human Services, so folks are very aware of this.
It's not easy to fix, and it's not cheap to fix, so it's not going to be an overnight fix, but folks are certainly working within the current structures to improve that, and really looking at what ultimately, frankly, needs to be a replacement of the current structure. There are too many silos: data entry issues and piles of data in different places that get difficult to pull together. But Vermont is certainly looking and working at that.

I just want to make another comment, as I'm sure is no surprise to anybody on this committee: I would echo Gina's views on the importance of appropriately using appropriately validated risk and needs assessment. If you look at the model of what's called risk, need, and responsivity, you can't break them out. You can't look at needs without understanding risk, because needs are the other side of the coin of the risk factors. You can't figure out how to meet needs if you don't understand what the risks are. The third component in that model, and this is where systems fail all the time, is the issue of responsivity, which means understanding the people that you're working with: the differences in terms of culture, ethnicity, et cetera. Bias is nothing personal to Vermont; it just is there. It's probably not there the same in every part of your state, but it is there. And the argument that I constantly make, and I've said to this committee before, is that if you don't have something that you are able to measure with, if you don't have the canary in the coal mine, then you can obfuscate that bias wherever you are. You can come up with one excuse after another as to why it is or isn't in a particular jurisdiction. But when you have hard numbers showing you consistently that in jurisdiction one, people of color are being rated higher than in jurisdiction two, it tells you that you have a problem in that jurisdiction. If you don't have that kind of data, you can fool yourself.
And so, you know, I strongly support, again, the appropriate use of these tools, and they need to be transparent. Everybody needs to be looking at them carefully: the judiciary, the legislature, the defender general and all the defense attorneys, the prosecutors. It needs to be open and clear to everyone what is being looked at, what is being measured, and how it is being used. And those are my comments. Thank you.

Yeah. Would you like to comment? Well, I have a question. I agree with David. I mean, you have two very promising tools, good evidence, that will give you a lot of information and allow you to follow that risk-need-responsivity approach in decision making. Very good stuff. But I don't know about your pretrial tool. You have one that people can choose whether to use, but I don't know if it's something that's been validated. I don't know where it comes from; I'm not familiar with it. So I was just curious what your pretrial tool is.

I don't know the name of it, actually. Do you? We don't administer on the pretrial side, but I do believe they're using the ORAS screening tool. David's here; he might know. Is it the ORAS pretrial tool? I believe so; I'm not positive, though. Either David or Matt Valerio would know. David Scherr of the AG's office? Yes, I believe that is correct. I'm actually trying to get Will on right now to confirm some of the details on this, but yeah, I think that is correct about the type of tool being used. Okay, great. Thank you.

I believe that is Matt Valerio, the defender general. Yes, I do believe it is the ORAS. Okay, great. Nice to meet you all. So we know that you have a tool that was initially validated. It wasn't just made up, which some of the pretrial tools were: made up based on value judgments among people in the jurisdictions. Those were the most dangerous ones.
We know that you all have one that was designed well and has been initially validated. And again, you could ask Latessa and his crew what evidence there is for racial differences or bias on it.

So, I have a sense that many of our decisions are made in the plea bargain arena, and may not actually take in the real risk and needs of the individual. They are, again, based upon two lawyers making an agreement that the defendant will agree with and a victim will agree with. And that seems loaded for bias.

Yes, I think so. I would agree. And I think the plea bargaining issue is a big hindrance to following risk-need-responsivity, for the reasons that you've mentioned. And I think a lot of states are grappling with that right now.

I'll just say that I agree. We just passed a bill on competency to stand trial, issues of mental illness, and giving notice to the state's attorney when someone is given an order of non-hospitalization. And I was listening to what happened in Boulder, Colorado, and the background of the shooter there, and I was reminded of a case right here in Bennington, where a man allegedly murdered a woman in broad daylight on a river walk here in Bennington in January. It was strikingly similar, in that there seemed to be interventions that were evidently ignored, or when those interventions failed, nobody followed up on them. Again, I think that's where your needs screening tool needs to be there and needs to be followed, to make sure the public is safe. We'll just have to see the outcome of both of these cases; I found some really similar information about the guy in Colorado and the guy here in terms of mental health.

I hate to do this, but I have a 10 o'clock and I have to go. We very much appreciate having you with us this morning. I had no idea it was getting close to 10; time really flew during your presentation. The presentation was extremely helpful as we try to make these decisions.
Thank you so much for joining us this morning, Gina, and good luck down there in Massachusetts, right across from me. Excellent. I'll have to come up to Vermont more. I haven't been there in years, but it was very nice to meet you all. If you get to the Kingdom, you may be reminded of Alaska to some extent, except you can't see Russia. We can see Canada, however. Very nice to meet you all, by the way. Nice to meet you.