Good evening. Let's get going, because it's 10 after 6. Sorry, I was a little late. The wind is evil. Let's start with introductions. Why don't we start with our guests? Just name, affiliation, pronouns. Go ahead. Yeah. Executive director of the Vermont Human Rights Commission. Hi, I'm Chloe White. I'm the policy director at the ACLU of Vermont. And Robin Joy, director of research, Crime Research Group. Karen Gennette, deputy director of Crime Research Group. David Scherr, with the Attorney General's Office. Thank you for coming here today. Go ahead, Jeff. Jeff Jones, ex-PSU, on the ACLU board. Rebecca Turner, with the Office of the Defender General. Jessica Brown, public defender in Chittenden County. Etan Nasreddin-Longo, chair of the panel. I'm Major Jonas, with the Vermont State Police, and I'm here as the designee for the Commissioner of Public Safety. Brian Grearson, Chief Superior Judge. Ruben Jennings, Prisoners' Rights. Heather Simons, Department of Corrections; I'm here for the last speaker. All right. Moving right along: approval of the minutes. It seems like we met — well, we did. It was like forever ago. Oh, OK. Oh, I thought that was the case. One of the folks — the chief from the Department of State's Attorneys and Sheriffs — is on the speakerphone. Oh, hi, Pam. I was going to do it subtly while you did the minutes. Oh, OK. Sorry, I'm barreling right ahead here. I hope you all can remember that meeting, because it feels like it was sometime last century. We might as well just go. Anybody got addenda for the minutes? Can anyone remember that meeting? They looked fine to me, but anybody else? Don't all chime in at once. OK. Anyone want to make a motion? Approve the minutes. Great. Anyone want to second the motion to approve the minutes? Second. All in favor? Aye. All opposed? All abstaining? Motion's carried. Minutes are approved as they are. Moving on to announcements.
I actually have one. And it's under the heading of mea culpa. I got called to testify about a bill. What bill was this, David? Do you know what bill I was testifying about? Because I have it here, and it has no title. Yeah, it wasn't officially a bill yet. It was a bill about data and sort of formalizing some of the bias-motivated incident reporting stuff that's been implemented by our office in cooperation with other agencies. OK. Right. It says this bill proposes to: one, expand the authority of the attorney general to investigate bias-motivated incidents and enforce civil penalties; two, create a working group to establish a system of uniform reporting of bias-motivated incidents; three, require minimum training standards for law enforcement officers to include trainings on hate crimes and bias incidents; and four, require the attorney general to report annually to the general assembly on hate crimes and bias incidents. I was asked to testify on this. I was a little unhappy, because I actually wanted to wait until we met. I really wanted feedback from this group, because we're all in this. So the mea culpa is, I was not allowed that. This was all very new to me. It was very funny. They called and said, we'd like to invite you to speak — how's tonight? And I said, actually, that won't work. And they said, well, that's too bad. That was really fun. And then I was like, well, why did you ask? But OK. So I testified, but I felt badly doing it, because I really wanted feedback from everybody. I felt like that would be important. As it happened, they asked Major Jonas as well. And she and I had a disagreement. So I don't know where it is. It feels like a mess to me, but I'm new to state government, so maybe this is not a mess. Maybe this is just business as usual. My thing was, I felt like, when they brought us up, that there was an oversight role that we could fulfill on this working group. And Major Jonas had the opposite opinion.
And after talking to Major Jonas, I was like, oh, look, she's right. And it was too late, because I had already testified. What did you say? I said, oh, I thought the idea of our being in an oversight position with the working group would be good. I thought that would be perfectly reasonable. And given that we are expanding into talking about data collection and such, I thought that would be a good thing for us to do and a reasonable path for this group to take. And Major Jonas said, I think it actually muddies the waters. Is that more or less? And I could have — I mean, I wasn't making a stand about it. I was submitting comments that I didn't really — I'm sorry, I don't mean to. No, no, I just felt like we should start with a working group so that there's a standardized understanding of when we're referring to bias-based incidents or hate-motivated crime. And I felt like — I wanted there to be more discussion, at least, to involve this particular panel in that, before the legislature just started giving new assignments to this group, was kind of what was going on. And I think she's right, and I was wrong. But there doesn't seem to be a mechanism for saying that, which is disturbing. You all get paid for this? I mean, honestly. But anyway, I just wanted to put that out there, because in the interest of transparency, you should know that this happened. I may have missed it, but what committee did you appear before? Judiciary. House Judiciary? Senate. Senate Judiciary. And when was that? This was Friday. And what's the bill number? Yeah, that's the problem. There is no bill number yet. Here, you can look at it. This is to Title 13, Section 1566. If you look at that, that will be a better result. Oh, yes, it is. So I can just say that I've been asked to repeat this. Oh, good. We're all gathered. On behalf of the Defender General, it's my turn tomorrow and the next day. And probably the days after that.
So certainly, it's premature for the Defender General to weigh in on a position either way. But we will be. OK. And in terms of your question, I am taking it as a question as to how to proceed, given that you were placed in the position of speaking on behalf of the panel before you got a chance to talk to it. I think David and others can chime in. But we can always follow up, if necessary, with a written report or letter to Senate Judiciary to clarify that you've taken this up and will discuss it, and to clarify that whatever statements were made on Friday were just you personally, not representing the panel. Right. And I was. I would personally feel more comfortable with that, because I don't like being — it felt very untransparent, which I guess the word would be opaque. It just felt like I was suddenly — you will testify at 8:45 in the morning in Montpelier. I was like, no, I won't. They were like, yes, you will. And as the chair — and I was like, but I haven't had a chance to talk to the panel. But that was immaterial. So maybe I'm telling tales out of school, but I just felt like I had to put that out here, because for me, it was awkward. It just didn't feel very genuine. Or accurate, for that matter. So I wanted to put that out at the beginning of the meeting. But that happened. And I just became a dictator for a few minutes. One page, actually. Just one page. So I wanted to put that out. Anything else for announcements? OK. I had mentioned — or I guess we've been talking about, and have had floating around in the background, this whole notion of data collection as being something that is probably going to become extremely important for the efforts that we've undertaken, and for what's going to come up in the report, I would imagine. So we have Robin Joy here tonight to talk about their efforts at CRG around data collection.
And the other thing I'm hoping to do — I put her first so she didn't have to, if she didn't want to, listen to us go on and on and on. I want to start the conversation about the bullet points and start kind of a process of winnowing, so we can start getting some ideas together about what we might want to start thinking about running. And I don't expect to get extraordinarily far tonight, but I'd like to start it. And I think we have to. And that's why I had asked everybody to please make some notes. And some people really made notes. Thank you. I mean, that was great. I turned on my phone and I was like, oh my god. And I thought you had sent it at 1 o'clock this afternoon. I was feeling all guilty. So I'm glad it was later than that, because I was going, oh god, I've been a bad chair. But this is great. So that's the point of tonight. So I'll shut up. Robin? Hi. Hi. So for the sake of those of you that I haven't met yet, my name's Robin Joy, and I'm the director of research for Crime Research Group. We serve as the state statistical analysis center for criminal justice policy. We have served in this capacity since 1991, when we were the Vermont Center for Justice Research. I've served in this capacity for 15 years now, which is a long time. Every state has somebody like me, and our job is simply to answer data questions about criminal justice and to conduct research on behalf of our policymakers and stakeholders on issues that are important to our state. We are the only state where the SAC — State Statistical Analysis Center is what we're called — is a nonprofit. And we were designed that way back in 1991 so we could tell government they were doing it wrong without having to worry about the political repercussions of that. And we could tell government they were doing it wrong with data — that was generally the idea. And so we've maintained our neutrality, we hope, in these years.
And we provide several services to the public and to our policymakers and stakeholders. One of the things that we maintain is the court adjudication database. So some of the attorneys may have contacted me in the past and said, what's the going rate for a DUI-3 with this? And I can tell you that in 10 minutes. And I provide that for free to anyone who is part of our contract with the Department of Public Safety — so anyone who asks me that, I give you that information if I have it. And then I give you a long list of footnotes about that information on why it's totally useless: it's not going to have any information on the prior records of the defendants, it's not going to have information on whether it was a negotiated plea or a contested plea. It's not going to have any of that. I'm just going to give you some numbers that will give you some information, and then I'll tell you, don't use it for anything. But it does give you an idea. It does give the courtroom workgroup an idea of whether an offer is within the ballpark or not. It's also being used by the sentencing commission to help figure out some of the trickier aspects of the sentencing commission's roles and responsibilities in classifying crimes. The other service that we provide is, every year we apply for federal funding from the Bureau of Justice Statistics. This is money that is earmarked for the SACs. It is in statute, so somebody would have to find it to get rid of it, which is good — they don't have time to look at the statute. And it is money that we have used over the years to conduct research that people in the state of Vermont thought was important. We've won national awards for our research — so I'm just going to brag a little. Some of this money we use to fund computer programming that the state can't fund because it's too expensive, or is unwilling to fund.
So for example, one year, we used the grant to convert criminal histories from VCIC, which come in an XML format, into a CSV format, which means we can now do recidivism studies very quickly. And we actually won a national award for our technical work there. We've done research on domestic violence and case processing and arrests. We've done research on NIBRS data. We've done research on DUIs. And in the past few years, our federal grants have been used to advance some of the policy discussions that your group is having and to help provide information. So we have a few current grants that are going on now. And then I'll tell you where I think our research is going for the next round, but I want your input. One of the grants that we have going on now is opiates and property crime. We can talk about how our crime rate is being driven by the opiate crisis, but we can't prove it. That's a problem. We can't prove it with the data that we're currently collecting. So the Vermont State Police has graciously agreed to participate in a study where we have the narratives from all of their property crimes for a year. And I wrote a computer program, care of the feds, that will analyze the text of all of those narratives. And so then we're going to be looking for keywords around opiate use, or whether there was some kind of substance use in the narrative. We're also going to be looking at the type of property that was stolen — is this something that's really fencible — and things like that. So we're going to try to really get at how the police are recording these property crimes, and whether there is a relationship that we can establish between property crimes and opiates. So that's one thing. And the great thing about this study is that now that we've written the code, we can apply it to anything you want. Police departments can use this technology — and I can actually show them how to use it — to even just do quality checks on their narratives, on their incident reports.
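The keyword scan described here can be sketched in a few lines. This is a minimal illustration only — the keyword list, the matching rule, and the sample narratives below are invented for the example, not taken from CRG's actual program.

```python
import re

# Hypothetical keyword list -- the terms CRG actually searches for are not
# given in the discussion; these are illustrative stand-ins.
OPIATE_KEYWORDS = {"heroin", "opiate", "opioid", "fentanyl", "syringe", "needle"}

def flag_narrative(narrative: str) -> bool:
    """Return True if any opiate-related keyword appears in the narrative."""
    words = set(re.findall(r"[a-z]+", narrative.lower()))
    return bool(words & OPIATE_KEYWORDS)

# Two invented incident narratives: one mentions a syringe, one does not.
narratives = [
    "Suspect admitted taking copper pipe to scrap; syringe found in vehicle.",
    "Homeowner reports stolen lawn mower, no suspects at this time.",
]
flags = [flag_narrative(n) for n in narratives]  # [True, False]
```

A real pipeline of this kind would also tag the type of property taken and, as noted, could be rerun as a quality check on incident reports.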
Are you hitting all the things that you should be hitting so that the prosecutors can do their job? So there's a lot there, right? This is kind of how we use this money: how can we create something that not just studies this problem, but also can benefit the state of Vermont in some other way. We currently have another study on who is currently incarcerated. In this we partnered with the Department of Corrections, and we're going to do a point-in-time kind of study of who is incarcerated. And one of the things that we're going to do is pull everyone's out-of-state criminal histories, so we're working with the FBI. In a prior research study that I did on race in criminal sentencing, what we found was that those out-of-state criminal histories were the drivers of those sentences — the prior criminal history in general drove the sentence, and those out-of-state criminal histories did. And I'm going to leave that thought for a second. But those out-of-state criminal histories are really important when we're doing research on how the criminal justice system interacts with individuals. So this is a study that is not going to focus solely on the disparities, but is going to look at the disparities. The point of this study is to look at everyone in the system, and then what can we figure out about how people got in there, and where can we focus our efforts on reducing the population in general, while looking at the disparities. Another study that we have currently going on — and I understand that some of you may not have seen our report, but have seen the rebuttal to our report — is on traffic stops and race. And what we did here is we got a federal grant to try out several methodologies for measuring disparities in race and traffic stops. And the reason why we did this is, it's an issue, and we want to figure out what is the best way to measure everyone across the state.
Can we use one methodology to measure every individual department across the state? Can we use a methodology that makes it so that people can replicate my work, which is important? And can we do this quickly and easily on an ongoing basis, so that we can identify departments that we need to go and look at further, and figure out why we have this statistical disparity? We modeled this study after the state of Connecticut. The state of Connecticut held a commission like this, but it was solely focused on traffic stops and race, and it had academics and practitioners and statisticians, and figured out what's the best way to do this. And so they came up with three ways to do it. One is to take — for those of you that get a paycheck from an agency — your unemployment insurance data. That data is collected and is matched with your gender and your race, and so we can get that data. So my data would show that I am a white female who lives in Northfield but has a work address in Montpelier, which is where our PO box is. So that means, at least in these data, they think that I commute from Northfield to Montpelier, which isn't actually correct, but they can think that for now. But it's a way of building a driving population, right? One of the things about traffic stops and race is that we have to compare it to somebody, right? So who are we comparing it to? We've tried this in two jurisdictions and it's failed in both of them. So we can't use this method here in Vermont. One is the report that Bennington released — you can see my math in that report on why this failed and why we can't use this for Bennington. The other agency that consented hasn't seen the report yet, and it failed in a very large area. So we can't use this. Part of this has to do with the way Vermont does the census, and if anyone really wants to geek out about this later, I'll tell you all about it. So we can't do that.
The other thing that we looked at was resident driver analysis. Let's look at the people who live in this jurisdiction and see if there's disparity there. And one reason why we did that is because then we can kind of sort of use census data and see: are we pulling over our own residents at a higher rate than we should expect? And this has worked out well as a method. It's very easy to do. There are some caveats to it. We assume for this that everyone who is 15 or over has a driver's permit and is driving, which is not true, right? But this is the only thing I can do with the numbers that I have. So you have to know that there's this assumption: that everyone 15 and over has a driver's permit and is driving. And then we look at the census data to compare, to see whether we see a disparity. The third way that we're trying out — and I think we're going to end up recommending it as one of the better ways for Vermont — is something called the veil of darkness analysis. And I didn't come up with the name, so I'm just saying that. I did not come up with the name. What this looks at — and we're going to go through this in a few weeks, when we change the clocks. When we change the clocks, the hypothesis behind this theory is that those of us who are driving on the roads at five o'clock this week, before we change the clocks, are going to be the same people who are driving on the road at five o'clock after we change the clocks. The difference is, it's going to be light out. Oh, it's light out today at five — but take six o'clock. Six o'clock, right, six o'clock. And so if people are acting on explicit or implicit bias, one of the keys is they have to be able to see into the vehicle, right? And now you can see into the vehicle where you couldn't before, right? Two weeks ago you couldn't; now you can.
And so, with what we're looking at, we're publishing lots of results so that people can see how we're doing the math. One approach looks at the whole year, so you can graph out the change in when sunrise and sunset happen. And I pulled it for each jurisdiction, so those of you from the south that get sunset a little later, or a little earlier than us — I'm accounting for that. But if you take a whole year's worth of data in this analysis, you're not accounting for seasonal driving differences. Vermont in particular is a very tourist-driven state, and some areas may get heavier traffic than others during the tourist seasons and holiday seasons. We tend to drive less in the wintertime, for various reasons, right? So what we're doing is taking the 30 days before and the 30 days after the switch and looking to see if we can find disparities, and then doing what's called a regression analysis to see if those disparities are explained — what variables explain these disparities. One thing we've done with this study is we have asked, and the agencies have consented, to provide data that is not required by statute. We do not think that the statute as currently written provides enough information for meaningful analysis. It doesn't include, for example, the date and time of the stop — that's not required. It doesn't include the year, make, and model of the vehicle, or the state of the plate, or the state of residence of the driver. It doesn't include what contraband was found. It doesn't include a lot. So I wanted to extract more data than they are currently required to provide, so I could see what variables float to the top as being important. And so far — I'm gonna say, kind of surprisingly — not being from Vermont is not floating to the top, which was interesting to me. But so far, race isn't floating to the top either.
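The core of the veil-of-darkness comparison described above — same clock hour, 30 days on either side of the clock change, darkness versus daylight — can be sketched like this. The stop counts below are invented for illustration; the real study adds the regression controls just described.

```python
# Stops made in the same clock-time window (say 5-6 p.m.) in the 30 days
# on either side of the clock change: dark on one side of the switch,
# light on the other, with (by hypothesis) the same drivers on the road.
# Counts here are invented.
dark_window = {"minority_stops": 12, "total_stops": 200}   # can't see driver
light_window = {"minority_stops": 21, "total_stops": 210}  # can see driver

share_dark = dark_window["minority_stops"] / dark_window["total_stops"]     # 0.06
share_light = light_window["minority_stops"] / light_window["total_stops"]  # 0.10

# If stops are race-neutral, the two shares should be roughly equal. A higher
# minority share in daylight is the signal the method looks for; the follow-up
# regression then asks which variables explain the gap.
gap = share_light - share_dark
```

The two shares are what the before/after windows let you compare directly; everything else (seasonality, jurisdiction-specific sunset times) is handled by how the windows are chosen and by the regression step.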
But again, I write with lots of footnotes and caveats. For example, in one jurisdiction that I'm working on now, I had something happen — and forgive me for geeking out about math — I had something mathematically happen that's never happened before. I had a perfect correlation between two variables that should never have been correlated. And that's because this jurisdiction, in the timeframe that I was looking at, those 60 days, pulled over 12 Latinx people — but they were all men. That's not how the population works, right? When we're building regression models, I'm trying to model what goes on in the real world. There is more than one gender of Latinx people. So the model failed. This is interesting mathematically for me, but also interesting for people who just want an answer: okay, we're dealing with this, and this was a large jurisdiction. We're dealing with a very small population, and can we use math to explain disparities if something like this happens? And so this is part of what, by the time we're done with all of this, we hope to have: some sound recommendations going forward. If we're gonna continue to use traffic stops and race as a proxy for understanding disparities, fine — here's how you ought to do it. And so that's what we're looking at. Can I just ask a question? Yeah. You might have explained this to me before, but the so-called veil of darkness study is based on the premise that during light hours you can see into cars and during dark hours you cannot. But it doesn't really speak to, once an officer has stopped a car, how that influences their choices around searches or questions. And so we did look at the two current models that are out there for looking at post-stop outcomes. One is called the KPT hit rate — KPT is for the authors, the people who created it. But you're familiar with this, right?
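The modeling failure described above — 12 Latinx drivers stopped, all of them men — can be reproduced directly: when one group shows no variation on another variable, an interaction term built from the two becomes an exact copy of one column, and the regression's design matrix loses rank. A sketch with invented stop records:

```python
import numpy as np

# 100 invented stop records: 12 Latinx drivers (all male, as in the
# jurisdiction described), 88 others of mixed gender.
latinx = np.array([1] * 12 + [0] * 88)
male = np.array([1] * 12 + [1] * 40 + [0] * 48)

# Latinx-and-male interaction term for a regression model.
interaction = latinx * male

# Because every Latinx driver is male, the interaction column is identical
# to the Latinx column: a perfect correlation the real population wouldn't show.
r = np.corrcoef(latinx, interaction)[0, 1]  # 1.0

# The design matrix [latinx, male, latinx*male] is therefore rank-deficient
# (rank 2 with 3 columns), and the regression fails.
X = np.column_stack([latinx, male, interaction])
rank = int(np.linalg.matrix_rank(X))
```

This is why a tiny subgroup can sink the math even in a large jurisdiction: the model isn't wrong, the sample simply can't represent the population it's supposed to model.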
So: how many searches are conducted, what's the racial makeup of those searches, and how many of those searches are successful — that is, contraband is found. There are some really strong assumptions in this model that I don't think have really been laid out before. One is that it assumes everyone in that traffic stop is acting rationally. And rationality in and of itself is a social construction, right? And it's based a lot of the time on white supremacy and misogyny, right? So what is rational for one person is not necessarily rational for another person. All right, so it's a social construction. And it's based on an economic theory — you need to know that economics usually bases itself on a rational person. The idea behind the theory is that an officer will get better at identifying who should be searched, increasing the likelihood that contraband will be found, and will only engage in those searches that are going to be fruitful, eventually. So you'll reach some kind of equilibrium, right? And at that equilibrium you won't always have 100% hit rates, and there'll be all sorts of stuff. So when you see the hit rates published — and we did this in the Bennington report that they released — they searched four African-Americans during the one year that we looked at, and they had a 100% hit rate. They searched 20 whites during this timeframe, and they had a 90% hit rate. If you wanted equilibrium, you would have to start searching more white folks — which, if you don't have probable cause to do so, is equally as bad; searching anyone without probable cause is a bad thing to do, right? So you have to be careful about what we are really measuring. And it's a useful measure, because if there are enough searches — and four is not — we can use math again to see if race floats to the top. The problem is that what happens during an encounter when we get pulled over is not always captured on the ticket.
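The hit-rate arithmetic behind the Bennington figures quoted here is simple division — which is exactly why the tiny denominator matters. Reconstructing the counts from the percentages above (a 100% hit rate on four searches implies 4 finds; 90% on 20 searches implies 18):

```python
# Searches and contraband finds by driver race, reconstructed from the
# Bennington figures quoted above.
searches = {"black": 4, "white": 20}
finds = {"black": 4, "white": 18}

# KPT-style hit rate: fraction of searches that turn up contraband.
hit_rate = {g: finds[g] / searches[g] for g in searches}
# {'black': 1.0, 'white': 0.9}

# The equilibrium argument reads unequal hit rates as a sign that the search
# threshold differs by group -- but with only 4 searches in one group, the
# 0.1 gap carries almost no statistical weight, as noted above.
```
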
There's a lot of things you can't capture on the ticket. And so, looking at your list and conversations I've had before: if you focus on the decision-making process of individuals within the criminal justice system, so many things go through their minds when they make that decision. Is it even possible to capture all of the data necessary to decide whether or not impermissible factors are at play? And I respectfully submit it's not. So that's something to be very conscious of. And one of the other issues, especially around traffic stops and race, that I always bring up — and this gets to that idea of everyone acting rationally — is this: Major Jonas can pull me over at any time, and I will be polite and I will be civil and I will be fine. A male officer pulls me over in the middle of the night on a dark road, I am going to act differently. I'm not gonna be rude, but I'm gonna be a little more fearful, right? And that fear might be translated into something else by the officer observing me. I'm acting rationally — I certainly think so. He doesn't think I'm acting rationally; that leads to a search, right? It doesn't turn up anything, because I'm just afraid that something's gonna happen to me on this dark road. And now you can change my gender and my race and so on and so forth, and you can see the problem with using this rationality model as a measurement. So it has to be clear that this is not necessarily the answer. We don't have an answer; we just really haven't come up with one. The two methods that we're using are this KPT hit rate and then accounting for the light and darkness, but we don't know the gender of the officer involved, I don't know the race of the officer involved. So we don't know those dynamics that we know play a part in how we may act and how people may perceive us. Any questions on any of that? That was a lot. And I understand that some members of the committee have seen Dr.
Seguino's response to our report — somebody sent that to us, so we appreciate that. But if you have any questions about her criticisms, I'm happy to answer them. And I guess my question would be, have you seen the report we did on Bennington? Some have. Some have. Yeah. So I just want to explain how that got released. They've had it for a while, and our deal with the agencies — because we're really doing this to measure the methods, is what we're doing — so we got agencies to consent, and the deal is: here's your report about your agency. I don't really care about your agency; I care about the methods, and the report that I'm going to file with the feds is going to be de-identified, because really I just care about the methods. So every agency that's consented to participate will get their own report. It is theirs to do with as they please. And so Bennington chose to release it. It's not up on our website. No, it will be soon. So they have chosen to make it public. It's not our choice to make it public; it's their choice. So I haven't read the — the study that Bennington submitted — you know, it's more the response from Dr. Seguino: is it a criticism of the methodology of the study? Like, could you just synopsize, from your perspective, what the criticism is? I don't think — no, her response is 13 pages long. We can tell you what our response to her criticism is, which may give you a hint at what she wrote, but I don't think it's for us to make a synopsis of her 13 pages for her. Yeah, no, I guess I'm just curious, since we're talking about methodology. Sure, absolutely. So here was one critique: that I only used one year's worth of data, when the allegation is that I have many years' worth of data on my website.
And the reason why I used one year's worth of data is because we actually paid for that special extract to include all those other variables that police departments aren't submitting. The data that we have on our website for traffic stops and race is the data that police departments are required to submit by law. And we wanted other variables. So we didn't have many years' worth of data; we had one year's worth of data. Many variables, meaning like the type of drugs that were found, or the type of contraband? Well, no — the variables that we looked at for this study are the year, make, and model of the vehicle, the tag on the plates, where the person is coming from. Okay. All right, those sorts of things that we know, in other jurisdictions, tend to make a difference in who gets pulled over and who doesn't. So we were just kind of looking at those. One thing that I wish we'd asked for is a more detailed description from the Spillman agencies of what people were pulled over for. Was it speeding, was it GNL, was it something else? So that was one criticism in her response — I just want to say that we looked at one year's worth of data because we paid for this extract out of Spillman. Well, and the other piece of that is — and we've had conversations with you many times, and with other police departments — police departments aren't confident in the data pre-2016. When you start looking at how the police departments were collecting data pre-2016, it's not useful data, for a variety of different reasons. And when we started collecting the data for '15 and '16 that's on our website, we actually posted a sheet on the factors that affect the validity of the data, because that data was in such bad shape. So we couldn't go back any further than 2016 to start with. So that was one criticism. She also wanted to know why I didn't use the crash data. In Bennington she had analyzed the crash data.
She had used Bennington County's crash data to apply to Bennington town. And there were a few reasons why we didn't. One, when we applied for the grants in federal fiscal year 2015, we knew that the crash data was crap as far as looking at the race of the not-at-fault drivers in at least two-car accidents. We knew at that point that the data was not there. So we can't write a grant hoping that the data's gonna come true someday. Race was missing in 35%? Yeah — 35% of the crashes, for not-at-fault drivers. And I understand that the crash form has been changed and now this is not going to be an issue. Yeah. And the other problem is, we got two years of federal funding to do this grant, and it's a lot of work. It's gonna take a lot of work to map the crashes to the towns that we want to apply them to, and to see. You can't just take the crash data and say: look, here's the driving population and here's your ticket population, you've got a problem. That may be true, but if a town is giving all of its tickets in one part of town — let's say a Walmart parking lot, or wherever — or that's where all your crashes are happening, but they're pulling everyone over by the school, and that school is on the other side of a mountain, can we really do that? So it's a good methodology; we just have to see if it works for rural Vermont. And the original authors of that study really questioned it — somebody has to do this in a rural area. They did it in Miami-Dade, not in a place like here. I didn't ask the follow-up question, because I get that — I hear that a lot: Vermont is so small, Vermont is 600,000. What's the population of Vermont? 620,000. Someone made the comparison that our population is about equivalent to Washington, D.C.'s. Now granted, it is not exactly the same type of demographic, but in terms of numbers — and what I'm hearing from the two of you is that there's sort of a shortage of data, questions of how to control accordingly, and incomplete information.
And the Connecticut one didn't work. And you talked about these other two methodologies; quickly, I don't know what they were based on. Kind of, yes, ma'am. Okay, so my question to you as experts in the field: not trying to reinvent the wheel, you must be looking at other places, so why not another place? Again, I don't know how much Washington, D.C. has done on this, per se. My point is that instead of looking for a rural state in the United States with the same numbers as Vermont, and I think we can cross that list off pretty quickly, there are communities where we can find that, right? Like, why not Miami-Dade? And so that is now the variable I'm not able to control for, because, I mean, you know exactly why, but why not? Sure, so if you take Washington, D.C., and if I were to count off the top of my head how many police departments have jurisdiction in Washington, D.C., it would be about six. And they all have different, kind of, you know, let's just take D.C. Metro. What works for one police department doesn't necessarily work for another police department. And we're trying to look at things that are going to work for each jurisdiction in the state. There are some jurisdictions I'm never gonna be able to measure. They're only gonna have two traffic stops a year. They just don't do that kind of volume. But I want to capture enough of the disparity in our state to be able to say, here's a way that I can hold Burlington to the same standard that I hold Northfield, and we can compare those two. So when you're looking at a lot of these traffic-stops-and-race studies, they're just looking at one department. And we're trying to figure out what's best for the state. And that's what Connecticut did. They have urban areas and their rural areas. And they were trying to look at, consistently, can we hold each police department to the same kind of test?
And then which departments do we focus our energy on to find out why there's disparity? So that's one. So that's an important point you're making at the end. Which is that the reason you're eliminating these other jurisdictions is because the conclusion you're trying to reach doesn't support it. And in that sense, what did you just say you're trying to identify? How do you identify agencies that do show a disparity, and how do we work with those agencies to figure out why that disparity is there? So what Connecticut does, and it's a terribly long report that Connecticut wrote, longer than anything I've ever read except my dissertation, and you don't want to read that either, is they publish every year which departments have a disparity. And then they go work with those departments and they sit there and they map it out: is the disparity because you're having directed patrols in this area, which is largely African-American? Do you need the directed patrol there, right? Is your presence there causing more people to be pulled over? Why are you having directed patrols there? So there are more questions; research, in my opinion, good research only provides you with more questions. And so what we were trying to do with this study is to get something where we could say, here's a way we can baseline-measure police departments with this traffic stops and race data. This is what we need to do it. And then, and this is for the legislature or for your panel, what do you do with that information, right? So when we take urban ideas and try to apply them here, we do have to at least acknowledge that there's a difference in demographics and everything from public transportation to the type of crime, all that sort of stuff.
So as a researcher in a rural jurisdiction, I will always try to make sure that the assumptions that urban researchers make hold true in our jurisdictions. So does that answer your question? It does. Okay. Are the issues you're identifying in terms of methodology problems then a problem if you're just looking at Chittenden County or a specific department? No. One area that I looked at was in Chittenden County, and one of those methods failed miserably. What does that mean, it failed miserably? It means that when I try to predict the community population and try to match it up to the data that we know about, the numbers don't work, and you can see it in the Bennington report, and I'll gladly go through and say, see, here's where the math doesn't work, so I can't use this. And I'll give you an example from another jurisdiction: the town of Eden, which I don't know if you've ever been to, but it's a cute little town. It's got a general store and a school, but they somehow have no employees in the data, which doesn't make sense. Now, that store could be a sole proprietorship, but somebody at the school works there. Somebody at the school, we really hope, is getting a paycheck, and their data somehow isn't getting transmitted to the Census Bureau for this data that I'm using. So if I can't find workers in every jurisdiction that I know has employers, then this is gonna fail, right? Okay. Yeah. Got it. Thank you. Sure. And I will say in the Bennington report, and I'll make sure, should I send it to you? In the Bennington report, Robin goes through every method she uses and describes it in detail. So you will have all the information on what we call the commuting hours analysis. She'll give you the charts. She gives you more information than you probably would ever wanna know on this, and then tells you why it failed. So you have... It's like long division. Remember long division?
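The Eden example is really a sanity check you can run before trusting any worker-based benchmark: every town known to have employers should show workers in the Census extract. A minimal sketch of that check, with invented town names and counts:

```python
# Sanity check described above: flag towns that are known to have
# employers (a store, a school) but report zero or missing workers in
# the Census data. If any town is flagged, the commuting-hours
# benchmark can't be trusted there. All figures below are invented.
def towns_failing_check(known_employer_towns, census_workers):
    """Return towns with employers but no reported workers."""
    return [t for t in known_employer_towns
            if census_workers.get(t, 0) == 0]

census_workers = {"Bennington": 7400, "Eden": 0}
flagged = towns_failing_check(["Bennington", "Eden"], census_workers)
# flagged == ["Eden"]: the general store and the school exist, but no
# paychecks made it into the data, so the method fails for Eden.
```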
I show my work. Yeah. So you'll have an idea of how she went through her steps to get to the final result that commuting hours didn't work. And there's one other thing I wanna point out about the traffic stops and race data and why this may not be the place to really kind of... I was hoping you'd get to that. ...hang your hat, for any reason. And I'll give you an example. Well, it's a big example. One of our most popular crimes is DUI. But if you look at the data the police departments are submitting, there are not a lot of traffic stops for DUI. That's not correct, right? We know they're pulling people over for DUI. It is literally one of our most popular crimes. What they're not doing is writing tickets when they arrest the guy. So that whole group of people that are taking up our criminal justice system with DUIs and so on and so forth generally don't show up in your traffic stops and race data, because they never got a ticket. They got arrested. They also probably got searched. Yeah. I just have to chime in that this is something that our audit keeps catching and we have to go back. Because those types of car stops are technically covered under the law that says we need to produce the data for those stops, and officers, because they end up arresting the person, aren't issuing a ticket or a warning. And so it's completely unaccounted for, and there's a lot of them. There's a lot of them, and as I said, my next point is it is also a very white crime. So you are missing a lot of stops, searches and arrests of white folks by not having that.
Which then, if you start looking at the proportionality of people who are being stopped and searched and you're just looking at that KPT hit rate, which I really don't want you to look at, you're gonna always have a disparity, because you're missing this whole group of people that take up a huge amount of your business and who are getting arrested and probably getting searched incident to arrest, right? So it's missing from all the data. You said quickly in passing that DUIs are largely a white crime. How are you assessing that? The NIBRS data? So if you look at our data, the NIBRS data that we submit, it actually tells you the race of the arrestee, and I did a report on DUIs. I don't think the demographics have changed all that much. But yeah, it's a white crime. White folks are committing it at greater rates. I mean, the real question is, how are you deciding someone's white? Oh, so that's the NIBRS data. So that's not me or a police officer; that is usually with a fingerprint background check, or they ask the person when they arrest them. So this is different data. I mean, this is a topic that we talked about a long time ago, that one of the things that we have to discuss is the collection of this data. Sure. Right. Are these people self-identifying, or is an officer deciding what someone's race is? Right. In general, though, I'm flagging this as a huge issue: if you're gonna look at race and traffic stops, this is a huge issue. Missing data, right? If you're missing our DUI data, you're missing a lot of data. Right, DUI, and really anyone who is ending up being arrested for a motor vehicle offense, a crime that started with a car stop. That started with a car stop, right? And again, we have worked out a way to go back and get that, for sure for the largest police agency in the state. It's been a pain, you know, difficult.
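The hit-rate comparison being warned against here is simple arithmetic: contraband found per search, by driver race. A minimal sketch with made-up numbers, mainly to show why missing DUI searches poison it:

```python
# Minimal sketch of the hit-rate ("outcome test") comparison mentioned
# above: contraband found per search, by race. Counts are invented.
# The point: searches missing from the data (e.g. DUI arrests never
# logged as stops) silently distort these rates for white drivers.
def hit_rate(searches, hits):
    return hits / searches if searches else float("nan")

by_race = {"White": (200, 60), "Black": (40, 8)}   # (searches, hits)
rates = {race: hit_rate(s, h) for race, (s, h) in by_race.items()}
# A lower hit rate for one group suggests a lower evidentiary bar for
# searching that group -- but only if the search counts are complete.
```

If a large block of (mostly white) searches incident to DUI arrest never enters the denominator, the white hit rate is computed on the wrong base and the comparison is unreliable.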
You have to go back and look at every DUI and then have the officer remember, oh, that's right, okay. So we've had to create an extra form that we submit to capture that data. So, yeah. And so, one of the things about traffic stops and race: a lot of jurisdictions have honestly stopped using this as a measurement of racial disparity. We're kind of late to the game in doing this. There are other interactions that people have with the police that are probably more important to capture, but they are also more data-heavy and require the police to turn things over, like their field identification cards. And to really get into the nitty-gritty of how our agencies are policing our population. Are they filling out an FID card every time they talk to somebody? And what are they recording? Some major metropolitan areas do this. And you can get all that data and you can see, well, this is how the ACLU finally called out the NYPD for stop-and-frisk, right, their illegal stop-and-frisk scheme. So that's a question you might want to ask. If you wanted to hypothetically figure out whether you could replicate what the ACLU did in New York with one of our agencies: are they even collecting the data to be able to do that? I don't know that they are, I haven't really asked. So the traffic stops and race stuff is, I think, there's what the public wants to know, and that's important. And if the public has a feeling that, and this is one of the reasons why we do the resident driver analysis, if you constantly feel like you're being pulled over by your town police officers, right, and you're being harassed, isn't it nice sometimes to have the numbers to show it? And I was able in these jurisdictions to say, in Bennington, I could say, look, here's somebody with the same demographics, same year, make and model of vehicle.
Pulled over twice in a week, right? And I don't have their names; it's just an assumption that I'm making, right? But I can imagine how that person feels, right? And I was able to do it with some of the larger jurisdictions too, to make that assumption that the same person has been pulled over a lot in this time period, and that's something to at least think about. This traffic stops and race study is still ongoing, and we're very thankful that lots of police departments have agreed to participate so we can make a recommendation. Our next round of funding is coming up, and so we're thinking about what we wanna do with that funding. And there are two things. As a researcher, I have to disclose my academic bias, which is I am a constructionist: I think everything is socially constructed, and I'm very critical of it. And so, you know, critical race theory, feminism, you name it, I've got it in there, and I'm critical of all social constructions. And one of the things that I've been thinking about as I've testified in panels like this is what my own research for the Center has shown: we can focus on these individual decision-making points, and that's fine. But the system itself, the criminal justice system itself, is one of the largest, it's like our number one propagator of white supremacy. The system itself, never mind the people in it: the way we define the laws, the way we use the system. And that's one of the things that your list doesn't cover. And so, are we intentionally or unintentionally diverting white folks from the system, and is that increasing our disparities of people of color who are incarcerated? Going back to the DUI, we have a wonderful DUI court down in Windsor County, which is diverting people from corrections. That's great. Do people need to get help? Yes. But you are diverting a lot of white folks.
So what does that do for your disparities in other parts of the system? Do our treatment courts only help a certain demographic? And when we come up with these ideas of diverting people from the criminal justice system, using the system to do so, are we unintentionally or intentionally creating something down the line that's going to increase those disparities? The other thing we might get a study to do, and this is out of the National Justice Reform Project, is the use of prior records in criminal proceedings. They're used for a variety of reasons, and maybe it's time to examine why. Unless, as a public defender, and I was a public defender for many years out in California, unless you're going through and scratching out every arrest record from the NYPD when it was under the consent decrees, and from every major metropolitan police department that has been under consent decrees, you're still perpetuating all of that by using those prior records. Or take the NYPD again, when they were doing their broken windows policing. Broken windows looked at quality of life; again, that's a social construction. And how were they enforcing that? Loud parties in the street in New York, in ethnic neighborhoods where that is part of the culture? Were we punishing the culture? Yeah, the NYPD was. So how do we even do that here? Looking around the room, I'm guessing that most of us have been told that we were acting in a way that wasn't expected of us, and that wasn't a good thing to the person who was saying it, right? We have a crime for it; it's called disorderly conduct. So how are we doing this? And I respectfully suggest that part of what you look at is how the machinery of the administration of justice is perpetuating the inequalities, and not just the individual decision-makers at the intercept points. Because you don't have enough to do. Yeah. I'm looking for my will to live.
Wow. Well, yeah, so I'm gathering you have an opinion about the applicability of more data collection to what this panel is doing, and I'm wondering if you would share that. Rebecca and I were in a meeting of the Sentencing Commission a few weeks ago, some by phone and some in person. And you had asked about sentencing decisions and so on and so forth: can we get more data to help inform those decisions? And I kind of questioned that and said no, because you'd have to collect all this stuff, you're going to have to collect the demeanor of the victim and so on, if you really want to get at that decision-making process that human beings are making. I'm really happy that, in this time that we're in, people are looking at data as an answer. But it's not, necessarily, especially when you're trying to model human behavior. And that's what you're looking at: trying to model human behavior. And what is your answer? Let's say we do show that there's human bias involved. More training, right? You already have that answer. You're doing implicit bias stuff. You're doing all of these things. But you actually, in Vermont, won't be able to measure any impact the implicit bias training has had for many, many years. And taking it outside of the criminal justice decision for a second, think about the Me Too movement. That was after decades of sexual harassment training. Decades. We still have it. Still an issue. So the trainings didn't help a lot. But as people look at the Me Too movement, it starts to look at the structure: how did the confidentiality agreements allow this to continue? That's a structure. And so I'm never happy when the answer is more training, because I can't measure the effects of that for a very long time.
I have to test you pre and post, and so on. And how are you going to know that all that training is paying off? Not saying don't do it. But your list here isn't going to help you. Which list are you referring to? This list. Well, actually, before you represent that it's ours: that's mine, I presented it. Oh, I'm sorry. Yeah, I thought this was from the committee. No. Yes, so I apologize. But as you're looking at this: how are you going to measure if you're successful? Is it worth hoping that one of these is it and making those gestures? Because this feels huge. I think you need transparency. And transparency in our data systems in Vermont is not currently present. OK. Let me put it that way. And it's not because of anyone trying to hide anything; it's generally because nobody's trying to get anything out of the system. So I think asking for more transparency, so that people can look at the data, can do their best to create proxy variables for things that we're not capturing, that's fine. Matt, to your point about who's identifying the race: that's a really important question if you want to see whether individuals are acting in a certain way based on how they perceive people to be. I'm going to tell you an anecdote, and an anecdote is just a single data point. But I also want to find a way to capture how people are treated. Your solution sort of presents this. But this is the scenario. This is Boston, 1986, and my boyfriend is white, except he tells me he's one-eighth black, which is fine. And it's a very sad story that somebody in Boston would do that. And I remember thinking to myself, like, this is great. And I even yelled a few times, like, could you tell the people throwing eggs at us that you're only one-eighth black, and see how that goes? He finally identified as black in his 20s and embraced it and dealt with the guilt and so on. And that's also an important story.
But to him, it wasn't a hate crime. I'm getting eggs thrown at me because I'm with him, because other people are perceiving him as black, even though he's not going to identify as black. So he's not going to come forward to the police and say, hey, that was a hate crime; to him, they're just idiots. So how do you measure that? And how do we deal with that very painful situation of how people identify and how people perceive? So that's another thing: how do you wrangle this, right? Because he never would have come forward. Right. Anything else? I was just thinking, you've left us with a lot more questions, haven't you? No, no, no. Good. You've said you really like more questions. I do, I do. Yeah. Jeff. Yeah, two things. First of all, I would strongly dispute that no one's hiding the data. OK. And in terms of social engineering: they called us junkies. Now, because white kids are doing heroin and dope, it's an opiate tragedy. And the cops are carrying Narcan. All black folks know that. Yeah. OK. Now, if data doesn't work, all the stories I could tell you about how many times I get stopped, that's not going to work for you. So what is going to work? I mean, you put in a $250,599 grant, right? So I want to know what's going to work. Is that number right? No. That's the published number. But anyhow, it doesn't matter. Oh, that includes several other studies that were included in that one. So yeah. OK. Yeah, sorry. Tell me what's going to work. Well, we can still continue to measure the traffic stops and race. The veil of darkness does hold some promise. I was a trooper. I can sit under a streetlight and tell you who's driving past me. I can look at the kind of car and make a reasonable guess. If I see somebody coming down here with blacked-out windows at night in Vermont with a New York plate, if that was my intention, I'd be suspicious. Am I wrong in saying that? That's what you would do. No, no.
I didn't say I'd do it. I said if that was my intention. Right, that was your intention. That was my intention. That's one reason why we ask for more variables than the statute requires: to see if we could capture other variables that were related to that decision to stop. Is there a way to actually mathematically capture how individual people are making their decisions? No, there is not. No, but the weight of all the decisions, which comes down on different groups of people. Sure, that can be used. Can be used. Yes, and that's why we still think that the veil of darkness can be used as an option, but you still have that caveat that all that DUI data is missing. Right, so we have to figure out a way to. The portion of the DUI data that's missing is not when they pull you over for bumping into the white line or hitting the yellow line; then they just look at you and they make a call about whether they go further. Sure, and one of the things we regret about the study is that we did not extract from Spillman the specifics on what they were stopped for. The extraction that we have gives us just a generic motor vehicle offense. It doesn't tell us whether it was going over the lines. The state police are good at reporting that, but I'm not real sure everybody else always is. Sure, I don't know, and we didn't get the extract from the data, so I can't qualify it. We didn't get the extract from Spillman on that. We just didn't. Valcour gives us that information to some extent, but we didn't get the extract from Spillman. It is something that we would recommend to the legislature that we put on there, because you do want to know. And I think that's where you might get more evidence of disparities: in those charges. And when I get to the Valcour agencies, I'm just doing the Spillman agencies now, when I get to the Valcour agencies, I'll be able to test that a little bit better.
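The veil-of-darkness idea that keeps coming up reduces to one comparison: within the same clock-time window, the minority share of stops made in daylight (when race is visible) versus in darkness (when it isn't). A minimal sketch with invented stop records; real analyses restrict to the inter-twilight window and control for more than this:

```python
# Rough sketch of the "veil of darkness" comparison discussed above.
# Stops below are invented; "minority" flags are assumptions, and a
# real test would also hold clock time and location constant.
def minority_share(stops):
    return sum(1 for s in stops if s["minority"]) / len(stops)

daylight = [{"minority": m} for m in [True] * 12 + [False] * 88]
darkness = [{"minority": m} for m in [True] * 7 + [False] * 93]
gap = minority_share(daylight) - minority_share(darkness)
# A positive gap (more minority stops when officers can see drivers)
# is consistent with race playing a role in the decision to stop --
# subject to the caveat above that DUI-arrest stops are missing.
```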
Well, OK, I will stop talking. I'd like to follow up on some of the points that Jeffrey made, and maybe interject a little more optimism that I have toward the usefulness of data in our world. And I'll share this as a way also of sharing my exposure to these issues nationally, where we strategize amongst other public defense organizations who have more of a historical collection of data, and how they are using that to reveal racial disparities in individual cases. So this was sort of the premise and the explanation of my questions earlier on: it seemed to me that your conclusion is that it's very difficult to do a mathematically accurate formulation to get to a bottom-line conclusion. That's where it's driving at. Public defenders and other defense attorneys nationally are using, and I'm going to use specifically, San Francisco. We were just trained by the great Jeff Adachi, who sadly passed away just this Friday, head of the public defender's office in San Francisco. I've also heard this from federal defenders in the San Francisco area, all using the same data, accessing the credible resources of students to generate statistics. The data is this. They have a specific client whose race and ethnicity they know, and the details of the stop. They know the allegations of the stop, everything we would know from the local police report. We know the time and place. We know the charge. The data that's useful is in developing a comparative look at what is going on: beside the asserted race-neutral reason for the stop that we hear from the prosecution, is the reality on the ground different, right? What we can't come up with in an overall system report, we can use on an individual case, because here's what we do. Give us all of the stops in that neighborhood at that time. Let's look at how many charges were filed related to petty theft on this block, in the radius of this neighborhood.
How many resulted in, and this is such incredible data they have, what type of charges, in what specific neighborhood, the same neighborhood that your client has been charged and arrested in? And what was the race and gender in those cases? And then they have this pooled data collection. Like you just said, that vehicle make and model that was stopped twice, to me, I'd want to run that down. Same usefulness of data. This person's black. Here, there are so many arrests; look at this neighborhood. Majority-white neighborhood: you don't get those kinds of arrests. Now, is that because crime is only happening here, and all of a sudden we have no crime there, or something else? Then you bring in another layer, as you say, the critical race theorists, the experts, the studies, to explain all of the unexplained numbers. That's how data is incredibly useful. It sounds to me that we have some data right now; it's just, as you say, in transparency, and maybe not realizing what's available for us to use. And our new ones. Yeah, I mean, I was a public defender in Contra Costa County, right across the bay. And so I knew Jeff. You were a neighbor to me. Yes. And one thing that's beautiful about urban areas is that you've got some of that really rich data that happens in a short period of time. When Governor Shumlin said that our opiate charges had doubled, he was correct: they went from 30 to 60 in one year. That's a day in San Francisco, or half a day, I don't even know anymore. So that's one thing I just always have to be cautious about. Small numbers make some of that data more anecdotal than useful if you want to try to prove racial or any other kind of discrimination. When I think about the NIBRS data that we have, and NIBRS is our national incident-based reporting system, it covers some of our crimes. It doesn't cover all the crimes. And that is an issue.
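The San Francisco-style case comparison described above is, at bottom, a filter and a tally: pull every stop in the client's neighborhood for the same charge, then count who got stopped. A minimal sketch with an invented record schema and neighborhood names:

```python
# Sketch of the case-level comparison described above: all stops in the
# client's neighborhood for the same charge, tallied by race. The field
# names and records are illustrative, not a real agency schema.
from collections import Counter

def comparable_stops(stops, neighborhood, charge):
    return [s for s in stops
            if s["neighborhood"] == neighborhood and s["charge"] == charge]

stops = [
    {"neighborhood": "Tenderloin", "charge": "petty theft", "race": "Black"},
    {"neighborhood": "Tenderloin", "charge": "petty theft", "race": "Black"},
    {"neighborhood": "Tenderloin", "charge": "petty theft", "race": "White"},
    {"neighborhood": "Sunset",     "charge": "petty theft", "race": "White"},
]
counts = Counter(s["race"]
                 for s in comparable_stops(stops, "Tenderloin", "petty theft"))
# counts shows who is actually stopped and charged in that neighborhood,
# to set against the asserted race-neutral reason for the client's stop.
```

The small-numbers caveat that follows applies directly: with Vermont-sized counts, a tally like this can be more anecdote than evidence.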
But this is the federal requirement. It's not anything that's going to change. And that's really detailed: what type of weapon was used, what's the relationship between the parties. So that's really, really detailed information. But it doesn't cover 90% of what we do. It doesn't cover disorderly conduct, the bulk of what makes up our criminal justice system. It also doesn't cover another one of our very popular crimes in the criminal justice system, which is, again, totally socially constructed: violations of conditions of release. That is consistently one of our top crimes in the judiciary by number of charges filed and disposed of. And we make it up. I think that's another area of interest. There's one very famous case in Bennington County: 300 counts of violations of conditions of release for one person. One person. Right? Because they didn't come in twice a week. So some of that stuff. So I think that looking at what data you are thinking about collecting, or how we expand current systems to accept some of this data, and what do we do with it? Really, I think it starts with maybe a better question. Can you replicate San Francisco? Sure. Is that going to answer Vermont's questions? I don't know. It may answer Burlington's questions. Is it going to answer, I'll just pick on my town, Northfield's questions? Well, and I think one of the important pieces is, as Robin's doing the research, we have to make sure that everybody's collecting data that can be studied. So there's the bill that's going to be in House Judiciary tomorrow, the ACLU put it forward, that's going to be talking about data collection for all the different departments: what should be collected, how much should be collected, and publicly posting it. And when we go to do some research, we find that among the police departments that are submitting data to us for posting on the website, some don't have race, some don't have gender.
We've done other evaluations for programs where we don't have gender or race or age. And so there's a lot that can't be done when you're missing that kind of data. So making sure that people are collecting race information is really important if we want to look for racial disparities. And getting behind what those disparities are is really important, because racial disparity doesn't necessarily show racial bias. You have to be really careful about that. The other thing we've talked about, and that I'm really interested in, and we had to start with the Department of Corrections because it's where we could get the data, is looking at the offender characteristics of the population that's been incarcerated for two years. It's really looking at the whole criminal justice system, not just the Department of Corrections. So we've talked to the Department of Corrections about looking at probation data, and they're now reviewing what would be an interesting study around probation. They've never had a study on the probation population before. Or parole. Or parolees. We've talked to Willa about looking at pretrial services. We've done some studies for the treatment courts. So as you start backing down the continuum of the criminal justice system: what's going on in each of those places where we can divert people, or where we don't want to divert people? Who should be incarcerated and who shouldn't be? And do we have programs set up to serve the population that's at that particular intercept point? Treatment courts, for example. I was the coordinator for the treatment courts for 10 years. Do we have enough population for treatment courts? Probably not in every county. Do we have enough population for regional treatment courts? Probably. Are we serving people of color in them? Are we not doing that because they don't have the crimes that fit the treatment courts, or for other reasons? Pretrial services.
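The missing-fields problem above, some departments submitting without race, some without gender, is easy to surface mechanically before any disparity analysis starts. A minimal completeness audit, with invented department names and records:

```python
# Sketch of a completeness audit for submitted stop data: for each
# department, report which required fields are entirely absent or
# empty. Department names and records below are invented.
REQUIRED = ("race", "gender", "age")

def audit(submissions):
    """Map each department to the required fields it failed to supply."""
    report = {}
    for dept, records in submissions.items():
        missing = [f for f in REQUIRED
                   if not any(r.get(f) for r in records)]
        if missing:
            report[dept] = missing
    return report

subs = {
    "Northfield": [{"race": "White", "gender": "M", "age": 34}],
    "Elsewhere":  [{"race": None, "gender": "F", "age": 51}],
}
gaps = audit(subs)   # only "Elsewhere" is flagged, for missing race
```

Publishing a report like this alongside the posted data is one concrete form of the transparency argued for earlier: the gaps become visible instead of silently biasing every downstream study.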
Are we serving people of color? Do we have the data to be able to examine that? Law enforcement diversion: who are they diverting? We haven't even captured in this state a list of law enforcement agencies that actually have diversionary programs. And we know there's a bunch; I keep hearing about them. There's no list anywhere of law enforcement programs that are doing diversion before people are charged with a crime. So for example, someone comes in and they know they're an addict, and the police bring them to the emergency room rather than send an affidavit to the state's attorney's office. We don't even know what they're doing there. Nobody's capturing that information. So as we look at all those intercept points in the system: who's being diverted? And Robin, at one point when we were talking about our next grant, made a good point. Do they serve people of color, or are people of color excluded? Not because anyone needs them to be excluded, but just because of the program: the population, the crime, whatever it may be. So I think we really need to take a look at the whole system. And one of the things we've talked about is out-of-state criminal histories, which right now we can't get. We're working on it. We got it for one study, and we're working on getting it for the offender characteristics study. But people who come in from out of state are much more apt to be incarcerated simply because they're from out of state. So who are they? And who are they in the facilities now? And what do you do about that? Can we have some impact on whether we're incarcerating them wrongly? And coming in from out of state could mean that I'm crossing from New York into Rutland to visit my daughter. It doesn't necessarily mean much. So we have to look at what kind of ties people have to Vermont, and is that impacting decisions to detain, or decisions to incarcerate, or something like that? And then, I'm originally from Boston, and so I moved here.
I don't have an out-of-state record, in case anyone wants to look. But I could have, right? Lots of people move here for lots of reasons. And so that kind of interplay. One thing that I don't know if this panel has looked at, that other jurisdictions have done, is to require a racial impact study for every new criminal justice policy that you're thinking of doing. Just like we do with the Congressional Budget Office and so on: this is what it's going to cost you, this is what the impact is going to be on racial or gender or whatever disparities you want to look at. That's a lot of work. For a particular policy. Right, but, exactly. Just like right now you have to do the results-based accountability, right? Why not at least have that be part of the discussion? There's a bill in the legislature right now. There you go. No one's done it yet. See that? Hot damn. Check that out. Anything else that people want to raise? That's a lot. I know. Sorry. I wish I was smarter. I think it's very impactful. It's a lot to take in. I am available to the public, and to you; you can call me any time, and I will talk, I will rage, I will do whatever you want. Can I just ask one follow-up question? You mentioned maybe 20 minutes ago that a lot of other jurisdictions have decided that traffic stop data is not really a way to determine improper practices. When you say a lot of jurisdictions, do you mean researchers? Who is it? Yeah, so usually it's researchers saying we're not doing the traffic stop race analysis anymore, because it's not capturing it, right? So with San Francisco, let's say, if you're looking at retail thefts, you're not really looking at traffic stops there, are you? So who's coming onto your radar? And what does your policing look like?
And especially for jurisdictions that are cities that have many more, say, foot patrols, where that interaction is happening on the street outside of a patrol vehicle, it's those field identification cards that are really important to look at, to see who's being stopped and what the results of those encounters are. But you're saying that the communities are coming together to say, we're not really getting a lot of useful information from traffic stop data. The community, meaning diverse groups of people from different professions, stakeholders of all sorts, and researchers. All right, I just wanted to make sure. It's not measuring what people wanted it to measure. Because it's not comprehensive enough. Yes, it's not comprehensive enough. So what we really need to better capture is every initial contact with law enforcement. Yes, that would be much more comprehensive. Yes. I just wanted to clarify. So should we stop doing studies based on traffic stops? No, I think it's useful in some respects. For Vermont, you can sort out the DUI stuff. Do I think it's useful for citizens to know that this one person was pulled over twice in the same week? Maybe. I think it's useful for a few things. If this veil of darkness analysis does start to show that there's some conscious or unconscious bias going on, that is useful. If citizens feel better about their policing, or if all of a sudden it becomes clear that, wow, local jurisdiction, you've got a problem, people don't trust you, that's useful. So it's not just to say that there are disparities and this needs to stop. There are other consumers of the research who use it in ways that I may not even anticipate. Maybe useful in a narrower sense than we've been thinking of. Yes. Or in a greater general-good sense than you've been thinking of.
Anybody else? I'm not trying to curtail things; I'm just trying to move us along. And I just wanted to say, Ingrid, I'm happy to send you some of the stuff that we keep running across where researchers are saying this isn't the best way to determine racial disparity. It's a way to determine it, but it may not be the best way, simply because of all the nuances of what goes into decision making. So I'm happy to send you that stuff as I come across it. Thank you. Yes. Yeah, I think we'll be in touch. Well, I'll send you the report. I will say that the report goes into a lot of detail, but it's about 19 pages long, it gives you pretty much all the stuff she said about traffic stops and race, and it's fairly easy to digest. And then if you all have any questions about the report, we can come back in, or you can just call us and ask us the questions. We're happy to answer them. You mentioned that a lot of the stuff is online. Is that report also online? I just got permission from the chief to put it on our website. Yeah, so when I send the report, I'll send the link as well. Oh, and I'll disseminate it. Yeah, that'd be great. And the last thing on that study: because this is paid for by federal money, and I'm not done with it yet, we still have to work out how we would release the data so you can replicate it or find me wrong. The grant will require us, under the federal terms, to release the data. I'm not releasing it just yet, until I've finished all the other agencies. And as the agencies have their analyses done, if they're willing to release them, we'll put them online. Thanks so much. Yeah, thanks for having us. All right, it's 7:30, and as Rebecca Turner said to me back in December, agendas are aspirational.
They're not, yeah, well. So, making an adjustment on the fly, I think what I would ask is if you would present what you gave to us, and then before we break, which is disturbingly not too far from now, I really want to talk as a group about where we're going next. Because I had hoped that we would come in with bullet points, with comments on the bullet points, and that that would somehow, this was my aspiration, give me something to start knocking out some copy that I could get back to you, and we could go through an editing process with it. I wasn't expecting that we'd be there tonight, but I was hoping that we would start moving in that direction. But let's start with what you presented, and we'll talk about next steps just before we break. So, everyone was so good to get their bullets in ahead of time. I started later, and I started trying, and then I realized, please, I'm going to try to provide suggestions at every stage. And then I thought, shoot, let me identify each stage. I used the racial disparities report that we all used as a guide, and I took their table of contents, and I did the, whatever it was, encounters with law enforcement, I think it was. And then I realized, wait, that's not exactly all of Vermont's stages as I see them from a defender's perspective. And so then I started outlining it some more, and then I thought, wait, here's where, I think, the discretionary points are along the way, where race and other biases come into play. And so I started trying to just map it out. This certainly isn't a place where we have an identified problem, right? Let me just clarify. This is more about setting out what I realized: we as a group have never really mapped out what the criminal justice process is, and where we see decision points occurring where racial disparities come into play.
We've used that report as a general guide, but when I was trying to read it closer for tonight's exercise, I realized it's not exactly what I work with day to day, and I realized that I wanted to share the defender's perspective, particularly mine, which is appellate, which is taking a look backwards to find where the error is from the beginning. And this is what I see. And then there's the post-conviction world of appeals that happens after serving the sentence, and releasing records, at the end. I shared this with you all as I was building it, because what I didn't get to was what you actually asked us to do, which was provide suggestions, right? And others did here, and I appreciate you having really absorbed it. But I thought that if we are trying to come up with ways and ideas to fix whatever it is, let's at least have full transparency and say where we think each proposal falls within this, or something similar, right? We can map this, we can draw this, but I don't want to presume familiarity with the system, because we all have varied experiences with the criminal justice system, every day, more remote, hands-on, and there are different perspectives within it, right? And so I think that we can use this as a tool and draw from it, but that's what I wanted to share. Thank you. Thank you. Oh, and as a conclusion: I think that every single point is a potential working point for us on racially disparate decision making. So it's convenient, and I can come up with lots of suggestions. That's the quick answer to, what's the general suggestion? Where could there be fixes? Everywhere. The question is, where do we want to focus? How do we want to focus? Where can we get consensus, so we can come up with something useful for the legislature? I have some ideas as to that, too. Good. From specific proposals about statutes on the books, to tweaking, to wholesale new ideas.
I just, in terms of, again, triage on this: is it your intention to look at areas the results of which cascade further down the list with greater effect than other areas, and work on that basis? I do wish it was, it sort of is like a pipeline, right, because I think it does build; the problems layer onto it, so it's not like people come into it on a clean slate. Someone who gets initially stopped based on underlying implicit assumptions, that exhibiting anxiousness and fear is actually evidence of guilt versus evidence of nervousness and innocence; I mean, that gets exaggerated continually all the way through, right? Charging decisions, hold decisions, all of that. So no, I actually think it layers and compounds at each point along the way. We lost the judge. Yeah, we did. David and me. All right, that's not him. Oh, he's coming out of the men's room. Oh, great, he just came out. Oh, okay. I'm sorry, I'm sorry. We're gonna have to go through this very slowly. But you said you had some broad areas. Sure. Do you wanna share those as well? You know, I can give examples, though not comprehensive ones. Okay. Encounters with law enforcement. We haven't had the chance to discuss the decision from the Vermont Supreme Court that came down, Zullo v. State, which came out in January; the ACLU was in the case representing Mr. Zullo, who alleged he was profiled. But that was a bit of a blockbuster that provides case law recognition.
There are lots of significant holdings in it; it could be another panel meeting moment to discuss it fully. But in terms of what we can extrapolate here for encounters with law enforcement: the Court recognized that there is a potential civil remedy for violations of the state constitution where stops involve racial animus. Okay, a starting point then. I was at House Judiciary in a different capacity, not with this panel, and the members of that committee asked the legislative counsel, who was giving a summary of the decision: well, can we take this further, can we clarify what kinds of factors it is appropriate for a court to consider when determining this legal question of whether there was racial animus, such as to determine an Article 11 violation under the state constitution? The legislature was asking what it could do to better clarify what specific factors are relevant for racial animus. That seems to fit into A, right? What do we consider? Is it relevant that someone is sweating? Is it relevant that someone is nervous when interacting with the police? If it's relevant, what is it relevant for? Is it relevant to the extent of the crime suspected, or to something else? That's where I think we could come up with something useful, because there are a lot of studies and cases and briefing on this issue of what interacting with the police is like: what is normal, and what should be evidence of guilt of a crime. Our recommendation could be, if we can't reach consensus on the individual factors, that the legislature identify them, to give us guidance in litigating these issues, right? The Vermont Supreme Court opened up a whole new universe that is going to be playing out civilly but also criminally, because there was a criminal side to it: the legal holdings relevant to arguing suppression at stops.
When is a stop appropriately extended? Again, what's the reasonable suspicion to justify extending it? We now know it's the totality of the circumstances. Well, what are the relevant circumstances in that totality? Traditionally, we look at it from an objective, neutral viewpoint in the position of the arresting officer. This Zullo decision now allows subjective racial animus to be a relevant factor. How does that play into decisions to seize property, and to arrest or release? All of that is just one section, and Zullo is one case, if we want to take that up, again sharing with the panel that the legislature, the House Judiciary, is specifically interested in it. And actually, a couple of the members approached me later, because I made reference to being part of this panel, curious what we thought generally. And again, I didn't take that as a task for us to take on, just sharing with this panel that there was a lot there. It was Coach Christie, and I'll try to remember the others. Was it Olaan? Olaan, yes, yes. And then the representative from Chittenden, the woman. Rachel? No, no, Selene. Selene, of course. Yes. And the ones who are co-sponsoring this data bill that I understand is being presented, which I haven't reviewed yet. You've got lots of other things. Interpreter access issues: tangible suggested changes. We actually have a great statute on the books relating to law enforcement use of interpreters for the hearing impaired. It's a pre-ADA version, focused just on the hearing impaired, with remedies for when and if proper interpretation isn't provided at that critical law enforcement encounter. Why limit it to the hearing impaired? Why not, in the context of this panel, extend it to language access, language interpreters, right? That's a simple tweak. I can go on and on. I don't want to take up Willa's time.
But, you know, you can go on. Do you want to chime in with any other suggestions? I think my immediate thought reading this was prompted by a comment Robin made, and it's suggested at different stages of this list. Just in section A, my first thought was, to Robin's point about the fact that we're not capturing cases where there's law enforcement contact but they don't ever end up in court at all: we're not gonna be able to read anyone's mind and figure out why and how people are making those decisions, but it would be interesting to see data on who, when it's entirely discretionary for a law enforcement officer, is just not getting cited at all and is automatically getting referred to a community justice center or to diversion without their name even coming across the state's attorney's desk. Do you know what I mean? So I would add that in section A as a point that we need to be looking at, if we can figure out how to collect that information. Maybe I'm just grasping at straws, but there's a part of me that keeps thinking about what Robin was talking about, these reports that study the racial impact, or whatever identity vector you may wanna choose, and wondering if that's a graspable thing that the panel can actually advise the legislature on, and whether that could cover a great deal of the ground we're talking about now, having that somehow mandated. I'm thinking out loud. Say it again. Yeah, I'm gonna try. Remember, Robin was talking about racial impact studies: somehow making the decision that there should be a study, so when you're gonna implement a policy, you actually ask, well, what are the impacts of this? She was talking about how we've done this before; she used the example of Me Too.
And how there have been sexual harassment trainings in most employment situations for decades, and sexual harassment is still occurring. But I was also thinking in the sense of having a mandate, because, as I say, maybe I'm grasping at straws, but there's so much here to cover. I'm wondering if there's something in putting forth those kinds of reports, focused on what the racial impact of a particular decision is, that might be a useful thing for us to recommend. I'm thinking out loud. I'm trying to follow you, and I think you make a good point, but I'm getting a little lost, I promise. It's a way of trying to get at a lot of these issues with a gesture that is maybe a bit of a blunt instrument, but to look and go: okay, these are the decisions being made, this is the policy being implemented, and some group needs to look at what the racial impact of this is going to be. A couple of points that I have on this topic of how we're moving forward and thinking about it. I think that is a good idea; that's one thing that we can do. I also would push back a little bit on one of Robin's points. I can't possibly compete with her on academic knowledge of how you do these things, how you introduce the data sets, how you find comparable jurisdictions and circumstances. But I think that one of her underlying points, a sort of deep point, almost an academic or philosophical point, is that reality can't be counted. And that's probably true, but I also think that's an unhelpful way to approach this. I think that counting, to the extent we can, is still a useful exercise, as long as we do the best we can to push errors out. We can't do that entirely, but I do think it's worth making a conscientious effort to count what we can count. I do think that those efforts make a really big difference in helping us understand what's happening in the aggregate.
And just because it isn't a capturing of reality itself, which is impossible, it is still telling us something useful. It's telling us that there are patterns, and it's telling us that maybe individuals aren't knowingly doing something, but we're having results come out that tell us something very important. For example, the study of incarceration rates of people of color that came out in October from the Department of Corrections noted that about 8.5% of our incarcerated population are people of color, African-Americans specifically, while slightly over 1% of the state's population is African-American. That's useful data. It doesn't tell us exactly why that happened, but I do think that collecting those data points where we can, and expanding the places where we collect them, is a useful exercise, even if Robin's right that ultimately we can't know exactly what's happening in every aspect of the world. If it's a first step toward corrective action, then... Exactly. That's a very useful tool, a very valuable tool, and I think that the academic point that we can't be perfect in these things is not one that's worth our grappling with. It's like: true, but set it aside. Let's do what we can, and that will provide very useful information. Again, I don't wanna say we should wave it away or accept bad results, but let's make a concerted effort to do it well and accept that we can't know the world perfectly through the use of data. Can I add to that just real quick? Go, yes.
Yeah, okay, I know it's not public comment time, but this whole discussion reminds me of the study that came out of the Agency of Education that said minorities and kids with disabilities were disproportionately disciplined. A lot of the criticism was that Vermont is really tiny, it's really rural, so that could mean one Black kid at an entire school, and that's disproportionate. But it's almost counterproductive to raise those arguments, or to be overly critical of the data, because what you're then suggesting is that Vermont is somehow unique from the other 49 states, and that the information isn't useful or helpful. So you almost have to take that kind of philosophical or academic criticism and recognize that we're using that information, and the information from across the other 49 states, to inform us in the state of Vermont about how we could be doing better, instead of suggesting that, oh, well, because the data itself isn't valid, therefore we don't have a problem in the state of Vermont. And I think that is a problem. I don't entirely know, Robin, whether she really means what she seems to suggest, but I would be concerned, because it seems to suggest that the data isn't valid, and therefore we can't rely on information and data, and then we can't act upon the data that we do have, and that is concerning to me. Anyway, I just wanted to share that, for what it's worth.
I want to say what I took away from Robin's comments. I mean, it was easy to feel very discouraged by them, but I also took away that she was saying, big picture, that one way to look at data is through transparency. If we are actually getting data, and people have raised as a concern whether we're getting all the data that should be made public, so maybe that's something we want to address, then if there's transparency around that, it can have impact just from the transparency itself, just from the public knowing what's going on. She talked about a couple of different methods that had failed, and I did not take it at all that she was trying to suggest we don't actually have any problems in Vermont. On the contrary, she was saying: we know we do have problems, but the methods that we're using, or the data that we're collecting, aren't reflecting what we could anecdotally sit around this room and talk about, what we know is happening. And so, like you, sometimes I just feel so overwhelmed by this whole process. But what's forming in my head is that what we're trying to do is figure out how to get from the data that we're collecting, or the information that we have, to what you're talking about: figuring out how responses to that data play out, what the results are, do you know what I mean? And I'm not convinced that we can't do that from data. I just don't know how. Okay. Can I share one small frame? I've got one example. Let's look at page one, and we'll use my outline. E1 to E1A, that's trials, specifically jury trials, and specifically the stage at the beginning of a jury trial, when you're choosing your jury. Jury pools, which are supposed to be representative of the people you live with, right?
And so there is a study, again, something I'm happy to share, that I was exposed to at this racial justice public defender conference in Baltimore. A study that shows that one person of color, one person, not male, not female, one person of color who is a member of the jury pool, not even chosen, just in the pool itself, starts equalizing out rates of conviction between black and white defendants. That is phenomenal. Now, the question to me then, assuming that study is valid and can be applied here: what are the demographics of our jury pools? Do we know? Do we track race data on who's going in? Let's look at Chittenden. We can talk anecdotally and ask the public defender how many times, when she's in trials and selecting a jury, she sees any single person of color in our most diverse county. Anecdotally? And this is gonna sound hyperbolic, but I'm just gonna say, my first reaction to that question is, I can't think of a time I've ever seen a person of color in a jury pool, in a jury that I've been picking. Anecdotally we may not be reflective or accurate; we may miss some cases. But we can present a proposal that we should be collecting race data on the people who are going into the jury pool, because we have a concrete example in this one instance. Another suggestion for this panel: instead of trying to come up with a comprehensive list at every point. Right. One of the morale-boosting moves of the sentencing commission was that we found one little thing; it was quickly presented, proposed, and voted on, and then it was sent to the House and Senate Judiciary Committees as the one piece we had voted on. It was the sentencing commission, right? And that was phenomenal. It was a unanimous, positive example of how we could act consistent with the mandate. And we're gonna have to, because... I'm just wondering. Can I say one thing?
As someone who did not do my homework at all, it won't surprise you to hear that I have not digested everyone's bullet points, but I would be interested in digesting them to identify areas of overlap. I would agree. And maybe that's a way to identify some small pieces that we can agree to focus on and make recommendations on. What was interesting about what Rebecca said about the sentencing commission, the issue that was presented essentially unanimously to the... They just locked the doors. It's fine, it's just the library now. I've got the key for this. Oh, okay. We'll be fine. I bet we can get out. We can get out. The issue, which has not been debated, if you will, for at least the last two sessions, is something I've been presenting to committees for the last two years, and it just didn't go anywhere. And it was kind of surprising that in a group not much bigger than this, they took that issue up. There was further discussion about things they couldn't agree on, and we ultimately said, okay, if we all agree on this, then let's present it to the committee, and we can continue to argue these other issues. I think that's great. It really made a difference. My reaction to Robin was kind of like a roller coaster. At the beginning, I said, oh God, not more data and numbers. And then it became riveting to hear her talk about the different methodologies and the results and so on. And as she went along, I got discouraged. But ultimately at the end, she said, and I think David was saying this too: look, let's take what we can do, and don't necessarily expect it to be the answer, but another piece that will help us understand as we're looking at a bigger picture. So it was kind of interesting. So then that gets us to where we need to be next, which is, as you were saying, to look at the bullet points and start trying to identify areas of overlap. Is that right?
I think, as homework. You didn't do it, right? I did not do my homework. So as homework, you could take all of this stuff and come up with the overlap. Do you want that task? I would take it on, but... Let's all do it. No, I'm just kidding. But seriously, I think that is a good next step. I think that's the next step. I'll also point out that, if we go according to our schedule, that's coming right up, so we won't waste too much time. So we have a next step, good. Everyone's on for that. Good. Everyone will do it. Good. I think gentle reminders help; remind me about that. Okay, I will continue to send out gentle reminders and try to keep them witty. And we will meet then on the second Tuesday, as usual, in March. Do we have a place, David? Not that I'm aware of, but I'll let you know; we might. I will check and let you know. Okay, great. I will stay tuned, and I will send that out. Everyone has all the bullet points that have been submitted to me so far. So you've got them. If there's a question, please get in touch with me; I can get them to you immediately. Well, you know, relatively immediately. And there will be more to consider. So for the homework, in terms of overlap, can you just say what that means? We're gonna be looking at broad areas that seem to be coming up from everybody's thoughts. And in general, I will try to note the concrete proposals here and there that are in there. Great. I didn't do my homework completely, I will always share that, but I'll try to put that together in two weeks. Wonderful. Share it with everyone. So we will do this together in two and a half weeks? On the 12th, yeah. On the 12th, right. 6 to 8 p.m., location to be announced. Everybody, thank you very much for getting in the points that you did get in. I realize I was being a bit of a nag, but I appreciate the effort, and I certainly understand that everyone's quite busy. Thank you. Anything else for new business?
Cause that's basically where we are. We combined all these things very organically. Now, all right. Thank you. We're adjourned. Thank you.