ThinkTech Hawaii, The Rule of Law. My name is Ben Davis, and I'm sitting in as moderator today for the irrepressible Chuck Crumpton. Our topic of the day is in three parts: technology, tricknology, and more, looking at some of the interesting things that have been developing in our world today. So I'd like to introduce our panelists. First, there's Professor Vernellia Randall, who is emerita professor of law at the University of Dayton School of Law, and she really maintains the most amazing research resource on race, racism, and the law at racism.org. And then we have with us Daniel Rainey. Daniel Rainey used to be the chief of staff of the National Mediation Board. He is an editor of the International Journal of Online Dispute Resolution and a fellow of the National Center for Technology and Dispute Resolution at the University of Massachusetts, so he has been involved with all things technology for a long time. Welcome to you, Professor Randall and Daniel. Our topic is technology, tricknology, and more, so let me turn to you, Daniel. In the last five months or so we've seen so much talk about ChatGPT and AI, and how it's going to change the world, and all this stuff. Is this technology, tricknology, what would you call it?
Well, I think your term "tricknology" is pretty applicable, really. I think there are three levels that people are worried about, freaked out about really, in terms of the use of AI. On the very top level, it's that AI is going to be smarter than we are and is going to cause all sorts of damage, and that's not a real fear, actually. In the middle there's the very real fear that the use of AI technology is going to create false information that is so good we can't tell it from real information, and that's going to do real damage, and the bad guys are going to do what they're going to do no matter what we might do in terms of laws and regulations. That's a very real fear. But on the very lowest level, for mediators, arbitrators, and lawyers, there's the fear that it's going to put people out of work, that the AI is going to take over the day-to-day functions that a lot of people practicing in various areas are engaged in, and that's a very real short-term fear. Now, what I find interesting is that there have been a lot of calls for regulation and a lot of calls for creating so-called guardrails around AI, which is a good idea, but most of them have been geared toward de jure approaches, that is, legal approaches in various jurisdictions. Generally speaking, from my point of view, that doesn't work. What is honored in one place may not be honored in another place, and it's very difficult to create a legal approach, a de jure approach, that crosses borders, because one thing we do know is that the current practice of arbitration, mediation, and the law crosses borders all the time. So I'm an advocate of approaching all of this from a standards point of view, and the reason I do that is that people use standards not because they have to but because they work. Take ISO, for example: if I'm looking at a product and it has an ISO designation, that tells me that the people who made that product made it to a certain set of standards that I
can go look up and understand, and that get reviewed every now and then to make sure they're current. If we could approach the notion of AI for the law, and for other types of dispute resolution, as a standards issue, and create standards that are workable and that people want to adhere to; the bad guys are going to do what they do no matter what, they're going to be out there no matter what we do in the way of law or standards; but if we talk about this in terms of best practices and standards that we can agree on, and people can say, okay, I'm going to use those, I'm going to create to those, what that basically does is create an economic incentive for people who are creating AI to be responsible. So if I'm a court system, or a lawyer, or a mediator, my organization can say: we will buy your product and we will use your product if you create it to this set of standards that we think is within best practices, and if you don't do that, we won't buy your product and we won't use it. And I think one of the things that's undoubtedly coming is that we are going to see not just ChatGPT and some of the other large-scale generative AI; we're going to see a whole bunch more very targeted AI for very narrow pieces of law and mediation and arbitration practice, and that's where the rubber is going to meet the road in the short term for practitioners. So that's a long way of saying standards is the way I think we ought to go.

I think the standards thing is important. And one of the things I think, as a long-term technology user, and I have been using technology since 1970 and have grown with the technology, is that with technology, with the current AI, it's garbage in, garbage out. So people will have to be responsible for checking things: you can use it, but you can't just use it without checking. It's going to change the nature of a lawyer's work, the nature of a doctor's
work. It's not going to be a substitute for a doctor or a lawyer; it's just going to change the nature of how lawyers and doctors work. I love it, I am using it to help me dress up my writing, and I really see the benefit of it. But I also see the problem with it, and one of the most significant problems is built-in racism. We tend to think that if something is race-neutral, then it's not racist, and we tend to forget that AI is just people's programming: they program in their biases, and their programming is based on biased databases. So when you ask AI to generate a list of authors in some area, the bias toward white authors is going to show up, because the database is biased, and there has to be a way to address that issue. I'm not sure how you do that, but I like the standards idea, by saying that it has to be proven not to have a racial bias in the areas that you're working in.
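The point about biased databases can be seen in a tiny sketch. Everything below is invented for illustration: a toy "recommender" ranking authors from an archive in which one group is heavily over-represented will reproduce that skew even though no line of the code mentions race at all; the bias lives entirely in the data set.

```python
from collections import Counter

# Hypothetical training archive: 90 records from one group,
# 10 from another. The skew is in the data, not the code.
corpus = (
    [("Author A%d" % i, "over-represented") for i in range(90)]
    + [("Author B%d" % i, "under-represented") for i in range(10)]
)

def recommend_authors(corpus, k=5):
    """Naive 'AI': rank authors by how common their group is
    in the training data, then return the top k."""
    group_counts = Counter(group for _, group in corpus)
    ranked = sorted(corpus, key=lambda rec: -group_counts[rec[1]])
    return ranked[:k]

top = recommend_authors(corpus)
# every one of the five recommendations comes from the
# over-represented group, because the archive is skewed
```

The fix the panel is pointing at is not in this function; it is transparency about, and correction of, the corpus the function is fed.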
Well, I think that's exactly right, and one of the reasons I would take a standards approach is that I helped write the standards that are out there for online dispute resolution, the ones that the ABA uses, that ICODR uses, that the National Center uses, and one of those standards is transparency. The thing about AI, and any of the other things we're talking about, is that the data set is the important driver. I hate to use the term "simple," but the programming is not all that difficult; the programming has been around for a long time. What's new is the availability of, and access to, the huge data sets, and those data sets are themselves biased. I could give you all sorts of examples, but you know what I'm talking about. So one of the standards I would suggest we think about is transparency of the data set, because if the data set is corrupted, if the data set is unknown, if it's a black box, so to speak, then we're at the mercy of that data set. But if the developers are transparent about the data set they're using, transparent about how they choose the information that the AI learns from, that moves us in a direction that gives us a little bit of control over what we're using. I know you're familiar with the poor lawyer who is in front of the judge today receiving his penalty for using ChatGPT without checking the sources. That's one example. I'll give you another example, then I'll shut up for a while. In Virginia there was, well, let me back up and say that we've been using something we've called AI for a long time: a very simple AI in a bounded universe. It started out in e-commerce, where there are very few things that can go wrong. I didn't get it, it wasn't what you said it was, it was broken when I got it, et cetera. And there are a few things I can do to fix it: I can give you a new one, I can give you your money back, I can give
you a replacement. So in that bounded universe you can set up a decision tree that asks the consumer what happened, they answer within a certain range, you ask, are you satisfied if we do this, and give them options within a certain range, and boom, you don't have to have a human being involved at all. We've been doing that for a long time. What's different now is that we're moving into an unbounded universe, or at least in my opinion an unbounded universe. There are laws and there are procedures, but there's also something in decision theory called equifinality: there are many ways to get to an acceptable solution. So we're now operating with AI in a universe where there are so many options that you have to be very careful about the data set you give it. The example I was going to use from Virginia is that there was a sentencing program that was optional for judges. Basically, it took in information about the person who had been convicted, and the judge could choose to use or not use the suggested sentence the system came back with. Well, in one sense you can't fault the program, because what they did was take historical data and feed that historical data in accurately. The problem is that the historical data is extremely biased. There was a study asking whether the judges who didn't use that program were more biased than the judges who did, and, not surprisingly to me, the judges who ignored the program were less biased than the judges who used it, because the program itself was built on a bad data set, a biased data set. That's what we have to protect against.

You know, I think that's an excellent point. One of the things, in terms of the law, is that the legal opinions on which we base our decision-making are structured around a time when writing and book printing were expensive. So I believe, for instance, that we would have better legal
outcomes if fuller facts were included in opinions. Right now, and they may be moving toward this, opinions are structured around the judge: the judge decides what facts are relevant, and the opinion is structured around that. Well, that means a bias creeps in. If the judge doesn't think race is relevant, then race is not mentioned, and so you can't really evaluate how race is impacting decisions, because so many cases don't even mention race. One of the things a broader database would help with is if we said, look, all the facts go in, and then underneath that you pull out the facts in your opinion that are relevant to how you made the decision. That would help researchers, and people like myself, go into the databases and evaluate how a particular characteristic matters. It always frustrated me in law school that race would be mentioned only in some very limited cases, so that you couldn't really tell how the race of the judge, the race of the attorneys, the defendants, the plaintiffs, was affecting outcomes, because none of that information was in the database. And because it's not in the database, when you get to AI, and AI is based on those databases, and someone asks it to generate an analysis of the impact of race in the law in a certain area, the answer is skewed in some way by the way those opinions are written. So those kinds of things will have to change: lawyers and judges and the legal system will need to take account of race, gender, sexual orientation, and religion much more than they have in the past in their reporting.

Let me ask you both a question about that. I hear you both talking about the construction of the AI itself, and I see how standards can work in the ideas you're both making. One of the things I was wondering about is, what about
the market for the AI? If there are a hundred products out there, are there certain ones that will generate certain kinds of responses, become more quote-unquote popular, and end up becoming the de facto standard?

Well, that's why I am a standards advocate. Let me correct your language a little bit: if something wins in the marketplace and becomes the dominant theme in the marketplace, that doesn't mean it's a standard; it means it's popular. But if we could construct a standards approach to the creation of AI databases and data sets, one that is transparent, that has all the other things we might want to put in there, then that gives us a basis on which to choose which of those programs we're going to use. And to the point that was made before, there's a new skill coming down the pike that's going to be very important. It already exists, but it's going to matter even more. Even when you create a good data set, one that actually represents fairly and equally the universe of material out there, that's a huge data set in the law, and so the way you ask the question of the AI is extremely important. Not very far from now, in law schools, you're going to find people teaching how to ask the AI an appropriate question, because that's going to help you get back information that's useful to you, that is going to, not eliminate bias, but at least put a spotlight on bias and let you deal with that information in a way we couldn't before.

AI, the way we're talking about it, is not really any different from the internet research skills we've needed, the Westlaw research skills you needed. If you don't ask the right question, you don't get the right information back, and law schools have not been that good about teaching that. I think they may have gotten
better. I retired ten years ago, so I'm not sure what's happening in law schools anymore, but I know that when I went to law school, even with the whole idea of using a database, I mean Westlaw and Lexis, teaching the research skills necessary to use those kinds of databases was not a high priority.

I was going to say, I feel like the person who learned the horse-and-buggy approach to research, with those little booklets called Shepard's Citations and all that, as opposed to the kinds of technologies people use today. And if I could sum up: I've heard from you "garbage in, garbage out" about the data sets, and I'm also hearing "garbage question in, garbage answer out," so there's a package of things that has to be developed, the skills, including the standards, to really do this right, if I could say it like that. So let me move from the technology part of our discussion to the tricknology part. There's a risk of some tricknology being in here.

Explain to me what you mean by tricknology; I'm not sure I understand that.

What I'm trying to get at is some of those issues like misinformation, people believing things that are not true because the question has been put one way as opposed to another. It's the use of the technology to trick people, to a certain extent. And that made me think of some of the ways the law gets used to trick people, and you were talking before about this case down in Florida that's just come up, under the stand your ground rules down there.

It's been maybe 20 years or so since states began this, and almost 30 states have passed some form of stand your ground rule: a self-defense rule that basically says you can use deadly force, and you don't have to retreat, if you reasonably believe that your life is in danger.
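The discussion goes on to distinguish subjective and objective versions of that "reasonable belief" test. As a toy sketch only (invented function and field names, obviously not legal advice), the rule as just stated, with no duty-to-retreat element, could be encoded like this:

```python
def force_justified(believed_danger: bool,
                    reasonable_person_would_believe: bool,
                    standard: str) -> bool:
    """Toy model of a stand-your-ground justification test.

    Note there is no retreat element at all: whether the actor
    could have safely retreated never enters the decision.
    """
    if standard == "subjective":
        # enough that this person actually held the belief,
        # even if the belief was factually wrong
        return believed_danger
    if standard == "objective":
        # measured against the hypothetical reasonable person
        return believed_danger and reasonable_person_would_believe
    raise ValueError("unknown standard: %r" % standard)
```

The same facts (a sincere but unreasonable fear) justify force under the subjective branch and fail under the objective one, which is exactly the gap the panel describes next.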
Now, the problem with stand your ground laws is that many states adopt a subjective idea of reasonable belief, meaning the person just has to believe it, and on some level it's reasonable for them to believe it, even if it's not factually true. So they have to believe they're going to be hurt, and even if that's not factually true, from their point of view the belief was reasonable. The objective standard looks instead to the reasonable person: you don't ask what the individual in the situation believed, you ask what a reasonable person would believe. Down here in Florida we have not only a subjective standard, but the law has also been modified to put the burden of proof on the sheriff. So the typical sequence used to be: somebody shoots somebody, they claim it was self-defense, they get arrested, and they get to raise their defense at trial, whether that defense is self-defense or stand your ground.

Okay, well, if I try to imagine somebody knocking on a door and it's the wrong door, I've got to think the door-to-door salesperson business may not be a very good business down there in Florida.

Well, it might not be, because what happened was that the neighbor shot through a closed door, and everybody was upset because there wasn't an immediate arrest. But what they didn't understand is that, by law, there could not be an immediate arrest: once someone says they were standing their ground, the sheriff has to collect enough facts to show that the shooting was not justified. And what has happened over the years is that, disproportionately, Black and brown, Asian, and Native people who try to use stand your ground can't use it, because they're told, wait a minute, your fear was not justified, your behavior was not justified. So they get arrested and they go to trial, and they can still try to use stand your ground, but they
have to go through the trial process. They did ultimately arrest the white woman who shot the Black mother, because it was a closed door, a closed metal door with no glass, and she shot through it and killed her. But in terms of this whole question: how do you use the law, how do you research the law, how can databases be used to change the law? How can AI, because that's all I think of these systems as, just databases that can generate something for you, which you then have to use your own brainpower to understand, check to make sure it's proper, and modify in some way, how can that be used to try to get Florida to at least move to an objective standard, and to take out that part about the sheriff not being able to arrest someone? Do you think that kind of use of AI technologies, the statistical type of work, could be a help in this kind of setting, to improve the law, not make it tricky, but actually make it better?

Well, one of the ways, and I think progressive people are not as aggressive as they could be about this, is generating bills. I think AI could be used to generate language for bills that could then be used by organizations to introduce legislation. The work it takes to draft a bill is so enormous that nonprofits often can't do it, because they don't have the financial backing and they don't have the manpower. It seems to me that AI could do that. And obviously, this is why I say work will change: you could not just take that output and file it as-is; it's a draft that could then be used by someone, a volunteer lawyer or someone, to go forward from there. So, yeah, I think
that's one way AI could be used. I also think a good database, and this can already be done, could be used to show, statewide, who's being arrested, who's being charged, who's not being charged, and the circumstances under which they are or are not being charged.

Okay. Any final thoughts, Daniel?

Just a couple of very quick things, since I know we're running out of time. One of the assertions that has been made about technology and justice for a long time is that the use of technology will increase access to justice, and my response has always been: well, maybe it could, but it also could cause problems. If, for example, using this stand your ground example, I simply took the information we already have in the cases, millions of them, as many as you want to throw in there, about how stand your ground cases have been handled, all I'm doing is building in the biases we've already got. I've got to find some way to ameliorate that problem. If I just do that, am I getting access to justice? I don't think so. I'm getting access to a version of what we've already had. So as we go forward, as we look at the AI applications we're going to want to use in the law or anywhere else, this whole notion of how transparent it is, how they're building the data sets, and how they're allowing the AI to manipulate the data sets, is absolutely key, because if we do nothing but build big data sets, we're going to build big biased data sets. We have to do something.

I agree. And even if we build non-biased data sets, we have a racist system, and to some extent AI doesn't eliminate those systems; it operates within a system.

I agree with that. So we have come to the end of another interesting session, where we see on the one hand the possibilities of these technologies, but
on the other the worries about them, and ways that we can try to adjust what the technology brings out to actually work toward access to justice, even as we question the system from which the results are coming. So, one more stimulating moment in the rule of law in the new abnormal. I thank you, Professor Randall, and thank you, Daniel Rainey, for joining us today, and I thank those who are watching for joining us as well, and we wish you all Godspeed. I'm Ben Davis, here in Virginia, so I don't know if I should say mahalo, but I'm going to say mahalo anyway to you folks out there in Hawaii.