Welcome and Aloha. Thanks for joining us at Think Tech Hawaii: rule of law in a new abnormal, whatever that may be, changing literally from day to day. We have with us today two past chairs of the American Bar Association Section of Dispute Resolution, preeminent conflict resolvers: David Larson, now a professor at Mitchell Hamline School of Law and an innovator of the New York online dispute resolution program, which has exponentially expanded access to justice in those courts; and Ben Davis, with background experience in Europe and the US, professor emeritus from the University of Toledo College of Law, and now visiting professor at Washington and Lee School of Law. Welcome aboard, gentlemen. Okay, Ben and David, as Marvin Gaye once asked, what's going on? How about we start by talking about AI, whatever they call it, artificial intelligence, although I think it's just human-assisted intelligence. There's been a lot of stuff going on about ChatGPT, I hear. There's also, I think, some company, DoNotPay, that offered a million dollars to let its chatbot argue, or whisper in the ear of a lawyer the argument for, a case before the Supreme Court of the United States. And there's all this hullabaloo about this technology. So it's kind of fun, right? Looks to me, David, and you correct me if I'm wrong, like there's a lot of marketing going on. Yeah, a lot of marketing going on. There's also a lot of consternation, in the sense that there are limitations with this kind of technology that we can talk about. But one thing it's good at, one thing it was trained to do, is to provide the answers that humans want. It's geared toward a human satisfaction index. So the answers it produces sound good to us. They're not necessarily correct, but they're written in ways that appeal to us, and they sound convincing.
So, at least in academia, there's a great concern that students are going to go to it to write their essays and their short answers. Yeah, I think the real distinction I've seen people talking about is just to let people know that it's not thinking. It is aggregating. It's you, the person reading it, who is doing the thinking, evaluating it with a human reaction. You say, oh, what a brilliant thing that was said here, but the machine's not doing anything except aggregating various things to come up with what sounds good to your brain, right? It's kind of like the way we get played by the algorithms of Facebook and all that stuff. I don't know. It's kind of funny to watch. Well, you know, they've got a technique called reinforcement learning with human feedback, which makes it a little different from some of the other programs we've seen so far. So I think the danger, and the attraction, is this intuitive appeal in the way the answers are composed: they just sound good. And we may have no idea about the substance of the content, whether it's right or not, but it sounds good. So I think there's a real possibility of spreading misinformation. Yeah, the footnotes thing. I heard something recently from someone: there was a faculty meeting at some university where they were concerned about this ChatGPT thing. And the people from the math department, this was great, put in some formulas, to do the kind of mathematical dissecting that mathematicians do. And it looked great. It was gibberish, though. So it was emblematic of what's going on, folks. There's a website called Stack Overflow, for coding questions and answers, you know, for technically trained people.
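The point that the machine is "aggregating various things to come up with what sounds good" can be illustrated with a deliberately tiny sketch. This is not how ChatGPT works internally (it is a far larger neural network trained with reinforcement learning from human feedback); it is just a toy bigram generator, with made-up names and a made-up corpus, showing how fluent-sounding text can be produced purely from "what word typically follows what," with no understanding involved:

```python
import random
from collections import defaultdict

def build_bigrams(corpus):
    """Map each word to the list of words observed to follow it."""
    words = corpus.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, length=8, seed=0):
    """Chain words together by sampling what typically comes next.
    Nothing here checks truth or meaning, only local plausibility."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

corpus = ("the model sounds convincing because the model predicts "
          "what usually comes next and what usually sounds good")
table = build_bigrams(corpus)
print(generate(table, "the"))
```

Every output of this toy is locally fluent (each word pair really did occur in the corpus), yet the program has no idea what any of it means, which is the distinction being drawn in the conversation.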
And just recently, they banned ChatGPT, because people were posting so many answers to the community's questions written by ChatGPT, and so many were wrong, that the moderators had to shut it down. They said, this has created so much confusion, there's so much inaccurate information, and our reputation depends on providing reliable information. So they actually had to temporarily ban it. Oh, that sounds like cancel culture to me. I'm sorry. So yeah, they canceled ChatGPT, temporarily. Well, it's interesting: how can you do anything about it? You know, I think there are ways in technology to watermark things. That's been one proposal, that ChatGPT should watermark everything it produces, so we'll always know where it came from. And I suppose, and this goes way beyond my skill set, that you could design some kind of software that might be able to identify sources. I think that's probably challenging to do, but I don't think it's impossible. Those attempts, I think, are already underway. Yeah, and what's interesting is that the false results go both ways, right? There's the false negative, where the detector says something was not written by ChatGPT when it was, and then the false positive, where a human-written thing is flagged as written by ChatGPT. It's like a test of the testing abilities of the ability tester, I guess. I don't know. It's a whole world. I just wish that students out there would keep in mind that the task of actually thinking it through, trying to write it out yourself, honing what you're trying to write, things like that, is a real skill that you have to learn. I once went fly fishing with Justice Sandra Day O'Connor. I mean, it's just amazing. She was a friend of the dad of a friend of mine, right?
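The two failure modes described here, a false positive (human text flagged as AI) and a false negative (AI text that slips through), are exactly what you would tally when evaluating any AI-text detector. As a hedged sketch, with entirely hypothetical labels and function names, not any real detector's API:

```python
def detector_report(predictions, truths):
    """Tally false positives (human text flagged as AI) and
    false negatives (AI text that slips through) for a detector."""
    fp = sum(1 for p, t in zip(predictions, truths)
             if p == "ai" and t == "human")
    fn = sum(1 for p, t in zip(predictions, truths)
             if p == "human" and t == "ai")
    humans = truths.count("human")
    ais = truths.count("ai")
    return {
        "false_positive_rate": fp / humans if humans else 0.0,
        "false_negative_rate": fn / ais if ais else 0.0,
    }

# Hypothetical labels: what a detector guessed vs. who really wrote it.
guessed = ["ai", "human", "ai", "human", "ai", "human"]
actual  = ["ai", "ai", "human", "human", "ai", "human"]
print(detector_report(guessed, actual))
```

The student scenario discussed a moment later in the conversation is the false positive row: the essay was human-written, but the detector said "ai."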
And I asked her, if you had kids in law school, what would you advise them? And she said two things. First, learn to write. Learn to write, those legal writing class type things, learn to write. And the second was, look at the equal protection jurisprudence of the Supreme Court. That's where the bodies are buried. I was like, okay. So, that learning to write, learning how to craft it: something ChatGPT produces might sound and feel good, but it's not you doing the actual drafting. So learning how to do that is a good road to go down. But people always want to find the easy way sometimes, right? Yeah, you referenced false positives and negatives. Can you imagine the frustration of the student who knows other people are seeking out AI approaches and solutions to assignments, and that person doesn't, they do it themselves, and then they get flagged with a false positive? It's like, are you kidding me? I spent all night writing this thing, and somebody else did it in 25 minutes. It's like, what is it, Gresham's law, about bad money driving out good money? All these people doing these cheating things zip out and wipe out the ones who are actually doing what's supposed to be done. Amazing. Amazing. Now, since we're in Black History Month, I want to tell you, I did try ChatGPT. I went on the site and asked it, I think it was something like, after the Tyre Nichols thing, how can we get the police killings of Black Americans to stop? And it generated, based on data through 2021, so it didn't include the Nichols case, this kind of pablum that was pretty much everything that anybody's ever said about this issue, right?
And I was like, oh, great, you could just send this to your Congressperson saying, this is what you need to do. It's literally everything you could have mentioned, and, thank you, ChatGPT, I'll sound like I know what I'm doing. But what doesn't really come out of it is the political will to do any of the things on the list. They're all perfectly, eminently reasonable ideas, you know what I mean? But is there really the political will to deal with it? I don't know. So for certain purposes, then, would ChatGPT be adequate, sufficient? I mean, was that an adequate answer for the moment? Well, it was sort of a general, middle-of-the-road grab of everything on the internet, I guess, up to 2021, laying out the proposals that have been made. That was the way it was presented, like seven or eight proposals, and over the years I've heard them all; they're evidently reasonable proposals. A friend of mine said the one thing it didn't speak about was bystander liability, which is apparently where, if somebody doesn't do something, they can be held liable for not responding to a person in distress. That's not really an American thing, but some countries in Europe have that kind of liability. So ChatGPT wouldn't see anything like that in America, and it wouldn't be part of the proposals that are out there. But it was interesting that it could do that kind of summary. And maybe it's me being seduced, as you were saying, by the way it was written; I'll fully stipulate to that. I said, yeah, this sounds eminently reasonable. I could say, dear Congressman, please see what you need to do here.
Well, maybe that is a legitimate functionality of this: as you mentioned earlier, the aggregation of information, not necessarily deciding for you what's the best route to go, but aggregating all the possibilities and putting them in front of you. Yeah. And one thing I saw on the dispute resolution listserv: somebody took an old exercise, I think it's called the Ugli Orange. Basically, there are two parties competing for these very special oranges, and one needs the rind while the other needs the juice. To optimize the situation, one would get the rind and one would get the juice and everybody would be happy, but you have to disclose to the other party what your need is, right? The interests kind of thing. It was fascinating that ChatGPT talked about, well, you can look at interests, but it didn't come to that solution. It missed the big one. I actually had to go check the problem again to make sure it was the same one that had that solution. So I was like, oh, there's a limitation. It gives you a pablum of what's wonderful and true, but it doesn't get all the way to the thing, actually generating the option from the interests. I don't know. But so, who's behind the growth of these AI mechanisms, and who are the entities, who are the people? Are they academic? Private industry? Special interests? I'll tell you, one thing that's going to propel the growth of this is that Microsoft has made a reported multi-billion dollar investment in ChatGPT. If Microsoft is getting into it, then maybe it becomes part of the Office suite that all of us have, and suddenly it's as easy to access as any tool we have, as easy as Word.
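The Ugli Orange point, that disclosing interests unlocks a deal a simple split can't reach, can be made with a little arithmetic sketch. The numbers and function names here are purely illustrative, not part of the original exercise materials:

```python
def satisfaction(rind_share, juice_share, wants):
    """How much of what this party actually needs does a deal deliver?"""
    return rind_share if wants == "rind" else juice_share

# Distributive split: halve the orange without asking about interests.
split = (satisfaction(0.5, 0.5, "rind"),
         satisfaction(0.5, 0.5, "juice"))

# Integrative trade: disclose interests, so one party takes all the
# rind and the other takes all the juice.
trade = (satisfaction(1.0, 0.0, "rind"),
         satisfaction(0.0, 1.0, "juice"))

print(split)  # each side gets half of what it needs
print(trade)  # each side gets everything it needs
```

The compromise leaves each party at 50% of its need; the interest-based trade gets both to 100%, which is the "big one" the chatbot reportedly failed to generate.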
You open Word, you open ChatGPT, and that becomes a tool that is so available and so easy to get to, in the suite of tools you use every single day, that it really grows exponentially. So this is an example of a well-funded company that's bought into it, and that's going to propel the growth. Yeah. I mean, for the last 20 or 30 odd years, it's always been the new, new thing, right? This is the new, new thing. Or actually, if I go back to MS-DOS and the IBM personal computer, was that 40 years ago, something like that? That was the new, new thing at that moment. Then Apple was the new, new thing, and then the software, the folding in of software, and Windows and all that stuff. We are watching another new, new thing, and we'll see who it eats, right? Well, the new, new thing that's a close parallel is Watson. Remember when Watson came out and beat Ken Jennings at Jeopardy, after Deep Blue had beaten Garry Kasparov at chess? And we thought, this is it. It's now a sentient entity, and it's going to change everything. And how much do you hear about Watson today? Who do you know that's using Watson? They tried to deploy it in different ways, in particular in healthcare. They thought, this is going to be great; this is someplace that's heavily data-driven, and it's eventually going to be a diagnostic tool, maybe better than humans. But when they tried to use it, they had difficulty accessing the records, they had doctors' notes that go every which way, and they couldn't integrate them. They had a lot of difficulty, and most people who started building health diagnostic programs with Watson have abandoned them. It just didn't work. So that's an example where the promise was so great.
And at least with that technology, it really never came to fruition. Well, that's one of the reasons I salute you, because with that whole New York State project, what an amazing thing you did in terms of taking something and making it work. You were going from, we would call it, a greenfield, or whatever the exact term is; there was nothing really standardized, right? And it's like, let's figure out the first level of how we standardize things a bit, and then how we move to the next level, to something that's going to be accessible. That's a huge process that a lot of these folks looking for the fast buck are not going to go down. I mean, getting those doctors to standardize their presentation. I would say there are some things that I think have been successful, like, in the pandemic, the telehealth aspect: being able to contact your doctor about this, that, and the other and get a response back, and apps and things like that, getting results that go from a test to an app to your doctor, with the doctor commenting in the app. Those kinds of things are maybe building the future, not trying to rebuild the past but building into the future. At least for me, personally, I like them and how they operate for me. Yeah, you know, these things Ben mentions burst on the scene, but the reality of making them work in a sustained way is a whole other level that sometimes isn't part of the headline. And that's been the situation in New York, too. It's been a process, and you run into challenges you didn't anticipate, because no one had really ever done this before. For example, when you talk about online dispute resolution, defendant engagement is a huge challenge.
How do you get the defendant to engage? Because basically their position is, if you can't find me, I'm never going to have to pay. So why would they engage voluntarily? I mean, some people want to resolve their dispute and get on with their life, but other people just want to avoid it. And getting defendants engaged in an online environment, when they say to themselves, I don't know why I would do this, is a challenge. Yeah, human nature, human nature. I don't know. Can I switch to another topic, just for some fun here? I don't know if you, or people out there, have been watching this stuff about the College Board's Advanced Placement African American Studies program, which ran into some headwinds with Governor DeSantis down in Florida. The College Board has now amended its version of the course. A couple of different things struck me. One was, it sure shows what capitalism is. In other words, if you want to get these AP courses into Florida, you've got to do it this way. Oh, principles? What are my principles? Let me find a way to get it into Florida. So let me move this and adjust that. Is this okay, Mr. Whoever-doesn't-like-this-stuff? I understand it as a business model; I just found it kind of sad, because I saw the list of all the various authors who had been eliminated, trying to comply with the complaints the state of Florida was making, and it was really quite disturbing. Okay, that's one thing. But the second thing is, I've heard people say things like, we don't do CRT, or we don't do DEI, diversity, equity, and inclusion. That's another big thing going on down in Florida. And it occurred to me that this is like the 2023 version of the 1950s: we don't serve colored people here.
I mean, it really feels like that, if you can have that frame in your head, if you're old enough. It's like, really? And I was happy to see a video that came out recently from an 11th grader, God bless her, a young lady who was taking four AP courses, was looking forward to this class, and was really upset with the governor for playing these mind games with what the content should be. A nice 11th-grade young white lady in Florida saying, hey, don't mess with my education with your political games. And I think that's really, really encouraging. But I'm just not sure there are enough people heading institutions who are willing to stick their necks out, so to speak, when you've got a bully who's the governor, right? Yeah, well, it goes to the heart of academic freedom. The idea is that, hopefully, through education you're going to be exposed to diverse viewpoints. And there should be a kind of human evolution here, as we hear different voices, and some resonate and some don't, but we learn. And that's what we lose: that diversity of perspective and our ability to grow. That's the most frightening thing to me. Yeah. Oh, yeah. I agree 100%. It's like the party line. Remember that? That's an old term I can vaguely remember from talk of the Stalinist days: what was the party line, the changing party line in authoritarian regimes, how people would go from X to minus X in a second depending on what the party line was. And the idea of students in America basically being taught party-line versions of things...
I find that really, really troubling, because the ability to just think, and see different angles and different ways, is something that sometimes isn't really encouraged, because it can make people disruptive, right? But on the other hand, you get a lot of creativity from that, which leads to whole new things, as opposed to cookie-cutter responses. And you end up with our Congress. I mean, I've seen some recent comments being made about this whole debt ceiling stuff; it sounded like ChatGPT had generated the answers. It's like, Republicans defending what they did under Trump by saying, well, we only increased spending by $10 billion in the four years he was in, as opposed to the $400 billion increase by... and I said, God, I heard that from that guy, then I heard it from this other guy, and this other guy. I was like, ah, the talking point. Just like a ChatGPT talking point kind of thing. Anyway, I wanted to raise that to say I hope people will take up the cause of academic freedom, including the various professors, et cetera, possibly even the presidents of the universities: the idea of leaving the academics to academia, of preserving tenure. There's a whole issue about tenure being looked at again at these schools. We've been through this. When I was at Toledo, I'd heard the story of some guy in the '30s who was writing a lot of labor stuff, and there were a lot of business-interest types who were really upset with him, at a state university, writing pro-labor things. And the university was categorical about academic freedom: leave him alone, we're not going to step in on him. And that kind of spirit, and this is not the '60s, I'm talking the '30s, okay? It's something that I hope there is still some spine somewhere for. Not sure, but I hope there is.
I taught in Mississippi for three years at Millsaps College, and I don't know how it came to pass, but somehow I found myself at a breakfast with Jerry Falwell of Liberty University. He was talking about Liberty University, and I remember him talking about his faculty. He said, you know, when I say stand, they stand. When I say sit, they sit. My faculty is in line. I thought that was just chilling. That was freaky. Yeah. If managing your faculty is not like herding cats, you've got a problem, you know what I mean? I think, personally, when you have that diversity of views, there are people you disagree with among the faculty, and people write about all kinds of things, but that fertile environment for the mind comes down also to the students, and to their ability to be better at whatever it is they're studying, to come up with new ways to think about things. At least, that's me. There's a really important connection between these issues: we've had a kind of informational war between clear, intentional disinformation and fact-checking. But what we're talking about here is AI that offers not just fact-checking but analysis and evaluation that take into account different data, different perspectives. And the whole point of the Ron DeSantis approach, the Trump approach, was that a single perspective and set of values would dominate, to the exclusion of all others, even their consideration. They can't even go into the analysis and the evaluation. So I think what you brought up is really scary. And in our last minute or so here, David, thoughts about anything? Bill Gates offers some possibilities. Can you imagine, though, if the Koch brothers, Mike Lindell, or others took that same approach, gathered those same resources, and made it a competition?
Well, I think it's incumbent on us to do what hopefully we're doing with our commentary on social media: telling people that anybody can be a micro-multinational, that you can post something and have global reach today in ways you couldn't 20 years ago, because of the internet. So we have to pay close attention to the source of the information, and even if it's written well and convincingly, we've got to be a little bit skeptical of everything we hear. We'll just take that same approach to things like ChatGPT. It's just another form of technology, of distribution and aggregation of information, not unlike the social media we've been talking about. We've got to take the same approach to this kind of technology. And that's a great point, because the question then becomes: is it going to be a multi-perspective problem-solving analytical tool and resource, and maybe take the place of fact-checking? I would love to see a debate where you put candidates up there, and at the end of their statements you get a short chatbot or AI summary of what a multi-perspective problem-solving analytical evaluation would look like. Yeah. You could have a BS screen flashing as they talk, in the comments. Because, again, is it going to be a single-perspective dominant tool, or a multi-perspective problem-solving analytical tool? Right. Well, it could also do it like at the state fair, where you had the person sitting on the chair above the water, and if the BS sign flashed, they'd be dropped into the water. That would be one way. There have to be consequences. So there you go. We've all sat in that chair before, and we've all hit the dunking booth. So, Ben, David, thanks so much. Any last words on AI and where it needs to go? Just that it's neither intelligent nor artificial. It's all human in the background.
There's always somebody pulling the strings, like in The Wizard of Oz: with the prompts, with the choices, with the norming levels, all that. So don't think it's a "the computer told me" kind of thing. I guess my closing comment is: pay attention. You may not have a computer science background, but don't think this isn't happening, and don't think that people aren't taking control of this technology. We need to pay attention to what's happening, what's being implemented, and what is behind the technology. We can't just duck our heads. Gentlemen, thanks so much. This has been a perfect example of what AI at its best, in a human dialogue, might look like. Can it be the kind of multi-perspective problem-solving analytical resource for us humans that we need it to be? That will be the challenge. Thanks so much. Welcome to Think Tech Hawaii. Come back and rejoin us; we'll be back in a couple of weeks. Thank you so much for watching Think Tech Hawaii. If you like what we do, please like us and click the subscribe button on YouTube and the follow button on Vimeo. You can also follow us on Facebook, Instagram, and LinkedIn, and donate to us at ThinkTechHawaii.com. Mahalo.