much for joining us on Think Tech Hawaii. Much as I know you folks love to hear and think and listen and talk about the politicians, we're going to skip that today. We're going to talk about generative artificial intelligence, the tools it offers, the risks it poses, and some thoughts on how and whether some kind of constructive balance might be possible. We have with us David Larson, former chair of the ABA Section of Dispute Resolution and esteemed professor at Mitchell Hamline School of Law in the Twin Cities, and Jeff Portnoy, a leading First Amendment lawyer and senior partner at the Cades Schutte firm. Not everyone remembers, but Jeff has done his time as a color commentator on University of Hawaii sports as well. So, gentlemen, what do you think? We'll start with you, David. What do you think are the most important points, the priorities, to keep in mind as the AI revolution inundates us?

There's a lot to talk about. If you haven't played with AI at all, I encourage you to do it. I haven't used it a lot, but I have played around with it, and I got some surprising results. For example, I asked AI to write a story for me about a Swede who didn't like lutefisk. If you know Scandinavian traditions, you know lutefisk is deeply rooted in Scandinavia, so a Swede who doesn't like lutefisk is turning his back on tradition. And it wrote this actually sweet little story about this individual who didn't like lutefisk, and his neighbors would try to encourage him to try it, and they were all very respectful. Then near the end of the story, some new neighbors invite him over for a surprise dinner, they prepare lutefisk in a different way, and he decides he likes it. Happy little ending. But it actually was written as well as typical TV fare, and all I could think was, wow, the Writers Guild is in trouble, because to generate a simple story, I thought it did a pretty good job. So then I asked some other kinds of questions.
Now, I asked about affirmative action and about Students for Fair Admissions, the recent case, and it couldn't answer. It said, I don't have enough recent information. So it could not respond to that. Then I said, can you tell me what cases I should have on a labor law syllabus? What are the fundamental cases, the greatest cases? And it produced a syllabus, and for the most part I was getting the big cases you'd think we would get. But then I had a couple, like environmental cases, that had nothing to do with labor law. So I guess the first thing is that there's great potential there. For me, what I got was that the creative side of it seems very promising. The analytical, data-driven side was a little more worrisome; it didn't seem to be as discerning about what kind of data was reliable and accurate. So my first reaction is, approach it with caution. Don't rely on it. We all heard the story about the New York attorney who had ChatGPT write his briefs and turned them in. The briefs actually cited what looked like legitimate cases, with proper citations, but none of the cases existed; they were all made up. I know he was brought up for ethical proceedings; I don't know what the sanctions were. So I guess my starting point is: recognize that there's great potential, but be cautious.

So, Jeff, have you tried it? Have you tried anything on ChatGPT?

No. So I come at this as a complete neophyte. I did have an interesting little encounter with it, kind of anecdotally. I had a service technician come over to the house to hang some photographs; after I had some work done in the house, I had to put the pictures back up. And we spent some time talking, you know, got friendly, and he told me about his background and what he's doing here, et cetera, et cetera.
And we started talking about being a lawyer and, you know, whether things are being taken away from lawyers. And I said, I don't know, not what I do. And he said, well, you better be careful, because you're going to be out of business soon, friend. And I laughed. And he said, I'm going to send you something tonight, and you'll see. He has a couple of properties, apparently, where he's a landlord, and he was having trouble with a tenant. And he sent me a letter written by AI, under a fictitious lawyer's name, to the tenant, threatening them with all kinds of sanctions if they didn't pay their rent. And he says, this is what it's going to do to a lot of what you guys do. So, you know, I am, as fans who watch me on your shows occasionally know, a computer illiterate. I'm not even sure what non-artificial intelligence is. But I think there's a lot of hope, and I think there is a lot of potential damage that has been and will be done by AI if it is not, and I'm not sure anybody knows how to do this, properly controlled and monitored. And I know, having spoken to two professors, that there are real problems at the university level, with students literally never coming to class and then turning in papers that they probably didn't even proofread. But you know, it's interesting; there's always someone else willing to step into the void. I'm told now that there are AI sites, or whatever you want to call them, which can tell you whether a previous AI document is accurate. So they're monitoring, and competing with, each other. So that's all I know at the moment.

And that's interesting, because we may be getting situations of unauthorized practice of law and medicine, you know, as people produce documents that sound very compelling, very realistic. But yeah, that's interesting.
As kind of a first draft, you know, I think for a lot of lawyers the idea is that they're going to research a subject and maybe ask ChatGPT or Bard or something else for a first draft, see how it reads, and check it all out. I think it can probably be helpful for things like that. But yeah, there are lots of problems. Artificial intelligence goes beyond just ChatGPT; it goes into a lot of different areas. It does a lot of decision making for us, and one of the concerns is that when we defer decision making and direction to AI, there may be bias involved. There's a lot of statistics and research about the fact that AI is dependent on who creates it, and all humans have bias, so there's going to be bias in AI from the humans who are creating it. And if you look at the demographics of the primary researchers, they tend to be white, they tend to be male, they tend to be from higher socioeconomic classes, and they're not people with disabilities. It's a pretty homogenous group. Now, you can't make absolute generalizations, but to the degree that a homogenous group has certain sensitivities and inherent implicit biases, that's going to show up in the AI. And that's problematic.

Well, I know our law firm is prohibiting the use of AI, and if someone's doing it and we catch them, they're not likely to be in the firm very much longer. Like you, I know about that case in New York. I've read the decision of the trial judge, which is very pointed and very direct, recommending very significant sanctions and turning it over to, I think, the Bar Association in New York City; I'm not positive where. But it's an example of what can go wrong, where a lawyer just completely gave up all of his or her responsibilities and never even bothered checking to see if the cases cited even existed. And I think that's the risk you take.
But you know, if you're a senior at a university and you need to write a paper to graduate, and the night before the paper is due you haven't gone to class for four months, then you're going to go to AI and you're going to take your chances. And it's going to take a very smart teacher, a very smart set of eyes, or maybe even another AI site to say, this is garbage, this person didn't write this. It's not only not written by that person, but it's not accurate. That's the risk you take.

Yeah, we're hoping that there are going to be maybe watermarks or something to somehow identify it. I know what some professors are doing is requiring some in-class assignments, where you don't have the opportunity to use AI. That's going to set a baseline. So you have people do some in-class assignments, and then suddenly on assignment number three it's this wonderfully eloquent piece. That's at least going to raise some suspicions: can you explain to me why you had this tremendous improvement this week as opposed to last week? So I think there are some things we can do, but yeah, that's a real concern. And Jeff, you talked about the fact that your firm is monitoring that and prohibiting that. That's probably a good idea, because we don't know where that data is going. It's not really clear what protections there are for it, or who could access it. There was one case where it was revealed that some ChatGPT users were able to access somebody else's research and the results they had. So I think there are some very legitimate privacy concerns: if you do use it, all of that is getting stored. The things you ask it, the things that get produced, you do it in your name, and those records are being kept. Where they're being kept, and with whom they can be shared, we're not sure. So again, right now we're in these very early stages, where some important questions need to be answered.
Well, I'm sure that there's tremendous potential for AI, maybe solving the secrets of the universe and whether there are aliens living on our planet that we're not aware of. But there are other things that I think are best left to the human brain, and that's where I think we may be going off the highway, at least onto the shoulder, if not into the ditch. We see all the positives and all the ills of social media; that was the big thing, and now AI has moved into that field. With a lot of advancements over history, there's a lot of good, but unfortunately sometimes there's some bad, and I think AI is falling into that category. In the law, for example, as you know, technology is always two steps ahead of the law, so by the time the law catches up, the technology is obsolete. So we'll see how the law deals with the issues of AI. But as I say, it has real potential to solve some major problems in the world, and it also has real potential to create those problems and exacerbate them.

It certainly can. What it can do is digest and interpret data really quickly, massive amounts of data, much faster than you can manually. So that's a great tool. It can be a great time and resource saver for repetitive tasks; it can take over lots of those kinds of things. You're seeing that all the time already. In customer service, when you call in to different merchants and retailers, you think you're talking to somebody, but maybe you aren't; it may be an AI-directed conversation. So those things are happening already, and I think it can be a real productivity saver. But on the flip side, people are going to lose jobs. One reason that unemployment has gone down these past couple of years is that there's been pretty explosive growth in relatively low-skill, low-wage jobs. There are a lot of those jobs now, and I think those are the jobs that are probably most vulnerable right now to artificial intelligence.
So to the degree that those jobs are getting replaced by technology, we may have a lot of economic problems in terms of unemployment.

Who even knows if I'm on this call? Or you. Chuck may or may not be on the call. I mean, we've seen what people can do with AI by taking people's faces and having them mouth things that they never said. We've seen the evil that that can cause and has caused, and it's just the beginning. It'll be robot talking to robot, but it'll look like you and I talking to each other. People can do that right now; I've seen a whole documentary on it. It is really, really scary. I may not be saying one thing that I'm saying right now, and it may not even be me, even though it's my face and my mouth moves, when we see what's happening with animation. Unfortunately, it's happening now throughout the Internet. We know what happened with Obama, and how do you put a stop to that? What do you do? What if somebody phonies up the president, getting on AI and saying that a nuclear bomb is coming, and panic ensues? I mean, we're only one step away from something like that happening.

Yeah. Well, you know, we're approaching another presidential election. And you're not supposed to bring that up. No, you're not supposed to bring that up today. Well, I'm not going to talk about the candidates, but I'm going to talk about the threat of those deepfakes that Jeff's talking about, where you're going to present somebody, very convincingly, saying things that they've never said. And our ability right now to fact-check and monitor that isn't very good. So I think there can be some serious chaos as we go into this, our current election season.
You know, another thing that's a little worrisome, maybe more than a little worrisome, is that another thing they're doing with AI is autonomous weapon systems that can be deployed and that ostensibly can make the decision of what targets to hit. Can they determine, is that a civilian site? Is that a military site? Is this somebody who's a combatant? Is this somebody who's injured on the field? Is this somebody who's trying to surrender? Can it make those kinds of determinations? But we clearly have autonomous weapons, and we're moving much more toward a world where robots can take on not just warfare but all kinds of dangerous tasks. I mean, to have robots go into the coal mine, that's probably a good thing; you're going to save some lives doing that. But the idea of robot warriors who can actually identify and eliminate targets is kind of disturbing.

Well, you bring up the potential economic problems and the loss of jobs. You know, I was on the phone today with a call center; I'm not sure I would have been better off with AI. But having said that, there are thousands and thousands of people, many of them from the lower economic strata of our society, who have those jobs, manufacturing jobs as well, that are all being taken over. Now, is it new? No. We went through the Industrial Revolution, and we go through other revolutions every time the century mark changes. But it's very difficult to really figure out what the long-term ramifications are, and certainly we're not going to be able to solve it here. I don't think anybody has figured out yet how to solve it; it's just going to be trial and error.

Well, in terms of job replacement, we know that new jobs will be created, and there are speculative assessments where people are predicting how many different jobs will be created. Yeah, that's fine.
But it's not going to be a one-to-one situation, where if you are currently flipping hamburgers you're going to be able to take the next high-tech job that's available. There are going to be some training issues as people in low-wage jobs are replaced. Are they going to be qualified to take the new jobs that are created? And all that's going to do is really increase our social and economic inequalities.

Yeah, I don't have much more to add here, Chuck. I mean, it's a brave new world, and Aldous Huxley was correct, you know, how many years ago? Sixty. And he never even envisioned this. But I guess you can argue it's all in the name of progress.

Well, another thing to think about is that China, which is a very authoritarian state, is using technology in all kinds of different ways to control its population. They have a very extensive facial recognition program, and it's not just in public buildings; it can be in open public spaces, in schools, on streets. They're putting it everywhere they can. What that allows them to do is monitor everybody's movements and everybody's relationships and everybody's life, and they get a pretty good idea of exactly who you are and where you go. I think most Americans are very uncomfortable with that idea.

So what are some of the things that, well, let's back up a little bit. Do we understand how generative AI really works? Is it building word sequences? Is that the way it works?

Well, that's the problem, and that's probably a lot of the challenge when we have social scientists, like lawyers, trying to regulate hard sciences. Can we do it? I would say right now, no, laypeople don't really understand how this happens. We hear the word algorithms; do we really know how those work and how they're built? In a simple way, maybe.
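A toy illustration of the "building word sequences" idea raised above: systems like ChatGPT actually use large neural networks trained to predict the next token, not the simple lookup table sketched here, but a tiny bigram chain shows the same basic loop of generating text one likely next word at a time. The function names and sample corpus below are invented for illustration.

```python
import random

def build_bigram_model(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = {}
    for current, nxt in zip(words, words[1:]):
        model.setdefault(current, []).append(nxt)
    return model

def generate(model, start, length, seed=0):
    """From a start word, repeatedly sample an observed next word."""
    rng = random.Random(seed)  # seeded for repeatable output
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:  # dead end: no observed continuation
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the model predicts the next word and the next word after that"
model = build_bigram_model(corpus)
print(generate(model, "the", 6))
```

Every generated pair of adjacent words is one the model has seen before, which also hints at why such systems can "make stuff up": they produce locally plausible sequences, not checked facts.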
But yeah, I think we're still at the stage where the regulators, or at least the ostensible regulators, aren't well positioned to actually control this.

Well, Jeff's example is a good one, the one about the lawyer who put in a ChatGPT analysis that had cases that didn't even exist, without bothering to check that. Apparently, that is what happens: if it hits a break in its linguistic pattern or chain, it will just make stuff up to fill it in.

Well, I guess we're not going to need any judges anymore, either, right? We can just submit the two briefs to artificial intelligence and ask it to issue an opinion.

You know, that's an issue being discussed now in the dispute resolution field, the ADR field: are we going to have ChatGPT and other artificial intelligence platforms resolving private disputes? Are they going to be doing divorces? Are they going to do property disputes, the kinds of things that people don't go to court for, haven't been going to court for? Is artificial intelligence going to take that over? I think that's a really interesting question. I actually did put in a question, and it's not true, but I said my family members are fervent Trump supporters and Biden supporters, and they are frequently in arguments. What can I do to bring them together? And it brought out a whole sequence of recommendations, what somebody experienced in the field would say are kind of predictable dispute resolution steps. But if you don't do dispute resolution, if you haven't studied it and you're not a mediator or an arbitrator, I think you'd find this explanation and these suggestions very helpful. So I don't know if that's going to start displacing independent neutrals. I wrote an article way back in 2010 about artificial intelligence, robots and avatars, and the demise of the human mediator, and that actually got a lot of downloads; it's still getting downloads on SSRN.
But that, I hope, was kind of a prescient article looking ahead to what might happen. I mean, I don't think it's beyond belief that two parties could sit down before a computer, agree in advance to present their arguments and their cases to some artificial intelligence device, and agree to be bound by the decision. I can see that happening.

Certainly cheaper and faster. That's one of the advantages; with artificial intelligence you can get that quick answer. And if it's programmed properly, you can get a lot less human error too. Certainly that's an advantage if we talk about manufacturing processes. But if you're doing justice, for example, one concern you would have is that to the degree that we rely on data, and to the degree we believe that some of this data may be biased or have some implicit bias, that bias is not just going to be perpetuated; it's going to be exponentially increased. It's just going to get worse. And that's, I think, another concern about decision making by artificial intelligence.

So what if we took all of the presidential candidates, and we fed in all the information and data that we have on each of them, and asked generative AI to spit out its predictive presidential performance evaluation for each of them? Would it affect anybody? Would it change any minds? Would it be helpful?

Maybe. I think the answer is going to be what 40 percent of the population believes already.

The answer is probably yes, if they like the answer. I mean, that's the problem: you want to hear what you want to hear, and when you hear it, you don't care if it's true, false, or completely made up. But that's a great thought. You know, you mentioned the word intelligence, and having watched the Republican debate yesterday, I don't think those go together.
Well, and you've got six of the eight candidates standing up there saying they would back the former president even if he were convicted of any of the felonies he's charged with. That just sounds like classic cult behavior: I can no longer think independently; I'm so wedded to our leader that I'll follow, even to my death. I think there are probably people who believe that, and that's very unsettling. But in terms of right now, if you feed it in, is it going to change people's minds? I think it might be interesting and kind of fun to do, but I don't think we're at the point where it's going to change people's minds as to how they're going to vote.

But I think it'll reinforce them if they like the opinion and the conclusion, and they'll discard it if they don't. I mean, that's what's happening in lots of other areas besides artificial intelligence. All you have to do is watch cable news and know who they're pandering to. So, you know, it's a very, very difficult, I think imponderable, area that is just going to continue to develop, and we'll just see whether it brings more good than harm. And right now, I don't know. I really don't know. I think it's an unanswered question.

Well, you know, there's money to be made, and if history is any lesson, if there's something where there's money to be made, it's going to be done. So I think this is going to be done; it's inevitable. So, challenging as it is, I think we're going to have to do everything possible to try and get ahead of it and try and regulate it. I think one of the things we can try to do is make things as transparent as possible. How does this work? What criteria is this process using? Who's putting it in?
You know, at least keep us informed as best you can as to what's happening behind the curtain. I think that's one thing we can probably certainly improve on.

Well, I hear you. We're having enough trouble figuring out how to monitor and keep track of what's going on on the Internet, so it's going to be interesting to see what happens with this. But it's a fascinating topic. And again, I admit I'm kind of unintelligent when it comes to it; I just don't have any interest in it. But I was fascinated by what this service person sent me. It looked like it was written by a lawyer who had three years' experience handling landlord-tenant cases.

Yeah, it's a powerful tool. And as we wrap up, for last thoughts, one of the questions is: as compared to what our media provides in the way of information, perspectives, and analysis, maybe we're not so badly off with something like generative AI, with a more comprehensive and possibly more objective data source and analysis.

Well, I hear what you're saying. I think it's a very valid point, but I just don't think there are any answers right now. Like anything new, there's so much more to be learned, and AI is just moving so much more rapidly than other technological inventions over the past hundred years. It's outpacing, as I say, the law. It's outpacing regulators. It's outpacing the ability to tell what's real from what's not real. So I repeat what I've said over and over: I think it's got potential for great good, and I think it's shown potential for great harm.

Yeah, and I just want to encourage people to stay engaged. Difficult and challenging and unsettling as it is, if we don't learn as much as we can, at least trying to be involved with it and involved with the regulation, someone else is going to do it, and they're going to do it for us. And I'm not sure I'm going to like the person doing it or agree with them.
So I'm going to work as hard as I can to learn about it and at least to stay engaged.

Hopefully all of us will be able to keep in mind that we can make it a tool that serves communication and understanding rather than one that directs and dominates or abuses. There may be a balance.

Now, I have to add this. I know we're not supposed to, but you mentioned melting Minneapolis. All eight Republican candidates were asked to raise their hand if they believed in global warming. Only one, Nikki Haley, even had the guts to raise her hand halfway. Right. And she got booed. Yeah.

Anyway, great seeing you guys.

Well, that's what I asked the AI: is climate change real? And it said, definitely, and then it went through a long explanation of why it is. So that was kind of gratifying.

Well, that's the reason, then, for 30 percent of the population not to believe AI. Maybe that's good. I don't know. Gentlemen, thanks so much. All right. Bye, everybody. Thanks for joining us. AI and our future. Take care. Aloha.