Hello, everyone. My name is Shelly Reed and I am the manager of the Legal Services National Technology Assistance Project. Thank you for joining us today. We're going to be discussing the ethics of AI, and my co-panelist, Ellen Samuel, is here from Just-Tech; she's going to be providing her information as well. So thank you for joining, and I hope it's very informative for you. If you have questions, you're welcome to put them into the chat and we'll make sure they get addressed throughout the presentation or at the end. So let's get started. I've already introduced myself, so I'm going to turn it over to Ellen. Oh, I guess I need to share my screen so that you can see the slides.

Thank you, Shelly. While you're doing that: I am Ellen Samuel, the director of consulting at Just-Tech, and we are a justice-focused legal tech consulting and managed service provider. I am a licensed attorney in Illinois, and my background practicing law is in legal aid. I worked at Prairie State Legal Services for a long time, where I was a supervising attorney of the Telephone Intake Service, and I'm also a Certified Information Privacy Professional. I'm so excited that we get to talk about the ethics of AI today.

All right, so this is our agenda, what we're going to be talking about today. Hopefully you all saw Shelly's great presentation from a few weeks ago about defining AI, really an introduction to AI. This is going to delve a little bit deeper into that. We will touch briefly on the definition of AI and some of the many acronyms in the AI world. If you haven't seen that presentation, it is available on the LSNTAP Zoom channel, and I really highly recommend that you go check it out after you're done here. We'll also talk about understanding how AI works in legal practice, and then legal ethics and AI; we'll talk through some real-world uses and examples.
We will hit on privacy and data security, talk about some best practices, and then conclude. And as Shelly said, please, if you have any questions, pop those into the chat. We have lots to talk about, so pardon us if we go quickly; we want to make sure that we hit all of it. We're going to start with some definitions of AI, because there's a lot there, and if you have a better understanding of what AI is, it will help things make more sense. Artificial intelligence really just means that a computer is able to mimic or simulate human problem solving and learning. Machine learning is using AI in a way that allows computers to improve how they work by learning from the data that they've been given. Generative AI is a kind of machine learning that can produce new content based on the data it was trained on, and the content can take many forms: images, text, voice. Lots of different things can be created through generative AI. GPT is OpenAI's large language model, and that's what we've been hearing the most about over the last year. A large language model is simply a model that's been trained on a very large amount of data to recognize human language and then create, or generate, written human language. ChatGPT is OpenAI's model that was built to interact in a conversational manner; that's where the "chat" in ChatGPT comes from. Other companies have models like Bard, Bing, and Llama 2. Then there are models that are trained to produce imagery, such as DALL-E and Midjourney, models that are trained to create voice from text, and many, many more options being created every day for using generative AI. And I am going to throw a link to the webinar that Ellen mentioned into the chat, so you have a direct link right to that Getting Started with AI webinar if you're interested in hearing a little bit more on definitions.
And just to jump in real quickly: all of the pictures in this particular presentation are actually generated by AI, so these have never been seen by the world before. Just a kind of cool demonstration of what AI can do. And you'll note that Ellen has also included the prompts, what she put into the AI system, in order to create each image. One nice thing about this is that if you have to create images for, say, social media or presentations, the images created by AI are copyright free, so you don't have to worry about that when you're using them.

For lawyers, there are many applications in the legal field. One of them is alternative dispute resolution. In that field, they're using AI for predictive analysis; AI can summarize large data sets, generate insights, and guide parties to understand the weaknesses in their cases. There are also automated negotiation systems and enhanced case management. These can help identify critical elements, flag issues, and make recommendations for procedures that will help the dispute move along in a more efficient manner. There are also models built to run asynchronous and virtual mediations in a more fluid manner. Attorneys can also use AI for legal forecasting: predicting case outcomes by analyzing historical legal cases and identifying patterns and trends in cases. This enables the AI system to predict the outcome of future cases based on similar cases from the past, which can be really helpful in determining a strategy for a case. One nice thing about these systems is that they can identify inconsistencies and conflicting information. If you feed the system the data and ask it to look for inconsistencies, generative AI is really good at pulling that information out. It also helps to reduce oversights.
So if you have reams and reams of data, you can put it into the system and ask it to provide you a summary, so that you don't miss something in your analysis. There is a caution to doing this, especially in legal forecasting and the prediction of case outcomes, because historical data may introduce biases, discriminatory practices, and inequalities that have been going on for centuries in our country. We have to be careful to monitor for that, so that we don't continue to perpetuate those biases and inequalities. There's also a caution because AI misses the nuances of human behavior; it's just not designed to catch how humans behave, so we always have to apply some oversight to what the system is recommending, to make sure it's going to be acceptable to the parties in the dispute resolution, for example. I think, Ellen, you were going to talk about some calendaring ideas.

Yeah, so one very exciting area of AI use is case management and calendaring. We all know how important and essential calendaring is to the legal profession, and how important statutes of limitations and court dates are. We can use AI within our case management systems to organize case-related documents and track those deadlines automatically. The AI-powered tools can analyze case documents, automatically identify important dates, deadlines, and milestones, and put them on the calendar, so that you're alerted and ready for that next court date, or something you need to provide to the court, or whatever it is in that particular legal case. The AI can also do internal case monitoring, which Shelly was talking about a little bit: it can monitor the progress of ongoing cases and update statuses based on the completion of tasks, deadlines, and milestones.
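As a concrete illustration of the deadline-spotting described above, here is a minimal sketch. It uses a plain regular expression rather than a trained model, and `extract_deadlines` is a hypothetical helper, not part of any real case management product; commercial tools use far more sophisticated extraction.

```python
import re
from datetime import datetime

# Match dates written like "March 15, 2024" in a case document.
DATE_PATTERN = re.compile(
    r"\b(January|February|March|April|May|June|July|"
    r"August|September|October|November|December)"
    r"\s+\d{1,2},\s+\d{4}\b"
)

def extract_deadlines(text, context_chars=40):
    """Return (ISO date, preceding context) pairs found in a document."""
    results = []
    for match in DATE_PATTERN.finditer(text):
        parsed = datetime.strptime(match.group(0).replace(",", ""), "%B %d %Y")
        start = max(0, match.start() - context_chars)
        # Keep a snippet of the text before the date so a human (or a model)
        # can decide what kind of deadline it is.
        results.append((parsed.date().isoformat(), text[start:match.start()].strip()))
    return results

doc = ("Defendant shall answer on or before March 15, 2024. "
       "A status hearing is set for April 2, 2024.")
for date, context in extract_deadlines(doc):
    print(date, "--", context)
```

A real AI calendaring tool would classify each hit (answer deadline, hearing, statute of limitations) and push it to the calendar; this sketch only shows the find-and-label step.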
A lot of case management systems currently have kind of a dumber version of this, where you can set up repeating events, or cascading events based on one thing happening and then, with conditional logic, something else happening. But these AI systems can do that more efficiently, faster, and with more prediction. There are a lot of exciting things in the area of workflow automation as well. We can create workflows for common case management tasks that are really administrative work, work that a person does not need to be all that intimately involved in, and have the AI do some of these things, like document review, research even, and drafting. Again, we're going to talk about some of the caveats, and Shelly mentioned some of them already, but we do have to be careful using these tools. They can really save a lot of time and may allow us to represent more clients more efficiently. As Shelly said, there's a lot of emphasis on predictive analytics; there are programs out there that can analyze all of the written decisions by a particular judge, analyze the facts of your case, and then estimate whether that judge will rule in your favor or against you. Really interesting things happening in that area as well. We also have tools for collaboration and communication. I'm sure you have all seen the AI-powered chatbots and virtual assistants that can help facilitate communication, not only between internal team members but also perhaps with clients, or maybe in the pro bono world or the pro se world, being able to provide legal information in a more accurate and efficient way. And then resource allocation: the systems can analyze the resources that we're using internally at our firms and help us determine, say, that this case handler is open or this paralegal could use more work, and predict that, to make sure we are using our internal resources efficiently.
There are also systems available for contract review and due diligence. These are really interesting: you can feed a bunch of contracts into these systems, and based on pattern recognition and different types of language, they can be trained on those contracts to create new contracts or to review old ones. Who wants to read the 800-page terms of service for whatever tool we're going to use? The AI can look through those things, based on old contracts, and flag some of those issues. Shelly, were you going to talk a little bit more about forms and documents?

I was. AI is really good at creating automated forms and documents, and document assembly is something that has been almost a thorn in the flesh for the legal field for so long. There are so many different document systems and products, and it seems like we're automating all these documents but we're not making any progress. So I'm hoping that as the systems internalize AI, we actually can get these created. Perhaps it will make it so that organizations like the courts, when they're issuing these forms that must be filled out, can issue them in a fillable format, ready to go. As opposed to what I did when I was in law school: I was a law clerk for a legal aid organization, and I went through and made fillable forms from all of the PDF documents they were using. Now, with these systems, they'll be able to automate them almost automatically.
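The document-assembly idea Shelly describes can be sketched in a few lines. This is a toy example using Python's standard `string.Template`; the court caption and field names are invented for illustration, and real form automation has to handle court-specific formatting, conditional clauses, and validation.

```python
from string import Template

# A court form becomes a template with named blanks...
notice = Template(
    "IN THE CIRCUIT COURT OF $county COUNTY\n"
    "$plaintiff, Plaintiff, v. $defendant, Defendant.\n"
    "NOTICE OF HEARING: $hearing_date"
)

# ...and case data fills it in. An AI layer would extract these values
# from the case file instead of a hand-built dictionary.
case = {"county": "Cook", "plaintiff": "Jane Doe",
        "defendant": "Acme Property LLC", "hearing_date": "March 15, 2024"}
print(notice.substitute(case))
```

`substitute` raises an error if a blank is left unfilled, which is exactly the behavior you want for a court filing: better a loud failure than a form with a missing field.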
There's also great document automation in Word. We have had tools available in Word where you can create blocks of text, but now AI is going to be built in; Microsoft has released it and it's rolling out to everyone. You can use the AI to say, put these together in this kind of document, and it will create the document and really save a lot of time in document creation. Of course, it's going to take human review to make sure that the document that is created is accurate, but as we train the systems, they will be able to do it more easily and more efficiently than we've ever done it in the past.

I can't wait till they can make something where you can automatically do a table of authorities, where it can look through the document and find all of those. I know it's on its way, but that is such a pain in the butt to do, right? That'll be amazing.

Well, maybe that's the one place where people do lose jobs, because there are companies whose main task is exactly that: if someone is submitting a brief to the Supreme Court, for example, they send it to another organization that creates a table of authorities and a table of contents for them.

Yeah, it'll be interesting to see how companies pivot to embrace the technology or not, and how that affects the legal market. In a similar vein, talking about document review and discovery: discovery has been around for a very long time, right?
But more and more legal aid firms are seeing the importance of using e-discovery tools, and they're becoming more and more affordable. They've been so expensive that really only very expensive private law firms could afford them, but now things are becoming more affordable and we can use these for our litigation. It really puts us at a disadvantage not to use these tools if the opposing party is using them. We've been using machine learning in e-discovery for a very long time; this is kind of old hat for discovery. I'm sure you've heard of TAR, technology-assisted review, in discovery. That's a system where we use machine learning: we feed in information that is going to be relevant to a case, and then we can process thousands, even hundreds of thousands, of documents through the machine learning process. It can pick out relevant information, it can pick out things that are privileged, and it can redact things that shouldn't be disclosed in documents that are sent over, like Social Security numbers; it can affirmatively find those things. It's fascinating. This is an area that has been using this kind of technology for a very long time, so it's really interesting to see where they're going with that. And then, Shelly, we're going to talk about marketing.

If you're in charge of communications and marketing for your firm, for example, it's really great to use these generative AI systems to create outlines, to maybe start something, to get something on the page. That's always the hardest part for me, starting with a blank page. So these generative systems are really nice for getting started. I don't know that I personally would trust them to write things for me; they are still kind of stilted and choppy in their language, but they're great for creating a basis.
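To make the TAR idea concrete, here is a deliberately tiny sketch of the train-then-rank loop: reviewers label a few seed documents, the system learns which terms signal relevance, then ranks the unreviewed pile. This naive term-counting model is an assumption for illustration only; real TAR systems use statistical classifiers trained iteratively on reviewer feedback.

```python
from collections import Counter

def term_weights(labeled_docs):
    """labeled_docs: list of (text, is_relevant) pairs from human reviewers.
    A term's weight is (times seen in relevant docs) - (times in irrelevant)."""
    weights = Counter()
    for text, is_relevant in labeled_docs:
        for term in set(text.lower().split()):
            weights[term] += 1 if is_relevant else -1
    return weights

def rank(unreviewed, weights):
    """Order the unreviewed pile so likely-relevant documents come first."""
    def score(text):
        return sum(weights.get(t, 0) for t in text.lower().split())
    return sorted(unreviewed, key=score, reverse=True)

seed = [("lease termination notice sent to tenant", True),
        ("holiday party catering menu", False)]
weights = term_weights(seed)
pile = ["catering invoice for party", "tenant disputes lease termination"]
print(rank(pile, weights)[0])  # → "tenant disputes lease termination"
```

In production the loop repeats: reviewers check the top-ranked documents, their decisions become new training labels, and the ranking improves each round.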
They're really awesome at summarizing reports. This is great for attorneys doing document review; of course, you want to use a trusted system. You wouldn't load something into ChatGPT, for example, because of personally identifiable information, but you can load PDFs into these generative systems and ask for a summary of a document. This is really good if, say, the marketing office needs to create an annual report: just throw all kinds of stuff in and have the AI system create a report that maybe lists all of the different kinds of cases the firm handled during the year. It's also great, as we mentioned before, for creating copyright-free images for use in materials. So there are a lot of opportunities for marketing using generative AI, and the same tools that we use for marketing can be used in other ways in legal work. It's a really exciting time for creating materials. And if you are writing materials, say you work for a nonprofit or a legal aid organization and you do family law, and you want to create a coloring book for kids whose families are going through some kind of family law issue, you could ask the generative AI to summarize the process at that level. It's a really great way to quickly make materials that can enhance your work.

Another field that is making great strides in the use of AI is translation. I personally am comfortable using AI for my own personal needs, but if you're doing translations for a legal firm, you want to have someone with conversational ability in the language take a look at the translations to make sure they're accurate. But traditional translation is very expensive.
So for organizations who do need to produce materials in multiple languages, AI translation is a great way to start: have the AI translate, and then pay an expert to come in and verify that the translation is accurate. This is significantly cheaper than having an expert do the translation from scratch. As always, when you are doing translations, you want to make sure that you're meeting the Department of Justice language access standards. I know that this is a huge area of debate, and that's why I say always have someone verify that the translations are accurate.

Absolutely. And just as a side note, it's kind of fun if you're learning a language or checking your writing in a particular language: ChatGPT can have conversations with you in different languages, which is really fun, and correct your spelling and grammar. You always want to check and make sure it's right, though. For other, more operational issues, like billing and cybersecurity, we can use these tools to predict attacks and identify anomalies, and perhaps even put together regulatory and compliance reports. You all have to put together all kinds of reports every month, and the AI can take over some of that data processing and identification that we're doing as people, taking those tasks away from staff so that they can use their skills for things that are more valuable to the organization. Speaking of a daunting task: legal research and compliance. Shelly, tell us about that.

This is where we really start to dip our toes into the ethics of AI, because legal research with AI is problematic, in part because case law has been behind paywalls, so the AI has not been able to be trained on all of the case law that's available. Also, depending on the training data used, the large language model may not be up to date. For example, ChatGPT was trained on materials available up through 2021.
So there's nothing after 2021 in ChatGPT 3.5, and there can be big gaps in knowledge depending on the system that you're using. And two years is a really large gap if you're talking about case law; I certainly wouldn't submit a brief whose case law is more than two years old without verifying that there's nothing more current. Even now, as Westlaw and Thomson Reuters roll out their AI-powered research tools, there are still problems with hallucinations. Even though I've been told that hallucinated cases in Westlaw's system, for example, are easy to spot, it's still kind of a problem for me that these bastions of legal research are going to release systems that can still hallucinate cases. So it's going to be interesting to see where that goes over the next few years and how they're going to correct those problems. We also need to understand that legal research in an AI system is not what we would truly consider research. AI models are not going to a book, taking a look at what's in there, and recording it, so that you can give a citation and the next person can go to that same book, that same source, and reproduce the material. AI systems take what they're trained on, take the prompt or question that you present, and then predict answers based on their training materials and on what they pull from your prompt. So it's not true research; the answers are actually just a prediction based on the knowledge in their training data. Be careful as you think about research and using AI; it's not really the same thing. Also, think about the system that you're using. I'm more comfortable using Bing, for example, because it provides citations for the answers it gives; you can actually go and verify the information because it gives you the citation.
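A toy model makes the "prediction, not retrieval" point concrete. This few-line bigram model, a vastly simplified stand-in for a large language model, never stores a source it could cite; it only counts which word tends to follow which, then predicts:

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count, for each word, which words follow it in the training text."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, word):
    """Return the statistically most likely next word.
    There is no lookup of a source, only a frequency-based guess."""
    options = counts.get(word.lower())
    return options.most_common(1)[0][0] if options else None

model = train_bigrams(
    "the court held the hearing the court held the motion the court denied the motion"
)
print(predict_next(model, "court"))  # → held ("held" follows twice, "denied" once)
```

Note that the model will happily predict "held" even for a case it has never seen; that is the mechanism behind hallucination, scaled down to one line of arithmetic.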
With legal research tools like Westlaw's and Thomson Reuters', you're also going to be able to pull from the original material so that you can verify your citation. The other thing: if you want to use generative AI with your case information, and you're using an open system such as ChatGPT, you do not want to put any kind of personally identifiable information into those open systems. We have a duty to our clients to protect their information, and putting it into these systems would allow those systems to use it for future answers. The last thing we want is a client's information being spewed back in an answer to someone else. So think about the system that you're using. I think we're going to see, very soon, legal-specific systems where it's going to be okay to put in personally identifiable information, but not all of the systems are there yet, and you want to verify whether your data is being used to train the model.

Yes, absolutely. One area I'm really excited about is pro bono, for any pro bono directors or pro bono staff out there. Imagine being able to take a narrative written by your pro bono attorney and have the AI pick out case outcomes, pick out important information, pick out the person's name and what happened at the end of the case, and then format all of that in a way that you can easily report on and easily put into your case management system; you just plop it in, or have the system plop it in. Fascinating things we can do for our volunteers. And with creating trainings and supporting these pro bono volunteers, there are really, really exciting things we can use the AI for.
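One practical safeguard implied by this advice is to scrub obvious identifiers before any text leaves the office. The sketch below is a bare minimum with two hypothetical patterns, Social Security numbers and email addresses; a real pipeline would also need names, addresses, case numbers, and more, and no regex list substitutes for using a closed, vetted system in the first place.

```python
import re

# Patterns for two common identifiers. These are illustrative, not exhaustive.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def scrub(text):
    """Replace obvious personal identifiers before text is sent to an
    outside generative AI service."""
    text = SSN.sub("[SSN REDACTED]", text)
    return EMAIL.sub("[EMAIL REDACTED]", text)

prompt = ("Summarize: client Jane (jane@example.org, SSN 123-45-6789) "
          "seeks an eviction defense.")
print(scrub(prompt))
```

Running the scrubber as the last step before any API call gives you a single choke point to audit, rather than trusting every staff member to remember the rule.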
And I got ahead of myself a little while ago talking about the coloring books, because this is something I'm really excited about; it's a pretty new feature for ChatGPT-4. I just have so many ideas for how we can use this in the legal aid field, by providing very simple-to-understand materials. It could also be used with teenagers. Foster Power in Florida is a great example of an organization empowering foster kids with information, and a coloring book format would just be another way to provide information that's consumable by a young audience. So, lots of fun applications for generative AI.

Some of the other things happening with legal-specific programs: there are some large language models for legal, and CoCounsel by Casetext is one. Amto AI is a system used for legal drafting by IP attorneys. There are legal assistants whose sole purpose is to help with research, drafting, document review, and summarization; Casetext's system is one of those, and Smith.ai is another. Gideon is one that's coming up in document automation. And then there's Otter in Teams. Have you used that at all, Ellen?

I have, actually. They're two separate products, but Otter can come into your meeting, take notes, do summaries, and pick out who said what. Fascinating stuff: you could spend your time concentrating on actually listening and learning, instead of spending so much time note-taking. These products can do really good summaries, and agendas and action items and things like that. Very cool.

Yeah, I've seen a lot of excitement over using generative AI to record telephone conversations and meetings with clients and things like that.
It's something that, again, I would caution you on, because you need to consider your state rules and state laws on recording conversations, such as whether you need to make sure the other party knows and is aware that they are being recorded. So if you're using these tools, which can be really great for a lot of things, you have to think about the other ramifications, what other laws or rules apply. There are lots of really interesting things going on, and they can save a lot of time. As Ellen said, it can produce action items; for staff meetings, it's a great way to record the meeting and have an outline for anyone who missed it, for example.

There's also a lot of action going on in e-discovery with generative AI, and it's not anything new; AI has been in place in discovery for at least a decade or more, and they're using it to comb through data. Powered by the prompts and the training that's entered, these systems are able to pull out more and more detail without as much human work, in products such as CS Disco, Logikcull, Relativity, Everlaw, and Microsoft's Purview. Lots of really interesting things are going on with these products, and over the next year, some of these are products we're going to be featuring in our tech assistance office hours demos, so we'll be able to see them in action and see what they're able to do. One little note that I saw that I think is really fascinating: they're using these systems to predict parts that are missing in photos, say if you have a partial photo. The lawyer in me thinks, well, how can you prove that that's what was there? So I think it's really going to be interesting to see how case law handles the use of AI in photo prediction over the next couple of years.
One of the areas that we've talked about previously is research, and one of the new research tools, not even available to use yet, is Harvey AI. It's marketed as the gen AI for elite law firms, and they just raised 700 million in funding. To my knowledge, no one has even seen or used the product yet, so it's pretty amazing that they can raise that much without there being a product on the market. Blue J is a product being used by labor and employment attorneys for AI research and analysis. Lex Machina has legal analytics for law firms and companies that helps them create successful strategies and win cases by compiling, cleaning, and enhancing data using machine learning. And they very specifically state that they have a team of in-house legal experts who review all of the materials going in and the results coming out of Lex Machina. Another area where we're seeing a lot of movement is contract review. Diligen is a company with a product there; they do more than contract analysis and also market their product as a project management tool. LawGeex is automating legal work, redlining and negotiating contracts, and they say it's faster and cheaper than lawyers. What isn't, though, really? Anything is faster and cheaper than lawyers. And then there's another robot lawyer, I guess: AI Lawyer says they can provide affordable legal advice. That's pretty interesting, because that's kind of a dangerous area; they say they empower consumers and lawyers with AI-driven solutions for all your legal needs. So, pretty interesting, all of the things happening in the field with generative AI; new products are being released basically daily.

That's right. Over the past year, as more and more people have tried ChatGPT and other gen AI products, we're seeing the results of those hit the courts.
I think we've all heard of the Mata v. Avianca case in the Southern District of New York, where the attorneys filed a brief that contained fake cases; ChatGPT hallucinated and provided cases that looked real. And then, when questioned by the court, they doubled down and said that they had verified them using ChatGPT. We all know that was not acceptable, and the court fined them $5,000. There's also been an attorney in Colorado who was fired from his job, sanctioned, and, I think, also lost the ability to practice for over a year for citing fake cases created by ChatGPT; his name is Zachariah Crabill, and we have that linked in the database we're going to talk about shortly. We've also just recently heard about Michael Cohen's attorney, David M. Schwartz, citing fake cases. Nothing has been said yet, but I'm betting that generative AI was used in finding those cases.

Somebody in the chat asked: what does it mean when a large language model or AI hallucinates? Do you want to define it a little bit?

Basically, hallucinate means it makes things up. And when a generative AI system makes something up, creates something out of the blue, it is very convincing. It will have the court citations, if you're talking about a court case; it will have dates; it looks like what we expect to see. Any hallucination can be very convincing, and all of these systems do it. Have you ever heard the term "fake it till you make it"? Well, these large language models are faking it, and they're making it, enough that attorneys are going into court with these products. It looks incredible; you think, yeah, that was the case, of course that's what happened, but it's completely made up.
It's fascinating, and it can hallucinate all kinds of things, so you need to be careful. I asked ChatGPT to find me all-inclusive resorts within the United States, and it found one in Turks and Caicos. I said, is Turks and Caicos in the United States? And it said yes, it's in the United States. It just makes things up sometimes, or it's just wrong. And this place didn't even exist; it made it up completely. Another cautionary tale, not in the legal world but one that probably will have legal ramifications: Samsung engineers were using the free version of ChatGPT to fix source code. They were putting source code into the engine and it would say, this is where you got it wrong. It's fascinating: it can correct and review all kinds of computer code, and find the things people used to have to hunt for, like, where did I put in the wrong thing? It can find those for you really quickly. They were also using ChatGPT to summarize meeting notes where they were talking about top secret projects relating to hardware and new things they were developing. ChatGPT is not a safe place to put confidential information; the purpose of it is to learn from the things that are put into it and from other things around the internet, so anything that you put in there could potentially be used in the future. And like Shelly said earlier, if you put something client-related in there, it could potentially spit out your client's name at some point, or that information. It's not safe. There are terms and conditions, for ChatGPT in particular, where they say you can turn things off so it will not use your input for learning, but that still makes me very uncomfortable. We're going to get to the ethics part, but be very careful; Samsung's people got in deep trouble for using the tool without understanding how it works.
And there's a question in the chat about whether everything is mined by AI tools, for example, whether information is private to the organization using the tool. This depends on the tool being used. With any publicly available large language model like ChatGPT, I would not put personally identifiable information in. If it is a closed system created for your organization's use, then it is safer to use; however, those results could show up in responses within your organization, if that makes sense. But a closed system is a safer option than anything publicly available. I hope that answers your question, Dina, and I hope I pronounced your name correctly. If it doesn't, let me know in the chat and we can talk about it some more.

One of the big stories in the news on AI is the DoNotPay debacle. Once marketed as "the robot lawyer," DoNotPay now says they use artificial intelligence to help you fight big corporations, protect your privacy, find hidden money, and beat bureaucracy. So they backed off of the robot lawyer framing. But if you remember, the whole community was horrified by their offer to pay someone to let the robot argue before the Supreme Court. Multiple lawsuits occurred after that, and they've actually won some of those lawsuits, which is interesting. But they've backed off their robot lawyer scheme a little bit.

That's a fascinating case, and it dovetails nicely into LegalZoom. They've been around for a long time, and they've always kind of skirted the line between providing legal information and providing legal advice. They are now marketing a new tool called Doc Assist, which is supposed to combine the power of AI with their expertise in legal technology to review contracts and do other things. I would just watch them, because it's an interesting case of where they use technology and really skirt that line.
Before we leave this slide, I did want to point out one really interesting thing about image generation: these models cannot spell. There is something about text that they can't write. You could tell it, in quotes, spell this this way, and it still can't. You can see where it says "citations" here; you could make a picture of ChatGPT making up citations, but it can't spell. So if you're trying to figure out, at least for now, whether a particular picture is AI generated, look at the spelling. Look at the hands, look at the eyes. Those are things that the AI can't do yet, but they will eventually. So in pictures, try to avoid text; you're just not going to get text that's usable, at least for now. One of the things I think is really interesting is the discussion of whether information provided by generative AI systems is legal advice or legal information, and I think we're going to see court battles over that as we move forward. Adam Harden of Pro Bono Net has done a survey, which was really interesting. He surveyed attorneys and asked them, are these generative AI answers legal advice, or are they legal information? And the results of the survey were that more people found it was legal information if the system provided a disclaimer saying you should consult an attorney, or this is not legal advice, which I thought was kind of funny. That was not how I based my answers, but I thought it was funny that more attorneys thought it was not legal advice if the system gave that disclaimer. That is super funny, and actually that reminds me of something that happened while I was preparing for this presentation. One of the prompts I put into, I think, Bard, and I can't remember exactly what I said, but it kicked it back saying it was against the terms of service to ask for legal advice, which was really interesting because I was asking it to make a picture.
I was not asking anything related to legal advice, but they're trying, I think with legal advice and medical advice, to flag things to make sure that these models aren't allowed to do that, but maybe they can, right? Okay, I was going to say, I'm sorry, I tried to copy something into the chat and it's putting in a picture, so we'll make sure that everyone has these links and all of the materials available to you later. I just wanted to point out that AI struggles with depicting ethnicity as well. Very true. A recent AI image tool I used with one of my photos generated some horrifying versions of me. If you'd like to put them in the chat, we'd love to see them. Thank you. It can't do glasses either. Yeah, all kinds of things. Moving on to legal ethics and AI. As we promised, we wanted to talk a little bit about the rules of professional conduct and what you should think about as a lawyer or as a legal professional in using these tools and determining whether you can or should use them. We're going to hit on this very quickly; as you can tell, Shelly and I could probably have a whole week-long seminar about this stuff. I wanted to put out some things to think about when you're considering using these tools. Number one, and again, the rules of professional conduct vary by state. We are lawyers; this is not legal advice; you need to make sure that you are reviewing your own state's rules of professional conduct. This is based on the ABA Model Rules, which most states have adopted whole cloth, but some have made changes, renumbered them, or adopted different versions. The first really important thing to think about is that we as legal professionals are required to be competent, and the drafters of the rules of professional conduct made a special change a few years ago to add a comment saying that we are required to stay up to date on the benefits and risks of using particular technology.
So that's going to be Comment 8. So although it's not written into the main language of the rule itself, the comment does say that we are required to understand the technology we're using and the consequences of using that technology. If you yourself are not able to do that, then you need to hire someone who can give you competent advice about how to use these tools and the risks of using them. So that's all I wanted to focus on there. Next is the rule about confidentiality, and this is a huge concern, as we saw in the Samsung case. You should not be putting confidential information into a system unless you are sure it is a closed system specifically meant for your organization, or a legal-specific tool for your organization. If you don't have a policy on AI, we really need to start getting those in place so that our lawyers, our paralegals, our legal professionals understand what they can and cannot use these tools for. It is a breach of confidentiality to take a brief with client information and plop it into the free, open version of ChatGPT. You just should not be doing that, because that information is going into their system and is being used for learning, right? So, again, not legal advice, but I would say that is a violation of your duty to maintain client confidentiality. Number three, Rule 3.3, goes back to the hallucinations of ChatGPT. Now, it does say that a lawyer shall not knowingly make a false statement of law or fact, but I would say, and I think the courts have agreed, that a lawyer who uses a system like this and does not check the citations pretty much is knowingly making a false statement of law or fact. So if you are using these tools, check the information, just like we're required to check the cases we get off of Westlaw and Lexis to make sure they're still good law.
If you're using these tools, you need to make sure that you're staying on the right side of the rules of professional conduct by making sure that what you are providing is correct; otherwise, it's also malpractice just to send out information into the world that you have not checked. One interesting rule to think about is Rule 5.3, the supervision of nonlawyer assistance, or assistants; sometimes it's spelled with "-ance" and sometimes with "-ants," like an assistant, depending on the state. Are these tools nonlawyer assistance? We are using them for assistance; how far does this rule go? It was intended, I think, for people, but we need to think about how this rule is going to affect legal practice and how we're using these tools. And then again, as we've mentioned before, many of these models have been trained on discriminatory data, right, on information that is not reflective of our world and our society as a whole. So the ABA has actually said that using biased AI platforms may violate Rule 8.4's prohibition against engaging in discriminatory conduct. How do you know if an AI platform is biased? That's a great question. We've done some research into the issue; not even the creators of some of these systems understand the information that has been put into them and the bias that the systems have. So that's going to be an ongoing issue, and you need to think about what information these systems are being fed and what they are based on.
Also, the federal rules of professional conduct, I'm sorry, the Federal Rules of Civil Procedure, if anybody does any federal work: here we're talking about Rule 11, which says that an attorney is certifying, to the best of their knowledge, information, and belief, formed after an inquiry reasonable under the circumstances, that the facts and the law are correct in anything they are filing. So again, we need to do our due diligence to make sure that the information we're being provided by these systems is correct. And now we're going to talk a little bit about the state bars and the ABA. Yes, so California and Florida have actually worked on proposals about the use of AI, and California has taken a really balanced approach, I think. They've said that the existing professional conduct rules cover issues presented by gen AI, and they are working on what they call a living practical guidance resource. They're also talking about what kind of education is needed for law students to be prepared to practice in this new world of AI. The proposal also recommended that they work with the legislature and the California Supreme Court on whether they need to define the unauthorized practice of law more clearly when considering the usage of legal gen AI products. I think that this approach is really rather balanced, and I do recommend you go and take a look at their proposal; I'll give you a resource on how to find it in just a moment. Compared to that, Florida is taking a little bit of a different viewpoint. Florida's proposal kind of anthropomorphizes generative AI and raises it almost to the level of a legal assistant that needs to be supervised.
So, for example, if you have an AI-powered chatbot on your website and it's overly friendly, it could create a legal relationship with someone using it on the website. That's kind of troublesome to me, that they could find a chatbot could create that legal relationship. But some of the things they say you can do are: make sure that the chatbot clearly identifies itself as not a lawyer; make sure that the chatbot limits answers to questions that provide factual information, like office hours and things like that; and make sure that the chatbot is not offering any kind of legal advice concerning the client's or the prospective client's actual matter. So it's a little scary that they could find, in the future, that an attorney who had a really good chatbot actually created that legal relationship. We'll have to watch and see what happens. Another thing is that Florida was very specific that for any gen AI generated voices or images, lawyers must be really careful to make sure that those voices and images do not create an erroneous impression that the person speaking or shown is the advertising lawyer, or a lawyer who is an employee of the advertising firm, and all of those messages should contain a conspicuous disclaimer. I think that's really interesting, because as we move forward, these systems can now look like a real person and are becoming able to sound like a real person. So now Florida is suggesting that we need to put a disclaimer on any materials created with these generated images and videos saying that it's not a lawyer. We're running out of time, so we might have to do a part two in January. I think we only have about five minutes left. Another thing that Florida mentioned was comparing the use of AI, and the sharing of information with it, to the disposal of computers.
So if you have material on a computer, you must make sure that it is disposed of in a responsible manner. They compared that to using AI and putting information into an AI system, which I thought was kind of interesting. The ABA has a task force that is working on the current and emerging issues in AI and providing practical information that lawyers need to stay abreast of what's happening in AI. And again, I'm going to try to drop this in the chat, but we do have it linked in our database, which we're going to be giving you a link to here shortly. So let's move on, I think, to the next piece. Since we only have about three minutes, should we just jump to the database so everybody knows what it is, and then, if anyone's interested, I'm sure Shelly and I would be willing to do the rest of this next time? Good. So LSNTAP has created what we're calling the artificial intelligence information database. There's a link on the slide to the database, and it's available on our website: if you go to the resources tab and then click on artificial intelligence, it's listed right there on that page. If you need to know cases that have talked about AI, or you want to know which state bars have issued or proposed guidance, like California and Florida, they're listed there. If you want to know about legislation regarding AI, it's being added; we add to it on a daily basis. Everything that we've talked about today, all of the news stories and the resources, are listed in this database, so we're hoping that it's going to be a useful tool. And someone mentioned in the chat having a folder of artificial intelligence standards or usage policies; that may be something that we can just add to the database, as opposed to having a folder, and link right there to the policies.
LSNTAP is also working on guidance for the usage of AI by staff of law firms and legal aid organizations. Thank you. I think we're at time. We have two minutes, one minute. Let's see if there are any questions that we can answer really quickly. Yeah, if anybody else wants to throw any questions in the chat. You're on mute. We do have more information about AI policies and considerations, data privacy protections, data security, and best practices, so I'm sure we can hit those again in another presentation. Maxine asked if there are products other than Microsoft that have been vetted and that we recommend. We would recommend that you look for legal-specific products, because those are designed with the things that lawyers need to think about in mind. So that would be a starting place. We're not going to come out and say, use this product; that's not our place, but we can certainly provide you information to help you make a decision on a product. But I think a good place to start is legal-specific products. And then, John, it was not a dumb question about hallucinated cases. If we didn't answer your question, let me know, and we can get there. Let's see, I am not seeing any other questions. Anyone else? Kylie did ask, are there any examples of folks using AI in a way that creates new challenges or sidesteps work in justice, and if so, what are some things we could do now to preemptively mitigate? So I think what we're going to see is self-represented litigants, perhaps on the opposite side, using AI to create legal briefs and things. So if you've ever had to write a response to a self-represented litigant's brief, it's going to be even harder now with AI-generated briefs, because they're going to look good.
It's just when you start digging down into it that the problems appear. The first thing I would say is always, always, always check citations. That would be the first place to start if you're working with a brief, and Westlaw and Lexis both have, and other systems have, tools that you can load a brief into to verify citations, so that might be a good place to start. I thought we were going to run short today, so I apologize; obviously that was not the case. Thank you so much for being here. We hope that we have given you something to think about, and if we decide to have a part two to this, we will certainly push it out to the community. If you're not a part of LSNTAP's listserv, we highly encourage you, we invite you to join so that you get news about our events, but of course they're always available on our events page on our website, lsntap.org. And thank you once again. If you do have questions, I'm going to stick around for a few minutes after and be happy to answer them then, but I am going to pause the recording now. And then if people have any questions after, you're welcome to ask.
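A last technical footnote on the citation-checking advice above: before a manual check in Westlaw or Lexis, even a rough first pass that simply pulls out everything shaped like a reporter citation can be useful. This is only an illustrative sketch under our own assumptions (the pattern is our invention and covers just a few common US reporter formats); every extracted citation still has to be verified by a person:

```python
import re

# Rough pattern for common US reporter citations, e.g. "410 U.S. 113"
# or "550 F.3d 1023". Deliberately incomplete -- a first-pass filter only.
CITATION = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)?|N\.E\.2d)\s+\d{1,4}\b"
)

def extract_citations(brief_text: str) -> list[str]:
    """Return every substring that looks like a reporter citation."""
    return CITATION.findall(brief_text)

brief = "See Roe v. Wade, 410 U.S. 113 (1973), and Smith, 550 F.3d 1023."
print(extract_citations(brief))
# -> ['410 U.S. 113', '550 F.3d 1023']
```

The point of a list like this is purely workflow: it gives you a checklist of every citation to run through a real citator, so a hallucinated case can't hide in the middle of a long brief.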