Welcome, everybody, to the summertime edition of law.mit.edu's task force on the responsible use of generative AI for law. Thank you for being contributors to the task force's report, which is currently in draft form; if you were a contributor, you were invited to this forum. What we hope to do today is have a relatively informal discussion, a bit of an update on where the task force is at. But we also want to hear from you, and to provide something a little more human than a Google form as a way for you all to express your views and for us to perhaps have some discussion. Just for the sake of sanity and good order, people are on mute right now, but if there's something you would like to say, please pipe up. I think the easiest way is to put your hand up, but you can also put what you'd like to contribute into the chat: questions, comments, ideas. Then task force members and Damien can unmute you.

So with that, let me say I'm Dazza Greenwood of law.mit.edu, and I am delighted to say that we have gone through what I would consider a very legit process in this MIT task force to explore the topic of responsible use of generative AI for law. We started with the members of the task force coming up with our own best guess of what might be in such principles, and that got us to version 0.1. Then we had a larger concentric circle of quite a few people we requested feedback from, many of whom I see in this very Zoom. At that point, having battle-tested it a bit with a wider circle, we felt comfortable enough to do a very early release of a draft, version 0.2, which you can see right now, and which we'll go over shortly, at law.mit.edu/AI. For that release, it being published publicly, we invited public feedback and review through an open Google feedback form, and that has turned out to be a really good thing to do.
So there's been a wide variety of perspectives and vantage points and great examples that we've received. In addition, jointly with Stanford's CodeX (thank you to Megan), we held an in-person feedback session as part of one of CodeX's generative AI events two or three months ago, and that was incredibly useful. Part of the reason we're having a Zoom today is that there's something magic when you get people to talk that you just can't get through a Google form. Then we did a second one, thanks to Robert Mahari of MIT, as part of an event that he led at MIT, where we similarly had a live feedback session, and that was invaluable as well. I should thank Olga Mack, who can't be with us today, for co-facilitating the in-person session with me at Stanford, and Aileen, another task force member who is with us, for co-facilitating the session at MIT.

We've also done some presentations to bar associations and gotten good feedback and a real interest in taking this work forward as part of their own future rulemaking. The one I'm familiar with is the California Bar Association, where they kindly made me an advisory member of (I'm going to ruin the name) the committee that deals with the rules of professional responsibility. Also the American Bar Association, where I spoke on this and other topics at their annual meeting, I think last week; they have kindly made me an advisory member of their presidential task force on generative artificial intelligence, and that will be the vector for us to share the ideas from this whole community, and our draft, into the ABA as part of their rulemaking. But it doesn't stop there: Damien Riehl, who is with us and who has been a terrific contributor, has also done terrific work funneling these ideas into the Minnesota State Bar Association.
And the thumbs up indicates that they've been receptive to these ideas, and no doubt receptive to the messenger as well. So with that, I just want to say thanks again for joining us today, and I'd like to hand it over to Megan Ma, who will do a quick walkthrough of where we're at right now, and who also has a really cool announcement to make about how we're dividing and multiplying the environment within which the task force operates. So with that, Megan, it's all yours.

Yeah, thanks so much, Dazza, for the introduction and for setting the stage here. I feel really proud that our task force has gone through these iterations and has arrived at version 0.2, which we're able to share publicly. Since we announced this public feedback and requested your insights, we've gotten close to 50 responses, and that goes to show just how significant this era of generative AI is becoming for the legal space: not only the legal implications from a professional responsibility standpoint, but also the use cases and how to integrate them into our legal workflows. So it's incredible to see so many faces here, even for our public forum today, and also to see the comments that we received. I also want to give a shout-out to Stephanie, who I see on camera. She was the one who really put our task force and the work that we're doing out there and tried to get more eyeballs on it. I think the more the merrier, and that's a good segue for me to say that the task force went from being the MIT task force to now a joint task force with the Stanford Center for Legal Informatics, CodeX, where I work. We're really thrilled about that. Going forward, a number of the workshops that we host on generative AI and law will include questions related to professional responsibility and what really is the evolving role of the lawyer.
Is it in the training of our lawyers? Is it in the training of our students? All of those issues will come to a head as a specific part of our coming events. So we're really thrilled about that. This is all thanks to the enthusiasm of all the faces here; of course to Stephanie, who helped put this forward for us, and also to the task force members, who really love doing this in their free time. We very much appreciate it.

So now to the meat of the content. For those of you who haven't actually seen the principles already, or have seen a version but didn't look at where it's housed, I'll quickly run through that again. As I mentioned, it's at law.mit.edu/AI, and the latest version is still as of June. We're hoping that the outcomes of not only the information we've already received through the Google form, but also the comments and thoughts from today's forum, will be put together as part of version 0.3, and we want that to be the most comprehensive to date. As I mentioned, we started off with just the seven core principles. As a reminder, these are the seven: duty of confidentiality; duty of fiduciary care; duty of client notice and consent (you'll see an asterisk there, because that very much is an open question even amongst our task force); duty of competence; duty of fiduciary loyalty; duty of regulatory compliance; and duty of accountability and supervision. That was version 0.1. Then in version 0.2, what we tried to do is better enumerate or qualify what we mean by each principle, providing what we saw as an example that is consistent with the principle and an example that is inconsistent. From behind the scenes, this took a very long time, even across our task force.
We were very careful that even the examples we give don't create sweeping categories of what we mean. We wanted to be pretty sensitive to how these technologies will continue to evolve, and to find ways to give broad examples that aren't tethered to the current state of the technology. One thing that we continued to debate was client notice and consent. Many of the task force members were divided on the question of what the limits of consent are here, knowing that this technology is bound to be integrated deeply into our workflows. Regardless of whether we decide to use legally specific AI, which is one point, all of our existing office tools, such as Office 365 or Google Workspace, are going to have some form of generative AI directly integrated into them. So one way or another, we're going to be exposed. The question in terms of consent is: what exactly are we disclosing here? Is it just the content specifically, if it's generative content, such as "I used it to draft part of the contract" or "I used it to draft certain other documents," or other use cases? I think this is where a lot of our open questions are, and we would love to get your input on them. The other open question for version 0.3 is: should we get more granular in terms of what exactly we see as viable use cases for these tools? Coming out of the fascination with the tools and large language models themselves, I think it's still a little bit of spaghetti on the wall, to be honest, in how people are using these tools. There are of course techniques around prompt engineering, but what is the role of legal prompt engineering, which I know Dazza and Olga have been really pushing forward?
What will be the future of the engagement models with these tools? If it's integrated directly with, say, Office 365, is it just a button that we're faced with, and that we need to understand and peel back the curtain on? So if there's a button that says "professionalize," and you have a couple of bullet points and you click that button and it transforms your couple of bullet points into a comprehensive contract, what does that necessarily mean? Those are some of the questions that we want to tease out going forward.

The last point that I often think about is that this is especially important in light of the Mata v. Avianca case, which I'm sure many of you already know. I think that case exposed an imbalance between trust in the tool, perhaps some overreliance on the tool, and what we see as the future of professional responsibility. I know that many of us here recognize that that was somewhat reckless behavior on all fronts in terms of how the tool was used, and obviously many of you are here today because you know that's not the way you will be using these generative AI tools. So I will pause here, as I've said a lot, and open the floor, in particular to a few questions that we actually asked as we solicited feedback on the task force. I will leave them up here. In particular, we're asking: are there other duties that need to be included? Is there any input that is jurisdiction-specific? The draft is largely tethered to the US in particular; some of you have commented that it's not too far off from the ABA rules, though it's an extension in some cases. But what happens when it interacts with something like the EU AI Act, or other kinds of regulation that we need to be aware of? So I will pause here.

Great, thank you very much, that was a stellar tour
de force, as always, Megan. May I just ask, before we dive in... oh, so this is a prompt, as it were, for everybody who has joined today: if you have questions, ideas, or comments that you'd like to share, I invite you to share them. I think the simplest way is to put something in the chat, so we can have an idea of what you want to say, and then we'll take you off mute. Just please be aware that this is being recorded, and what we'd like to do, in what I think is very much an MIT tradition for these types of task forces, is operate pretty much in the open. So I'd like to take the recording of this forum and put it on the task force website. If you prefer not to have your likeness or voice on a video like that, then don't come off mute; just go ahead and use the form, or, since you all have our emails (I made sure all the task force members were on the email), you can hit reply and we can take it out of channel. But for this forum, if you'll indulge me, I want to keep to the open, transparent, legal-hacker tradition of MIT. Don't let that slow you down from sharing your views in other channels, but if you have views that you don't mind everybody hearing, come off mute and let us know. And to give you a little time to let that percolate, I would like to ask if Shana, my erstwhile co-chair, Aileen, Damien, or any others, or actually Stephanie (you're an honorary member in my view, based especially on your stellar contributions at Stanford and the great article you wrote), if any of you have anything that you want to say, just to frame things before we get into it, now's a great time. That will also give people half a second to see if they can bubble up a contribution.

Well, thank you, Dazza and Megan, for really taking the leadership on today; we definitely appreciate it, as many of us have been on vacation. So thank you all for being here as we dive into AI. I mean, AI
has been around for decades, and I ran the Watson legal practice for about a decade, so it's really fascinating to see the maturity that AI has reached even over the past year. One of the things that I did want to mention, and would love to hear thoughts about: even in 2015, Watson alone (Watson is an AI program that IBM had) had over one billion users, and a majority of those did not know they were using it. So I think part of the challenge we have here is that many people will not know they're even using AI, because it's just part of their day to day. So what can we put in place, in regards to our principles, but also in terms of education and knowledge we can take to the marketplace, to make sure people don't become complacent and just start to rely only on the technology, really leaving the human behind? Some of the best projects I worked on with IBM Watson at the time, and even now with generative AI, are those where they don't take the one-and-done response; instead, it's a build-out, an exploration of how to use that generative AI and keep getting better and better. So I wanted to mention that, having been focused on it for a decade; that was one of the issues I saw, and I'd love to hear everyone's thoughts on it, because I think we will very soon reach a future where we won't know what we're using, honestly.

Yeah, hi, this is Damien. I have something to add to Shana's really good comments: largely, if AI is doing its work, it's invisible; that is, we don't know we're using it. And the definition of AI changes over time. If somebody in 1990 were shown Google Maps, that would have been amazing AI, until it's not AI, it's just software. That's a prelude to my comment on whether client consent is necessary. Do I need to get client consent, or to notify my client, that I checked over the first-year associate's work? Do I need to get client
consent to say that I use spell check? Do I need to get client consent to say I use Grammarly or another grammar checker? Those were AI, right, until they're not. So really, as we go forward, I think we need to remember that AI is not this point in time; it will become just software. We don't want to make rules that will seem silly in five years.

Does anybody else from the task force have anything to say? No pressure, but if so, now's a good time, before we dive into the mosh pit of open conversation.

I'll just throw in a quick note of gratitude, also to you and Megan for putting this together, and Stephanie as well. I'm excited to hear what people have to say; we already have people chiming in on the chat. So yeah, nothing beyond that.

And as I'm trying to unmute... I know, Stephanie, go ahead. Yeah, sorry, yes, I'm in an airport, so it's very loud, so I will stay mostly on mute. Just thank you for including me in this. The reason I wrote about this is because I think it's very important. I've been writing a lot about AI and generative AI in the last six months, and I'm also aware, like Shana said, that AI is nothing new, but this really has taken on a new life of its own. And I definitely agree with the point that a lot of people don't necessarily understand that they're using AI even when they are. I hear a lot of people say "I would never use AI," the same way I used to hear people say "I would never be in the cloud," and they are. So I'm really curious to see how this plays out, and it's really interesting for me to hear what the industry is thinking and what they think the issues are. With that, I know it's very loud where I am, so I will mute myself and listen to you all.

Thank you again so much for publishing that thoughtful piece and for all that you do; it's about half of how I know what's going on with legal tech. Okay, so why don't we get started with the
forum part of the forum. Cassie Burns, I saw that you had a contribution, so you've got the floor.

Well, thank you, Dazza, I appreciate it. I largely live in the e-discovery world, and I'm sure you all know that we've been using AI and machine learning for a while, via TAR and things like that, in litigation and in an adversarial way. Something that we're doing within the EDRM group, which has built out the e-discovery reference model and information governance models and things like that: there are conversations about working groups around AI, whether it's AI ethics or AI bias. We had a call last week, and something we talked about is providing guidance to people on the potential of AI bias, and writing a white paper. The recommendation we talked about was that maybe we can continue the spirit of EDRM and build out similar types of models, talking about general use cases; for us, we want to stay within the role of e-discovery, since it is EDRM. There's a difference between a concern with AI bias in civil litigation, with big corporations fighting each other and using AI there, versus, say, a criminal investigation or criminal matter, where AI is being used on data in ways that could be biased and wrong, with access-to-justice and social justice issues at stake. So our thought was building out, and this is at a very early stage, a model that takes in those elements: how are you using the generative AI or the AI, what kind of AI are you using, where did that data come from? Is it fairly dry corporate business data, versus data surrounding an internal investigation, with people communicating in the vernacular of text messages? What sort of sentiment analysis are you doing, and what potential bias is associated with that?
Then building out potential areas of risk associated with those different levers. We have some applied scientists from Reveal and Relativity working with us on that. So it may feed in well to this work; of course our focus is very much on electronic discovery, but, like any law firm and any lawyer out there, we have legal uses of generative AI beyond that, and I think it may be a nice collaboration.

Hear, hear. Was that Aaron, by the way, from Relativity? Was he perchance one of the applied scientists? He wasn't on that, but I'm hoping he will be. I do know Aaron; he and I are fairly friendly. Awesome. John Tredennick also comes to mind. Yeah, Jerry is on it. It was a smaller group; again, we're at very, very early stages, but the thought is to do a white paper, and then to be able to pop out some high-level models people can use. And again, I think education is very important. I'm also a member of the Academy of Court Appointed Neutrals, and there are people there who've been court-appointed masters or special masters, who have been partners for a long time, and they're very afraid of generative AI and don't even want to talk about it. Whenever I raised it as something we could do for the judiciary, offering training, just awareness, knowing the difference between AI and gen AI (and Doug Austin shared a really great article about the issue with the standing orders and using AI generally, as you mentioned earlier), there was even fear from another member of the organization: "I don't even want us to train or talk to the judiciary about it," because there's such a fear of it commoditizing the legal profession. I think there's a lot of education needed, and just having practical discussions about what these tools are and how they can be used. It's not going
to be great for everything, and you actually have to work at it; it's not going to be perfect the first iteration out; it is an iterative process. So I think having those real conversations with people is very important. Again, those of us in e-discovery know this: we would never run an active learning project, take the responsiveness rates or rankings from the first iteration, be fine with them, and assert to the judge and the opposing party, "this is great, we just got rankings once and we didn't train anymore." We would never do that. So I think it comes down to use cases, defensible processes, and what you're doing to validate, whether it's the underlying data or the outputs: very process driven.

Have they mentioned any other fears, besides jobs being taken over? I mean, in e-discovery it's definitely client data and things like that, but the quote to me was literally, "I don't want my career to be commoditized." The fear is that clients and business people will see that they can just use AI to do this legal work, and legal work will go away. I personally don't think that, and I think you saw similar things when TAR started being integrated into e-discovery, in the late aughts and early 2010s. The issue is you need people who understand the technology; that's really, I think, the important takeaway. The more you learn how to use the tool, the more value you can bring to your client. There are still going to be very complex issues that you have to deal with that AI is not going to be able to manage. At the end of the day, it's a tool that should be making our lives easier and taking some of the rote work away, maybe first drafts of things. That's, I think, some of the fear right there. I also think there's a lot of overhype about AI. A lot of the people talking about it, just kind of out in the world
online, talk about how great and easy it is to use, but they're really not talking about how much they have to work with it to get the end product that they need. So that's not helping things either.

Can I just comment on what Cassie says about fear? I'm a plaintiff's attorney, and I was also a public defender for 11 years. I can't emphasize enough this fear element that's preventing attorneys from using all of these tools and all of these language models. I've spoken about this topic to many lawyers, and it blew my mind how many attorneys had not even tried using a language model, because they were just afraid to try it; they would later privately email me and say, "can you send me the website for this?" and I'm like, oh my gosh. As respectfully as I can say it, it's insane how many attorneys are not even trying and have no idea what we're even talking about. And in terms of this task force: attorneys have a responsibility to learn this, and it is such a cop-out to say "I don't understand it." I think there are even some malpractice arguments you could make, on the criminal side specifically, where you've got machine learning being used to develop risk assessment tools for bail, and facial recognition. Unless attorneys realize that you can challenge these predictive models, and how wrong they can be, it's malpractice. And on the plaintiff side too: I do civil work, like I said, and I use it almost every day, whether for preparing for depositions, for openings, for closings. We have so many tools at our disposal, and we have such a responsibility as attorneys to educate attorneys on this topic, and people need to jump on board. Like I said, I've spoken to so many and explained how this works.

If I could, Debbie, just a... hi, I'm MJ Wilson-Bilik, a privacy, cyber, and AI lawyer at an international law firm, Eversheds Sutherland. So one of our big concerns,
because, I mean, I've seen demonstrations of gen AI, but the concerns we have, and they're very substantive concerns, are around confidentiality of the data when we enter it. We've looked at a number of systems, and really it's not just confidentiality but preserving privilege. I believe the first comment we made on the principles was that in some of the systems we're seeing, the producer of the system will maintain the data for 10, 15, or 30 days, not to train the model, but to optimize their system and to check whether the responses given are correct. We're concerned that that kind of process will impact privilege for us. So we're looking for a platform where we would have more control over that, where we wouldn't have, say, lawyers behind the scenes, hired by the platform provider, having access to the prompts and the data that we're entering into the system. I don't know if anyone else has been concerned about that issue, but the issue of privilege is one that we would like to see the principles address, maybe as part of the confidentiality issue. Because I think we do a disservice to our clients if we allow data that's confidential, and that we want to keep privileged, to be entered into a system, and we lose the privilege for some reason down the road because we haven't been careful about the systems we're using. And just to say, we also have a process right now where we're saying you can chat with your other gen AI systems, but subject to our three rules: you can't enter client data, you can't enter personal data, and you have to check your outputs. Those are, I think, pretty standard terms
of use at this point. So that's one of the things I wanted to raise. I don't know if anyone has thoughts about the challenges of maintaining privilege given the way the current systems are structured. We are starting to see the platforms addressing this, just because, I think, a number of other firms are raising this issue with them.

I can at least say that I'm aware, just through our feedback form, that you're not the only person to raise that as a question, so I know it's an active question. I think it deserves and requires further analysis, and I believe you also pointed out that the terms and conditions are one place we can look to begin to gather the facts necessary to do an analysis, to start to ask what the implications for privilege are. But it is early days. I'm unaware of this having happened; I don't think it's metastasized into an actual challenge yet, where people have claimed privilege and had that challenged. So it's a perfect time to begin to look at what safeguards and protections would be needed to maintain privilege, in the face of a challenge, when generative AI has been used. Hey gang, let's put that on our to-do list for version 0.3, shall we? Because there's been popular demand.

Mary, before you go off mute: you also posted something interesting in the chat. I don't want to pressure you, but I was wondering if you'd like to address it, related to principle three and third-party rights. I'm happy to; I raised that, yes. We had some question about what was meant, I think, in the principles. I have it as six. Oh, I don't know where it talks about... six? My bad, sorry. It's okay. The duty of regulatory compliance and respect for the rights of third parties applicable to the use of AI applications in your jurisdiction: we just weren't quite sure what was meant by respect for the rights of
third parties in that context.

Yeah. I would love to hear how the members of the task force see this as well, but my view is that I've always regarded six as the junk drawer in the kitchen: we crammed a lot into six. There were various drafts where we sometimes split out the privacy and intellectual property questions, and at this point we've collapsed the accordion down to regulatory compliance: compliance with whatever law, regulation, and legal requirements apply. The nod to what would otherwise be lost in the contraction to just one brief principle was this: there are IP rights, and we already have any number of litigation examples of people testing what their continued IP rights are; there are questions, though I don't know that they've gotten to litigation yet, about personally identifiable information that can come out; and, adjacent to that, there are questions about output "in the style of" someone, or rights of celebrity or persona, that come out in different ways. Anyway, that was some of what was animating it, but I feel like this is still somewhat unsculpted territory in six. To the extent third-party rights exist that need to be taken care of as part of responsible use, that's our placeholder for now, and I think we need to surface those rights, examine them, and think about what the best guidance is. That's still very much a work in progress.

And just to answer Mary's question: there are companies, such as Opaque Systems, that are addressing this issue on the security side, so that you can use LLMs at your company with your information staying confidential. I don't see that the companies using these large language models have all done this yet, but there certainly are companies working on it, like Opaque Systems, which want you to be able to have these secure rooms for your data
and utilize all of these large language models. Yeah, that's what we've come to. Yeah, thank you.

Yeah, one of the ways we were possibly interpreting the idea of rights was one we mentioned earlier, that of bias: ensuring that how we use the tools is not going to encourage bias, and understanding how the tool is trained. I just wanted to add that in there as one of the ways we were thinking it might have been meant; if it wasn't, it might be next, because that's a very... Anything else from the task force members before we move on to the next?

Okay, let's practice a new way. Oh, excuse me. I was just going to say, building on Debbie's and I think also Sam's points: OpenAI was a front runner here, and whether you look at Anthropic's Claude model or elsewhere, I think there's an argument for open-source models, and the fact that, say, Llama 2 is becoming incredibly powerful, actually pretty comparable in competence to GPT-4, for example, or more similar to Claude 2. I'd say that kind of mitigates some of that fear or uncertainty around the potential leakage of client data, because you'll be building a custom LLM within your own instance. So there is definitely more exploration in that space.

I see Sam Harden had something on the privilege question, so before we move to another topic, Sam, did you want to contribute that? We don't have to discuss it, but I was just saying, I think we're going to see a lot of vendors in legal technology saying they have an AI product without disclosing what model they're using or where they're sending the data, just saying that they're using generative AI in some way. This isn't really a question for the task force, but I think we should consider whether they need to be obligated to say, "we're using OpenAI's API or Claude's
API or another API and we're sending your data elsewhere, or we're processing it in-house for you and keeping it separate.

Yeah, thank you for contributing that. It really gets back to almost a third rail that we've somewhat stepped on, one that Megan identified as part of the framing of this session. It relates to what I would call the question of consent and notice, and how that plays out. There's no principle that has drawn a wider variety of views than that one. Some on one side believe it's critical to actually get consent; it almost has a First Amendment or FOIA feeling, that transparency is the best disinfectant, and that outright explicit, specific consent is needed to deal with some of the unknown ramifications and possible second-order consequences of using this technology. On the other side of the spectrum we've heard, very loud and clear, probably numerically more folks say: we use all kinds of tools, that's part of what we do as trained professionals; we don't get into detail on that, and we shouldn't, and we won't. And then there are a lot of gradations and different axes between those two points. One of the things I was hoping is that we might hear different people's views on that, not necessarily your positions or anything binding, just different ways to look at it, and any thoughts on where it might most beneficially come out in the end. We do want these guidelines to be as good as possible, we want to get them out sooner rather than later, and we want to say something that's beneficial on that point. I see a hand, and I see Damien's hand, and that means hopefully we'll hear words from Damien's mouth.

Words from Damien's mouth indeed. I thought a lot about this; Sam is very smart, obviously, and I think highly of what he said. Thinking as somebody who's building these tools: okay, if I'm using, say,
an internal model that I've built within my own systems, and I'm taking user input and using the model internally, that's probably less concerning to the end user, because I'm not actually pushing it up to GPT-4 or something else like that. So really, I wonder if the objection Sam is raising is about which third parties you are providing my data to. It's not necessarily which model you're using; it's with whom you are sharing the data. Maybe that's a clarification on that point.

Damien, you said it better than I could have. Thank you, from Damien's mouth and his brain.

I think notice is very helpful. One of the voluntary commitments the AI companies made at the White House was that they would work on watermarking, or some kind of evidence that an output was AI-generated, because there is such concern around deepfakes and all that. So I actually think it could be helpful to have notice.

Yeah, Mary Jane?

One of the things I wanted to mention, and I'm so glad you brought this up: at IBM they created a special font that was specific to anything from IBM Watson, so anything that wasn't actually human-created would have this font. Maybe that's something. I've been in quite a few of those White House meetings too, and I kept thinking I need to write a letter in just to mention that, because maybe it is a font or something like that. It's similar to CRISPR-Cas9, if you've worked in the medical field: when they create something synthetic that goes into the body, there's a marker that lights up, so it highlights the things that aren't native to the body itself. A font would probably work too, except for the bad actors, who could probably get around it, but that's an idea; it would be an interesting watermark.

I have one comment on that. I know the Content Authenticity
Initiative, I think it's part of Adobe, I always get the acronym wrong, is working, I think, on some sort of embedded watermark or metadata. That's worth considering and a potentially good option, but a lot of the time the work product we have isn't only human-created or only generative-AI-created. It may be an initial draft that then gets worked on a lot by a human. I mean, some really great use cases of generative AI are: hey, we have a client we've worked with over and over again, they have their specific style of RFP responses, and instead of manually pulling from their case files and manually copying, pasting, and editing, we're going to build a bespoke model for this and have it do the first draft. It's really a first-draft job; it's not going to do final copy or anything like that. So the first version is going to be heavily edited, massaged, and finalized by humans. At what point is it AI-created but heavily human-touched, all the way out the door? Generative AI tends to invite binary discussions, but it's really very hybrid, mixed, intertwined, so accounting for that in these discussions, I think, is important.

That was actually my point as well: there's fully human-created and fully machine-created, but almost everything will be in between. Will I say that these three words were generated by machine? Maybe, but that seems silly. Point number two, on fonts: that works well enough, but if I paste as plain text into a Word document, of course those fonts go away. All that's to say, these are important problems, and the solutions are really hard.

And now it is my incredible pleasure to invite Liz to speak, and this will be the first time I've actually heard your voice, although I feel like we're all friends already thanks to LinkedIn and everything else. So Liz, I see you have a contribution;
you have the floor.

Oh my goodness, I wasn't expecting that; I was just contributing to the chat. Hi, everybody. Just by way of background, I work in legal education, in particular practical legal training, and I also consult to firms on tech, but from a human-centered design perspective. On LinkedIn I've been promoting a lot of content about generative AI, really to help lawyers get comfortable with this new tool, because when I speak to people in person I see a lot of fear and a lot of resistance. The way I experience it, when they talk to me, is: "that's a low-quality thing to do, I would never do that," or "what do you mean, that's going to replace me?" That mixture of ego and fear all at once seems to come out. So I just wanted to make the comment that we need to expand on, or get the message out, that generative AI is a tool, and it will allow us to flip our time. My experience as a practicing lawyer was that there was so much pressure in churning through tasks that you often missed the opportunity to stop and think and really consider what it was you were doing. Generative AI is the perfect accelerator to help you turn through those tasks and give you that thinking time. It might not necessarily mean you get through something quicker per se, but you actually have time to properly consider. And this is affirmed by a recent study, which I'll share later, that basically found that high-performing law students using generative AI saw their performance decline, whereas low-performing law students saw their performance elevated. One of the explanations was that perhaps the high-performing students didn't understand how
to properly integrate the technology into their work, and that they also kind of abdicated to the AI and didn't apply their own high-level, nuanced critical thinking to what they were doing. I think that's a message that really needs to come out: it's an opportunity to spend more time in the area where humans do make a valuable contribution, which is in the gray, in the nuance, in the critical thinking. It's not something where we just stop, but it's also not something to be afraid of. I guess that was all I wanted to say.

Hear, hear. Thank you very much; it's great to hear your voice, by the way, and hats off for that. I think we should all be super impressed.

It's currently 3:42 a.m. where I am, in Bali, at the moment.

And you'll be next; let me make just one quick comment on that first. You'll note that this task force is focused on risks and harms and so forth, and that's a critical thing for attorneys, and for everybody, to be aware of. But the emphasis at law.mit.edu with respect to generative AI is really on the beneficial use of this technology, on how profoundly powerful it can be to help attorneys practice at the top of our licenses, to help supercharge the better-performing law students, attorneys, judges, and paralegals, and to bring up the floor of performance for everybody. We think there are opportunities to do that. It gets back to one of the earlier contributions about the duty of competence, or that was my interpretation: is there a possibility of ineffective assistance of counsel, or malpractice, when people aren't using this well? That's sort of the sharp end of the sword. The carrot, I think, is what we can get out of learning about this technology and how to use it well. It really is about thinking about the prompt, and thinking very critically, as you suggested, Liz, about the outputs and what they mean:
is it right, is it wrong, does it support our client's priorities and interests, does it open new channels for the theory of the case, and all that great stuff.

Okay, so who was it that was about to speak when I made that comment? I can't see your face anymore. Where did you go? There you are. You've got your hand up and you have the floor; could you say your name? I'm not sure how to pronounce it.

I'm not sure who you're speaking to, but usually when someone says "your name," I've found it's usually me. Hi, everyone. My name is EJ, and I'm really thrilled to be here. I love this discussion, and first, to the task force: I really appreciate what you are doing; I think this is really important work. I just had three quick points to make, and I'd really love people's thoughts on this. Among the challenges we recognize when we're trying to encourage attorneys to adopt AI responsibly, the first thing people talk about is competence. They're scared, and of course there's a lot of truth to that; they're not comfortable with tech. But I sometimes wonder how much of it is also a function of the fact that, to some extent, it's not necessarily in their interest. I was a product manager when TAR was coming up, and I recognized at the time that even people who understood the benefit of this use of technology for discovery recognized that, even though you're not replacing lawyers, you're still not going to need as many lawyers. And to the extent that you have a business model that's dependent on billable hours, how much of the friction is based on things that people won't say? Just something to think about, because based on my
experience with machine learning, with TAR, I don't necessarily believe it's all about a lack of competence. Attorneys are very smart people; when they pick up a new case they can learn the domain knowledge really quickly. So I don't know that competence or comfort with the technology is the complete story; I just don't know how to measure the things that they don't say.

And the last point I'll mention, and I hope this is not taken as criticism, because I really love this forum and I'd love to be part of this conversation again: it's something that may not be as important to everyone in the room, but it's definitely important to me, so I hope you'll indulge me. This is important work, and there are a lot of forums that talk about some of the nefarious impacts of AI. For people like me, for whom there's not as much data training these models, it impacts us differently. And just optically, walking into the room and being the only Black person here, and I'm not criticizing, okay, I'm just worried that your effort may be undermined by some who come in and, just optically, assume you're not interested in the issues that affect me. So just something to think about. And especially with you all being at MIT: there's a lady named Joy Buolamwini, she's actually in the lab with Robert Mahari, and she leads the Algorithmic Justice League, among other efforts. That's one person; I love being part of the conversation, but I haven't been researching this issue as much as she and perhaps others have. I'm not doubting that you're also getting those perspectives; I just worry that there might be people who might discount the value of your work just optically. So, something to think about. I love what you're
doing, and thanks for letting me join this conversation.

Thank you. On that last point, well, all your points were welcome, but on that last one I think you're right: I don't think we're getting a full spectrum of views, and I think we need help. We've done the best I know how to do, which is insufficient: we made it public, we put it in the press, and anyone who contributed got invited to this call. But I think there's a gap, and I would ask for help. I think Joy might be a good person to start with.

I mean, I'm happy to help, but I really do think you already have access to somebody who might add more value than I do.

I know that since she got famous she doesn't return my emails, but I'll do my best with her. And this is an open call and an invitation for assistance. Thank you for saying that.

Well, I see we're two minutes from our promised closing time, and speaking of learning hard lessons, I've learned not to let these things drag on beyond when I promised they would end, especially in the middle of a workday. So we're going to start to close up now. Thank you, everybody, for joining us and for contributing. For those of you who weren't comfortable contributing in this format, if you have further questions or comments or ideas, please do take another swipe at the feedback form, or hit reply-all to the email invitation. The last thing I'll say is that I'm coming to Europe: the Legal Hackers International Summit is in Madrid the first week or so of September, and I'm planning to maybe go to Italy and possibly one other place. So for those of you in Europe, we'd love a European perspective. Let me know if there are people at your firm or in your community, somewhere that's easy to get to from Madrid, and maybe we'll bring the show to your town
and do a field hearing, or hear more from you. So with that, thank you again, everybody, for your generous contributions and your time. We'll do our part to absorb what you've said and try to reflect and support it in the next version of the report. So with that, thank you again, and go forth and enjoy the rest of this beautiful summer day. Thank you, thank you.