OK. Welcome, welcome, everybody. Hi, I'm Daza Greenwood from MIT Media Lab and also executive director of law.mit.edu, which is the convener of today's workshop, the eighth annual MIT Computational Law Workshop. I just want to start by saying, having done these things at MIT since actually the late 90s on this topic of law and technology, I honestly believe this is the best program yet. And that's owed largely to a breakthrough with widely accessible generative artificial intelligence and its applications for law and its impact on law and legal processes. How could generative AI, tools like ChatGPT, be used in a legal context? You could use this type of tool for a contract. Well, wait a second. What kind of contract? A first draft of a contract.

Let's just start with a warning. That warning should probably be in parentheses, but it does not go without saying, and I think it does not go without emphasizing, that this class of technology is not perfect. In fact, it's deeply flawed in some ways. It provides inaccurate and false information, and there's a risk of relying on it too much, of just using the first draft as the last draft, for example. It also could raise other legal, higher-level policy issues with misinformation. It also has prejudices and biases that were brought in through the training set. So beware of propagating those, because they can be deeply embedded within the results.

And bias is particularly interesting in a legal context, which I'll come back to in a moment on fiduciary duties. Attorneys are one of those roles that owes fiduciary duties of loyalty to our clients, and that means putting the client's interests first. To the extent that the training data includes prioritization of corporate interests, or a consumer interest, or some particular government or cultural kind of interest, which can seep in as part of the bias, that may or may not be the same as the client's interests that we need to put first.
So becoming aware of, and a savvy consumer of, these outputs as an input to us doing our job is critical. The last thing I would just say on this is something that I'll put right in the chat here, because these are really words to live by. This is a quote from Sam Altman, the head of OpenAI, which provides ChatGPT: "ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It's a mistake to be relying on it for anything important right now. It's a preview of progress."

Now, having said that, the reason these warnings are important is because this stuff is amazing. Like I was saying at the start of the workshop, we've just experienced a sea change, a major threshold moment in terms of the capabilities that are now widely available and that have particularly good application for legal use cases. Can we get the next slide, please?

What kind of applications? I mentioned contracts, only the first draft. Statutes: if you've done the pre-reading, you will have seen my back and forth with ChatGPT on fiduciary duties. I've written a few federal statutes in the US in my time, and it came up with a very good, I would say, first draft of a statute for the particular context that I provided it. A complaint in a judicial context, deposition questions, a brief, basically anything for a first draft. But it's not just drafts of documents. Lawyers have very frequently been document-paradigm oriented, but there are also processes, and I think that the biggest opportunities might be with legal processes. So for example, legal triage is something that Suffolk University Law School has been doing, where individuals can speak in plain language and the AI can figure out what the relevant context is, surface the legal issues, and then get people to the right person to help them.
Consumer rights: we're going to hear from Joshua Browder at the end of the session, who is doing remarkable things with interactive, live, real-time usage of this technology. Companies have integrated it into things like chat bots, but his tool represents the consumer's interests, getting into a bot-versus-bot context there, and so much more. Next slide, please.

One of the really interesting things here, and something Megan and I have been working on a lot and you'll hear more about in 2023, is what you might call the latent knowledge or the capability overhang that happens when you take all of this text, more than one corpus, corpora from all across humanity, and you vectorize the words and the phrases and the concepts, like in linear algebra. Interesting patterns emerge that were heretofore unknown, and that is a new source of knowledge that can be used very productively in lots of commercial and academic and governmental and other use cases. There are so many possibilities. We've got some great speakers, so I'm just going to skip across this for now. Let's go to the next slide. There's a lot on that last slide, though; we'll come back to it later this year.

So, legal engineering, meet prompt engineering. You know we love legal engineering at law.mit.edu, and you can look at our media page to find some deep dives into what we think legal engineering is and why we think it's so important. Prompt engineering is a phrase you may have heard; Megan just went into some of the details of it. We think that there's a subset of prompt engineering that is particularly useful in a legal context, and when I talk about a legal context, that really gets us back to a concept which also resonates in law, from evidence: relevance. And so one of the critical things to get great results from a prompt in a legal context is to design the prompt so that it provides the relevant context.
So by way of an example, and actually, do you mind if I screen share for a second, Brian? Yeah, just a moment and let me get out of that. Yeah, there you go, you got it. So here's just an example. For deposition questions, you could ask it, "give me deposition questions," and I put a link to this in the chat. But if you tell it things like the purpose of the deposition, the specific case and the parties involved, and some of this other relevant context here in the context of a deposition, it will give you much better ideas for questions that you could ask.

Similarly, when I said a draft of a contract, you could say "give me a draft of a contract to buy a used car" and you'll get back something that's pretty good. But if you were to ask it, "I want a draft of a contract for a used car between individuals in the state of California, and include the make and model, the purchase price, whether there are any warranties, et cetera, et cetera, in simple plain language," then you will have composed the prompt, in a certain sense legally engineered it, to make sure that the relevant context is supported and reflected in the draft that you get. And that will make the draft all the more valuable.

The reason I posted this on our workshop GitHub repo is because when I was writing it, I was thinking: what do I say to the workshop participants about what's relevant in these different contexts? One of the things I did, which has become a new go-to for me in the last month and a half, is I went to ChatGPT to ask it what context it would need in a prompt in order to get the best contract. These were the actual answers that I got from ChatGPT describing the context for these different things, and I'll tell you what, it was better than my draft. These are all twice as long as the examples that I had provided, and they're all quite good.
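To make the contrast concrete, here is a minimal sketch of what "legally engineering" a prompt might look like in code: the same request, once bare and once with the relevant context filled in. The template wording and field names below are illustrative assumptions, not anything from the workshop materials or a particular API.

```python
# A minimal sketch of prompt construction with relevant legal context.
# The bare prompt and the template fields are illustrative examples only.

BARE_PROMPT = "Give me a draft of a contract to buy a used car."

CONTEXT_TEMPLATE = (
    "Draft a contract for the sale of a used car between two individuals "
    "in the state of {state}. Include the make and model ({make_model}), "
    "the purchase price (${price}), and whether any warranties apply "
    "({warranties}). Use simple plain language."
)

def build_prompt(state: str, make_model: str, price: int, warranties: str) -> str:
    """Compose a context-rich prompt from the facts relevant to this deal."""
    return CONTEXT_TEMPLATE.format(
        state=state, make_model=make_model, price=price, warranties=warranties
    )

prompt = build_prompt(
    state="California",
    make_model="2016 Honda Civic",
    price=9500,
    warranties="sold as-is, no warranties",
)
print(prompt)
```

The point is not the template itself but the habit: every fact the drafter would need (jurisdiction, parties, price, warranty terms) is carried into the prompt, so the model's first draft already reflects the relevant context.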
The last thing I'll say is about prompt engineering. Mostly what I was just talking about was prompt construction, just prompt grammar and syntax and semantics, in a way, which is important. You could say that's kind of legal engineering; we craft words. The deeper engineering here, and we'll see it with Jesse Han and the other people who have been doing this, is to integrate the prompts as part of a workflow that can be automated. So there are inputs at certain points; we get an output that becomes an input for another part of a process. We can actually engineer generative AI at certain points in a sequence of a workflow. That's an even deeper concept of prompt engineering.

And then the deepest is something I've been calling prompt plumbing, which is a much lower layer of the infrastructure. You can use approaches like LangChain, which does some really interesting things: it takes summaries of big blocks of text, vectorizes them, and carries the context forward. You can do things with much greater amounts of information than what happens just through an interface like ChatGPT, where you run out of tokens. We'll get much more into all that later in the year.
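The chained-workflow idea above can be sketched in a few lines: each step's output becomes the next step's input, and a rolling summary carries context forward so the text sent to the model stays bounded, which is the basic trick behind tools like LangChain. Everything here is a toy stand-in under stated assumptions: `call_model` is a placeholder for any LLM API, and `summarize` just truncates where a real pipeline would condense with a model or embeddings.

```python
# A toy sketch of prompts wired into an automated workflow. Each step's
# output feeds the next step, and a running summary keeps the context
# passed to the model within a fixed budget (standing in for token limits).

from typing import Callable, List

def call_model(prompt: str) -> str:
    """Placeholder for a real model call (e.g. a request to an LLM API)."""
    return f"[model output for: {prompt[:40]}...]"

def summarize(text: str, limit: int = 200) -> str:
    """Naive stand-in for summarization: truncate to a budget. A real
    pipeline would ask the model or use embeddings to condense this."""
    return text[:limit]

def run_pipeline(document: str, steps: List[str],
                 model: Callable[[str], str] = call_model) -> str:
    """Run a sequence of prompt steps, threading a rolling context."""
    context = summarize(document)
    for step in steps:
        prompt = f"Context so far:\n{context}\n\nTask: {step}"
        output = model(prompt)
        # Carry forward a condensed mix of prior context and new output,
        # so later steps see earlier results without unbounded growth.
        context = summarize(context + "\n" + output)
    return context

result = run_pipeline(
    "Full text of a long purchase agreement...",
    ["Extract the parties and key terms.",
     "List potential issues for the buyer.",
     "Draft questions for opposing counsel."],
)
```

The design choice worth noticing is that the model call sits at a defined point inside an ordinary program, so outputs can be validated, logged, or routed before the next step runs; that is the "engineering" half of prompt engineering.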