OK, welcome, welcome everybody. Hi, I'm Dazza Greenwood from the MIT Media Lab and also executive director of law.mit.edu, which is the convener of today's workshop, the eighth annual MIT Computational Law Workshop. I just want to start by saying, having done these things at MIT since the late 90s on this topic of law and technology, I honestly believe this is the best program yet. And that's owed largely to a breakthrough in widely accessible generative artificial intelligence and its applications for law and its impact on law and legal processes. So put your seatbelts on. This one is going to be a doozy. So with that, let's get right into it, shall we?

Hi, everyone. We are so thrilled to see the incredible turnout, and excited that so many of you are deeply engaged in this topic. It seems we are at the cusp of serious advancements in human-machine collaboration. If we are to consider, then, the roles and possible use cases for generative AI, how could we see machines as partners in our legal processes and practice? As we hear about experiments in the ways generative AI is finding integration into, and accelerating, our everyday workflows, how do we appropriately account for and mitigate the risks and harms of its use? On the other hand, how could we evolve our skillsets to not only enable more efficient practice but also unlock more creative and critical capabilities? If we could move ahead a couple of slides. Yes, sorry, stop at the human. Next slide.

I frequently think about the Human Diagnosis Project. This is a worldwide effort created and led by the global medical community to build an open intelligence system that maps the steps to help any patient around the world, in effect, a crowdsourced consult. For those of you unfamiliar with this reference, a consult is the term typically used to describe conferring with multiple doctors at once for their opinions on whether this may indeed be the right diagnosis or treatment plan.
The Human Diagnosis Project mirrors this process, but as opposed to a single consult, their tool enables multiple simultaneous consults in a matter of minutes, verified against knowledge sourced from medical experts at the world's leading institutions. Interestingly, the key driver behind the technology is fundamentally the deep collaboration between human and machine. That is, the success of the project is owed to contributions of human expertise that continuously refine the tool's competencies. Users have widely shared positive testimonials about how the system improves their diagnostic reasoning, not only allowing them to produce differential diagnoses more rapidly, but also to think more critically and across highly disparate cases. I share this narrative to illustrate that we may find inspiration in the Human Diagnosis Project at the advent of generative AI. Perhaps many of you in the audience can agree that the medical and legal fields do share a few similarities, in particular that knowledge management plays a monumental role in the success of the practice. A direct correlation in this specific sense is the idea that we form legal diagnoses, whether in redlining and contract review, argument development in termination cases, or discovery. The notions of issue spotting, fact-finding, and risk analysis altogether contribute to a diagnosis. A key difference, of course, is the importance of language as a core element of the field. Next slide.

And so in the wake of GPT-4, I have been reflecting on what it means to have a conversation with machines. More importantly, what can we learn from human-to-human communication that can be applied to human-machine communication? Linguists have long reflected on notions of communicated meaning through the lens of pragmatics. Pragmatics is largely regarded as the extra-linguistic considerations relevant to conversational appropriateness.
What is meant may be inferred from what is said on the basis of principles such as cooperation, informativeness, and relevance. Next slide.

The introduction of cognitive pragmatics, or a cognitive-system view, disrupted the broader field of pragmatics by considering the mental inputs and outputs of communication. Cognitive pragmatics is interested in the structure of dialogue derived from a shared knowledge of an action plan. Next slide.

Bruno Bara, a renowned scholar in the field, describes how cognitive pragmatics manifests through conversation games. He defines a conversation game as a set of tasks that each participant must fulfill. In short, this translates to: Party A produces an utterance, and Party B builds a representation of its meaning. The hope is that this representation is a reconstruction of Party A's communicative intent. As discussed, conversation games are intended to be communal, a simultaneous effort to build together. The game is predicated on some form of mutually shared premise. In an ideal game, the speaker can predict how the receiver will reconstruct the meaning of the utterance, and the receiver comprehends the speaker and is in fact capable of reconstructing its meaning. However, a key element of conversation games is that the receiver will always react and respond to the speaker, even if the receiver does not necessarily understand them. Accordingly, a conversation game will continuously reset until a congruent representation of meaning is achieved. A conversation game can thus be highly ineffective if no shared understanding ever exists or can be reached. To mitigate issues of interpretation, the idea is to create a collective belief and to use utterances that are illocutionary acts. "Illocutionary act" is a term put forth by J.L. Austin, a philosopher of language, to describe words that express what is done and to be done, that is, actions. Some examples include assertive, interrogative, and directive statements.
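The conversation-game loop just described can be sketched in a few lines of Python. To be clear, this is a toy of my own, not anything from Bara's work: the `encode` and `decode` functions and the string-equality check standing in for a "congruent representation of meaning" are all illustrative assumptions.

```python
def conversation_game(intent, encode, decode, max_turns=10):
    """Repeat the speaker/receiver exchange until the receiver's
    reconstruction matches the speaker's communicative intent."""
    reconstruction = None
    for turn in range(1, max_turns + 1):
        utterance = encode(intent, turn)    # speaker produces an utterance
        reconstruction = decode(utterance)  # receiver builds a representation
        if reconstruction == intent:        # congruent meaning reached
            return turn, reconstruction
        # The receiver always reacts, even without understanding,
        # so the game resets and the speaker tries again.
    return None, reconstruction

# Toy participants: the speaker grows more explicit each turn,
# while the receiver takes every utterance literally.
def encode(intent, turn):
    return " ".join(intent.split()[:turn])

def decode(utterance):
    return utterance

print(conversation_game("please close the window", encode, decode))
```

The point of the sketch is the reset: the loop keeps running, turn after turn, until speaker intent and receiver reconstruction converge, or the game fails with no shared understanding.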
In a legal context, illocutionary acts are no stranger, as lawmaking frequently relies on the use of directives. Next slide.

So why does this matter? There is a powerful analogy to be made between conversation games and how we speak with machines, that is, engaging with large language models in dialogue. Folks in the audience are likely already familiar, but one of the significant steps that led to the release of ChatGPT is owed to its predecessor, InstructGPT. InstructGPT applied reinforcement learning to fine-tune GPT-3 to better understand written instructions. Its ability to respond to user instructions and learn from human feedback enabled progress in the contextual richness of its outputs: though still far from perfect, a much closer alignment to human intention. Similar to the conversation game, the fine-tuning of GPT-3 on human instructions can be regarded as a parallel to the active use of illocutionary acts to mitigate misinterpretation in human conversation. Therefore, it is no coincidence or surprise that when speaking with machines, we have been perfecting the art of illocutionary acts, namely directives or instructions. Moreover, with the onset of increasingly powerful generative AI models came a rising interest in prompt engineering. This is seen in the development of publicly available prompts to test and experiment with various competencies of ChatGPT, such as browser plugins that help discover, share, and import prompts while using the tool. One of the clear patterns that has emerged in prompt engineering is its representational nature and use of embodiment. Numerous prompts begin with "act as" or "pretend to be." Other prompts come in the form of specific requests. In either scenario, we see behaviors that are highly performative and illocutionary. So, returning to our initial ask of the workshop: what use cases do we see for generative AI?
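To make the "act as" embodiment pattern concrete, here is a minimal role-prompt template in Python. The template wording, the parameter names, and the example legal role are all assumptions of mine for illustration; they are not drawn from any particular prompt library or plugin.

```python
# A minimal sketch of the "act as" / embodiment prompt pattern:
# a directive (illocutionary) frame wrapped around a concrete task.
ROLE_PROMPT = (
    "Act as {role}.\n"
    "{task}\n"
    "Respond as {form}."
)

def build_prompt(role: str, task: str, form: str = "a numbered list") -> str:
    """Assemble a role-based, directive prompt for a chat model."""
    return ROLE_PROMPT.format(role=role, task=task, form=form)

# Hypothetical legal use: a first-cut contract review.
prompt = build_prompt(
    role="a contracts attorney reviewing a vendor NDA",
    task="Identify any clauses that deviate from standard boilerplate.",
)
print(prompt)
```

Note that every line of the template is a directive: the prompt tells the model what to be, what to do, and what form the answer should take, which is exactly the performative, instruction-shaped language the fine-tuning of these models rewards.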
The patterns that have emerged in prompt engineering suggest that the legal tasks most amenable to generative AI are those related to execution: a first cut, a first draft, generating existing boilerplate. Next slide.

Yet more complex reasoning tasks, such as issue spotting, enabling creative multi-perspective construction of arguments, and collating and inferring meaning at scale, fall short if we are limited to the use of illocutionary acts. We will require additional fine-tuning beyond user instruction, extending to user negotiation, user critique, and user perception. Complementarily, prompts must sufficiently account for why we communicate in addition to how we communicate. And so how, then, could we fine-tune our models such that they reflect the forms of written legal communication embedded in the interactions of the field, in particular those that can reveal the strategic uses of language?