Welcome everybody. I think we'll get started. My name is Sarah Harding. I am the Dean here. I like to say I'm the new Dean here because I am new, but at some point I'm going to have to stop saying that. As always, we want to start off with a couple of acknowledgements. To begin with, Dalhousie is located in Mi'kma'ki, the ancestral and unceded territory of the Mi'kmaq. We are all treaty people. We also recognize that African Nova Scotians are a distinct people whose histories, legacies, and contributions have enriched that part of Mi'kma'ki known as Nova Scotia for over 400 years. My job up here is very simple and very short: it's really just to welcome you. It's a pleasure to have you all here, and it's a pleasure for us to be hosting this event, the F.B. Wickwire Memorial Lecture in Professional Responsibility and Ethics, presented with the Nova Scotia Barristers' Society. I was telling Elizabeth, who's my right-hand person, earlier today that I'm pretty confident I met Ted Wickwire when I was here in law school many years ago. I have this memory of him, so all of this is bringing back a little bit of a memory of the person I met. I'd like to thank Richard Devlin for his hard work in organizing this lecture. And finally, my last little job up here is to introduce Mark Scott, president of the Nova Scotia Barristers' Society, who will be saying a few words about Ted Wickwire. So thank you so much.

Thank you very much and good evening, everybody. Thanks for making it out on this messy evening. We've all encountered a person who has had that special impact on our lives. Ted Wickwire was such a person. He walked amongst us with purpose, humility and grace. He was an alum of this university, a gifted athlete who excelled on Dalhousie's basketball and football teams, an avid volunteer, and someone heavily engaged in community and politics. Ted's commitment to the legal profession, to his community and to his friends was well known. He was a consummate professional. He embodied all that we strive to be, as lawyers and, more importantly, as human beings. He practiced and modeled the highest professional standards. I'd like to briefly share just some of the many contributions that Ted Wickwire made in his life to our profession. While it's difficult to believe now, the legal profession in Nova Scotia did not have ethics guides for lawyers until the 1990s. Ted was the chair of the Society's first legal ethics committee, and he oversaw the development of the handbook on legal ethics and professional conduct. He was the first chair of the Nova Scotia Legal Aid Commission, which would reduce barriers for Nova Scotians seeking access to legal services. He initiated conversations about making our complaints and disciplinary process public. This would only come to fruition about a year after Ted passed, but his prophetic guidance helped prepare us for the change that would eventually come. He also wrote a report on the findings of the Marshall Inquiry and the role that our profession must play in the administration of justice. Ted represents the best of the legal profession. He was a person of unimpeachable integrity. He dedicated his life to calling on others to do better and to be better, as lawyers and as people. Ted has given us an excellent example to strive for. So when Ted Wickwire passed away at the young age of 52, the Barristers' Society wanted to honor his life and his legacy. We partnered with the law school to create this memorial lecture series.
The decision was made to have this lecture series focus on professional responsibility and legal ethics because Ted Wickwire championed professionalism and high ethical standards throughout his career. We are incredibly lucky to have had Ted Wickwire walk amongst us. We at the Society are so happy to have this opportunity to celebrate and honor him with this lecture series every year, and we thank you for attending. Right now I'd like to pass the microphone on to Professor Richard Devlin, who will introduce our keynote speaker.

Excuse me, I've got a terrible cold, so I'm going to squeak my way through this. First of all, let me start by saying thanks for coming all the way from Ottawa. Amy is a colleague and I think we're friends — are we friends? I haven't offended her yet. Just checking. Originally Amy hails from Alberta, but she has spent much of her life, certainly her academic life, in Ontario. She went to U of T and on graduation received the Dean's Key. That wasn't prestigious enough, so she went off to Yale for a couple of years, did her master's degree and then her JSD, Yale being one of the top law schools, certainly in the U.S. After coming back she practiced in a litigation firm in Ontario, where she practiced contract law. Contract law, people, is important. Tort law, not so important. Professional regulation, et cetera. So she did litigation for a number of years but saw the beauty of university life and came back to the university. She currently teaches at the University of Ottawa in torts, dispute resolution and professional responsibility, and she was just telling me that next term she's actually going to teach a seminar course at Ottawa U on artificial intelligence and the legal profession. She has served as president of the Canadian Association for Legal Ethics and is currently the chair of its board. She has also served as co-chair of the National Association of Women and the Law. Recently she published, as co-author with Justice Alice Woolley, the third edition of Understanding Lawyers' Ethics in Canada, and as of this morning she has accepted the role of co-editor of the next edition of the casebook that you have for your course. So that's her next opportunity and challenge. So, Amy, thanks for coming. I look forward to the talk.

I'm sorry to have made you go through that, but I really appreciate that introduction. Thanks for inviting me and having me come speak today, thanks particularly to Richard and Elizabeth for all their help in organizing it, and to the Society for having this lecture series go on year after year. I'm extremely honored to be here and to be attached to a lecture series that is a memorial to such a wonderful individual as the one we heard about today. As noted, I'm going to be talking about AI and the legal profession. I'll go through a few different topics to set the groundwork and describe what's happening, but I also want to raise some more philosophical questions, thinking about where we are with the technology and where we might go in the future. In terms of my plan for today, I'm going to present for about an hour. Here's what I plan to do. First I'll do some table setting: introduce the concept of AI and go into some detail about this current moment of AI we find ourselves in. Then I'm going to talk about AI and the legal profession. How is this technology being used now?
How might it be used in the near future? Then I'm going to get into the question of where we are going with all of this, raising some ethical questions and having that kind of discussion with you, and then I hope we'll have some time for questions and answers, hearing your questions and also your perspectives on some of the things we've talked about today. So I'll start with this "what is AI?" question. Can people hear me in the back with the mic? I don't really know. I'm going to stop fiddling with it. Okay, if anybody can't hear me, put your hand up.

So we'll start with some definitions and background. What is AI? I'm sure that when I mention AI, a whole different set of things might come to mind across the audience. Maybe when I say AI, some of you think of a superhuman advanced robot. Maybe you think even more nefariously about the Terminator movies. Maybe some of you come from a more technical perspective — maybe you have a computer science background and you think about lines of code. There are also maybe a number of you in the room who don't think about AI too much. You have a busy life. You're studying as students, you're working as lawyers, and thinking about the definition of AI is not something that occupies your waking hours. Very possible — there are people like that in the room as well. The reason I set us up that way is just to recognize that, particularly with a topic like AI, in a room like this we're coming from a wide range of different experiences, knowledge and perspectives. I know there are some people in the room who are very familiar with technology. I've also heard from some people attending today that they want to hear about some of the basics. So that's what I plan to do: spend not a long period of time, but just a few minutes, talking about some of the basics. The idea is to set the table so we're all well placed to have the conversation together, and hopefully build up some of those basic concepts.

So what is AI? That's the topic of this first section of my presentation. A good place to start, I often think, is to recognize that AI is a bit of a term of art, and it's been around for some time. Most people trace its origins to the mid-1950s. So we've been talking about AI for a while. One famous quip is that AI is just whatever hasn't been done yet — pointing to the most futuristic type of technology. Another common, simple definition describes AI as doing things with computers that traditionally would have required human intelligence. In both of those definitions you see this idea of boundary pushing, this futuristic idea, aligning maybe with that robot image at the beginning. What we're seeing, certainly more recently, is more and more advanced technology being rolled out. You probably see some of this in the news or read about it otherwise. And as we see more and more advanced technology, and start seeing some concerns about that technology, we are now starting to see things like legislation, guidelines and principles — you would have seen something from Joe Biden the other day, possibly. And so we now have available to us some more technical definitions. I should note that in these legislative definitions there are often huge fights about exactly how to define AI, so it's not always agreed upon in the expert community.
What you do often see in these more technical definitions is a reference to systems that can learn themselves — maybe some of you have heard of this concept of machine learning — or systems that can do things autonomously or partly autonomously. It's probably hard to read in the back, but I have excerpted up there (and reading it doesn't really matter) a definition from Bill C-27, which was introduced in Canada; that's the federal AI legislation. At the end of the day, I don't think it's essential for most lawyers or most students to grab onto one particular, very specific definition of AI. To me, what's more important when I talk to audiences like this is to explore the idea of basic literacy, and literacy vis-à-vis the specific tools that are being used or talked about, because the benefits and risks of different tools differ. AI is just not one particular thing.

What I'll focus on today is something called generative AI. That's a subset of AI. So we started with AI; now we're talking about generative AI. Generative AI, again, you can define in many different ways, but I think this is a simple definition that works well for us: systems that are capable of producing content like text, images, or video. One way to think about how generative AI is different is to contrast it with other types of AI that focus on analyzing things. An example I think about sometimes is facial recognition software — probably people have heard about this technology. Essentially you have AI that seeks to match a face against a database of faces it has. It's analyzing that face. So that's not generative AI; that's a different type of AI, though it uses machine learning too. But if you talk about faces and generative AI, what you'd be talking about are these new tools coming onto the market that can generate photorealistic faces after a text-based description is fed into them. Has anybody seen these tools out there? Some people. Okay. So that is generating something. That's the distinction.

But I'll go even narrower today. We started with AI; I just used the term generative AI and described it; and I'm only talking about one particular form of generative AI, and that's generative AI that produces text. That'll be my main focus for today. That type of generative AI uses something called a large language model, and ChatGPT, which some of you may have heard of, is a particular interface built on top of one large language model. What's a large language model? I'll be done with definitions soon, but just a few more. Essentially, a large language model is a machine learning model that can recognize, summarize, translate, predict, analyze sentiment, and generate text based on patterns and relationships it has learned from massive data sets. LLMs work by predicting the next term or word in a sentence given the words that came before. So again, I'm focusing in on these particular tools that generate text and what that might mean for the legal profession. I mentioned ChatGPT before. This is a tool that was introduced by a company called OpenAI at the end of November, just under a year ago, offered free to the public; there's a paid premium version. By show of hands, how many people have heard of ChatGPT? Okay. So more than at a talk I did earlier this week, where it was about half. So people here are more familiar with the technology. How many people have used it? Okay.
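To make the "predict the next word" idea concrete, here is a minimal toy sketch in Python. It is only an illustration of the sampling loop, not how any real large language model works: real models learn their probabilities from massive data sets using neural networks, whereas the hand-made probability table and the example prompt below are invented purely for demonstration.

```python
import random

# Toy stand-in for a language model: for a given three-word context, a
# hand-made probability table over possible next words. A real LLM learns
# these probabilities from billions of examples, not a hard-coded dict.
NEXT_WORD_PROBS = {
    ("the", "standard", "of"): {"care": 0.7, "review": 0.2, "proof": 0.1},
    ("standard", "of", "care"): {"for": 0.5, "is": 0.3, "owed": 0.2},
    ("of", "care", "for"): {"a": 0.6, "lawyers": 0.3, "the": 0.1},
}

def next_word(context):
    """Sample the next word given the last three words of context."""
    probs = NEXT_WORD_PROBS.get(tuple(context[-3:]))
    if probs is None:
        return None  # the toy model has nothing to say about this context
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights)[0]

def generate(prompt, max_words=5):
    """Repeatedly append a sampled next word -- the core generation loop."""
    words = prompt.lower().split()
    for _ in range(max_words):
        word = next_word(words)
        if word is None:
            break
        words.append(word)
    return " ".join(words)

print(generate("the standard of"))
# e.g. "the standard of care for a" -- fluent-looking output, but nothing
# was "looked up"; each word was just a weighted guess given prior words.
```

The point of the sketch is only that generation is a chain of probabilistic guesses conditioned on what came before, which is why the output reads fluently whether or not it is factually grounded.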
So a fair number of people are familiar with the tool. I have a few demos we can watch — we won't have too many. This, essentially, is me using GPT-4 to describe the access to justice crisis in Canada. The text, of course, is extremely small for you, but you'll be able to see certain headings come up: the cost of legal services, legal aid limitations, the complexity of the legal system. Looking at this — geographical barriers, marginalized groups, technological gaps — these are certainly things I'd be mentioning if I was talking about the subject, and I'm someone who's read a lot of the literature on this. It's pretty impressive. I'm not going to say it's the perfect definition, or summary, but I think it would be unfair to say it's not impressive. So that's an example of this type of tool generating text.

One other thing — and maybe this is something you've done when you've used the tool — is that it has this ability to really capture different types of formats, different genres. So I did something a little bit silly here. I asked the tool to write me a story — I said a 200-word limit, because I don't want you to have to look at the screen forever — about a lawyer who can't stop telling their client Shakespeare jokes. And then, just to make it even more silly, their client is Homer Simpson. If people want a copy of this story and can't read it, I can send it to them later. You see my typing; this is a screen recording, so that's all real time. The title it came up with is "Much Ado About Dough," which is kind of funny. Again, it's a bit challenging for you to read, but it talks about Homer Simpson and a legal matter involving the nuclear power plant, and if you could read it you'd see different Shakespeare references thrown in at places that make sense — "to sue or not to sue." Maybe next time I'll do the audio version of this so people can hear it too. Just 200 words. A little further on: "I just want my donuts compensated," Homer says. And then there's an ending that makes sense, with Homer leaving the office — maybe next time I'll just represent myself, like Shakespeare, more donuts. So, a story — I could have got it to write a longer story, probably with more complexity and more references. Those are just two examples.

It sounds like a lot of people in this room have used the tool before, so maybe this isn't super new to you. When this tool came out, it did kind of blow a lot of people's minds, seeing what could be done with this type of technology. And that included people who worked in AI, who had been thinking for years about what AI could do, who had worked in the field. Language had always been seen as so challenging — the production of language. This kind of functionality was seen, maybe not as impossible, but as years off, by a lot of people. It had been developing for a while — there were previous large language models — but this type of ability was quite surprising. I like this quote from an engineering professor in the United States. He says, ChatGPT is outrageously compelling because for years we've only known of one thing in the world that can generate language, and that was us. And so we look at this thing and think, oh my gosh, it's like us. So, something that maybe seemed a bit magical. But there's a "but" here, and the professor continued in his comments: there's a moment when you realize, or you need to realize, no, it's not like us.
And so, certainly, people could point to various ways in which I, as a human, am different from a chatbot that appears on my computer screen. We could have a whole conversation about that. I want to focus on just one thing that has been noticed about this tool as it came to be rolled out, and that is this issue of hallucinations, or confabulations. Have people heard of this term before? Some, okay. Basically it means the tool can sometimes make stuff up in a very compelling way. An example: in the spring, I asked the tool, can you give me an example of a Canadian court decision where a judge has made a pop culture reference? On the screen there, it references the Federal Court of Canada and a judge who actually sits there, and it includes an apparent quote from the case that references Moby Dick. The issue was that this case didn't actually exist. It made up that case for me.

Why does this happen? Well, one thing that's essential to realize is that when ChatGPT gives you an output, it's not plugging into a database and looking up an answer using a rule-based formulation. It's producing an answer based on patterns in language. So when I ask it to give me a pop culture reference, it's not linking to CanLII and looking up cases in a legal database. It's just putting together words that sound like they go together — and as you saw, it can do that quite well. It can do that because it has scraped an unbelievable amount of text off the internet and used other types of data sets. Having seen all that information, it has seen legal citations before and it has seen legal cases before, so it can produce something that looks like a legal case, looks like a legal citation, and yet gives an answer that is in fact totally incorrect. You may, on the other hand, ask it a legal question and get a correct answer. Certainly, if you ask it something where there's a lot of discussion about that topic in its data set, you're more likely to get a correct answer. I think I asked it at one time what the standard of care was for a lawyer, and it seemed to be pretty good at answering that; there are probably a lot of web pages that talk about that.

There are some quotes here from people describing what the tool is doing. Stephen Wolfram, who's been great at explaining it, says that what ChatGPT is doing is trying to produce a reasonable continuation of whatever text it's got so far, where by "reasonable" we mean what one might expect someone to write after seeing what people have written on billions of web pages. A shorter explanation: it can answer questions, write stories, have conversations, but its responses are based on patterns it has seen in its training data rather than its own understanding of the world. So what we have here is a model that's fundamentally probabilistic. It's not deterministic. It's not using a rule-based formulation. It's not looking at a fixed set of pre-verified options, combing through a category of legal cases and picking the one that best matches. It's, again, putting together stuff that looks like it goes together. That's a very high-level explanation of what it's doing; you can find more on the internet, or if you want to email me, I can point you to resources that get much more technical. One other thing I think it's helpful to point out is that this type of tool doesn't just involve a machine crunching language patterns.
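One practical implication of that "patterns, not lookup" point is that any citation the tool produces has to be checked against an authoritative source, because nothing in the generation process does that check for you. Below is a small, hypothetical sketch of that idea in Python. The tiny set of "verified" citations and the citation pattern are invented stand-ins; a real check would query an actual service such as CanLII or a commercial database rather than a hard-coded set.

```python
import re

# Hypothetical stand-in for a verified source of real decisions.
# In practice this would be a lookup against CanLII, Westlaw, etc.
VERIFIED_CITATIONS = {
    "2019 FC 1126",
    "2001 SCC 94",
}

# Very rough neutral-citation pattern, for illustration only.
CITATION_PATTERN = re.compile(r"\b\d{4}\s+(?:SCC|FCA|FC|NSCA|NSSC)\s+\d+\b")

def check_citations(generated_text):
    """Flag any citation in model output that cannot be verified."""
    return {cite: cite in VERIFIED_CITATIONS
            for cite in CITATION_PATTERN.findall(generated_text)}

output = ("As held in 2019 FC 1126, and more recently in 2023 FC 9999, "
          "pop culture references are permissible.")

for cite, ok in check_citations(output).items():
    print(cite, "verified" if ok else "NOT FOUND - possible hallucination")
```

The sketch is deliberately naive; the point is only that verification is a separate step that a human (or a separate system) has to perform, because the generator itself never consulted a database.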
It's also important to note that this tool, when it was developed, had a lot of human training and guardrails put into it, with the goal of making sure it wasn't producing offensive or dangerous content. So there's a human training aspect to the tool that I always want to mention as well. That's, I guess, a ten-minute lecture on what AI is — you could read collections of books for a much more thorough explanation. Again, I'm hoping that puts everybody in the room a bit on the same page. What are some of the main takeaways from what I just said? Understanding that AI is a big and somewhat flexible category — I think that's helpful to really take to heart, because even when we talk about lawyers using AI, we need to be very specific about what type of AI they're using and what its limitations are. Generative AI is one type of AI. These large language models are a type of generative AI that produces text. ChatGPT is just built on top of one large language model. I don't have time to get into it today, but there are other companies building their own foundational models — it takes a lot of money to build those models, but there are other players in the game. The new capacity demonstrated is profound, but there are important limitations too.

Now, getting into the second part after that brief primer, I thought I would situate us in the legal profession more particularly and start focusing on lawyers. How are lawyers using generative AI? How might they do so in the near future? Again, I always start most broad, but looking at the big picture here, I think it's important to recognize that ChatGPT may have just come on the scene last year, but lawyers have been using AI for some time now. I read a paper a few years back that described AI use in the legal profession as modest but increasing — that was about three years ago, and when I read it I thought it was fairly accurate and fair. We've had e-discovery tools; e-discovery refers to litigation involving electronic evidence. Nowadays litigation cases can involve millions of different text messages and lots of metadata, and we've had to find ways to use computers to vet that kind of data and see what's relevant to produce. There's AI behind that. There's also AI in the area of data analytics — analyzing large numbers of contracts when you're doing due diligence — and some AI has been used to make predictions in litigation. So there are some long-standing tools.

I would say, though, that things have been different lately. Almost every legal conference I see now has AI in the legal profession as a topic. People are talking about it a lot more. That's absolutely due to the fact that we saw ChatGPT released last year. It grabbed a lot of public attention, and pretty soon people were talking about how it might be used by lawyers. You see the headlines here — AI doing quite well at answering legal questions. There have been some tests on American bar exams and just how well it can do. Pretty soon, we did actually start seeing lawyers use ChatGPT for their legal work. This isn't just speculative headlines; this kind of thing was actually happening. So, how are lawyers using ChatGPT? An easy answer is to say: who knows? We don't have the law society going into lawyers' offices and saying, declare every tool you're using. There's no registry to report to, so we don't have any exact statistics on this. One source of information has been headlines. Have people in the room seen this headline before, about the American lawyer who used ChatGPT? Okay.
It sounds like many people are familiar. The story, in short, is that there was an American lawyer who had a motion before the court — it was an airline consumer dispute — and he asked ChatGPT, can you get me some relevant cases for this very particular motion I'm involved in? He was very happy, because it gave him very relevant cases proving his points. He thought he was going to win the motion. He filed those cases with the court, and the opposing counsel went to find the cases. They couldn't find them. I think they even ended up calling the courts referenced in the citations. These cases didn't exist. The lawyer soon became a headline in the New York Times, and the story got circulated amongst many law students and many lawyers. He was called before the court to explain himself, and he said, I heard about ChatGPT from my daughter; I thought it was some kind of super search engine — these are almost verbatim quotes — and it just never occurred to me that it would be making up cases. So we've seen this happen. There are a few other American cases reported of something similar happening. I wouldn't say we have a large number of these cases being reported, and I haven't seen one reported in Canada yet, but it is something that's happening. I think it's fairly obvious on its face, when you have a terrible car-crash example like this of something going wrong with a tool, that ChatGPT is not a legal research tool, and we certainly can't have lawyers or self-represented litigants presenting made-up cases to a court. Our system is not going to function properly when that kind of thing happens. Again, it's possible it can give you legally correct answers — it's going to depend on how commonly discussed the issue you're dealing with is, and that kind of thing — but that's not what it's designed to do. It's not looking up the best answer from a data set.

A lot of people saw that story and ended their consideration of lawyering and ChatGPT with it. But lawyers are actually using ChatGPT in a variety of different ways, not for legal research. When I say lawyers, I mean lawyers I've talked to personally, or whom I've seen speak at presentations, or who've sent me messages on social media — lawyers who say they're actually using the tool in these ways. What are some of those other ways? Marketing has been one: lawyers are using it to write blogs or other content for their websites. Some people are using it to draft at least first drafts of correspondence to clients — and I've heard about correspondence to courts as well, where it's more standard-form correspondence. Some lawyers are using it to produce case summaries — not just saying "summarize case X," but feeding it the case and saying, summarize what I've just sent you. Lawyers are also using it to visualize data. One lawyer I heard talk at a conference said, I had this factum and I needed to present some evidence in a table. I didn't know how best to do it, so I gave the tool the evidence and said, can you give me three examples of how to put this in a table or a chart? The tool gave them three examples; they picked the best one, refined it, and thought that was quite useful. I've also heard of some people using the tool to do first drafts of things like pleadings, where lawyers then correct things. A caution: I think outside of some very simplistic pleadings, that's going to be very hard to do. You have to do a lot of what's called prompt engineering and make sure you're feeding the tool the right constraints and information.
Another concern there is that you have to make sure you're never putting confidential information into the tool. Some lawyers use placeholders for that, but that's a use case that I think is stretching the bounds of the tool — though lawyers have said they're doing this, and they've said it publicly. From all of this, we can see some risks emerging around ChatGPT and legal work. There's a huge trustworthiness and reliability concern, and in a profession like the legal profession, that reliability is going to be paramount. Maybe a high school student is really happy if 80 percent of their essay is correct, but if one in five of your legal citations is made up, that's a very different kind of problem, and you just can't have that kind of thing infecting and contaminating the common law. Confidentiality is another big area. One of the other splashy headlines we saw when ChatGPT came out involved some engineers who worked at Samsung. One thing you can use the tool for is helping you with computer code, so they took some proprietary computer code from their company and fed it into ChatGPT to edit — little did they realize, and now possibly OpenAI owns that computer code. There's no claim that your data stays your own, or stays private, when you feed it into the tool. Among other things, it can be used to train the tool, and that's very explicit in its terms of service.

Talking about nuance — and again, I try to always be specific when I talk about AI — how those risks materialize is going to really depend on what one is using the tool for. If you're using ChatGPT to write your firm's holiday card, maybe there isn't that confidentiality concern or reliability concern. Maybe it's not as funny as you might be, maybe it's not as sentimental as you might be, and maybe there are other types of concerns, but those particular concerns won't be there. And these concerns may not be there if you're just asking it to show you how to present evidence in a factum. These concerns will be there if you're saying, here's this legal issue, I'm going to delegate writing the memo to you, tool, and I'm just going to deliver that memo to my client. You're going to start having a lot of problems. And so I always like to give that caveat: we need to be specific about use cases and tools when we're talking about this type of technology.

Another really important caveat to make here is that ChatGPT is not the only game in town, and when people who follow technology in the legal profession get excited about generative AI, they're not focused on ChatGPT. They're not focused on this free, general public tool. What we're seeing now is a massive development of what we might call tailored, fine-tuned or bespoke LLM-powered legal tools. They're already out there in use, and these tools are built on top of large language models, but they're not the same as ChatGPT. They have different interfaces, they have different things that go into building them, and the things that go into building them are directed exactly at reducing those risks I talked about. So there's that trustworthiness and reliability question. When you take a task like legal research, what these legal AI tools do is pair the model with a legal database of cases — a verified set of information — and basically direct the tool to get its answers only from that set of information.
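That "pair the model with a verified database" approach is often described as retrieval-augmented generation: retrieve relevant passages from a trusted source first, then instruct the model to answer only from those passages. The sketch below is a simplified, hypothetical illustration of the pattern; `retrieve_cases` and `call_llm` are placeholders standing in for a real search index and a real model API, and the tiny case database is invented.

```python
def retrieve_cases(question, case_database, top_k=3):
    """Hypothetical retrieval step: return the passages from a verified
    database most relevant to the question. A real tool would use a
    search index or embeddings, not this naive keyword overlap."""
    def overlap(passage):
        return len(set(question.lower().split()) & set(passage.lower().split()))
    return sorted(case_database, key=overlap, reverse=True)[:top_k]

def call_llm(prompt):
    """Placeholder for a call to a large language model API."""
    return "[model answer grounded in the retrieved passages]"

def answer_from_verified_sources(question, case_database):
    # 1. Retrieve passages from the verified source.
    passages = retrieve_cases(question, case_database)
    # 2. Constrain the model to answer only from those passages.
    prompt = (
        "Answer the question using ONLY the passages below. "
        "If the passages do not contain the answer, say so.\n\n"
        + "\n\n".join(passages)
        + f"\n\nQuestion: {question}"
    )
    return call_llm(prompt)

database = [
    "Case A, 2015 NSSC 12: discusses a lawyer's standard of care ...",
    "Case B, 2018 NSCA 3: discusses limitation periods ...",
]
print(answer_from_verified_sources(
    "What is the standard of care for a lawyer?", database))
```

The design choice worth noticing is that the model never has to invent a source: the sources are selected first from verified material, which is what mitigates (though does not necessarily eliminate) the hallucination risk discussed earlier.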
The people developing these tools are also very aware of confidentiality issues, and there's a variety of techniques used to ensure, or attempt to ensure, that a firm's information stays private. So there's a lot of money and effort going into building tools that directly address those types of concerns. My sense of these tools is that it's undeniable, I think, that they mitigate those risks; I think there's a question mark about what level of residual risk remains, again when you're talking about high-risk activities like legal research. People are using them, though, and they're very impressed. And again, when you talk about risk, it always depends on what you're using the tool for.

So, a snapshot of some of the available legal AI tools. Harvey AI — you see it on the screen there — is a prominent example. Last spring, Allen & Overy and PricewaterhouseCoopers made a lot of headlines when they announced they were going to give the thousands of lawyers who work for that international law firm and that accounting firm access to Harvey AI. Other law firms are using Harvey AI too. There's a company called Casetext, listed up there; they also got a lot of publicity for a tool called CoCounsel. It was marketed, and continues to be marketed, as doing a wide range of things: document review, legal research memos, deposition preparation — so it will generate questions you can ask in a discovery — and contract analysis. Casetext was recently acquired, in the summer, by Thomson Reuters — a big legal information and legal technology company — for $650 million in cash. So that is the level of investment being put into this. I've put a couple of Canadian companies up there as well. Some people may have heard of Blue J Legal; that's a company that does tax analysis, and they now have a generative AI tool that they've offered to their users. Jurisage is another company; it has a "chat with cases" feature, again using generative AI. I could give lots more examples. Here — again, the icons are very small on the screen there — someone prepared a legal tech generative AI landscape, October 2023, and you'll see all kinds of different tools: finance and M&A, IP, consumer, corporate, contracting. I didn't even look up what GRC is — government relations compliance, maybe? Any GRC lawyers out there? Okay, someone can let me know afterwards. Business operations, litigation. And this is not the full set of legal tech that lawyers can use; it's not the full set of AI lawyers can use; it's a set of generative AI tools that are out there or being developed.

LexisNexis, for a while, was talking about how it was developing a generative AI capacity, and a few months ago it released it as a kind of pilot to some users. I think now it's available to everybody in the United States. We don't know if and when it's coming to Canada, but here's the demo of their tool, which they just released to the mainstream last week. I also wanted to note, for some CanCon here, that CanLII is also getting into this space. It recently announced that it's starting to use AI to generate case summaries, beginning with AI-generated summaries of Saskatchewan primary law. I don't anticipate this is where CanLII is going to end with this type of capacity, and I imagine they'll be exploring use cases more and more as well. Another thing happening in this milieu is that some law firms are choosing, instead of buying off-the-shelf commercial tools, to build their own tools in house.
They're still using those foundational large language models, but they're building something more specific to their firms. That's something we're seeing, though obviously you need to be a firm of a certain size to be able to do that. So we have that big graphic of all these tools — but how much are lawyers actually using them right now? I certainly talk to lawyers who are using them, so I know some people are. Again, we don't have a detailed set of stats, but we do have a bit of survey data. LexisNexis has done some surveys; in a recent one they surveyed 610 Canadian lawyers — I don't want to suggest that's representative — and 53% of those 610 Canadian lawyers said they're either using generative AI or plan to use it in the future for legal purposes. So it's a small sample set, and obviously LexisNexis is also self-interested in people using generative AI, but I think it's definitely noteworthy. And I can certainly say that lawyers and judges are very interested in this technology and want to learn more about it. People are asking what tools are available out there, and you can find blogs talking about how lawyers might use this, and so on.

In terms of actual use, there was another source of information last month. There was a closed-door meeting in the United States where lawyers from 40 major law departments and major law firms came to a session where they could talk very candidly about how they use this technology. And you'll see that the successful use cases they reported were things like summarizing — like we saw with that Lexis tool — discrete-issue research, internal chatbots, first-draft creation, analyzing deal documents, and data extraction. The extractive uses, I think, are quite interesting: we call this generative AI, but what a lot of lawyers and law firms are interested in is, let's feed large amounts of our documents to it — can it help summarize the key points in these documents for us, or extract what we need to know? So there's that extractive use as well. Again, people are saying these are successful uses.

On the question of use, another thing I'd like to put in people's minds is that even if a lawyer or a law student says, I'm not really interested in generative AI, I just want to do things my own way, we have recently started to see some of this capacity being built into the tools we all use, like Microsoft Word or Outlook. The point there is that engaging with AI may be hard to avoid, which builds on the point that getting that literacy is going to be all the more important.

So I talked about what AI is, and in this second section I talked about how lawyers are using AI. What are some of the main takeaways I hope you got from that discussion? There are challenges with using a general public tool like ChatGPT. That doesn't mean it can't be used for anything, but proceed with extreme caution, I would say. There's lots of interest, experimentation and money looking beyond ChatGPT. If you look on the internet, you can see quotes about how big this market is, but I mentioned that Casetext was acquired for $650 million — that gives you a bit of a sense of the money and interest behind this. And this isn't just notional; these tools are being rolled out. Lexis rolled out their tool, and if and when that becomes available to Canadian lawyers, that may be something they see on their desktops.
There are techniques that can be used to increase reliability. Again, from talking to experts and looking at the tools, I think they're getting quite good; I'm just not 100% sure how much residual concern remains. Risks, again, are going to be task dependent. If you're using the tool to make that chart for your factum or to brainstorm questions for a discovery, the risks may be different than if you're delegating legal research to it. And there are those broad use cases already — it's just not a tool or capacity that does one thing. It's being used across a full spectrum of lawyers and a huge spectrum of lawyering tasks.

Okay. So I thought with the remaining time I could talk a little bit about where we're going with all of this. We could watch more in amazement, but I'll leave it there — you can see what happens when they go to their boss eventually. I played that video, which I saw someone post on LinkedIn, just because it struck me. I found it kind of humorous, but it's also a helpful reminder that the professions have been here before. Professions, including the legal profession, have over their long lifetimes seen different technologies introduced, some quite disruptive. There was a legal profession pre-computers. There was a legal profession pre-internet. Those technologies certainly changed the legal profession, but the profession certainly survived. And so when we talk about the future here, if you hear speculation about generative AI being an extinction-level event for the legal profession — and some people are hyping it up to be that — I don't think we need to take those takes super seriously. I think that's going a little bit too far.

But if it's not extinction, what does the future hold? I think with this particular technology, a bit of humility is the name of the game. People often want firm predictions. I'm not someone out here making those, and I'm often wary of people who try to be super firm about predicting where this is going to go. When it first came out, people were mad at me because I didn't say it was going to change the legal profession in two months. I'm cautious — maybe I'll be proven wrong in the future — but I think being a bit measured about this is helpful. There's also a common quip about technology — maybe you've heard it — that we tend to overestimate the impact of technology in the short term but underestimate it in the long term. There's probably a lot of truth to that. Generative AI is probably going to impact the legal profession in significant ways, but exactly what those ways are, precisely, I think is hard to say at this particular moment.

Why do I think the impact is going to be significant? Why do I think we should be talking about this? Why am I going across the country this fall talking to lawyers, judges and law students about this technology? To me, one of the essential things to pick up here is that, again, we're not talking about the release of one new widget. We're not talking about the release of one new tool. What we're seeing is a new type of capacity with technology — a capacity that used to be seen as almost impossible, or a long way off, something that was previously unknown.
And if you think about how much of lawyering involves using language — whether it's reading cases, parsing contracts, writing contracts, drafting legislation — you can see why having a technological capacity to generate language can be so profound. What I see here is the collision of a profession of word merchants — the New York Times used that phrase to describe lawyers, and I thought it was an interesting description — with a word machine. I think that really puts us into new territory and new experiences. There are certainly debates among experts about how powerful this word machine is going to get. It's probably safe to say we're not at the ceiling yet. Again, we're seeing rapid progress and a lot of experimentation, and we're also still trying to figure out the best use cases. On the use case issue, there's a recent, very interesting report from Harvard Business School. They did a bunch of empirical testing with knowledge professionals — I think business consultants, so not lawyers — and what they saw is what they called the jagged frontier. Here's how they described it, quoting them: some unexpected tasks, like idea generation, are easy for AI, while other tasks that seem to be easy for machines to do, like basic math, may be challenges for some large language models. That creates a jagged frontier, where tasks that appear to be of similar difficulty may either be performed better or worse by humans using AI. So, again, we're very much at this point in time where we have this new powerful thing — a powerful thing with limitations, but a powerful thing — and we're trying to figure out where it's going to fit in the world. One aspect of where it's going to fit involves the legal profession, and the judiciary as well.

Where do I think this leaves us now? Again, I think we need to be measured when we talk about this, but it's really important to be measuring what exactly is happening in the courts and in lawyers' offices. Where is this technology intersecting with the legal system? And I think we also really need to be thinking about where professional ethics and values intersect with all of this. On the legal ethics considerations: previously, to orient my thinking on this, I went through all the professional conduct rules and asked, where might generative AI intersect? And I wrote a blog post trying to be thorough about it. I know there are some legal ethics students here, but I'm not going to give a legal ethics lecture and go through rule 1.x and that kind of thing; if you want the more thorough account, you can read that blog. What I thought I would do with the time remaining is focus on three different areas — areas where I think some new and interesting questions are being raised.

One thing I've been thinking about is lawyers' obligations in relation to candor. We have people from the Barristers' Society here, and we have law students in ethics classes, so you're probably well familiar that in Canada our candor rules largely focus on being candid with a client about their legal position. Is their case strong? Is their case weak? What do they need to know? But what about being candid with clients about what tools we're using? I wonder whether or not we need to be more particular about this when it comes to using AI tools.
Because certainly we can all see why there's something at stake if a lawyer is not candid with a client about the strength of their legal case: maybe they take something to trial that shouldn't go to trial, or maybe they don't make an agreement when they should. So substantively, being candid about the merits of the case is important. But I'm also thinking about where and when it is important to be candid about the process. What might be at stake if a client believes a human is applying their legal judgment to a task, but that task is being done either completely or mostly by a machine? There are probably a lot of clients that don't care — they say, do it well, do it cheaply, do it fast, that's all I need to know. But might some clients, particularly at this moment in time, feel duped or betrayed if they think their lawyer is typing away and thinking through their legal arguments, when instead the lawyer is just pushing a button and the machine is giving it to them? What happens if a client thinks that the lawyer is the person writing their emails — maybe emails expressing sympathy or other strong emotions — but it's actually ChatGPT simulating those emotions? What's at stake there? I think there is something at stake. This is a fiduciary relationship with a trusted professional, and I think we need to be thinking about this a bit more specifically. There are a lot of big questions wrapped up in this. The medical profession is going to have to face these questions too — there's been some testing suggesting that chatbots have better bedside manner than human doctors, and we're starting to see AI therapy chatbots. So I think a lot of professions are going to be facing this type of issue of simulated sympathy and machine communication.

In the legal profession, there are some simple interventions we might want to think about. One thing I saw recently in the American Bar Association rules, under communications, is that a lawyer needs to consult with a client about the means by which the client's objectives are to be accomplished. I don't think there's anything totally analogous in the Canadian rules — people can correct me if I'm wrong or if I've missed it. But I wonder if we do need some signal to lawyers that they need to be candid with clients, particularly when they're using AI tools. In an era where we may start seeing more and more work being done by machines, or with great assistance from machines, it might be important for legal professionals to be clear with the client about who is doing what, or what is doing what. Is that email coming from you or from the chatbot? Again, it's a fiduciary relationship, and clients should know what's happening and be able to provide feedback on what they think of it. So candor was one area I wanted to raise with you, and I'm happy to hear your thoughts on it in the question-and-answer period.

On to the second area. In my mind, some of the trickiest ethical questions with generative AI, and maybe with AI generally, come in the area of supervision and delegation. There are rules related to these two things in codes of conduct across Canada — some provinces and territories differ — but often the idea is that you need to have appropriate supervision if you're delegating a task to someone else.
We can maybe understand what appropriate supervision means when a lawyer delegates something to another human, but what does it mean when you're delegating a task to a machine? Particularly with AI, some people may have heard of this idea of a black box: we don't always know how AI actually reaches its results, and even AI experts can't always explain that to you. One thing I've suggested before, which I'll suggest again, is that perhaps in the case of AI tools, law societies should consider introducing new rules requiring due diligence — rules requiring legal professionals to take reasonable steps to make sure the technology they're using is consistent with their professional obligations. There might be something implicit in general duties of competence, but I think it's helpful to be explicit about a lot of these things. One reason for this due diligence, before you start using a tool, is that with generative AI, simply double-checking the outputs, though important, is not sufficient. I think we do require some understanding: we're using a tool — what's happening? What techniques are being used? Lexis says it's hallucination-free — what does that actually mean? Does a tool do what its marketing says it does? And one reason for this is that with a language generation machine, not all errors may be possible or easy to spot on their face. Maybe you have a legal research tool that's much better than ChatGPT. It doesn't make up cases, but maybe when it's producing quotes from a case or summaries of case law, it swaps in semantically similar words. As an example — and this one would be easy for a lawyer to catch, but just to make the point — I also teach tort law, so people know there's a reasonable person test. A tool like ChatGPT may see the word "reasonable" and swap in a semantically similar word like "fair," but we can understand that a court applying a "fair person" test, as opposed to a reasonable person test, may be doing something different. The point being that if you're writing an English essay, similar words or synonyms could be okay, maybe even great, but the language we use in law is very particular and needs to be very precise. The general point I'm making — and there's some writing on this kind of subtle contamination that I can direct you to — is that the types of errors we may see with these tools, and again it may depend on which tool you're using, are not the same as the errors you might expect as a senior lawyer looking at a student's work, because you're not going to expect a student to be swapping out words, hopefully. (A small illustrative check on this point is sketched below.) So due diligence, to me, is going to be an important thing going forward. I think it's an interesting question how law societies can assist lawyers with that — there are some interesting conversations about having quality marks or quality assurances. Another piece of this due diligence puzzle that I think is important to note is that the cost of developing these tools is going down significantly, and it is becoming easier for someone who's less technically minded to produce something. To date, in the legal tech market, we've often had very well resourced, very thoughtful players: we've had CanLII, we've had LexisNexis, we've had Westlaw.
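To illustrate why "just double-check the output" may not be enough, here is a small sketch of one kind of automated check a diligent reviewer might add: comparing a quote produced by a tool against the verbatim text of the source it claims to reproduce, so that a subtle swap like "reasonable" to "fair" gets flagged. The sample texts are invented for illustration; this is a toy, not a description of how any existing legal tool works.

```python
import difflib

def compare_quote_to_source(generated_quote, source_text):
    """Return the word-level differences between a quote produced by a
    tool and the verbatim source passage it claims to reproduce."""
    diff = difflib.ndiff(source_text.split(), generated_quote.split())
    # Keep only words present in one text but not the other.
    return [d for d in diff if d.startswith(("+ ", "- "))]

source = "The defendant is held to the standard of the reasonable person."
generated = "The defendant is held to the standard of the fair person."

for change in compare_quote_to_source(generated, source):
    print(change)
# Prints:
#   - reasonable
#   + fair
# A semantically similar swap that reads fine in plain English but
# changes the legal test being described.
```

The broader point is that verbatim fidelity can be checked mechanically, whereas judging whether a paraphrase preserved the legal meaning still requires a lawyer's review.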
What happens if we start having players who are trying to do something quick and cheap, and maybe aren't being careful with what they're producing, but are putting it out to lawyers? And I'm super concerned about that kind of thing being put out to the public. So I think there may be a concern there on the horizon. That was the supervision question; I think there are some interesting things to think about there.

On delegation, something a bit distinct. To me, the issue of delegation when it comes to AI tools — given their potential to do a significant amount of work that lawyers used to do themselves — raises an important question we need to keep in mind: just because a tool may be technically able to do something doesn't necessarily mean we should cede human contact with that task and get the machine to do it. As these tools potentially become more and more powerful, as they start potentially doing more and more things in law offices, I think we need to have a thoughtful conversation about what tasks are essential for lawyers to do, and why. A sidebar on this: I think there's an interesting lawyer-independence issue to be thinking about here. A lot of these tools are developed by a concentrated number of private companies. If we start injecting these tools into our law offices, are we essentially handing some of the profession over to those private companies, and letting all of the initial human choices made in building those tools guide how we practice law? I'm not talking about something nefarious here, or some kind of conspiracy, but I think it's something we need to keep in mind — and to keep in mind as we think about using these tools in our courts. For the image on the screen there, I did some delegation of my own: I used one of those generative AI image generators to generate a lawyer delegating work to a robot — I literally typed that in — and that's what I got from a publicly available tool. Pretty good, I thought. Another sidebar: another area of my research is deepfakes, and I think that's going to be a huge issue for the legal profession going forward too, if you think about client verification and evidence in courts. That's another conversation, but generative AI for images is something else to keep in mind.

So on that point about delegation, I'm going to use a thousand-dollar term here, or whatever you call those big words you can barely pronounce, but I think we should avoid what Langdon Winner called technological somnambulism. The idea there — again, kind of a mouthful — is: let's avoid sleepwalking into a future we don't want. Let's take agency and be critical about how we use technology in the legal profession, and make critical engagement absolutely essential.

And on my third area, as I'm wrapping up here: to me, that critical engagement piece applies not only to how lawyers might use tools in their offices, but also to broader questions related to our justice system. Lawyers have explicit ethical obligations in relation to the administration of justice, and I do think we need to keep in mind how those ethical obligations might be engaged here. You'll see that quote from the Model Code — this idea that lawyers have an ethical duty to try to improve the administration of justice, which includes a basic commitment to the concept of equal justice for all within an open, ordered and impartial system.
And so in this context, I think it means that legal professionals ought to be engaged with the opportunities that exist with AI in our justice system. There's a whole conversation about access to justice opportunities — again, a conversation where I think we need to be quite nuanced about what might be possible and where the limitations are — but I think we need to start exploring that. We know the vast percentage of Canadians who have unmet legal needs, and so I think we have an obligation to think about how we can harness technology. Alongside that conversation about opportunities, let's also keep our critical hat on, maybe our skeptical hat on, and keep our eye out for uses of AI in our justice system that may not align with our values. To me, that's part of our responsibility as guardians of the proper administration of justice. There's a whole amazing field called AI ethics — maybe some of you have read in it — that talks about some of these broader concerns. When it comes to the AI I've talked about today, generative AI, some of the concerns relate to privacy: you may have seen that privacy regulators are investigating ChatGPT. There are also various copyright disputes, since this tool is built by gobbling up a bunch of words from the internet. One thing I think is quite interesting is the environmental impacts piece. We often think of these technologies as just lines of code, not connected to the material world, but generative AI in particular takes a lot of energy to run. There's the misinformation piece — what's going to happen in the upcoming American elections? I'm quite concerned. There are bias concerns. There are labor issues: there have been some exposés about how, when ChatGPT was built, a lot of offshore labor was used, working in not-great conditions for not-great money. Their job was essentially to try to make sure that ChatGPT didn't produce anything violent, offensive or disturbing, and to do that they had to look at a lot of violent, offensive and disturbing things, with a lot of mental health issues as a result. So I think we always need to keep that broader cloud of ethical considerations in mind when we get excited about new technologies — at least be aware of what the actual concerns are. I'm not a pessimist on this technology, and I'm not a 100% skeptic. I think there is tremendous change afoot and there's lots of opportunity here, but let's also keep in mind the risks to some of our values.

And with that, I've reached the end for today. In terms of final thoughts, like I said, I do think we all have an important role in scrutinizing the technology being used by lawyers and in the justice system, and in guiding meaningful reform. Part of this role, I think, involves education: staying educated, understanding what's going on with these technologies. What are the ethical issues? What are the legal issues? What are the practical questions? I leave you with this quote from Janice Clark, who talks about law having a long and distinguished past as a learned profession, but only in the past decades has it become clear that we must also be a learning profession. So I'm going to wish you all the best in your learning journeys in law, including not only in AI, but in your learning of everything else that we continue to do. I thank you for your time.
I think we have some time for questions, perspectives and feedback, if people have any. Thank you.