I'm going to be turning it over at this point to Ivy Ashton. Thank you so much for coming in and talking about this topic. Very excited to hear it. Yeah. Well, thank you, sir. I appreciate it. And my screen is showing right now. Is that correct? Definitely. Looks good. All right. Good. I don't need to do anything. Excellent. Well, so my name is Ivy Ashton. I'm an attorney based in Chicago. I own a software company called Legal Server. You may know me from that. And I'm going to talk today about artificial intelligence and try to give some context for how it's impacting our community. So this ideally would be a very interactive conversation. Feel free to ask questions, stop me along the way, whatnot. I'm happy to see if I can answer your questions or show more examples or give more context. So the way I got introduced to artificial intelligence, the way I kind of became aware of what it was and where it's going, starts a little bit further back. When we started working in this space about 20 years ago, we were trying to solve this access to justice problem. We knew, and I know everyone on this call knows this already, that the vast majority of people weren't getting access to justice the way that our system should work. And we thought we could make a difference using technology. So originally, back in 1999, I was the founder of a website called Illinois Legal Aid Online. And then in 2001, I started working in the community more specifically with agencies and whatnot. In about 2015, 2016, I was looking at how far we had come. And I looked back and I realized that we really hadn't made as big of an impact with technology as I had hoped we had. There was still a massive problem in this space in terms of getting access and whatnot. So I started researching different ways that we might use some of the new technology that's coming on the scene. And that's where I started learning about artificial intelligence.
So I just wanted to give that introduction because I want to tell you the punchline to this whole presentation in case you leave early, which is that I've never been more optimistic, more excited about where we're going with technology than I am today. This is just an absolute... I mean, we're in the middle of a revolution. And I think it's going to completely impact how we deliver legal services. And I actually think that in this space, in the nonprofit legal aid space, we're going to make great strides and in many ways lead the way. So let's start the conversation by talking about something called the digital transformation. We are about 17 years into an era in history that is being called by some the digital transformation. I thought this slide was kind of funny. If you can't read it, it says, the factory of the future will have but two employees, a man and a dog. The man will be there to feed the dog, and the dog will be there to keep the man from touching the equipment. It's just kind of indicating where we're going with all of this stuff. So, some characteristics of digital transformation. I'm going to talk a little bit about Moore's law and explain what that is. We hear that a lot in our space, and I thought it might be helpful just to give it some context. One of the things that I think is really helpful to understand is the concept of sensors. Sensors are getting smaller and they're getting faster. They're how we know certain things. If you think about our phones, our cars, our refrigerators, they have sensors that tell us things. And those sensors are growing like crazy. And one of the keys to all of this is that objects, kind of inanimate objects, with these sensors in them, are being connected to each other via the internet. This is called the Internet of Things, if you've heard that term before. And the rate at which they're growing is amazing.
I mean, it's predicted that by 2020, 250 new devices per second will be attached to the internet. Right now we're at about 100 per second. That's amazing if you think about how many things are being connected to the internet. That's one of the things that's driving artificial intelligence. So I wanted to kind of put that out there. But if I had to describe what's going on in the digital transformation, I would describe it like this. Think about what happened during the Industrial Revolution when we took objects and attached electricity to them, and how that completely changed the way that we lived every day, the way we worked and all of that. So think about something like a hand pump, right? We used to pump water by hand. Well, once electricity came on the scene, we could connect a pump to electricity, and a job that used to have to be done by a human could now be done by a machine. And we saw great expansion, great efficiencies gained throughout the Industrial Revolution. In fact, some people call the digital transformation the fourth Industrial Revolution, so you might hear it by that term as well. The defining characteristic of the first 20 years of this era is that intelligence is being added to objects. And that will fundamentally change the world. So I thought this was a great quote: the advantages gained from cognifying inert things would be hundreds of times more disruptive to our lives than the transformations gained by industrialization. I think that's true. All right, so let's talk a little bit about Moore's law. Moore's law is something that we hear a lot in this space, and I find that when I hear terms that I don't initially know the meaning of, it's helpful to try to define them, to make them more accessible. So Moore's law is named after Gordon Moore, who worked for Intel, and who predicted that the number of transistors in a computer would double every two years.
So what that equates to for us is that computing processing speeds and power double every two years. Some people say it's every 18 months, but that's the idea. So what that looks like in reality is something like this. We have always thought about progress, human progress, as being linear like this, and I'm going to show how this relates to Moore's law in a second. In reality, this is what it currently looks like. That's how quickly technology is changing. So to give it a little bit more context, what you're looking at here is a graphic representing Lake Michigan. And one of the characteristics of Moore's law that's really important is that if something doubles every two years, the first few doublings are almost not noticeable. So what this is showing is computing power doubling every 18 months, starting with one drop of water in 1940, and how long it would take to fill a body of water like Lake Michigan. You can see almost no noticeable difference for decades. And then all of a sudden, sometime in the 2000s, you start to see it accelerate really quickly. So I typically do this when I give this example in a room where I've got people in the audience that will respond back. I know it's kind of hard to do that here, so I'm going to pretend that you're going to say yes to this question, but I usually ask, who in the room plays golf? And inevitably someone will raise their hand, and I say, this is a great way to think about Moore's law. So what I say is, let's pretend we're going to play a game of golf, 18 holes. We're going to go out and we're going to make the game a little bit interesting. So I say to you, what if we put a little bit of money on each hole of the 18 holes? Let's make it a dime, right? And I ask, would you take that bet? And most people say, yeah, yeah, I'd take that bet, right? Because the maximum risk in that equation is $1.80, right? 18 times 10 cents.
So then I say, well, let's make it a little bit more interesting, right? Let's take that same dime, but every hole, let's double it. Would you make that bet? And most people will say that they'll make that bet, and they'll say it because they don't really do the math in their head about what that actually means, but I think it shows what I'm explaining here. So the first hole is 10 cents, the second hole is 20 cents, and you go down the line. Even as we hit the turn, even at hole nine, we're only up to $25.60, but you can see how it's growing, right? So now we keep playing golf, and by the 10th hole, all of a sudden we're into the hundreds, but that's still, you know, a couple hundred bucks. That's not a crazy amount of money yet. But as we go through it and we get to holes 14, 15, 16, now it's getting real, now it's really real. So if we were to take a dime and double it every hole, the 18th hole would be worth $13,107. So that's showing it. Here's the same thing on kind of the same graphic. This shows it up to hole 12, where you can see it starts to go up, and then it shoots all the way up at hole 18. So that's the curve we're on right now with technology. That's what's driving all of this. That's what Moore's law looks like. Here's just another really quick example to put it in non-golf terms. If you took Intel's processor today and compared it to the one from 1971, the one today has 3,500 times the performance. It's 90,000 times more energy efficient. It's 60,000 times lower in cost. If you put that in the context of a 1971 Volkswagen Beetle and compared it to today, the Volkswagen Beetle would go 3,000 miles per hour, get 2 million miles per gallon, and it would cost four cents. And at that gas mileage, you could drive your entire life on a single tank of gas. That's Moore's law. That is what is defining what we're seeing right now.
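The golf math in that example is easy to check for yourself. Here's a quick sketch of the dime-doubling bet, using only the numbers from the talk:

```python
# The dime-doubling golf bet: 10 cents on hole 1, doubled every hole after.
stakes = [0.10 * 2 ** (hole - 1) for hole in range(1, 19)]

print(f"hole 9:  ${stakes[8]:,.2f}")    # $25.60
print(f"hole 18: ${stakes[17]:,.2f}")   # $13,107.20
```

Seventeen doublings of a dime is 0.10 × 2^17, which is exactly the $13,107.20 figure; that's the same exponential curve Moore's law describes.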
All right, so there's another thing going on in the digital transformation that I think is helpful to understand as well, because I think a lot of people are really nervous about what's happening in our society and how quickly things are changing and whatnot. And I've heard this explained in a way that I think is actually really helpful, which is understanding the activities that humans do and understanding the activities that machines do. There are lots of examples of things that humans used to do that machines do right now. Look at tollways on highways. When you drive through a tollway now, your transponder sends a transmission to a sensor, and it deducts money from your I-Pass or whatever you have in your car that lets you on the tollways. That used to be performed by a human. You used to have to stop and pay money. So there are lots of examples where we're seeing machines take over for what humans are doing. One way to think about this is on a scale, and the guy who created this calls it a humanology scale. I like this. On the left side of the screen, it's kind of laid out like a pH scale with acidity and base, where zero is kind of neutral. On the left side, these are things that really need humans. Funeral homes, right? We're not going to have robots talking at funerals or things like that. These are things that are really specific to humans: doctor's visits, human decisions that need to be made. Versus things that technology can do. Look at Amazon. We buy things on Amazon right now that we used to have to go to the store to buy, and we used to have to interact with a person to do it. I do a lot of shopping online now where I just click a button and the thing arrives at my house. That's been automated in many ways. And then there are a lot of places where we're seeing machines helping humans, in surgery and whatnot. So these are very highly scalable and automated tasks that can be done.
So what is artificial intelligence? This is, I think, the thing that I hear all of the time. And what I want to say about this is that a lot of people will frame artificial intelligence as something that's coming. And I think a lot of people on this call probably know this by now, but it's not coming. It's here. You know, if you think about the fact that billions of people every day search petabytes of data on the internet, and a petabyte is a million gigabytes of data, and expect to get accurate results, that's artificial intelligence. That search algorithm is being driven by a form of artificial intelligence. You know, we talk to our phones in natural language, and we fully expect our phone to speak back to us in our native language and give us nuanced information. We do that every day, or my kids do it all the time. I don't do it as much, but I see it all the time. And pretty soon we're going to be getting in cars and they're going to be driving themselves. So look, this isn't something that's coming. This is something that's already here. So when most people think about artificial intelligence, at least in the context of how we're thinking about it today, a lot of people think of something that is called artificial general intelligence. You talk to a machine, and the machine understands what you're saying and has emotional intelligence. It is able to detect what feelings you're having and give you an answer back. And you can ask it anything. It's like talking to a human being. So I think a lot of people think of HAL from 2001: A Space Odyssey. Other people might think about artificial intelligence as Watson playing Jeopardy in 2011. That's a form of what's called artificial narrow intelligence. That's the only artificial intelligence that is available today, artificial narrow intelligence.
The form that played Jeopardy is a kind of artificial intelligence called question answering. And what's important about artificial general intelligence is that the earliest predictions I've seen have it arriving somewhere around 2029, so I don't think anyone thinks we're really all that close to it. So when you hear the term artificial intelligence today, what we're really talking about is artificial narrow intelligence. All right. So one of the things that I want to try to unpack in this presentation is a framework for thinking about this, or at least to share with you how I think about it and how I made it accessible in terms of understanding what it was doing. One of the most important things for me is to understand that artificial intelligence is not a thing. It's literally thousands of things. At its core, at its most fundamental, artificial intelligence is a single discrete task. And when you chain those tasks together, you get something in return. But thinking of it as a single discrete task is probably the best way to think about it. All right. So this is a term we hear all the time: algorithm. It's a little bit of an intimidating term. A lot of people will say, well, don't worry, the algorithm will do that. And I'm always like, what algorithm? What are they talking about? What is an algorithm? And it must be new, because everyone's using the term. So it turns out that all an algorithm is, if you want to get it down to the simplest thing, is a sequence of instructions that tell a computer what to do. And the order of the sequence is really important. If you think about baking a cake, if you put the pan in the oven for 40 minutes before you put in the flour and the eggs, you wouldn't end up with a cake.
So you've got to think about the precise order. It cannot be ambiguous, right? An algorithm has to have very specific instructions, because computers are literal. And then the computer will properly execute them. So that's what an algorithm is. These have been around forever, right? Algorithms, and artificial intelligence, actually go back to the 1950s and the 1960s. Originally, the computer was called a thinking machine. So in some ways, if you really wanted to study the history of artificial intelligence, it's been around for a long time. The first phase of artificial intelligence came at the dawn of the computer age, with a lot of expert systems. And we see this today. As we look out into our landscape of the things that people are doing, we see a lot of work with expert systems. TurboTax is a good example of an expert system. Document automation can be an expert system. The online triage and logic systems that are being built in several states, these are forms of expert systems. So an expert system is kind of the classic way to program a computer. A human pre-defines the logic that goes into it. So, say you're asking somebody who's seeking asylum, when did you enter the United States? If the answer to that question is anything more than a year ago, then you know they have some type of legal issue, because they didn't apply for asylum in time. All of that is pre-programmed. Somebody sat down and said, here's the logic, put this logic into the system. And those systems are really important for what we do. I don't want to minimize them. Expert systems will definitely play a role in how we solve these problems. They're not going away. A lot of knowledge management systems are built on expert systems. Those will continue to be really important. But there's another type of algorithm that is really defining this wave of artificial intelligence.
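As an aside, the cake analogy can be shown in a couple of lines of code: the same two instructions, executed in a different order, produce a different result, which is exactly why an algorithm has to be a precise, unambiguous sequence. The numbers here are just for illustration:

```python
def correct_order(x):
    x = x + 2    # step 1: mix in the ingredients
    x = x * 10   # step 2: bake
    return x

def wrong_order(x):
    x = x * 10   # baking first...
    x = x + 2    # ...then adding ingredients gives a different cake
    return x

print(correct_order(3), wrong_order(3))  # 50 32
```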
Some people call them learning algorithms. Another word for that is machine learning. So the characteristic of learning algorithms, and what makes them different from a regular algorithm, is that they acquire information and rules using information. They literally learn. They reason, meaning they use rules to derive conclusions. And, increasingly, they self-correct. So that means the computers are basically writing their own programs now. And the efficiencies that we're getting by using machine learning and learning algorithms are huge in this space. So one characteristic of machine learning, and kind of the great promise of machine learning, is the ability to perform new, unseen tasks based on known properties learned from previous examples. The term big data, we hear that a lot, and it's really important because we need lots of data to train these machines. And again, the great promise of this is that they're training themselves on how to do things. Google has a really incredible deep learning program that watched YouTube videos and taught itself how to distinguish between dogs and cats. And the key there is that humans weren't telling it, that's a dog, that's a cat. The machine was able to derive what was a dog and what was a cat based on just hours and hours of YouTube watching. So there are different types of machine learning, and I don't want to get too deep into this, but this is important to know. There's what's called supervised learning. That's where humans are involved. A lot of what are called classification systems use a technique of machine learning called supervised learning. What that means is that you have to give it examples. So I'll show you some examples of it here.
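Before the real-world examples, here is a toy sketch of what "giving it examples" means in supervised learning: a classifier that predicts a label for a new item by finding the closest labeled example it was trained on. The features and labels here are invented for illustration, a 1-nearest-neighbor sketch rather than any particular product's method:

```python
# Each training example pairs made-up features (say, counts of
# housing-related and family-related words in a question) with a
# label that a human supplied.
labeled_examples = [
    ((5, 0), "housing"),
    ((4, 1), "housing"),
    ((0, 6), "family"),
    ((1, 5), "family"),
]

def classify(features):
    """1-nearest-neighbor: return the label of the closest known example."""
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(labeled_examples, key=lambda ex: squared_distance(ex[0], features))
    return nearest[1]

print(classify((4, 0)))  # housing
```

The point is that the machine never saw (4, 0) before; it generalizes from the labeled examples a human gave it, which is exactly the tagging work described next.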
I know that Jonathan Pyle in Pennsylvania has been doing a lot of great work around this, where he's getting information from his website, and as people ask questions, it tries to predict what type of issue they're having, and then he's going in and tagging what the correct classification of each question was. A lot of the work we're doing in this space right now is centered around supervised learning. Unsupervised learning uses a technique called clustering. This is a way of trying to train the machine without a human helping it, by clustering things that are similar to each other together and using math to figure out what things are. And then you might hear the term deep learning. That's what everybody's really excited about in AI right now, and it uses something called a neural net, which mimics how the brain works. It uses layers of computation to eliminate what something is not and try to come up with what it is. I think part of what's important here is that with supervised learning, versus say neural nets, you can explain the answer. In the Google example where they were finding cats and dogs, if the neural net got it wrong, nobody could explain to you why it got it wrong. Whereas if there was classification going on, somebody could explain to you why it got the answer wrong in that case. All right. So one of the things that I like to do, and again, I like to do it with an audience of people where I get feedback, so just play along at home with me here, is ask this question. I think this is a really powerful example that shows a bunch of different AI techniques, and it gets people engaged with artificial intelligence. So I ask, do you know what happens when you call a credit card company? And someone usually raises their hand, and I say, what's the first thing you hear when you call a credit card company?
And the answer is, yeah, that voice comes on and says, we are recording this call for quality assurance and training purposes, right? It tells you it's recording the call. And I ask, do you know why it's recording the call? And people say no. Well, it's recording the call for three reasons. The first reason is that it's trying to put you in a personality category. There was a guy that worked at NASA years ago, I mean like in the 60s, and NASA had to figure out what types of personalities people fell into, and they came up with these six different types of personalities that are on my screen here. And the reason they had to figure it out is because they were sending people into space together, and you're going to be up in space for a long time. Apparently they literally had a situation where they had to bring the rocket home early because the two astronauts were literally going to kill each other. So they came up with this personality profile and used psychologists to put you in one of these categories. Well, one of the first things credit card companies do is record your voice and the words you're using when you're describing things, and they try to put you into one of these six categories. And the reason they're doing that is they found that if they can match you with a person who has a similar personality, they can gain efficiencies in terms of how fast they can process your needs. So if you call a customer service line and you're the type of person that just wants an answer, like, my cable doesn't work and I just want it fixed, you know, they're going to put you with somebody who's like, hey, what can we do for you? If you're someone that needs to be coddled a little bit more, they might say, hey, how's your day going? You're having a good week? And then, you know, what can we do for you? How can we make your life better? Right?
And they're finding that they get better results by putting you into one of these categories. So this is one thing that happens. The second thing that happens is they're trying to detect your emotion, and they have a couple of different techniques for doing this. They can use the words that you use to try to figure out what your emotion is. Are you mad? Are you happy? What is your current emotion? They can also use tonal analysis of your voice, measuring the frequency at which your voice speaks. So this uses a bunch of different kinds of artificial intelligence. It uses something called speech-to-text: the AI takes your speech, translates it into text, stores the text in the computer, and then runs that text through an algorithm. It's also storing the frequency of your voice and putting that into a different algorithm to try to come up with what your emotion is. And one of the reasons they do this, especially on customer support lines, is they found that the angrier you are, the faster you'll get to a supervisor. And the reason for that is that the faster you get to a supervisor, the less likely it is that you'll end up in litigation. So they use this to try to place you with the right person and put you higher up in the chain if need be. All right, so let's see if I can make this work on here. I don't know if it'll work on the webinar, but the third reason that they record your voice, and this is especially true at credit card companies, is that every voice has a unique voice print. Just like every person has a unique fingerprint, every voice has a unique voice print. And they're measuring your voice print to see if you've ever called before on this account, and if you have, then you get a green light, right? Okay, we recognize that voice, they've called before on this account.
Or, if you've never called before on this account, you might get a yellow flag. It's your first time calling, the first time we're hearing your voice. We can't be certain it's you, but we don't have any other voices to compare it to. So, yellow flag, kind of a caution. And then the last reason is they're seeing if you've ever called on any other account before. They get these scammers that will call in all the time trying to get information about a person and their credit. And if they spot your voice as having called on other accounts before, then it throws up a red flag and says, stop, this person has called before and they're trying to scam us. So this is a good, really short video, about two minutes, and I think it kind of proves the point here. It's a female hacker. She's at a hacking conference, and she's trying to get into this guy's phone accounts. Let me see if it'll work on this. So the audience is not going to get the audio on it, and our frame rate is going to be about one second. We can take this video and distribute it with the notes, and we can look at putting a link in the description for the video. Awesome, thank you. Sorry, I was worried that you wouldn't be able to hear it. Well, so I'll tell you what it is. The punch line is that this woman, in very short order, takes over this guy's entire account and locks him out of it, and it took maybe 30 seconds. It's just amazing how easy it is to do that. So that's the motivation behind why AI systems are coming in to try to take that over now. Sorry you couldn't hear it. It's actually a great video. Every time I watch it, it makes me laugh or cry. All right, so as I was going through this, I think one of the things that is helpful to understand is, what are the uses of AI in our space? How are we going to see it?
So one of the flavors, or one of the use cases, of AI is just to predict things. It can be very predictive. You can use different machine learning algorithms to try to predict things like outcomes on a case. There are what are called litigation banks now that will analyze cases and try to figure out which cases are likely to get a lot of money, and the litigation banks will come in and finance those cases with the idea that they'll get a big upside. So they're using algorithms to figure out outcomes on a case. You can also imagine someday being able to predict an outcome in a court case based on the judge, based on the circumstances, what are the best arguments to use, things like that. All of that would be predictive. We're also seeing, and it's somewhat controversial, predictions of the likelihood of recidivism in criminal cases. Those types of algorithms are being used in certain courts to figure out whether or not to give someone bail, how much bail should be, things like that. What's the likelihood that this person is going to repeat this crime? Unfortunately, and I didn't put a lot of it on here, one of the things you have to watch out for is bias. Bias is a natural thing that happens in any type of algorithm. There's a great book called Weapons of Math Destruction, and it talks a lot about bias and some of its dangers. So I think there are a lot of really positive things about AI, and I think it's interesting to think about how it can be used in criminal cases, but I think we, as a community, need to be really careful about how we use this stuff, because the data these algorithms are trained on is likely inherently biased. So a lot of groups are fighting those types of algorithms. So one of the things that we've been working on since we got started is trying to figure out the capacity of an organization.
So, a lot of the organizations that we work with, if we ask them, you know, are you guys at capacity? Everyone says, yeah, we're over capacity. And I kind of laugh, like, well, how can you be over capacity? It sounds like you're right at capacity. And I say, well, how do you know that? Right? And they say, well, because we're really busy and we can't take any more cases and all these things. But it occurred to me that we have no idea what the capacity of an organization really is. So one of the very first things we did when we got into this space is we set out to figure out what the capacity of an organization is. And we learned a ton, and we failed a ton of times, before we figured out at least an approach to take with this. One of the things that we started with is, well, what if we just took every case by its problem code and tried to figure out the average amount of time spent on these types of cases? Well, that failed for a couple of reasons. One is that the average is the wrong way to think about this stuff. That's not how prediction works. You don't want just the average, because if you have one case that takes an hour and one case that takes 100 hours, the average of those is somewhere around 50 hours. But that is not indicative of what the next case that comes in the door will be. You can't predict that it will be 50 hours just because that's in between the two. So instead we took what's called a normal distribution of time, and we looked at, of all the cases of this type, maybe 25% of them were between one hour and three hours. The next 25%, so up to 50%, were between three hours and four hours; up to 75%, maybe up to six hours.
And at 95%, the cases that fell within that normal range maybe took up to 10 hours. And based on that, we thought, could we come up with a prediction? Then we realized you can't do that either, because every agency may do different levels of service on a case. It could be that they're giving advice on a case, or they're doing full representation on a case. And that was a factor that was incredibly determinative. So we had to add something for level of service. Other things that we put in there now are the experience level of the staff working on the case, and who's working on the case. So what we're ultimately trying to predict is how many hours this case we just brought in will take, based on who's assigned to it, the level of service, the type of case, and the legal issues in the case. And then we're also trying to predict how many days it will be open. That's actually starting to narrow things down now in terms of being able to predict the capacity of an organization, because you can look at every case, how many hours it's going to take and how many days it's likely to stay open, and then you can start to chart that and see who has capacity where. So that's just an example, and it's actually what got us started in this. There's another service called Case Text. It's a company that does a lot around case law, and they have a lot of different services. One of the predictive services they have is that you can upload a document, a legal brief, to it, and it will tell you what the likely case law and arguments from the other side are going to be, or what you missed, right? So it would say, all right, based on your jurisdiction and based on this brief that you've written, of all the briefs we've seen before, people have cited these cases and you didn't cite these cases, right? Or, you've cited these cases and no one's ever cited those cases before.
And then, kind of the opposite of that, it tells you the arguments the response brief is likely going to cite. This is a pretty new company; they've gotten a lot of traction and raised a lot of money lately, so they're something to watch. I got an email, and I've actually been trying to get ahold of the founder, saying they're offering some of their services free to nonprofit organizations for certain types of cases. So someone might want to reach out to Casetext and see if you can get a good example of using their service. Law firms are paying for it, so they obviously see a market in it. Another example is a site called this2.co, based in Toronto. The idea behind the service is that they took 58,000 divorce cases and, based on your income, your spouse's income, whether you have kids, how long you've been married, and so on, it predicts what you are likely to get in the case and how much you're likely to spend on it. And I always joke about this: if divorce were rational, which it is not, you would make a different decision if you knew this information on the front end. If the law were more transparent, and you went in and were told, look, you're likely to get $50,000 in alimony if you get divorced, and you're likely to spend $75,000 to get that $50,000, would you do the deal? I think our rational minds would say no, we're actually losing $25,000. So you might just settle at some lower amount, knowing that's better than what you'd otherwise end up with. What they're trying to do, based on data from other similarly situated cases, is tell you the likely outcome of your case, how long it's going to take, and how much it's going to cost. So again, I think that's a really creative use of this.
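The "would you do the deal" arithmetic is simple enough to write down directly. A hypothetical sketch using the talk's example numbers (the function names are mine, not the site's):

```python
def net_outcome(predicted_award, predicted_cost):
    """Expected net result of litigating: what you get minus what it costs."""
    return predicted_award - predicted_cost

def rational_to_settle(predicted_award, predicted_cost, offer):
    """A 'rational' client settles whenever the offer beats the predicted net."""
    return offer > net_outcome(predicted_award, predicted_cost)

print(net_outcome(50_000, 75_000))                 # → -25000
print(rational_to_settle(50_000, 75_000, 10_000))  # → True
```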
I don't have a slide for the next one, but one of the other predictive things we've been working on that I think is really interesting in this space is trying to figure out when a person needs a lawyer. I know this community has a lot of interest in this, so we're always open to conversations with people thinking about it too. Some of the factors we're looking at: what is the severity of the consequence in the case? If you're in public housing and you're being evicted, it's perhaps a lot more important that you get an attorney than if you're in private housing, because the consequence of losing is that you lose your subsidy as well; not only are you out of your apartment, you're also out of the subsidy that allows you to pay for that apartment. So: the severity of the consequence, and the perceived power of the parties, like when one side is represented and the other isn't. These are factors we're thinking through right now, trying to figure out how to put into the model. All right, so that's kind of the predictive stuff. Another term you'll hear a lot in this space is natural language. There are three flavors most people think of, and I hope to show examples of all of them: natural language processing, how we process language; natural language understanding; and natural language generation. So the first thing we can talk about is natural language processing, and again, think classification. What we're doing here is spotting legal issues. One of the things we've been working on is a natural language processing classifier that tries to figure out what the legal issue is from the text of somebody describing it.
The audience whose language you're processing is really important, because a client describing an eviction might sound different than a lawyer describing an eviction. A client may not use the word eviction; they may not even know what it means. A lawyer will, and a doctor or a social worker may use different language still. So we're very sensitive to who the audience is. The idea is that you could put this on the front of websites, or run it over case notes, to spot different things. What we're trying to produce is the likelihood of each legal problem, and there are four levels to it. There's the problem category; then the problem, which in legal aid we think of as a problem code; then a more nuanced third layer, what people describe as a special problem code, like whether an eviction is public or private; and finally the legal issues involved in the case. That last one is what we're really trying to get at, and it's why we started building this, because it goes back to the predictive work. If an eviction is a lockout, where the landlord locks the door and turns the heat off, those are issues in the case. They may take different amounts of time or effort, and whether you even take the case may depend on what the legal issues are. You might have two eviction cases, but put side by side, they're really defined by the legal issues involved. So that's one of the things we've tried to build using natural language. The idea was: what if we could train algorithms to spit that out and give us percentages? I think that's a really exciting area. Another similar project we did was around VOCA. In our community, there's a lot more VOCA funding available right now.
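Classifiers like this issue-spotter (and the VOCA tool discussed next) are trained on labeled case text and output a probability per label. As a minimal stand-in for however the real tools were built, here is a tiny bag-of-words naive Bayes classifier; the labels and training sentences are invented, and a production system would train on thousands of real case notes with a proper NLP library.

```python
import math
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (text, label) pairs. Returns a naive Bayes model."""
    word_counts = defaultdict(Counter)  # per-label word frequencies
    label_counts = Counter()
    vocab = set()
    for text, label in examples:
        words = text.lower().split()
        word_counts[label].update(words)
        label_counts[label] += 1
        vocab.update(words)
    return word_counts, label_counts, vocab

def classify(model, text):
    """Return {label: probability} for the given text."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    log_scores = {}
    for label, n in label_counts.items():
        score = math.log(n / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            score += math.log((word_counts[label][w] + 1) / denom)  # Laplace smoothing
        log_scores[label] = score
    top = max(log_scores.values())
    exp = {k: math.exp(v - top) for k, v in log_scores.items()}
    z = sum(exp.values())
    return {k: v / z for k, v in exp.items()}

# Invented training examples -- a real classifier needs far more.
examples = [
    ("my landlord changed the locks and turned off the heat", "housing"),
    ("landlord says i have to move out by friday", "housing"),
    ("my husband hits me and i am scared to go home", "family"),
    ("i want custody of my kids after the divorce", "family"),
]
model = train(examples)
print(classify(model, "the landlord locked me out of my apartment"))
```

Note the client never says "eviction"; the classifier still leans toward housing because of words like "landlord," which is exactly the audience-sensitivity point above.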
For those of you who don't know, VOCA has to do with victimization and victims of crime. In a VOCA case, you first have to spot that the case could be VOCA-eligible, and then spot what type of victimization it would qualify under: is it sexual assault, domestic violence, financial fraud, things like that. One of the observations, and I'll give credit where it's due, is that a lot of people at various legal aid agencies approached me and said: we suspect we're leaving a lot of money on the table, because it's hard to train everybody to look for VOCA, so there are probably qualifying cases we're just not seeing. So we spent several months working on an algorithm trained on tens of thousands of observations of cases we knew were VOCA-eligible. We trained it to take the text in those case notes and predict, from the text alone, whether a case is VOCA-eligible or not. We're actually just rolling that tool out in beta now, but I think it's a really good example of how, when we talk about artificial intelligence, we're talking about a discrete task. All the VOCA classifier is trying to do is spot whether there's VOCA eligibility, and what kind of victimization it was. Language translation is one example of natural language understanding, and I won't say much on it; how good language translation is happens to be a really controversial topic, especially in this community. I will tell you that about a year ago, maybe even two now, the companies that have these machine learning algorithms changed how they train them, and they've gotten incredibly more effective; their error rates are a lot lower.
The way they're doing it is with something called natural language understanding. Natural language understanding doesn't just process the words in a sentence; it processes the meaning of those words in that sentence, the context. The example I give a lot when I talk about natural language understanding: if I said the word "bank" to you, what does that mean? You might say a financial institution, or it could be a bank shot if we were talking about basketball, or someone's last name, or a river bank. There are lots of meanings for the word bank, so without the context of a sentence, we don't know what it ultimately means. But if I said, "I deposited my check in the bank," then people would say, right, you're most likely talking about a financial institution, although it is possible I could deposit a check in a river bank. That's natural language understanding: in the context of that sentence, what does the word bank mean? And in the context of a sentence spoken in your native language, can it be translated into other languages? The answer is that machines are doing this faster and better all the time. There's a company, by the way, with an earpiece they think will be out sometime in 2018 that will translate in your ear what people are saying to you. We'll see how that goes, but it's an interesting concept: you're wearing an earpiece, you get into a cab in Paris, say something in English, and the cab driver understands it in his or her native language, says something back, and you understand it in yours. So, let me just go out of order here. The other thing we see a ton of hope for in this space is something called entity extraction.
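Going back to the "bank" example for a moment, the idea of picking a word sense from sentence context can be sketched with a toy Lesk-style disambiguator: choose the sense whose description shares the most words with the sentence. Real NLU systems use learned contextual representations; the sense "glosses" below are hand-written purely for illustration.

```python
# Hand-written sense glosses for the word "bank" -- purely illustrative.
SENSES = {
    "financial institution": "money deposit check account loan financial institution",
    "river bank": "river water edge shore land slope",
    "bank shot": "basketball shot backboard rebound ball",
}

def disambiguate(sentence, senses=SENSES):
    """Pick the sense whose gloss shares the most words with the sentence."""
    context = set(sentence.lower().replace(".", " ").split())
    return max(senses, key=lambda s: len(context & set(senses[s].split())))

print(disambiguate("I deposited my check in the bank"))  # → financial institution
print(disambiguate("We fished from the river bank"))     # → river bank
```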
Within natural language understanding, entity extraction is a way of understanding context by breaking text down into individual sentences, then breaking down the parts of each sentence and trying to identify what's called an entity. What I'm showing on the screen here is just looking for nouns, verbs, pronouns, adjectives, prepositions, proper nouns, and so on. There's a ton of work going on in this space; for a lot of the big AI companies, this is kind of their fundamental tool. Google has one, Stanford has one, a lot of different groups have these entity extraction tools. The way we see it being used is in trying to identify objects within a document. Just to give you a sense of what we're working on: if you take a picture of a document and text it to a machine, say to a phone number, that uses a form of artificial intelligence called computer vision, specifically OCR, optical character recognition. OCR has been around for a long time; most people don't think of it as AI anymore because we've had it forever, but it's getting far more advanced and accurate than it used to be. That's the step that takes the picture you took of the document and turns it into actual text. There's another tool that breaks the text apart into individual sentences. There's another tool that reads the document to figure out what type of document it is: is this an eviction notice? A pleading? A citation? A charging document in a criminal case? That's a classifier; it just classifies what kind of document it is.
Then, based on that, the entity extraction tools look at every sentence and try to figure out things like: if it's an eviction notice, who's the landlord? Who's the tenant? What's the address? When do they have to move out? Think about how we do intake now, where we ask all these questions. What if intake started with: I got this document, what does it mean? And a machine just spit out: it's an eviction notice, the landlord is Bob Smith, the tenant is Sally Jones, the move-out date is this, the address is that. So we see a lot of hope in entity extraction over documents; we see that as a big mover in this space. I skipped over computer vision a little bit there. Here's an interesting question; I'll ask it and then answer it, since this is less interactive. Do you know how many photos are uploaded to Facebook per day? It's 300 million; on average, 136,000 photos are uploaded to Facebook every second. Why is that important? Because Facebook and Apple have been working a ton on computer vision to do facial recognition, to identify people. I talk to my kids about this, and the example I use is: I went to college in the late 80s and early 90s. I am sure there is a photo of me in a shoebox somewhere, at a party or something, that I would be embarrassed by if I saw it today. That probably exists, and I don't worry about it, because the likelihood of somebody finding that photo, identifying that it's Ivy, and doing anything with it is very low. But what I tell my kids is that those same kinds of pictures are now being taken on phones and uploaded, and it's very likely that in my kids' lifetime, somebody will do a Google search on them and find that proverbial photo in the shoebox and know who it is.
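Returning to the document pipeline just described (OCR, then a document-type classifier, then entity extraction), the extraction step for an eviction notice might look like the toy version below. Real systems use trained entity-extraction models over OCR output; this regex sketch only handles one invented notice format, and the names come from the talk's own example.

```python
import re

def extract_eviction_fields(text):
    """Pull named fields out of (already-OCR'd, already-classified) notice text."""
    patterns = {
        "landlord": r"landlord[:,]?\s+([A-Z][a-z]+ [A-Z][a-z]+)",
        "tenant": r"tenant[:,]?\s+([A-Z][a-z]+ [A-Z][a-z]+)",
        "move_out_date": r"vacate (?:the premises )?by\s+([A-Za-z]+ \d{1,2}, \d{4})",
    }
    fields = {}
    for name, pattern in patterns.items():
        m = re.search(pattern, text, re.IGNORECASE)
        fields[name] = m.group(1) if m else None
    return fields

notice = ("NOTICE TO QUIT. Landlord: Bob Smith. Tenant: Sally Jones. "
          "You must vacate the premises by March 1, 2018.")
print(extract_eviction_fields(notice))
# → {'landlord': 'Bob Smith', 'tenant': 'Sally Jones', 'move_out_date': 'March 1, 2018'}
```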
And the reason they'll know is computer vision, being able to identify people just by looking. So that's an important one to be aware of as well. I'm trying to keep this to an hour, and we're almost there, so let me just say that one of the other things I think is really important in this space is the idea of chatbots and conversational interfaces. The DoNotPay chatbot gets a ton of press, but there are many chatbots doing lots of things. The way to think about chatbots, and how they're relevant to our work, is in terms of what we call conversation loops. We think of these as microservices: we want to collect a piece of information, so we create something in the form of a conversation. On interfaces, the prediction is that in 10 years websites won't exist; we'll just have conversations with people and pull the data out of the conversation. Think about SMS text messaging, Facebook Messenger, Snapchat, all the different platforms we have conversations on. The chatbot revolution is basically a way for a machine to have a conversation with us and understand what we're saying. Say the question is, what is your gender, or how do you identify your gender? If I said, "I'm a dude," a person would know what that means and infer that I identify as male. With a chatbot, you have to train it with different examples to say, this answer means this value. The way we do it now is to ask the question and give a dropdown menu: pick from this list. The way it's transitioning is toward a conversation, where the machine has to understand the answer to the question, and then, based on that, do something next.
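One of those conversation "micro-loops" can be sketched as follows: map a free-text answer to a structured value using example phrases, instead of a dropdown. The labels and example phrases are invented; a real chatbot platform trains an intent model on far more data.

```python
# Example phrases per structured value -- invented for illustration.
GENDER_EXAMPLES = {
    "male": ["male", "man", "guy", "i'm a dude", "he him"],
    "female": ["female", "woman", "lady", "she her"],
    "nonbinary": ["nonbinary", "non-binary", "enby", "they them"],
}

def interpret(answer, examples=GENDER_EXAMPLES):
    """Return the best-matching label, or None so the bot can ask a
    clarifying follow-up question instead of guessing."""
    words = set(answer.lower().split())
    best_label, best_score = None, 0
    for label, phrases in examples.items():
        score = max(len(words & set(p.split())) for p in phrases)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(interpret("I'm a dude"))  # → male
print(interpret("purple"))      # → None
```

The `None` branch is the important design point: when the loop can't understand the answer, it asks again rather than recording a wrong value, and chaining many such loops together is how larger conversations get built.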
I see the chatbot revolution as something that completely changes how we go about this work. And again, chatbots have been around for a while and, by themselves, aren't all that sexy. You just have to think of them as little micro-loops: get a piece of information from somebody, then figure out the next question to ask. When you chain them all together, you can build pretty powerful things. The last thing I want to call out is something called natural language generation. This is somewhat new; it's coming on the scene right now, there are more and more examples of it, and when you realize where it's being used, it might surprise you. Think about the context of natural language processing and natural language understanding: we're taking unstructured data and trying to create structure from it. In my document example, this is the landlord, this is the tenant; that document is unstructured data. Ten years ago we wouldn't have been able to use a machine to figure out its structure, whereas now we're extracting from it and giving it structure. Natural language generation is the opposite: it takes structured data and creates some type of article or writing that sounds like a human wrote it. There are a lot of examples. You might be surprised to know that a lot of sports articles on certain websites are written by computers.
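A toy sketch of that structured-to-narrative idea: render a recap sentence from box-score data. Real sports-recap generators choose among many templates and learned phrasings; this uses one fixed template over an invented data layout.

```python
def recap(game):
    """Render one fixed narrative template from structured box-score data."""
    home, away = game["home"], game["away"]
    home_pts, away_pts = game["home_score"], game["away_score"]
    winner, loser = (home, away) if home_pts > away_pts else (away, home)
    hi, lo = max(home_pts, away_pts), min(home_pts, away_pts)
    verb = "edged" if hi - lo <= 3 else "beat"  # close games read differently
    star = game["top_scorer"]
    return (f"The {winner} {verb} the {loser} {hi}-{lo}; "
            f"{star['name']} led all scorers with {star['points']} points.")

game = {"home": "Cavaliers", "away": "Bulls",
        "home_score": 102, "away_score": 100,
        "top_scorer": {"name": "LeBron James", "points": 32}}
print(recap(game))
# → The Cavaliers edged the Bulls 102-100; LeBron James led all scorers with 32 points.
```

Swap the box score for case facts (judge, case type, legal issues) and the template for pleading language, and you have the draft-pleading scenario described next.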
So when the game ends, a computer generates a narrative that's published online, and you might read it and say, oh wow, the Cleveland Cavaliers won a thrilling game against the Chicago Bulls, LeBron James scored 32 and hit a buzzer beater, and you read it as though you were reading any sports article, and all of it was generated by a machine. The reason it can do that: think of the millions of examples out there of humans writing stories about sporting events, and all the structured data we have on a sporting event, the box score, how many points each player had, how many minutes they played, the play-by-play. It's pulling that structured data and creating unstructured data with it, meaning it's writing some type of narrative. When this hits the law, it is going to completely change everything. Think about document automation and how we do it now; instead of asking all the questions, it will be: forget the questions, here's the draft pleading, based on who the judge is, what type of case it is, and what we know about this case, here's what we think your best argument is. The computer will have written it, you'll edit it, and that's where the efficiency is gained. So I just want to end on a point of optimism. I know we have some time, and I'm happy to take questions, I'd love to take questions, but I want to end on a point of optimism and say: the idea of artificial intelligence, the idea of machine learning, is intimidating, and there are parts of it we need to worry about, like bias.
But I want you to know that lawyers aren't going to get replaced. Lawyers will still be needed; we still need humans to do this work. We're just going to do it with the assistance of technology. Probably the best, and kind of funniest, example I have of this, and I'll end with the story: a good friend of mine graduated from law school in 1992 at the top of his class, moved to Chicago, and worked for one of the biggest law firms in the city, directly for the managing partner, who happened to be one of the most famous litigators in the city. He explained his job: every day he would go to court, carry boxes and briefcases, sit in court all day watching this person work, and at the end of court they'd come back to the office. And that's when his day would begin, at four o'clock, when the partner would come in. One day the partner came in and said, "I need you to look for this word in this stack of depositions." The stack was about six feet tall, and when he got that assignment he knew he was going to spend all night reading through it looking for that one word. So when the partner left, my friend Dave turned to the paralegal and said, "Don't we have these depositions on floppy disk?" The paralegal said yes and brought him the floppy disks. Dave hit Control-F, found the 19 places the word was used, highlighted them in the depositions, walked into the partner's office an hour later, and said, "I found it; it's in these 19 places." The partner said, "There is no way you found it that fast, and no way you were accurate. Even if you found these 19, there must be other places you missed." And Dave said, "I'm pretty confident I didn't." So he told him about Control-F, and the partner was just amazed: oh my god, Control-F, this is incredible, think about that. And the worst part of the story is that when he told the other partners, they were angry. They said, "You just cost us $1,800 with your Control-F, because we could have billed for the six hours Dave would have spent looking through all those depositions, and now we can't." That was 25 years ago: Control-F. That's what artificial intelligence is today. These are the tools that are going to help us do the work we do. So instead of "artificial intelligence," think of it as intelligent assistance, or an intelligent assistant: something that's going to make us better at what we do, make us more efficient, and hopefully allow us to serve more people, because that's what we're ultimately after. That's what I have today on artificial intelligence. I hope you found it informative, and I'm happy to stay on and answer questions if anyone has any.

No, great presentation there. I thought there were a lot of practical examples. With regard to benchmarking these technologies, what do you think some of the best practices are, the best ways for people to keep up on a field that is moving so quickly?

Well, I think you just said it; this stuff is coming fast and furious. I think what people ought to do is, first of all, take a breath. It is intimidating, but take a breath and try to identify the places in your practice where you do things every day, over and over, the repetitive, routine tasks, and ask: is there a way I can automate this? And keep it small. Is there a way to handle this one task I do all the time so that it's automated, it happens automatically, and I don't have to worry about it?
Like, I think the VOCA example is a good one. It's nothing earth-shattering; it's not like, oh, you just saved the world. But that one example is very practical, and maybe it only has a marginal return on investment: say you only get three percent more cases than you did last year because you spotted them through a tool like this. But look, that's three percent. We're serving 15 percent of the people who need our help; if we can increase that by three percent, awesome. I don't know if that answers your question, but that's how I approach it.

Definitely. Thank you so much for coming in and presenting today, Ivy.

Of course. All right, thanks everyone.