Hey, so the title of today's webinar is "The Future of Legal Services: Technology Is Now," and we have a great group, so I'm gonna try not to waste any more of your time so that we can get on to hearing from them. A quick rundown of our panelists, though, in alphabetical order: David Colarusso is joining us; he's a clinical fellow and director of the Legal Innovation and Technology Lab at Suffolk University Law School. Margaret Hagan, director of the Legal Design Lab at Stanford Law School and Design School. Jack Haycock, client-focused technology innovator from Pine Tree Legal Assistance. Alice Mori, managing attorney from the City Bar Justice Center. Glenn Rawdon, senior program counsel for technology from the Legal Services Corporation. And Christopher Schwartz is the deputy director of the legal hotline at the City Bar Justice Center. My role going forward will be primarily to moderate questions and keep the conversation going. So without further ado, I'm going to hand things off to David and Margaret, who are going to talk about their project called Learned Hands. David, I'm gonna hand things off to you. Great. And let's hope it shows the correct screen. All right. Everyone seeing the slide deck? Yes. Okay. So today we're gonna talk about our project, Learned Hands, which is a partnership between my lab, the LIT Lab at Suffolk University Law School, and Margaret's lab, the Legal Design Lab. Margaret, you wanna just say hi and introduce yourself? Hi, everyone. Yeah, I'm happy to be here. David's gonna take over; I'll just be the color commentary. So we're gonna talk today about a project that we put together that uses gamification and machine learning to help address access to justice. So first, the big elephant in the room: big data and the law. We're gonna talk about machine learning, which of course requires that there be some data to work on.
Now, we're not gonna get into a big description of how that works. For our purposes, it's sufficient to understand that if you have a lot of historical data, you can train machines to find patterns in that data, and then you can use those patterns to make predictions about the future. In our case, we're gonna be making predictions about what the issues are in a bit of text, and that's something we're going to do using machine learning. What's important for you to understand in that process, though, is that we're basically making a model. That model is making an educated guess about what's going on. And one thing to remember is that all models are wrong, but some models are useful, which is the great George Box quote. A map is sort of the prototypical example here. A map doesn't have all the detail you'd need to see everything in the world; it has just enough fidelity to be useful in certain circumstances. So when we're talking about building a classifier that's gonna be able to say we spot this issue or that issue, I don't want anyone to think we're trying to build a robot lawyer. What we're trying to do is build a tool that we can use to make an educated guess about the content of some text. So whenever you have a model, since all models are wrong but some are useful, we like to think about what you need to ask yourself before you decide to adopt the model. And there we basically want to say that the output of the model, since we know it's wrong, should be the start of a discussion, not the end of a discussion. You never want these models to be the final say-so; you want them to help start the discussion. In our process, we're gonna talk about getting over some easy first steps. And then you always wanna ask, compared to what? If you're not doing much right now, it's very easy to clear that bar.
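To make the "train on labeled historical text, then predict issues in new text" idea concrete, here is a minimal sketch. This is not the actual Learned Hands model; it's a toy bag-of-words Naive Bayes classifier, and the posts, labels, and tokenizer are all invented for illustration.

```python
# Toy issue-spotting classifier: train on labeled stories,
# then make an educated guess about the issue in a new story.
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train(labeled_posts):
    """labeled_posts: list of (text, issue_label) pairs."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()
    for text, label in labeled_posts:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return word_counts, label_counts

def predict(model, text):
    word_counts, label_counts = model
    total_docs = sum(label_counts.values())
    vocab = {w for counts in word_counts.values() for w in counts}
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior + log likelihood with add-one smoothing
        score = math.log(label_counts[label] / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in tokenize(text):
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

posts = [
    ("my landlord will not return my security deposit", "housing"),
    ("eviction notice posted on my apartment door", "housing"),
    ("my ex wants custody of our kids", "family"),
    ("filing for divorce and child support", "family"),
]
model = train(posts)
print(predict(model, "landlord kept my deposit after I moved out"))  # housing
```

The output is an educated guess, not a verdict, which is exactly the "start of a discussion, not the end" framing above.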
But whenever you're talking about whether or not a model is useful, you have to ask, compared to what, and then make sure that it's gonna start discussion, not end discussion. Which brings us to our project, Learned Hands. This comes from the idea that many hands make light work, and then of course the fact that attorneys love puns. Or at least I do. I'll let Margaret comment on whether or not she does. I do. I love a good pun. That's great. That being said, I'll leave it to Margaret, then, to talk about sort of where our project is placed. Yes. So our motivation is figuring out where we can help, given that there are a large number of people asking legal questions online, whether it's on legal sites or even on sites like Reddit or Yahoo Answers or just on a Google search engine. We know that more and more people are going to seek justice information online. Which is interesting, because that creates data. As they ask questions on Google, on Reddit's r/legaladvice board, we see these kinds of stories of people trying to seek justice, and oftentimes getting denied or getting confused. So there's a lot of this access to justice crisis happening on the internet, where people are not necessarily getting matched up with the public, jurisdiction-correct resources that are available; people just aren't able to find them. So we have this interesting data set, or possible data sets, of people asking questions and telling their stories online. And this is where David and I were seeing an opportunity for an initial machine learning approach, where we could start to build a model on top of this data of people asking questions, of people telling their stories about getting evicted or trying to get custody of their kids or dealing with a bad situation at work.
So the big motivator for us was: if there are all these people showing up to try to get access to justice online, could we possibly harness the power of machine learning, using data sets that we might be able to get from these platforms, to train a model that would classify the different legal issues that are showing up in people's stories. And then the third part of this diagram is the pent-up lawyer power, which is really where Learned Hands comes in. To actually train these models, we need to have well-educated lawyers and law students look at these people's stories and start to apply structured labels to them, to say, we think that this legal issue, code number blah-blah-blah, is present in this story, and to look at hundreds, if not thousands, if not millions of these posts, with lots of different pairs of eyes, all these many learned hands, looking at these posts to then help our model learn how to automatically classify the legal issues that might be present in them. And so what we've done is we've created a game, which we're gonna give you the URL to play in a second, but we'll hold off for a moment so you can just listen to us. But here's a screenshot. Basically, we've created a game that takes data from Reddit's r/legaladvice subreddit. Some of the moderators over there were kind enough to provide us with their own copy of the data set. We have about 75,000 questions as people actually asked them. So we're not talking about the answers that people get on r/legaladvice, which could all become very dicey, and we could have a lot of discussions about that. We're talking about just the questions they asked. These are, in their own words, how people are asking questions. An important thing to understand is that we would love to have data from places other than Reddit, so that we had a different population of users, because Reddit's population is skewed toward a certain demographic.
So we're also interested in getting private data sets and using those to help train models as well. We've done some initial training on some labeled data that we have, and it actually doesn't look that bad. We can actually spot family and children issues with pretty good accuracy. The thing we're looking for right now is just to make sure we don't make horrible models. For us, a horrible model is a model that doesn't beat a coin flip, right? If you could flip a coin and do better than the model, it's a horrible model. You also have to beat always guessing yes or always guessing no. So accuracy by itself is not necessarily a good measure, but comparing against always guessing yes and always guessing no can help you out. Just think about a situation where you have something that only happens one percent of the time. You could be 99% accurate by just saying it's never going to happen every time you're asked. So we try to beat that. And then we also have some other metrics that we look at, as in how many of the potential things we catch, and when it says something is a certain issue, whether it really is that issue. And so we've got preliminary data that's looking nice, and we really want to get more labels and get people in there labeling, so we can start doing some machine learning. But Margaret will talk a little bit more about what those labels are and how her lab has worked to design them. So, what is it we're labeling? Just to reinforce: the whole point of the model is to be able to spot high-level legal issues, so we could maybe bucket a given post on Reddit, so the model could say, based on the labels and the models built before, that this new Reddit post likely indicates the presence of a family law issue or a housing law issue or a work and employment issue. So it's kind of high-level labels.
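The "beat the dumb baselines" check David describes above can be made concrete. This sketch uses invented numbers, not their data, to show why raw accuracy is misleading for a 1%-prevalence issue:

```python
# Why accuracy alone is misleading for rare issues: the always-no
# baseline scores 99% accurate while catching nothing.
def accuracy(preds, truth):
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

def recall(preds, truth):
    """Fraction of true positives the predictions actually caught."""
    caught = sum(p for p, t in zip(preds, truth) if t == 1)
    return caught / sum(truth)

truth = [1] + [0] * 99        # issue present in 1 of 100 posts
always_no = [0] * 100         # the lazy baseline

print(accuracy(always_no, truth))   # 0.99 -- looks great...
print(recall(always_no, truth))     # 0.0  -- ...but catches nothing
```

So a useful model has to beat both the coin flip and the always-yes/always-no guessers, which is why recall and precision matter alongside accuracy.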
But then we want to get down to more specific issue codes, because really, if we think about possible applications of this, we can imagine a direct referral of somebody posting on Reddit about transferring a house out of a trust with three trustees to one trustee, maybe passing them to an estates, trusts, and wills practice. That's good, but it's not really going to solve that fundamental access to justice, that access to correct, jurisdiction-specific public legal information for them. So ideally we can get down to more granular issue codes, so that the model doesn't just predict this is housing or this is family, but this is a particular type of divorce, or a particular type of dispute with a landlord, or a particular type of traffic ticket question. We want to get down to more specific legal issues. And then ideally we can also label the many wonderful legal help websites, whether it's the statewide portals or the court self-help sites, with all of their very rich, specific information that honestly not enough people are accessing right now. How can we match these people showing up on platforms like Reddit's legal advice board and get them to the exact right page for their jurisdiction that a public, government, or nonprofit entity has already created? To that end, we at my lab have been working with wonderful partners around the country who have already built legal help issue taxonomies, starting with the National Subject Matter Index, which was funded by LSC and is now hosted by the Northwest Justice Project and LSNTAP. So that was an initial set of issue codes, and we're taking that as a base, and we've been talking to a lot of other legal aid and website administrators to hear about how they're classifying legal issues on their web pages. So we've brought together all these different taxonomies.
We're trying to refine them into a new version of NSMI that would have these specific legal issue codes, and then we can use those when we tag up people's posts on Learned Hands, and we can also automatically tag up legal help websites. So hopefully these codes will then be the basis for a Rosetta Stone-like interaction, where we can match how people talk about their problems with how lawyers and court staff frame and present their legal help information, so that we can get these two groups together. So I'm sure everyone's eager to see the site, so you can, in another window, go ahead and pop up learnedhands.law.stanford.edu, and you'll see something that looks kind of like this. You'll log in. We are asking that people either create a login or use a social media login, because we need to actually track specific users, because of the way we're figuring out whether or not an issue is present. So you might be asking yourself, well, you ask three attorneys whether an issue is present and you're gonna get five different answers. And so what we do is we actually use some statistics, with assumptions about the distribution of right and wrong answers among our answers, and we can figure out whether or not something is or is not present based on that. It's not a majority-vote system, but it works sort of like this: once enough people say this issue is present, then we can make sure we don't have to keep showing that issue or that text to someone to label. And so what that might look like is, you come in and you're able to play mix mode, and we see a little question here, and it says, stealing Instagram images: my sister runs a semi-popular African-American natural hair blog on Instagram, blah, blah, blah, blah.
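The transcript doesn't spell out the exact statistics behind finalizing an answer, only that it isn't a simple majority vote and targets roughly 95% confidence. Purely as a sketch, one plausible stand-in is a binomial test against the hypothesis that the labelers are just guessing:

```python
# Hypothetical vote-finalization rule: stop collecting labels once the
# yes/no split is very unlikely under pure coin-flip guessing.
from math import comb

def p_value(yes, total):
    """Chance of a split at least this lopsided if every vote were a fair coin."""
    k = max(yes, total - yes)
    tail = sum(comb(total, i) for i in range(k, total + 1)) / 2 ** total
    return min(1.0, 2 * tail)   # two-sided

def finalized(yes, total, alpha=0.05):
    """True once the split clears the ~95% confidence bar."""
    return p_value(yes, total) < alpha

print(finalized(3, 4))   # False: 3 yes / 1 no is not yet conclusive
print(finalized(6, 6))   # True: 6 unanimous yes votes finalize the answer
```

The practical payoff described in the transcript is the stopping rule: once an answer is finalized, that text/issue pair never has to be shown to another labeler.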
And so it's gonna talk about what this person's issue is, and we're gonna vote on whether or not there is or isn't something there. I'm just gonna go ahead and skip, because I'm not gonna keep reading that. But then it would keep asking about other issues until all the issues that needed to be answered for that question were answered. What's nice is that you don't have to do more work than is necessary to get to that final answer. So if everyone agrees upon what the content of a text is, then the next question that people see is not gonna be that same question. Likewise, we're gonna use active learning, which means we're gonna start training our models on these texts, and based upon how well our models can determine whether or not something is present as an issue, we'll pick the ones they're not sure about and put those at the top of the queue, so that we'll always be getting the most bang for our buck as far as people's labeling. There's a nice gamification aspect. As you can see right now, I'm number nine in the ranking, but if we look at the leaderboard, Margaret is the undisputed queen. And so hopefully you can get in there and try to knock her from her throne. I'll let Margaret talk a little more about some of the design aspects that went into creating the game. Yeah, so our goal was to really make it as fun as possible, because when we tried labeling, or asked research assistants or other people here on our Stanford team to label, it's usually very heavy and tiring with a spreadsheet. We really wanted to make this as lively and also as powerful as possible, so that we could attract something like a crowd, looking at other games that have come out from physicists and cosmologists, where academic researchers have harnessed the power of crowds by making games, kind of citizen science.
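The active-learning step David describes (put the posts the model is least sure about at the top of the queue) is commonly implemented as uncertainty sampling. A minimal sketch, with made-up post IDs and predicted probabilities:

```python
# Uncertainty sampling: order the unlabeled queue so that posts whose
# predicted probability is closest to 0.5 (most uncertain) come first.
def label_queue(posts_with_probs):
    """posts_with_probs: list of (post_id, predicted_prob_issue_present)."""
    return [pid for pid, p in sorted(posts_with_probs,
                                     key=lambda item: abs(item[1] - 0.5))]

probs = [("a", 0.97), ("b", 0.51), ("c", 0.05), ("d", 0.40)]
print(label_queue(probs))   # ['b', 'd', 'c', 'a']
```

Posts "a" and "c" are ones the model is already confident about, so human labels there add little; "b" and "d" are where a label teaches the model the most.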
So that's really where we see an opportunity in access to justice: as we get more data and we think about more machine learning applications, we can invest in these ways to hopefully make it really lightweight, engaging, and rewarding to do the hard work of cleaning, labeling, and preparing the data to train models. So this is our first hypothesis, that we can get people to come and label with us and really scale up our power to train these models pretty quickly. Even tomorrow here at Stanford, we're gonna host a game tournament where we have students competing against each other for prizes, for who can label the most and who can also spot bugs in our program. So we're trying to figure out ways to really scale up the amount of labeling, because even if we can hire small teams of RAs in our labs, it's nothing compared to harnessing the wonderful lawyers and law students out there. So we really encourage you to come play. Hopefully it'll be easy, fun, quick, very game-like. And we'll also give quality scores. I think David could say a few more words, but there's also our ability to tell you how effective you are at actually spotting issues. Yeah, so once we determine that an issue actually is present, which is to say a sufficient number of folks have said that it is present, then that becomes a finalized answer. And we're doing that to a 95% confidence level. Then we compare your answers to the sort of group answer. And based upon that, we're using something that you might be familiar with, a measure called the F1 score, which is basically a combination of how well you did at finding all the issues and how often the issues you said were issues actually were issues. And so that's a quality score, basically from zero to 100, where 100 would be perfect.
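The quality score just described (F1 against the finalized group answers, scaled to run from 0 to 100) can be sketched like this; the example labels are invented:

```python
# Labeler quality score: F1 of your votes against the finalized group
# answers, scaled to 0-100. 1 = issue present, 0 = issue absent.
def quality_score(your_labels, group_labels):
    tp = sum(y == g == 1 for y, g in zip(your_labels, group_labels))
    fp = sum(y == 1 and g == 0 for y, g in zip(your_labels, group_labels))
    fn = sum(y == 0 and g == 1 for y, g in zip(your_labels, group_labels))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)   # when you said "issue," was it one?
    recall = tp / (tp + fn)      # of the real issues, how many did you catch?
    f1 = 2 * precision * recall / (precision + recall)
    return round(100 * f1, 1)

# You flagged 3 posts as having the issue; the group finalized 4 as having it.
print(quality_score([1, 1, 1, 0, 0], [1, 1, 0, 1, 1]))   # 57.1
```

Because F1 is the harmonic mean of precision and recall, blindly voting "yes" on everything won't inflate the score: recall would be perfect but precision would crater.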
For those of you who know F1, we're basically taking the F1 score and multiplying it by 100, and that's your quality score. And so you get bragging rights, right? I mean, you get points for answering more questions, but that's also mixed with your quality. So if you answer a whole bunch of questions but don't have high quality, then you're not gonna make your way up there and knock Margaret off her post. And we're really hoping that people will find it fun. I like doing it on my commute. And we already have people hot on our heels, even though we've been playing in sort of beta for a while. So we really hope you'll go share it with your law firm; if you're at a law school, share it with law students, share it with professors, just get as many people as you can doing it. And then we'll have a tool that, we should have made clear, is being built with funding from the Pew Charitable Trusts. And we've agreed to make as much of this available to the public as possible, which means we'll be making the classifiers available, and as much of the labeled data as we can. So this labeled data that everyone's voting on, we can make available. We still would love to get private data sets. So if you're sitting on top of a data set, maybe you're a legal aid organization and you get cold emails that come in, those would be the perfect type of thing that we could bring into the system, and we could have either your staff or our staff start labeling them without making them available to the public. And the benefit there is we could then train our algorithms on it, so that we're able to spot more than just the issues that people have in the subreddit, and we can sort of see what a broader portion of folks are looking for. David and Margaret, I had one quick question. This is Mike. Just from the perspective of someone who manages a national public legal information platform, in what ways do you envision collecting the information?
I know you say you want more than just the subreddit. Would, like, search queries be of use, or what sorts of forms of information are you looking to collect, and what mechanisms would we want to look at using to collect it? So, search queries eventually could be a useful thing. I mean, basically, if you've got it, we'll figure out a way to make it useful. But right now, if you have questions that are sort of similar to these Reddit questions, which is to say they're sort of a layperson's explanation of their legal conundrum in their own language, then that's the sort of stuff that we would love to have. And specifically if it's from a different type of population than the Reddit population, which skews younger and male. So we would love to get those, and they can be provided to us just in a spreadsheet, where each row is a bit of text, and then we'll take over the labeling. We can give them unique identifiers. We can come to some agreement where we wouldn't share that with anyone. So just reach out to either Margaret or me; our Twitter handles are down there on the screen right now for those of you listening on the call. You can also probably find us with a quick Google search. The more data the better. And the benefit is that by sharing that data with us, we can keep it private and only let the people you want label it, but the larger community can reap the benefit by allowing us to train models on that data. And then it can help remove those first couple of steps, which often can be intimidating. So we see multiple use cases for a classifier like this, and we also think that other people will see other use cases for the labeled data. That's one of the reasons we want to make it public: we want to see what folks come up with. But it could be something as easy as connecting people with resources or helping a nonprofit understand their population.
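One concrete version of the "help a nonprofit understand their population" use case: run an issue classifier over incoming messages and tally the results to spot unmet needs. Everything here (the keyword classifier, the emails) is a made-up stand-in for a real trained model:

```python
# Triage sketch: classify each incoming message and report what
# percentage of intake falls under each issue area.
from collections import Counter

def issue_report(emails, classify):
    counts = Counter(classify(e) for e in emails)
    total = len(emails)
    return {issue: round(100 * n / total) for issue, n in counts.items()}

# Hypothetical keyword classifier, just to make the report runnable;
# in practice this would be a trained issue-spotting model.
def classify(text):
    return "housing" if "landlord" in text or "evict" in text else "other"

emails = ["my landlord changed the locks", "question about my will",
          "being evicted next week", "speeding ticket", "landlord issue"]
print(issue_report(emails, classify))   # {'housing': 60, 'other': 40}
```

A report like this is exactly the "20% of our emails are an issue we don't cover; maybe we should staff up" scenario described next.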
So imagine you get a bunch of emails, and you don't go through and code them all. You could run this classifier, and you'd say, oh, well, 20% of our emails coming in are about this issue. We don't cover that issue right now. Maybe that's something we should staff up for. And yeah, so please share; the more the merrier. Yeah. Sounds like a conversation we should have. Yeah, especially if you have live chat conversations, form entries; search entries would be great, but anything where there's more substance, more like a paragraph or multiple sentences, will definitely be a better fit with our current data sets. But really, anything that you think has people expressing their justice needs or their justice stories, yeah, we'd be happy to talk to you more. And something that we didn't mention, which is implicit in what we talked about: we explicitly said we're trying to get at how people actually ask questions, but you can imagine someone might have an issue where they don't use the quote-unquote right term of art. And so what this allows us to do is hopefully get at how people are actually asking questions, even if they aren't using legalese. Great, sounds fantastic. If anyone has any questions, please feel free to jump in at this point. Not seeing any hands up at the moment. Hopefully everyone is glued to their other screen playing the game. I would imagine so. I wanted to start playing. All right, great. Well, normally I would sort of hang out and wait for questions, but I know that David and Glenn both have some hard stops a little earlier than our 90 minutes. So in order to make sure that Glenn has maximum time to present, I'm going to go ahead and move on to Glenn. But if anyone has questions they'd like to follow up on with David and Margaret, please feel free to put those in the question box, and we will be happy to follow up.
Even if we need to do it offline, we'll be happy to do that. So thank you both. That was fantastic. Thanks, guys. All right, great. So now I'm going to hand the presenter role to Glenn Rawdon, who's going to talk to us about the Legal Navigator project. Thanks, Mike. Is my screen up now? Yes. Great. Well, I wanted to tell everyone: we actually have a name for what we've been calling our litigant portal, and it's Legal Navigator. And so this is very exciting, and it's also very descriptive, because the idea behind the project is to have something that helps people work their way through their problems. And I say problems instead of legal problems because, just like Margaret and David were talking about, we want to use this in such a way that we can help people determine whether or not they have a legal problem. And this is a very broad project, with the partnering of the Legal Services Corporation, Microsoft, Pro Bono Net, the Pew Charitable Trusts, Accenture, the Legal Aid Society of Hawaii, and the state of Alaska. It's a really good illustration of a public-private partnership that is working to do this. This is something that came out of LSC's Technology Summit back in 2012 and '13, and Microsoft, when they heard about the project, decided to step forward and put development work into it, and they've been working on it now for the last couple of years. So this is something that is very, very exciting. Now, there are more partnerships than just Microsoft, though. This gives you kind of an illustration. Mike says we only have 90 minutes, so I'll only spend about 10 minutes on each one of these points. No, I'm just kidding. But this has been a very broad project. You can see here some of the places: Margaret and David mentioned Reddit; we worked with them to get some initial data for their machine learning. Stanford's been involved. We've been working with Greg Bloom on Open Referral.
The National Center for State Courts has stood up an OASIS group for us to develop some standards, because one of the things that we want to do is build a platform that other people can use, that other people can exchange data with. We can talk more about that later, but it's really very important that we do this. And like I said, you can see that we've got the National Subject Matter Index, schema.org, and LegalZoom, Avvo, Rocket Lawyer. We want to build a platform such that, if they decide to come in with us, we can reach out to actual commercial providers as well and provide a platform that can be used. Now, we're going to build it, deploy it, and grow it. I'm going to talk to you a little bit more about that, but first let's talk about the mission statement: connecting people to local, legal, and human resources using AI and machine learning principles that will learn and dynamically evolve over time to meet the needs of the people we serve. This is why David and Margaret's presentation earlier was so important, because you can see from what they're doing how the system will get smarter the more people use it. We are using Microsoft's LUIS engine, and the engineers who built this have built it in such a way that each one of the modular parts of the portal has an API that other people can use. So even if you didn't want to use the portal that we're building with Microsoft, you would be able to tie in through that API and use LUIS, and take advantage of the learning that is taking place through the users and through the work that Margaret and David are doing with Learned Hands. So what's very important is that we build these systems with standards involved, so that everybody can use them; we don't want people to all have to recreate the wheel on these types of things. So what's the system gonna look like?
Well, let's envision the benefits that we're gonna have. We're gonna have one platform, if you will, a single point of access. What we mean is, there will be someplace in Hawaii and someplace in Alaska that people can come to and actually ask a question if they want to. They won't have to worry: should I go to the court website? Should I go to the legal aid website? Should I go to the bar's website? They don't have to worry about where they're going to go; there's one point of access that people can come to. And it's going to be designed in such a way that it can be accessed from all different types of devices. One of the things that we learned coming out of the summit was the real need to do a mobile-first type of design for everything that we build. That's something that's very important, and we've worked with Microsoft and Phil Swoop on a design that will work very well across platforms. We also want to base this in the cloud, because we want something that can grow. We don't see this as ending with Hawaii and Alaska. We want to have something that other jurisdictions can come on and use, and by our maintaining the data server and the cloud platform, there's a much lower lift to onboard other jurisdictions. We also want to have curated experiences. We've got Google, we've got Bing, we've got lots of search engines now, but we want something that's a little more user-friendly. We want it to not just give them a bunch of results, but also give them guided checklists that can help them through. And so we want to give them process assistance, so that they have step-by-step information on the court and non-court solutions. This is something we think is very important: like I said, not just a list of resources, but actually giving them steps, so that if they've got a particular legal problem, these are the steps they need to take to address that legal problem, and these are the resources they can use to actually complete those steps.
We also want it to integrate with existing providers. One thing that we should be very clear on is that Legal Navigator is not going to be the solution; it is going to be the path to get to the solution, because there are so many resources already in the different states. This isn't something where we're recreating those resources. We are putting some information on there that will help with the curated experiences, but we're actually going to be depending on the existing providers in each state. It's not gonna replace the statewide websites. It's not gonna replace any of the automated forms. It's not gonna replace the courts' websites. It's going to help people find the resources that are most able to help them with their legal issues. And finally, as we've been talking about, it's gonna run on artificial intelligence. I'm gonna talk a little bit more about that here. Artificial intelligence is really growing right now, and one of the reasons is the vast data sets that we've got available to us. This is where the work of Learned Hands comes in: getting more and more data. The more that gets in there, the more accurate the results are going to be. We've also got cloud computing, which has come together so that we've got the ability to aggregate this all in one place; it's not segmented across a lot of different individual servers anymore. And then the algorithms, the code that Microsoft and others are able to build to analyze this data, are just getting better and better. There's kind of a learning curve with this, if you look at how things take off. This slide is tracking the progress of some of the big technology developments over the years, going from the wheel, to the printing press, to the steam engine, all the way up to AI.
And we really feel that right now AI is on the cusp of really taking off and being able to drive this engine of innovation that we want for this particular platform. And it's really very important that we do this. Now, technology is not all that's involved in this project. We also need to look to the end user. We want to have inclusive design. And because of that, we went through an extensive process where we brought users together in Alaska and Hawaii, and we actually sat down with them and asked them what they needed. I know this is something that Margaret's been telling us to do for years and years, and I hope, Margaret, you realize that some people are actually listening to you. So we brought the users, the actual end users, the people with the legal problems; we brought the providers, the people who actually have solutions that we want to match people up with; and also community navigators, because there are many places where, instead of going to the legal aid organization or going to the court, people actually go and talk to local trusted officials. For example, in Alaska, we had a gentleman there who worked with the tribe in one very small village. There was no legal aid provider, there was no court in that village, but there was a tribal official, and people come to him for things ranging from Social Security, to public benefits, to divorces, to guardianship, because they have no other government official to go to. So we wanted to bring all of those groups together, to talk to them and come up with a user-centered design and testing. Now, these workshops were very hands-on; this is an example of the one that we did in Alaska. You can see it was all storyboarded out, very low-technology, and you can see the blue dots, the yellow dots, the red dots. After all the storyboarding was up, we asked people to actually pick which design elements they thought were the most important.
We did the same thing in Hawaii. Again, the same three groups of users, the end users, the providers and the community navigators, and we asked them what it was they needed to serve each of their constituencies or themselves. And here's an example of the priorities that we came up with. I need information I can understand, in my own language, that's respectful of my culture. I need to connect with the right people in my community. I need to know where to start and get step-by-step instructions to help me along the way. I need accurate and trustworthy information. I need a secure way to store my personal information. Now, you can see this centers around information. This is what people are looking for, but they want it to be trusted. And that's one thing that we really hope for the navigator project, that community navigators can feel that this is a source that they can trust. And I got to attend both of these workshops and I was struck by how similar the needs in both communities were, both in Hawaii and Alaska, because you think of them as very dissimilar, you know, an island culture and then this vast arctic wasteland. I'm sorry, I don't mean to insult Alaska by that, but they're very different climates. But the needs of the people were very, very similar. I think this is very important. And so once we came up with their needs, we started working with them on what the system would look like. You know, what do we want? How are we gonna do it? For example, they'd like it to be able to take a photo and tell them what the document is. You know, that's something very important, and we found that in both areas. So that's a feature that we hope to integrate into the platform. And so after all of this was done, working with the group called Phil Swook, we came up with this very, very simple design that will be able to be used generically in different places. But we want something that is very simple and easy for people to use.
That was something that was very important. And that was part of our design principle that we wanted to do is to make sure that we can do that. And we want them to be able to go in and to go to the search screen and type it in, or we want them to be able to go in and look under different topics. Because this is something that was important too, was that different people have different ways of looking for information. And we didn't want to try to build a platform that was just one size fits all. If you're a search type of person, you can see up there on the right, you've got the ability to search. But if you are looking for topics and resources, you can also do that as well. You can go to the guided assistant if you want to type in what your problem is and let the guided assistant try to find you. So you've got basically three ways to go into the system to get to the resources that you need. And so this is something we think that's very, very important. And like I said, our plan is to start with a soft launch of this in Hawaii and Alaska to work on it, to see what problems we might identify, to get real users using it and see how the results come up, to work with learned hands and get more and more information. But then our plan is to make this available to other states to onboard and to be able to use in their jurisdictions. And like I said, I want everybody to remember, even if you've got your own platform that you want to use, even if you've got your own portal that you want to use, there will still be components of this with our API design so that you can come in and use those. Because again, that's very important. Now, I want to call you back though to the OASIS standards that I mentioned. What we're working with is the ability to exchange information between the portal and the providers because initially, this is basically going to be like a cold handoff. 
We'll be able to show you the providers and give you the information to get to them, but we're not passing any information to them. But we've got a TIG grant where we'll be working with both Hawaii and Alaska and their case management systems so that we will be able to exchange data. We're also working with the courts in both jurisdictions. So hopefully we'll be able to exchange information with the courts so that we can send people directly to the courts, but then hear back from the courts whether or not they actually resolved their issue, because we want the system's referrals to get smarter and smarter. And just like the system needs data to identify the legal problem, it needs data to see if we're making good referrals, because we spend lots of time in our community making referrals where people bounce around from one resource to another. And that's something that we really want to avoid here. So that's why I would encourage you to look at the work that the National Center is doing with these OASIS standards for data exchanges, so that we will be able to use those to make these warm referrals, and then also look at things like the Open Referral standards, so that we'll be able to have a better way of finding out all of the providers that are in the community. So this is something I think is very important. I think I'm just about at my time here. So if you want to keep up on how we're doing, we have a blog going, simplifyinglegalhealth.org. If you've got any questions directly, you can get in touch with me. I'll be happy to answer them if I can, or to get you to somebody that can. Again, this is a really exciting time for using machine learning and artificial intelligence. And so now we've got time for some questions. Great, thanks, Glenn. Anyone have any questions for Glenn? I know there was a lot of landscape to ask you to cover in that amount of time, so. So one of the questions. It's usually a 90-minute presentation, but.
One of the questions is which Microsoft engine is being used for this? Or what's the engine that it's built on? They're using lots of different components. And I'm sorry, Jonathan Pyle just told me all this earlier this week. He's been doing some review of it. Like I said, the artificial intelligence is the LUIS engine. I know that. But what the actual code base is, sorry, I'm not a programmer, so I don't honestly know what it was built on. I do know that they've made, like I said, the APIs. They've got a long list of available APIs so that you can connect into those different components. But the only actual engine, well, now I know that they're using Bing advanced search for part of it. One of the interesting things, too, is they're looking to do some of the trusted logins as well so that we won't be storing your information in our cloud. You will be able to connect with your Google account or your Hotmail account or some other type, you know, your Facebook account, and actually store your data and information there. And we're hoping to be able to do the trusted logins so that we don't have to retain your data, but you can still have your data available, too. Excellent. And so I put through a link to LUIS, the language understanding service, which I think is what Glenn is referring to, if folks want to check that out. And also a link to the blog that sort of tracks the progress of this project. Thanks, everybody. As Mike said, I've got to log off so I can go on to another call. But again, if you've got questions you think of later, just send me an email. Great, thanks very much, Glenn. I do see a couple of questions here, but I will forward them along and we will get answers offline. Okay, thanks, everybody. Great. Excellent, okay. Well, please, if there are additional questions for Glenn that I can pass along and we can follow up offline, that would be great. So we're at, let's, great.
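The warm-referral and provider-directory idea Glenn describes, built on Open Referral-style data, can be sketched very roughly. This is an illustration only: the record shape is a simplified stand-in, not the actual OASIS or Open Referral HSDS schema, and the organizations and codes are invented.

```python
from dataclasses import dataclass

# Hypothetical, simplified service records loosely inspired by Open
# Referral's Human Services Data Specification; real field names differ.
@dataclass
class Service:
    organization: str
    problem_codes: list
    counties: list

def find_providers(services, problem_code, county):
    """Return organizations that cover this problem type in this county."""
    return [s.organization for s in services
            if problem_code in s.problem_codes and county in s.counties]

# Invented example directory.
directory = [
    Service("Alaska Legal Services", ["housing", "benefits"], ["Anchorage"]),
    Service("Tribal Navigator Office", ["guardianship"], ["Bethel"]),
]
```

With a shared directory format like this, a portal can look up who actually covers an issue before handing a person off, rather than bouncing them between resources.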
Okay, so I don't see any other questions, so I'm gonna go ahead and pass presenter to, I believe we're passing to Jack, who's gonna talk about the Guide Clearly project that he's been working on. Sorry, that they've been working on. So here we go, gonna pass off to Jack. Sorry, it's a long list. Here we go. Jack, does that work? I'm not seeing the prompt, but... Okay, let me try again. Oh, here we go. Sorry, my pop-up was being hidden. There you go. Yep, I've got it now. All right. All right, have we got my slide deck up? Yeah, thank you. All right, brilliant. So hi everyone, I am Jack Haycock. I work at Pine Tree Legal Assistance in Maine as the client-focused technology innovator, which is a really fancy way of saying that mostly I run our websites. I'm a lawyer and a technologist and an avid experimenter with all kinds of different website tools. So we've always had a bit of a DIY attitude at PTLA when it comes to our websites and their use of technology. And so in keeping with that, this is gonna be a little bit more of a crash course in DIY future tech. So the future of legal aid technology can feel really overwhelming. There's never any shortage of jargon or buzzwords or next big things coming along. And so whether these things feel kind of far off and out of reach for you and your own use of technology, or if you feel like they're closing in on you, I hope I can give you at least a little bit of peace of mind and at least the kind of basic building blocks for how you can also build meaningful interactive and conversational tools on your website, and that you can do it for free and also without needing to know how to code. So what do you need in your basic toolkit here? A website, first of all. So at PTLA, we use the open source platform Drupal, but what I'm gonna demo today doesn't require Drupal. You can use it on any website; Drupal is just what I have the most experience with. You will also need a free account with Dialogflow and/or Guide Clearly.
And these are the two tools; I'm mostly gonna talk about Guide Clearly today, but I also do wanna touch on Dialogflow. And you could also use a Facebook account or other social media accounts as a platform for a chat bot or kind of a customer service bot, but I'll touch on that a little bit later. So Guide Clearly is a tool that was created by Urban Insight, who are the wonderful folks behind the DLAW platform, the Drupal build specifically for legal aid. It's free to use up to 10,000 sessions per month. And we get a lot of traffic on our website at ptla.org, and that has never been a problem for us. We haven't gone over 10,000 sessions, and we have a few of these guides on our website on various topics. So that's always been plenty for us to be able to use it without having to pay monthly and add that into the budget. So Guide Clearly allows you to create really simple screener tools using branching logic, so it's all kind of expert system-based, but there's no coding needed. So you build the screener on the Guide Clearly site, and then you can embed it on your site in a ready-made iframe, and people will be able to use it pretty seamlessly. So I'm going to give a little bit of a demo here. So I'm going to do this in a little bit of reverse order. So we have here a Guide Clearly guide, kind of live and in the wild on our website. So from a user perspective, it is very much like a familiar and simple text message style interface. Users only have to look at and think about one question at a time, and it's also easy to go back if you made a mistake. You can change your answers. And what this does is a user can go through this screener and, by answering a few simple questions, I think the maximum on this one is five, we can then direct them to a resource, either a resource on our website or a referral out to another resource that would be helpful for them. So I'm just going to take you through this one quickly here.
And this will take folks directly to the application for benefit. So this is a quick screener that I put together in response to a change in the regulations allowing for more families to be eligible for TANF benefits and not many people knew about it. So we put together a really quick screener, was able to do it within a couple of days working with partners at other agencies and now it is here on our website. But one of the other really neat things about Guide Clearly is that all of these guides are shareable. So you can actually copy this guide if you see one that you like to your own Guide Clearly account and work editing, modifying it to fit your program and your needs. And so that's actually something that we've done with some of the other legal aid providers here in Maine. We've actually shared some of these guides and now other providers are using them which makes me really happy to be able to share this. So then moving behind the scenes, Guide Clearly is, along with Read Clearly and Write Clearly, some of the tools made by Urban Insight, as I mentioned before. So here I'm logged into my free organizational account and I can look at my dashboard to see where all of my guides are, their status. And so looking at the one that we just saw on the website, this is the actual editor where you can make and edit your guides and it's as simple as editing, just editing text, adding in branching logic options and saving or not saving. But this also gives you some tools to kind of look at the bigger picture of your guide so you can both see what it will look like on your website when you embed it there. But you can also see an outline of the logic that is behind this. So even this relatively simple guide that has, I think a maximum of five questions that a user would see, this is all the branching logic behind it. So it's deceptively simple on the front end, which is good, but you can actually make incredibly complex branching tools using this. 
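The branching logic behind a screener like this can be sketched as a small graph of question nodes, each answer pointing to the next node or to a final resource. This is only an illustration of the concept; it is not Guide Clearly's actual data model, and the questions, node names, and resources are made up.

```python
# Each node is either a question with per-answer next steps, or a
# terminal resource. All content here is hypothetical.
guide = {
    "start": {"question": "Do you live in Maine?",
              "answers": {"yes": "income", "no": "out_of_state"}},
    "income": {"question": "Is your household income below the limit?",
               "answers": {"yes": "apply", "no": "other_help"}},
    "out_of_state": {"resource": "Referral to your state's legal aid site"},
    "apply": {"resource": "Link to the TANF application"},
    "other_help": {"resource": "Link to general self-help materials"},
}

def run_guide(guide, answers, node="start"):
    """Walk the branching logic with a sequence of answers; return the resource."""
    for answer in answers:
        node = guide[node]["answers"][answer]
    return guide[node]["resource"]
```

Even a guide with hundreds of unique paths is just a bigger version of this node table, which is why tools like this can stay no-code: authors only ever edit one question and its answer branches at a time.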
And then when you are ready to put it on your website, it's just as simple as using these pre-created iframes, just in the HTML, on the certain page that you want to have this guide show up in. So all in all, it's really easy. It's user friendly for both the ultimate end user and for those of us who are working to put together these screeners. So just to kind of give you an idea of the depth you can really get into with these: the first Guide Clearly screener that I ever made was a really, really complicated guide about some changes that were made in Maine to Section 17 eligibility, which is a program here under MaineCare that provides in-home services for certain people with certain mental health diagnoses. And so those changes were really confusing, and everyone was very much legitimately in a panic over it. So we put together a screener, a paper screener, for caseworkers and medical professionals to be able to help people navigate these new requirements. And I translated that into a Guide Clearly guide, which when I was finished had a total of 124 questions with 216 unique paths, and that took me about a week to do, and that was the first guide that I made. So even doing something that in depth, if you're familiar with branching logic and using it, you can use something like Guide Clearly very easily. And so I already mentioned that these are shareable, so they're good for replicating these kinds of screeners across organizations or within an organization and across websites and platforms. And it's just been a really, really helpful tool for me, especially when I've been confronted with a technically complex process and a time crunch. This has been a really good way to get the information out there. And one thing that I will mention about Guide Clearly, which is kind of a footnote here, is that it's a wonderful tool, but the technology that it's built on is not compatible with assistive technology like screen readers.
So even though a lot of the logic is very complex, because of that accessibility issue, I always make sure, and I think if you put together guides, you would also want to make sure that you explain the logic and the process behind the screener in a format that is accessible for folks who may not be able to use the screener because of their need for assistive technology. So quickly, the other thing that I wanted to touch on, which apologies, this is going to be more of a tell than a show, unfortunately. But the reason for that is both time and that the Dialogflow console, ironically, is really not playing well with PTLA's network this week. So I wish I could show you a live demo of this, and I promise you it does work. But you're just going to have to bear with my talking for a bit. So Dialogflow really quickly is a platform for building conversational interfaces, chatbots. It's built using Google's machine learning and speech-to-text technologies, and it's really versatile. You could do quite a few different things with it. My kind of first introduction to Dialogflow was building a simple contact us bot, and this was to solve a problem that we have been having where we have a pretty active Facebook account and a lot of people message us looking for help, but we just don't have the resources to respond in a tailored and individualized way to those people, and so we just have a standard response message directing people to our contact us page. So I built a chatbot that would actually interact with the user, ask them which county they lived in, and based on that would give them contact information and hours for their local office. 
And it's also the kind of thing where you could build in, and I did in this case, kind of a fail-safe safety mechanism. Recently we've started serving a lot more people who are survivors of domestic violence and sexual assault, and so if someone mentions something in their message to our Facebook account that indicates they might be in a dangerous situation, we provide them a kind of safety, quickly-exit-this-page outlet and also references to the local DV organization. So it's just a very, very simple, limited conversational interface, but it's a lot more personable and hopefully a lot more useful to the people who are messaging us than our standard, here's-a-link-to-our-contact-page response. And so I managed to put this together with no coding, but a lot of trial and error. It's a fairly user-friendly tool. The learning curve is a little bit steeper than Guide Clearly, I would say, but there is a lot of kind of tutorial and video documentation available. And now to the really, really exciting part. And that is that Dialogflow now goes beyond just that Facebook Messenger integration that I was really excited about. And this is really thanks to the work of Gwen Daniels at Illinois Legal Aid Online and Brian Dyer Stewart of BDSWorks, who's a Drupal developer that a lot of us work with. So at a virtual hackathon this May, which was generously funded by LSC under TIG 17-0-2-0, they worked on a team, and I worked with them, to put together a Drupal module for Drupal 7 and Drupal 8 that allows Dialogflow chatbots to not only be embedded on a Drupal site, that part we could already do, but to actually use the data from a Drupal site to respond to requests for information in Dialogflow. So basically, fulfilling intents in the Dialogflow bot using data from a Drupal site. For instance, providing an appropriate informational article in response to a user question.
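Intent fulfillment of the kind Jack describes, a bot answering from site data, usually works through a webhook the bot calls when an intent is detected. The sketch below follows the general shape of a Dialogflow ES v2 webhook request and response as I understand it (a `queryResult` with an intent name and parameters in, a `fulfillmentText` out), but the intent name, parameter, and office data are all invented for illustration, not taken from the actual PTLA bot or Drupal module.

```python
# Invented stand-in for content that would really come from the website.
OFFICE_INFO = {
    "Cumberland": "Portland office, Mon-Fri 9-5, (207) 555-0100",
    "Penobscot": "Bangor office, Mon-Fri 9-4, (207) 555-0200",
}

def handle_webhook(request_json):
    """Fulfill a detected intent using local data; returns the response body."""
    query = request_json["queryResult"]
    intent = query["intent"]["displayName"]
    if intent == "find_local_office":  # hypothetical intent name
        county = query["parameters"].get("county", "")
        text = OFFICE_INFO.get(
            county, "Please see our contact page for a full office list.")
    else:
        text = "Sorry, I can't help with that yet."
    return {"fulfillmentText": text}
```

In a real deployment this function would sit behind an HTTPS endpoint registered as the agent's fulfillment URL, and the lookup table would be replaced by a query against the site's content.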
So this is still very much in the early stages, but I'm really happy that in the coming years I'll actually be able to work under another TIG to enhance our Drupal-based triage tool using the natural language processing power of Dialogflow. So essentially doing, on very much a micro scale, what Margaret and David and Glenn have all touched on in terms of harnessing the power of natural language processing, using input directly from our users on how they're describing the problems that they're having, to match them up with the best resources we have for them. And this is kind of in addition to, and in some cases instead of, having users sort down through the kind of hard-coded logic tree that is our triage system right now. And to do just a quick plug, there's going to be another hackathon before the LSC ITC this January. So if this kind of hackathon work or the projects that we've done at these hackathons are something that you're interested in, you should definitely stay tuned for more information about registering for that, and we would all love to see you there. And you don't have to be a coder to participate in the hackathon. I'm not a coder, and I was probably on three different hackathon teams at the virtual hackathon in May. So we need expert users of all kinds, so we would love to see you there. And I'll also post the page for the hackathon presentations, which I'm showing on the slide right now, in the chat, but the GitHub where the code for that Drupal 7 and Drupal 8 Dialogflow connector module lives is in my slide deck here, too. All right, so if anyone has any questions, I'm happy to answer them. Great, thank you, Jack. That was a great preview of some really exciting things to come, and I definitely encourage people to check out Guide Clearly and Dialogflow, really powerful tools that I think will play a pretty big role going forward.
Really interesting to think about how this connects with Learned Hands, if Margaret's still on the line, particularly with the chat bot, and thinking about how we might connect up all of these different sources of information. Seems ripe for a follow-up call. Yes, definitely. Well, I can imagine, whether it's the data that can be gathered from the current chat bots, but I know, and I don't want to speak for David, he's always more cautious than I am about the power of the models and technology. But we can imagine a future where there is an application that draws from the models that come out of Learned Hands to then have people enter in free text, and then the model is able to at least make an educated guess about what issue might be going on based on their story. Yeah, I think it's a great model for us all as a community going forward, because we're all working on our individual innovations. I know I'm working on a TIG right now that will enhance the search capability of the LawHelp platform to add natural language search to it, but to have the ability to feed and then get feedback from a huge data set that would really radically improve the reliability of the results that users get would really change the whole landscape for them. So I think we definitely want to have follow-up conversations about how that will work. Great. Absolutely. Excellent. Well, I'm not seeing any hands up, but certainly, again, if there are questions for Jack, please feel free to put those in the question box, and if we have time at the end, we will come back and answer any unanswered questions. Since I'm not seeing any at the moment, and we are actually on schedule, I want to keep it that way. So I'm going to hand things off to Chris and Alice, who are going to talk about their smart intake projects. So, Chris, I'm going to hand off presenter to you. Okay, let's see here. Is this not the right screen? Let me switch over. I'm actually seeing your slides. Oh, you are. Great. Okay. Me too.
Me too. All right. Alice, take it away. All right. We'll go to the next slide, perhaps. Okay. So Chris and I are from the City Bar Justice Center, which is the legal services and pro bono affiliate of the New York City Bar Association. And among other things, we operate a civil legal telephone hotline that gets about 22 or 23,000 calls a year and is able to answer about 65% of them. A couple of years ago, in 2015, to broaden accessibility to the hotline, we developed and launched an online intake application to complement our legal hotline telephone intake. The online application provides, needless to say, 24-7 accessibility, and after a conflict check, it enters the initial eligibility data that the client has entered on the application form straight into our case management system, which is LegalServer, thus saving our staff time re-entering that information. So it's been really a very useful tool, and as the years have progressed, we get more and more of our queries online. The telephone certainly gets many, many more, but it's been helping people get through. So because of its success, I suppose, in 2016, we, the City Bar Justice Center, along with four other legal services providers in New York City, were asked to develop something that we call the New York City Consumer Help Finder. It's a unified online application, which is essentially a single point of access for New Yorkers with consumer issues who are looking for legal help. Chris, I guess you can go to the next slide, yeah. Got you. Yeah. Chris is going to talk about some of the technology behind these programs, but basically the New York City Consumer Help Finder is set up to route applications to the five legal services organizations involved in this project. We all handle consumer issues, but some of us do different things, do things that other organizations don't do. For instance, our office handles bankruptcies; not all of the other four do that.
So the program routes applications to one of the five organizations that actually handles the type of legal issue that the applicant is seeking help for. And it also limits the number of applications that are routed to each organization based on what that organization has said is its capacity each week. And our commitment is to have each applicant get a call back from a live legal services provider within two days. So as the slide says, it really is the idea to save consumers, save clients, from having to call multiple legal services offices around New York City, and there are many of them, which have different intake days and different times when they will take telephone calls. This way they can do it whenever they want, and they are guaranteed to actually get a call back from a live legal services provider. So basically, we found both of these online applications, our own online application and this Consumer Help Finder, to be really efficient tools for conducting intake. They're available 24-7. As I said before, that allows clients to access legal help at any time. They democratize the intake process, giving every qualified applicant an equal chance of getting through to an appropriate provider or resource. They aren't dependent on when a person calls or how much time that person has to spend in a queue waiting for a call to be answered. We all know that many people can't do that during the working hours of 9-5. Also, they aren't dependent on whether a caller calls on a good day when there's a very full staff and a sort of relatively normal flow of callers, or whether it's a bad day when a bunch of staff are out sick and it's the day that all the crazy callers decided to call at once and stay on the phone forever. It also sorts out over-income and out-of-catchment area cases right away, leaving more access open so that the meritorious cases that we can help will actually get through to a legal provider. And finally, both of these have... well, not both.
The New York City Consumer Help Finder has links to appropriate resources as well as to LawHelpNY. So some applicants actually find that a link to one of these resources, one of these pro se resources, is all that they need, and they end up not submitting an application, and that's fine too. They've gotten what they need. So finally, for us, both have been time savers, since the data entered by the applicants is automatically entered into our database if we accept the case. Basically, it automates all the stuff that can be automated, which is information that a lawyer then does not have to spend time collecting, such as income and type of issue. The next slide. And this is page one of the New York City Consumer Help Finder. You'll see that it's very basic. Our motto, or our mantra maybe, was less is more, on the theory that people are less likely to complete an application if it's lengthy. And we really asked very few questions, the idea being that whichever organization gets the case will then do a more complete intake for that client. That's what I have to say about those two. Chris? Yeah. More about the technology behind them, which I don't understand. I will do my best. I think, in a sort of similar vein, like Alice, I'm primarily a legal services attorney who has become involved with the technology side of things just sort of out of necessity, right? Because we're not always able to hire staff who are specifically trained, whether it's in coding or other sort of deep tech issues. So we really kind of have to make sure that we're speaking with the community and keeping our own skills up and really having an ongoing conversation about the things that are affecting us. And I think calls like this and organizations like LSNTAP are amazing for keeping us in communication. So we kind of build our skills together and we see what's out there.
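The up-front sorting Alice mentions, screening out over-income and out-of-catchment applicants before a lawyer ever touches the case, can be sketched as a simple pre-screen function. The poverty guideline figures, the 200% cutoff, and the catchment list below are invented for the example; they are not the programs' actual eligibility rules.

```python
# Hypothetical annual poverty guideline figures by household size.
POVERTY_GUIDELINE = {1: 14580, 2: 19720, 3: 24860}
# Hypothetical catchment area.
CATCHMENT = {"Bronx", "Brooklyn", "Manhattan", "Queens", "Staten Island"}

def prescreen(household_size, annual_income, borough, limit_pct=200):
    """Sort out over-income and out-of-catchment applicants up front."""
    if borough not in CATCHMENT:
        return "refer_out"
    pct_of_poverty = 100 * annual_income / POVERTY_GUIDELINE[household_size]
    return "eligible" if pct_of_poverty <= limit_pct else "over_income"
```

Running every application through a check like this is what keeps the intake pipeline open for the cases the organizations can actually take.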
So the Consumer Portal for us did grow organically out of this online intake module; we are on the LegalServer case management system. And online intake as a module was one of the options that LegalServer was developing right around, I think, a little bit after the time the City Bar Justice Center had transitioned over to LegalServer. So we were very interested in experimenting with this new means of access because, you know, there were so many opportunities for us to provide a heightened degree of access to case handlers through having an online intake available. We use a triage model for our intake. As Alice said, we don't do a deep dive into issue spotting or into learning very much about a person's case. We really want each person who approaches us through either online resource to get to a live case handler as soon as possible. So the Consumer Portal is also designed as a triage system where, as Alice was saying, our mantra is to get the case to the right provider sooner rather than later. And generally, when you think of a partnership across several different legal organizations, you can kind of imagine what those challenges might be. Some organizations have overlapping catchment areas for one type of legal problem, but not for others. Organizations might at different points in time be prioritizing a different subject with more or less weight based on things like funding requirements or whether they're looking for additional kinds of good test cases or sample cases to go forward in a particular class action or impact litigation. And there's also the issue of making sure that from the organizational end, we're kind of keeping abreast of what each organization's caseload is internally, and we're making sure that they're getting a fair number of cases from the intake source. So that is a good stream of intake for them.
They're not being overburdened, but they're also getting some return on the resources that they put into being part of the consortium of consumer providers. So we're built, as I said, on the LegalServer platform, and coming across these unique challenges for a communal intake system, LegalServer helped us conceptualize and then program out what we call the automated referral tool, or ART. This is a bit of artificial intelligence that looks at each intake applicant's county and problem type, but also looks at the other end at the capacity of a particular partner organization, at what problem areas they're currently covering and also how they're weighting those problem areas, to figure out which of the providers is the best match for a particular case. The case then is shunted over to their intake system for providers that use LegalServer or use an intake system that is compatible with LegalServer. We're five organizations right now, and I think three and a half of us use LegalServer. One of them is transitioning to LegalServer, but we have the capability to talk to other systems through an API that also are able to speak to other systems through the same language. And we even have one organization that doesn't use a case management system at all. So we had to come up with an interesting way and a secure way of passing information along to that organization so that they can then contact the referrals that they got. So the really beneficial side of ART is that it's a set of instructions, but it's not a set of instructions that requires heavy lifting to alter.
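The matching Chris describes, filtering by problem type, respecting each organization's stated capacity, and preferring higher-weighted subject areas, can be sketched as a toy routing function. This is an illustration only, not LegalServer's actual ART implementation; the organization names, weights, and caps are invented.

```python
# Hypothetical consortium configuration: each org lists the problem types
# it covers, a per-problem priority weight, a monthly cap, and a running count.
orgs = [
    {"name": "Org A", "problems": {"bankruptcy", "debt"},
     "weight": {"bankruptcy": 2, "debt": 1}, "cap": 30, "taken": 0},
    {"name": "Org B", "problems": {"debt"},
     "weight": {"debt": 3}, "cap": 15, "taken": 0},
]

def route(orgs, problem):
    """Pick the eligible org with remaining capacity and the highest weight."""
    candidates = [o for o in orgs
                  if problem in o["problems"] and o["taken"] < o["cap"]]
    if not candidates:
        return None  # no provider available; applicant gets a fallback referral
    best = max(candidates, key=lambda o: o["weight"].get(problem, 0))
    best["taken"] += 1
    return best["name"]
```

The "knobs and dials" Chris goes on to describe map directly onto this configuration: lowering a cap or bumping a weight changes where the next case lands without touching the routing code itself.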
So if there's one organization that is seeing an increased number of cases and they need to kind of tone down the online referrals that they're getting, or maybe someone is out sick, or there's a transition in the job, or someone has been shifted over to a different subject area, the organization can actually go into ART and say, we're programmed to take 30 cases per month now, but we're actually going to cut that in half to 15. If an organization, as we mentioned before, is looking for a particular kind of case, they can move, for example, bankruptcy up in their priority list so that they're more likely to get positive hits on bankruptcy cases and maybe less likely to get credit or harassment cases. So it's a system that is living, right? It's knobs and dials that can be adjusted on the fly, which is really important for us, especially as people who aren't programmers and who don't have to rely on a third party to go in and make those changes. We're able to do that among the organizations. So in the process of setting up this system for routing the cases, we also came to realize that there are some efficiencies that can be replicated for things like our pro bono matching of cases. When we have pro bono attorneys who come to the City Bar Justice Center or one of our member organizations who have a lot of expertise in one area, perhaps they are a long-time divorce attorney or a long-time trusts-and-estates attorney, we're starting to conceptualize a way of using ART, both internally and externally, but internally with our pro bono practice, to match up different pro bono volunteers with online cases. So this, again, is great because it's something that can be programmed on the fly and particularized to an attorney. On this next screen here, you can see some of the information that ART gathers and the way that it makes its decisions. So on this screen, we've got percentage-of-poverty requirements.
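[Editor's note: the "knobs and dials" adjustment described here, halving a monthly cap or bumping bankruptcy up the priority list, amounts to editing a small piece of per-organization configuration rather than reprogramming the tool. The sketch below is hypothetical; field names and the shape of the configuration are assumptions for illustration, not ART's actual data model.]

```python
# Hypothetical per-organization configuration for a routing tool like ART.
# Because the matcher reads this at referral time, staff can change the
# numbers themselves without waiting on a third-party developer.
org_config = {
    "monthly_cap": 30,
    # Higher weight = more likely to receive that case type.
    "priorities": {"bankruptcy": 1, "credit": 2, "harassment": 2},
}

def adjust(config, monthly_cap=None, **priority_changes):
    """Return an updated copy of the config; the original is left untouched."""
    updated = {
        "monthly_cap": config["monthly_cap"],
        "priorities": dict(config["priorities"]),
    }
    if monthly_cap is not None:
        updated["monthly_cap"] = monthly_cap
    updated["priorities"].update(priority_changes)
    return updated

# Someone is out sick: cut intake from 30 to 15 cases a month, and move
# bankruptcy to the top of the priority list while de-emphasizing credit.
reduced = adjust(org_config, monthly_cap=15, bankruptcy=3, credit=1)
```

Returning a copy rather than mutating in place mirrors the "living system" idea: each month's settings can be swapped in and out without losing the previous configuration.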
We've got the sort of standard financial requirements for general legal services, but we're also sorting out the types of cases by, you know, LSC funding code, legal problem code, and special legal problem code. And again, we're doing triage here. We're doing a very preliminary cut at getting the case to the right person, so we're not really getting in there and saying, well, what do your assets look like for a bankruptcy case? That's something that, for at least this phase of the project, we would prefer a case handler discussing directly with an applicant rather than an applicant kind of programming in their own information. And that lessens the burden on the applicant as well. They're able to kind of go through and take very standardized questions and answer them. If they have a problem, they're always directed, you know, to call our legal hotline, and they can, from step one, go through the process with us, so that if they can't answer a question, they don't have to. There's always that off-boarding option. But this helps us get through the routing system, and this is something that we can use to match with pro bono attorneys as well. Now, so if I can just jump in real quick. We had a couple of questions from Sarah Fresh from SRLN, and I've unmuted her. Sarah, if you want to jump in and ask a couple of questions, that'd be great. Thanks. Can you hear me? Yes, I can. Okay, great. So I have a couple of questions. This is fascinating. The first is, given that your intake process was opened up so that people, like you said, didn't have to be waiting on hold or call during work hours, my first question is, how did your intake numbers change? And then my second question is, given that you've got this triage system, which I think is fascinating, where you can send cases based on what capacity each organization has, what their focus is, how did outcomes change for folks?
Did you have an increased number of people who are represented or who had positive outcomes? Sure. So I can answer that. And then, Alice, you can supplement what I'm saying, since you are the numbers person. As far as both systems go, they're kind of, I think, still in the stage where we're not necessarily seeing broad shifting of numbers, certainly in our general online intake. You know, we started out getting maybe two or three intakes a week, and then we started getting two or three intakes per day. And now we're up to quite a few, I would say 10 or so additional intakes per day. What we're doing right now is we are shuffling those intakes in with people's assignments on the phone. So the impact isn't, I don't think, any great number of additional cases that we're taking. And we can also dial those back with certain controls: not having the button appear on the website, not having the option available, and also directing people to call hotline intake. And we're contemplating some other controls to slip in there now, because the general online intake is reaching a volume where we might want to think about telling people there's going to be, rather than a two-business-day wait, maybe a three-business-day wait for periods of heavy consumption when there are a lot of online intakes waiting in the hopper. One of the reasons why we picked the consumer area to partner with other organizations is because there's not an overwhelming burden of consumer cases out there right now in New York City, particularly for the previous couple of years. The consumer filings have actually been trending downwards, and I think we're on the upswing again. But there, the issue is a little bit reversed, so we don't have good data yet, which is we're not seeing as many consumer intakes as we'd like. What we did was what we call a soft launch, a kind of gradual rollout of giving out the website, the NYC Help Finder website.
First, we kind of did our own press releases, and then we told elected officials about it, and now we're working with the courts to publicize that even more. But we haven't reached the point where we're getting, I think, a significant impact of numbers that wouldn't come in through other means of intake. So that was your first question, and that was a very long answer, wasn't it? Yeah. I was going to say, just in terms of the numbers on our hotlines, I mean, I think Chris is right. They've gone up a bit as a result of this. We're getting many, many more online intakes. But as Chris said, because our staff are answering those as well as the calls on the hotline, the numbers aren't changing drastically. The good news is that because of the way the system is set up, we're planning on being able to use volunteer lawyers, who maybe even sit in their own offices, to answer some of these online intake applications. And that will certainly be a real benefit for us. Yeah. One of the catalysts for us to start thinking about using ART to route cases to particular pro bono volunteers was that we were asked by a firm who employs attorneys who are members of the New York Bar, but aren't necessarily housed in New York. They have an office in another state somewhere where they're doing commercial work, and they asked if they could be of use to us somehow, if they could volunteer with us somehow. And it's not necessarily feasible to just put them on the live hotline, whereas it is a potential source of referrals to have them become members of this kind of routing system where they would get an online case. They would be able to pick the subject areas they're experienced in or comfortable with answering. They would get those cases, and then they would be able to have the back and forth with our hotline attorneys about drafting the right kind of answer. And can you repeat your second question? I'm afraid that my blabbermouth has resulted in my amnesia. Sure. I went back on mute.
I have a noisy dog in the background. My second question was just about, and maybe it's too early to tell, I guess, whether you're going to measure positive or negative outcomes. Right. If more cases are placed, if consumers are getting more cases settled because of the increased efficiency. And thank you both. I'll go ahead and go back on mute. Sure, absolutely. So in terms of outcomes, being able to track outcomes in an advice, assistance, and brief services practice, where, you know, we're generally not going to court with people but we're ghostwriting things for them, is something that we've been struggling with all along: having the follow-up to ask the right questions for outcomes. We do have other parts of the consumer legal access partnership that do direct representation on cases. But again, because we're not at a point where we think there's an actual increase in the overall case numbers that we're getting, I'm not sure that we can say with any confidence that this has, as of right now, resulted in a greater number of people being represented at the end of the day. But we're hopeful, and we're pushing in that direction, because the goal is to, you know, obviously get closer to that 100% access goal that is central for all of legal services. And I would say that the courts were very interested in us doing this and still are. They keep asking us what we're seeing, because they would prefer that people didn't come into court with no knowledge whatsoever. In fact, they'd like people to not come into court at all, if possible, in these cases. Right. And so I'll add two really quick points, because I see we're almost out of time. Another great benefit of online systems like this is the automatic collection of data that we can see in real time. You know, so this is representative of the testing cases that we entered into the system when we were building the consumer help finder.
But we'll be able to look at graphs like this to see, over the course of, you know, a day or a week or a month or a six-month period, are we getting a lot of bankruptcy cases? Do we need to kind of organically shift people who can work on bankruptcy over into those practices in order to take in more cases? Do we need to reach out to different community-based organizations to let them know that our resources are available in this area? So really, really helpful to have that as an automated part of the process as well. And then I just have our contact information up here, because I know we're running out of time and you might want to speak with us more about everything. The other point that I wanted to add: all of the systems that we talked about today, you know, are different components of something wonderful that can exist for our client population in the future. And the sort of plug-and-play aspect of them using APIs to communicate is something that we have to continue to have really, really detailed and in-depth and wonderful conversations about. Great. Thank you, Alice and Chris. That was fantastic. Thank you for fitting all of that into the time we had available. I'm going to take back presenter control just for a couple of closing comments. I just want to express my appreciation to all of our panelists today for their contributions to this webinar. I think this was a really fantastic presentation. If anyone had any questions they weren't able to ask during the presentation, please do feel free to follow up. I'm going to put my email address in the chat. I'm more than happy to follow up with presenters and follow up with you offline if need be. Thank you for attending today, all of our attendees. The next webinar in the LSNTAP schedule is IT Disaster Planning 101: Assessment of Vulnerability, and that will take place on Tuesday, November 13th, at 10 Pacific, 1 Eastern. And finally, Sarge, if you have anything you wanted to add, please feel free.
But thank you all for attending, and thanks very much to our panelists. Thank you for putting those presentations together. Great group of people. The recording should be up in the next few days here on our YouTube channel, which is linked at our homepage, LSNTAP.org. I really enjoyed the breadth of different projects that we're looking at here. Thank you guys so much for putting this together, and thank you to all the speakers. Thanks all. Bye-bye.