And with that, I would like to officially welcome you to today's webinar, From Investment to Impact: Recent Outcomes Evaluations of Legal Aid Tech Projects. I'm going to turn it over in just a little bit to Claudia Johnson, my colleague at Pro Bono Net, who will be moderating today's session. We also have Kessing and Brian Rowe joining us from LSNTAP to help me out with some of the webinar staffing work. And then we have Keith Porcaro from SimLab and his colleague, Valerie Olsen, and Tara Saylor joining us from Q2 Consulting. So with that, I am going to turn it over to my colleague Claudia to kick us off.

Hello, everyone. Thanks for joining us. We hope that you will walk out of this webinar invigorated to think of evaluations in a fun, helpful way. When we were thinking about what would be a good topic to discuss with the legal aid and access to justice community in general, evaluations came out on top as a priority. Evaluation is a well-established field of study, and it exists in many, many different worlds, while legal aid tech is a newer community. And when we use technology, we generally use technology that generates data. So what we wanted to do is give you a framework for evaluation that brings in some of the tools and approaches from other disciplines, and then talk about inserting that into legal aid tech evaluations in particular. Keith from SimLab will talk a little bit about how technology can make things more complicated, then we'll talk about how we incorporate that into tech legal aid and access to justice evaluations, and then about how we do evaluations in a way that is affordable, that is an investment, and that is fun and helpful, since we live in a resource-constrained world.

In thinking about that, you know, for a lot of people evaluation is just a chore, a task and a duty, something that has to be done but isn't necessarily fun; but you know that if you don't do it, the grass is just going to get longer. And for some of us it is a helpful thing to do because we kind of know what it's going to yield. So even if an evaluation is kind of like sailing a boat or playing fetch with your dog, even if it can be a lot of work and requires you to be really strategic in your approach, once you get familiar with it and start seeing the benefits of doing it, it can end up being a lot of fun. Keith is going to share with us the framework they use at SimLab for evaluations and the different types of evaluation methodologies out there. But I just wanted to say that there are routine monitoring activities that we're constantly doing in our systems and sites if we're running technology projects like LawHelp Interactive, a statewide website, or a chat feature on a website. Those are the routine activities where we're collecting data and using it to staff, manage, and support a broad community. And then there are evaluations, which are something more discrete. But even when we just look at evaluations, there are all kinds of different types of evaluations that we can do.
So what we want to do is develop the capacity and comfort to pick the right type of evaluation approach, so that it eventually becomes an investment we can use and recycle to help us figure out where to put the resources that we have, and then help us make the pitch, to internal stakeholders but also to our funders, on how a project or an approach or an innovation can become more helpful if our goal is to meet the mission of the organizations we work for or serve. There may be environmental evaluations, where you're looking at a bunch of complex systems, and there could be very specific types of evaluations to classify and group, but Keith and Valerie are going to go more into that, so I'm not going to say more. I'm very excited that SimLab is part of the presentation today. And without more, I will let Keith and Valerie introduce themselves and their group.

Great. Thank you so much. This is Valerie, and I'm just going to pull up my slides here. I'll quickly introduce myself: my name is Valerie, I'm a project director at SimLab, and I also help our CEO lead on our M&E, monitoring and evaluation, work.

And I'm Keith Porcaro. I run tech and development at SimLab and run part of our portfolio as well, including our work in DGLA.

So, just a little bit about who SimLab is. We're a nonprofit, and we use technology to help build societies that are equitable and inclusive. That means we focus on using inclusive technologies to reach the broadest number of people. "Inclusive technologies" is a term we sort of made up, so I've defined it here for you. As most of you working in legal aid will know, we focus on low-income and vulnerable populations, and they don't necessarily have access to high-end technology. So we really look at things like radio, SMS, interactive voice recordings, things like that: using existing tools that people are already familiar with, instead of trying to teach everyone new technologies and get them to incorporate those.

So without further ado, we'll go into what monitoring and evaluation is, which I know Claudia reviewed a little bit, and talk about the differences between monitoring and evaluation. Monitoring usually refers to an ongoing, periodic process where you collect certain kinds of data. It might include real-time data or feedback from your participants and clients, and it's usually used to inform programmatic decisions and to make changes throughout the project. Evaluation refers to something that's a little more in-depth. It usually happens either at a predetermined midterm point or at the end of a project or some kind of activity, and it looks at what happened over the course of the activity. It will give recommendations for designing and implementing those things later on, as well as look at what kind of impacts your activities had and the overall result.

All right, so why do we do that? Well, real quick, we have a question. Kil asks, what are interactive voice recordings? Sure. An interactive voice recording is basically a prerecorded voice message; it's a prerecorded phone system. The kind where you call in after your flight's been canceled and you're instructed to press one to do this and two to do that, that's an interactive voice recording. Yeah, of course. So why do we do M&E?
So here at SimLab, we use M&E as a management tool to drive change, mostly through the monitoring, to check that we are on track of where we're supposed to be and make changes when necessary; as an accountability tool, to make sure that our projects are in line with our organizational mission and that we're not doing any harm when we meant to be doing something good; and to provide lessons and learning. We really think this is key. Our learning isn't only used internally. For example, if we had a poor experience implementing SMS in a situation, we might look at why that was and share it within our organization, but also with other similar organizations, so that other people don't make the same mistakes we did, or so that they can use some of the successes we've had and implement those within their work as well.

On a more practical level, M&E can be used to inform future funding decisions. Claudia mentioned that a lot of people do evaluations because their funding depends on it, and that's also true within our field; it's probably the primary reason people evaluate. But it can also help you make a business case to future funders. If you can show that you were effective at something, or that a piece of technology works for something, that's a much better business case for other donors. We can use it to judge the performance of contractors or partners we've worked with, and whether or not we would work with them again in the future. We use it to gather evidence on whether a particular approach, or maybe a specific piece of technology, is useful for what we were trying to do with it, and then we look at how that technology impacts and applies to the wider programmatic goals. So if a project is trying to use technology to improve community members' relationships with police, the programmatic goal might not really look at that technology at all; it's focused on improved relations, and we'd like to know if the technology actually played a part in that, or if the program achieved that through other means.

Now, challenges to monitoring and evaluation. There are, of course, many challenges to M&E, and we feel that technology can sometimes add even more. Technology is an additional layer of complexity, and a lot of the environments we work in are already very complex. For example, we currently have a project looking at technology in post-conflict environments, and it's already complicated to look at post-conflict countries; then you're also looking at technology and how it plays a role amongst other issues like gender and youth, and things can become very, very large all of a sudden. Another thing is that a lot of technology projects involve new operational partnerships, so the staff at NGOs, ourselves, and maybe even a third party, if we're working with a particular technology tool, haven't previously worked together, and that can be complicated. And with that, all of those different partners have different aims and working styles and things that they'd like to measure.
So the SMS-based technology provider will want to look at how their tool specifically improves efficiency and ease of communication, while perhaps the NGO doesn't really care about that; what they're worried about is whether refugees were able to get their food aid on time. So there can be some differences between what people want to monitor and evaluate. And then of course, as in any field, there's always limited capacity and resources, both in funding and in staff time, and staff members may or may not have any experience doing M&E. So, lots of challenges.

So now I'm going to go into our framework. We developed this framework in response to the challenges I just discussed, and also because we saw that a lot of people were evaluating their work, but they weren't looking at how technology specifically played a role within their projects. We wanted to offer people a more robust way to actually look at the technology portion of that, but we also recognized that people have very limited time and resources. A lot of the organizations we work with do international and development work, and most of them use the OECD DAC criteria in their evaluation work. So we wanted to build our questions and processes into criteria that people were already using, and we thought that would make it a lot more likely that they might actually ask some of the questions we'd like them to.

What is the OECD DAC? Sure. The OECD is the Organisation for Economic Co-operation and Development; it's a collection of high-income countries, mostly in the Americas and Europe. And the DAC is its Development Assistance Committee, a funding body that funds and advances a lot of development work. I think their criteria ended up being widely adopted because they're a major donor, so everybody was required to use them, and then everyone just sort of adopted them so that everyone's using the same thing and you can compare across different projects. We're happy to follow up with more information on the OECD DAC if people are interested.

The Active Learning Network for Accountability and Performance, ALNAP, which is another international but smaller nonprofit, adapted the OECD DAC criteria to better fit complex humanitarian settings, and that's also the type of setting we work in a lot. So we looked at both of those sets of criteria and then added our own additional considerations that we thought were more applicable to inclusive technology. That's the history of how we came up with our criteria: we didn't just invent them, they were pre-existing, but we did adapt and adjust the questions that we would put underneath them.

So now I'll look briefly at each of the criteria. The first one is relevance, which is from the original DAC, and we keep the same definition: the extent to which the aid activity is suited to the priorities and policies of the target group, the recipients, and the donors. We've adjusted this to ask, was the technology choice appropriate for everyone involved?
So this would involve making sure that you look at your context ahead of time and really look at what pieces of technology people are already using and whether it's appropriate for them. If it's a largely illiterate population, you wouldn't want to implement something that relied on SMS, where people would have to try to decipher and read your text messages. In that case, you might instead look at radio or something like that. So that's relevance.

The next one is effectiveness: the measure of the extent to which a communication channel, or your technology tool, or whatever you're implementing, attains its objectives. For us, objectives are usually predefined. A lot of people will use a logframe or a theory of change or a variety of different M&E methods that they'll choose ahead of time for their project; they'll determine what the objectives of their intervention are going to be, and they'll also determine how they're going to measure them. Effectiveness just looks at how we measured up against those things. This might look at how the technology tool or platform performed. Were there a lot of bugs? Did you have to constantly update it? Was it necessary to translate it into other languages, and was it easily translated? If it was a digital process or channel, did it replace a non-digital process, and how did it compare with the previous process? Different things like that.

The next one is efficiency. When you're evaluating the efficiency of a program or project, you might consider: were the activities cost efficient? Did you achieve your objectives on time? How did it compare to alternatives? Did you use the least expensive technology, and would spending a little more on a different technology have made it a better choice, in terms of saving you time, or not having bugs and having it work most of the time, so that you would have met your objectives sooner? To what extent were aspects such as the cost of data considered? A lot of times people forget about the indirect costs of using a piece of technology. If you're using mobile phones, people still have to charge the phones, and they still have to pay for the text messages. So think about some of those indirect costs and how they were incorporated. Was there time spent providing user support? Did you have to train people on how to use your technology? Did people already know how to use their phones, or did you have to show them? If you were doing some kind of smartphone app, you would have to teach them how to use the app, how to download it, things like that. So basically, how efficient was the process, and how well was the technology adopted by people?

The next one is impact, and we have a lot of guidelines on this. Impact really looks at what happened as a result of your intervention or project, and this would usually be done at the end, in a final evaluation, rather than through your monitoring. It's looking at what real difference your activity made and how many people were affected; really looking at the concrete, overall impacts. It also includes whether there were any unintended consequences due to the introduction of or change in technology. Was there a group that was accidentally left out that you hadn't realized would be left out, or did it end up causing problems that were unanticipated?
And you would also look at how you mitigated those things from the beginning. For example, if you were working with vulnerable populations, did you offer a training on how they can protect themselves when using the technology, or did you make sure that women and men had equal access beforehand?

The next criterion, and we have two more, is sustainability: to what extent did the benefits of your program or project continue after your donor stopped funding it? Did you create a system that was going to last past your own involvement, or, the second you pulled out and left the community, did the project just dissolve into nothingness? Other things you might look at are how much financial or time contribution is required from the community members who would now be taking this over, and whether there was a comprehensive business model for the intervention after the funding period ended. For example, I've previously worked to set up community radio stations, and toward the end of the project, when we were starting to pull out, we made sure that the new radio hosts had tools: they knew that they could fund the radio through advertising, or that they could seek funding from community members, or have people pay to do a dedication, things like that. So make sure that there is some kind of financially sustainable model for after you will no longer be there. In terms of technology, that can also mean whether user support will continue after you're gone and how they will maintain the technology. If it requires a computer, are they able to maintain the hardware? Are they going to have to replace the computer? Can they repair it if they need to?

And then our last one is coherence, and this one is actually from ALNAP. They added it to the OECD DAC criteria, but we really liked it, so we also included it in ours. It looks at how your project fits together as a broader whole. In terms of technology: did it comply with existing legal and regulatory frameworks? Is there harmony with other information systems that it might work with or around? If you used multiple tools, how did those tools work together, and how did the information flows complement each other, from radio to SMS to IVR, where there are multiple channels people could use?

So the point here, and I'm going to talk a little bit more about how this applies to legal aid, jumping off of Claudia's point, is that this is really an opportunity to learn more about your organization, about your client base, about the community that you're working in. If you can build funding for monitoring and evaluation into some of your technology grant proposals, then really what you're doing is having somebody pay you to learn more about the work that you're doing and how well you're doing it. Generally we try to split the monitoring and evaluation work into a couple of different ways of thinking about it: one is what are the outcomes that a donor is going to look at, and the other is what are the practical outcomes. And as a quick interlude, before I get to some of the specific nitty-gritty lessons that you might be able to take back to your organization, I just wanted to point you to FeedTheAfricanism.org, which is a microsite that we set up about two weeks ago.
It's still a little bit in beta, but we're using it as a repository for information about a multi-country pilot that we participated in, analyzing how well different feedback mechanisms worked in maternal health clinics in about seven different countries. What you'll find is a mix of the very high-minded, donor-facing notes on how well each piece of technology worked in each specific context. But you'll also find what we call practitioner resources, which are the nitty-gritty practical details that might be left out of an evaluation report but are really important if you want to be able to replicate the thing. And the evaluation here, as Val was mentioning, doesn't just focus on how well a piece of technology actually supports the program; it also investigates whether this mechanism, using any piece of technology, is a good idea in the first place.

So there are three lessons that I really want to communicate that I think are really important when it comes to monitoring and evaluation of tech projects. The first is that technology can be a bit distortive, and that on its own it may not produce a complete picture of the world. This is my favorite example. The map on the left is a map of tweet frequency during and in the immediate aftermath of Hurricane Sandy. If you look at just this map on its own, a lot of the places that have high tweet frequency, such as the east side of Manhattan and some of the inner areas of Brooklyn, are ones that faced pretty severe flooding. So you might take from just this map that if we look at people's tweets, we can get an indication of where flooding is at its worst. The map on the right is where deaths actually happened during Superstorm Sandy. If you compare the two maps, you'll see that the places where people actually died, in Staten Island and around Jamaica and Brighton Beach, really didn't have much tweet activity at all. That says maybe two things. One is that people who are really in trouble might not be tweeting, and the other is that the populations who might be the most vulnerable, the poor and the elderly on Staten Island, for instance, aren't likely to have the technology tools that would provide these secondary indicators. So the point isn't just that technology on its own is distortive; it's that you need to think about getting as many sources of data as you can, both from inside your organization and outside it, in order to build a more complete picture for your monitoring and evaluation.

And I apologize: the bullet points seem to have run off the slide, but I'll talk a little bit more about them. The point is that you should be using as many data sources as possible in order to help build a bigger picture of the world. That can come from a couple of sources. One is that the technology tools you'll be using, especially if they're off the shelf, will produce their own data. They'll produce auditable logs, and you should be able to export them as XML files or as CSV files, comma-separated values, which you can import into Excel and process as spreadsheets. And if the tools you have don't do that, then you should start thinking about making that a requirement for the tools that you do use.
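To make that concrete, here is a minimal sketch of what working with such an export might look like in Python. This is an illustration, not something from the webinar: the file name and the column names ("timestamp", "channel") are assumptions, since every tool exports something different.

    # A minimal sketch: tallying interactions per day and per channel from an
    # exported CSV activity log, using only the Python standard library.
    import csv
    from collections import Counter
    from datetime import datetime

    daily_counts = Counter()
    channel_counts = Counter()

    with open("activity_log.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Count interactions per day and per channel (SMS, chat, IVR, ...)
            day = datetime.fromisoformat(row["timestamp"]).date()
            daily_counts[day] += 1
            channel_counts[row["channel"]] += 1

    for day in sorted(daily_counts):
        print(day, daily_counts[day])
    print("By channel:", dict(channel_counts))

The same tallies could just as easily be done with a pivot table in Excel; the point is simply that an exportable log gives you a baseline you can process however you like.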
What these exports can do is provide you a baseline level of activity, showing how people are interacting with the system, so that you have something to sense-check against. The second source is data that you already have. If you're an LSC grantee, you're recording time to the tenth of an hour, and that gives you a really in-depth sense of what your people are spending their time on. You also have email and phone records, and other qualitative data and opinions about how people actually feel about the technology tools, how clients feel about the tech tools. Being able to collect all that information together can be really important. And graduate students and interns, by the way, are really helpful for getting all of this information together and into some sort of manageable form.

And then the last thing is that you can use data that exists elsewhere in the world. By this I mean a couple of things. One is that if you are looking at, say, how technology impacts something like a case outcome, then you might be able to lean on court records in your community to see how individual case outcomes may be changing. Or you can look at total case filings in a particular area to see what the total market is for the legal issues that you think you're supporting, and then compare that with other communities to see if people are filing an unusually high or unusually low number of filings on a particular issue that you're interested in, and what you might be able to do about it. The other way you can use data that's out in the world is through other organizations, both legal aid and otherwise, who may be interacting with clients and communities that overlap with your client base; look at the ways they've used technology and the ways they've communicated. It might not be a statistical report; it might just be picking up the phone and asking how somebody has done it. And finally, I think there's an untapped opportunity, especially in legal aid, for working with academic researchers, who are always interested in finding data and studying the outcomes both of legal aid generally and of technology projects. While there are a few issues, particularly around confidentiality, we think there's an untapped opportunity to strip out some of the sensitive information and have the researchers feed back data that may be helpful for you.

And the last thing here, which is really important, is document everything; write it all down. We talk about it in a couple of different ways. One is documenting for replicability, and the idea here is that from the documentation you've produced, somebody should be able to recreate the project without any additional help. That includes setup, integration, and regular use, and it also includes what happens when the technology tool winds down and you need to get rid of it. This is common, I think, especially in small organizations and small legal aid organizations, where people leave and new people come on all the time.
So maybe the worst outcome, or a bad outcome, is when you've got a technology tool that really works well, but as people leave and new people come on, there's no good training to onboard them into the tool, and so its use fades out. Just as bad is when you've got a technology tool that doesn't work very well, but you've got no way to unpick it from the rest of your productivity suite, so you're stuck doing strange workarounds to make it work. The thing documentation tends to skip most often is errors. Are there common mistakes? Are there ways that people make mistakes in inputs? Are there weird practical workarounds, compromises, or kludges that you have to make in order to integrate the tool with your particular case management system, or any of those other unusual bits and bobs?

And the last part is to actually document for the public. Our documentation for our clients is all public on our site. We rely heavily on screenshots and animated GIFs; we use Skitch and LICEcap to create those. They're great because you can annotate them and blur out anything that might be sensitive. The problems that everybody in this community is trying to solve with tech aren't necessarily unique to your organization and aren't necessarily unique to legal aid. So there's an opportunity, I think, to continue to give back with this documentation so that others can build on the projects you've made, and the $40,000 you spent on a project may only cost the next person 10% as much.

And lastly, just a few last little bits. In terms of budget, you might expect to allocate between 10 and 20% of your project costs for evaluation, especially if you're hiring somebody. And if you're looking for an evaluator, we strongly, strongly recommend that you find an independent evaluator. We can help you find one, or, you know, we do them as well. But the person who's providing your technology definitely shouldn't be evaluating the job they just did. These things take time, and somebody within your organization may be used to the bubble of information about how things should work, so bringing in somebody who's independent and relatively new to how your organization works can reveal questions that you may not have thought to ask. And then the last thing is, again, you may be obligated to produce these reports and send them out, but if that's all you're producing the report for, then you're really just throwing money away. You've got this really great opportunity to use these evaluations to find out things not only about the technology project that you're doing, but about what's important and useful to your organization. What monitoring and evaluation really is, is a way to structure what every organization should be doing anyway, which is constantly evaluating how good a job they're doing and where they can do better. So we use M&E, and we advocate for M&E, to start building a culture of learning within an organization.
And then lastly, before we get to questions, we have a monitoring and evaluation framework, still something of a working draft, that goes through a lot of the philosophies from this presentation in more depth, along with the framework we use to build monitoring and evaluation projects with our clients. You can check it out at the link below, or you can get in touch with us. Thank you very much.

Okay. Thank you, Keith and Valerie. That was really, really helpful. I really encourage people on the call to check out the resources that they shared. I was struck by how well developed they are, particularly the coherence part; that's not something we have talked about before in legal aid tech. And also the reference, which I think Jillian posted in the notes here, to the OECD criteria, which are implemented very well in other fields like public health. I think we're going to hold questions for now and leave those to the end, because we want to hear the advice that Ms. Tara Saylor has. She recently did an evaluation of a project out of Oklahoma that used four different technology tools: LiveHelp live chat; a modified staging page for online forms on the LawHelp platform using a new tabbed approach; document assembly forms; and all of that overlaid in a new tool that we call Connect. Dr. Saylor took a very smart approach to doing this evaluation, which was overlaid on a complex legal issue, very complex law, using four technology tools, and did some really amazing work for that project. So without more, I'll let her introduce herself.

Thank you, Claudia. It's really great to be with you all today. I'm the senior researcher at Q2 Consulting, and Q2 is a full-service research and evaluation firm in Tulsa, Oklahoma. We have our PhDs do all of our substantial evaluation work, and I think that's an important part of looking for an evaluator, because you really want experts in research and research design to be guiding you, and they're likely to be very quick, so in the end it may save you money as well. At Q2, we stress that you don't need a lot of data; you need to collect the right data in order to have a successful evaluation. And at the end of your project, we really want you to understand how your project was successful and how it can be more successful in the future. The word "assess" originally comes from a Latin term that means to sit beside, and when we are doing an evaluation, we really think of that term as important. We want to sit beside our clients and really immerse ourselves in your project so that we really understand it and know exactly how it should be evaluated. So when you look for an evaluator, you want someone who's going to immerse themselves in your project, not just crunch some numbers for you at the end. And to build on SimLab's discussion, I think you want an evaluator that will both monitor and evaluate; you want both of those things when you look for an evaluator. So today I'll be covering three main topics. I'd like to start with what's called a logic model, which is really the foundation of any evaluation, and then I'll explain the logic model within the context of the evaluation Claudia just mentioned, which I recently completed with Legal Aid Services of Oklahoma.
So here is the logic model. It looks a little complicated, but it's really not, and I'll break it down for you piece by piece. A logic model can guide your evaluation, and you might be familiar with logic models already because many funders require them. I think this is a really good place to start before you even submit an application to a funder. Filling this out is a great exercise for project managers to articulate, on one simple page, all the specifics of a complicated project. It should require some careful reflection, but it's not a difficult exercise if you have a good idea of your project and what you want to accomplish. A good evaluator will be able to take your logic model and really understand your project and all its components, and it should be a really good starting point for thinking about what you want to measure and how you want to measure it. So I'm going to walk you through my most recent evaluation with Legal Aid of Oklahoma and explain this logic model using a real-world example, and then at the end we'll talk about how you can save some money while still having a high-quality evaluation.

We worked with Legal Aid Services of Oklahoma, which we call LASO, to evaluate their newest technological innovation, which connects pro bono attorneys to self-represented litigants. I often refer to self-represented litigants as SRLs. This project was generously funded by the Legal Services Corporation, and it's a new technology, so we needed to evaluate both the process of building the technology and the results of the technology. LASO decided to create this technology specifically for a new expungement program, so we also needed to evaluate whether expungement seekers were effectively able to apply for pro bono assistance and produce their pleadings using this new technology. So to recap, there were a few things going on in this evaluation: we had a new technological project and a new legal assistance project.

So I'm going to bring that logic model back so we can think together about how this project was constructed. Under inputs, which you can see at the top in the yellow column, we're talking about all the components that went into this project; the subheading is what we invest. For the LASO project, those were LASO staff time, pro bono attorney time, partner organizations' time, the budget provided by LSC, Oklahoma expungement law, which was certainly an input and changed throughout the project, and also the existing technological platform. Then under outputs, which is the green column, we're focused on who we reach, what we do, and what we create; by "we," I mean LASO. Your evaluator will really need to understand these things to know how to properly design your evaluation. For the LASO project, the reach was self-represented litigants, the interested public, community partners, court clerks, judges, attorneys, and non-specialized audiences. In terms of what we do, or what LASO does: they were creating a new technological platform and new resources for expungement seekers, developing educational materials, educating the public, hosting attorney training sessions, holding public events for self-represented litigants, nurturing community relationships, and sharing their work with partners, funders, and the public. And in terms of what they create, LASO was creating new technology, new resources, and opportunities for SRLs.
Evaluation reports were also being created, obviously, along with public briefings and community partnerships.

So now we'll move to the outcomes example; that's the pink column in your logic model. Outcomes is basically: what did you accomplish? What are you trying to accomplish in this project? We think about outcomes in terms of short, intermediate, and long-term results. You really want to think about incremental change for your project as well as the long-term changes that you hope to see. From an evaluator's standpoint, I'm probably not going to be able to measure your long-term goals; that would be a really big, expensive evaluation, something we call a longitudinal design, so that's not likely to happen. But it's still really important for you to identify your long-term goals so you can place your project in the context of your overall goals. In the short term, this group wanted to educate people, improve the existing technology, and establish and enhance community partnerships. Over the project period, they wanted self-represented litigants to use the technology, and they wanted to improve efficiencies for litigants, for the courts, and for their own offices. These are all things we can evaluate to some degree; for example, we could look at the web stats and see how many self-represented litigants were using the technology. The long-term goals of this project are really important to identify so you can place the project in a context that gives it meaning. Last but not least, the expungement project will help people with records reintegrate into society and have a chance at education, employment, and housing, and generally move on from a criminal record that's been served or addressed from a criminal standpoint. To measure these things would be really expensive; for example, if we had the time and money, we would want to measure employment rates for expungement seekers over time.

So now let's talk about the evaluation and design. Most evaluations share three components: who do you want to reach with your project, why are you doing this, and what are you creating? In terms of who we reach, an evaluator is going to want to know who's participating, and that will lead them to ask how they can sample, how they can best learn from the people who are participating. That brings us back to the logic model and all your inputs: that's not just expungement seekers, that's court clerks, that's pro bono attorneys, that's LASO staff. Your evaluator is going to want to think about all of the people being reached by your project. Next, what we do and create: did we do what we said we were going to do? Did the products meet our original goals, and why or why not? Not every goal is going to be met in every project; the important thing is that you learn about the goals that were not fulfilled and understand the context that made some of those goals impossible. And finally, why we did this: can we measure our success in short, intermediate, and, if possible, long-term goals? If you can answer these questions, your evaluator will have a solid understanding of how to create a really good research design for you.
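As a rough illustration, and not something from Tara's slides, the columns she describes can be written down in a simple structure like the sketch below; the entries paraphrase the LASO example above, and the format itself is just one convenient way to capture a logic model.

    # A minimal sketch of the logic model columns as a plain Python dictionary.
    logic_model = {
        "inputs": [          # what we invest
            "LASO staff time", "pro bono attorney time", "partner time",
            "LSC budget", "Oklahoma expungement law", "existing tech platform",
        ],
        "outputs": {         # who we reach, what we do, what we create
            "reach": ["SRLs", "interested public", "community partners",
                      "court clerks", "judges", "attorneys"],
            "activities": ["build platform", "create resources",
                           "train attorneys", "hold public events"],
            "products": ["new technology", "new resources for SRLs",
                         "evaluation reports", "public briefings"],
        },
        "outcomes": {        # what we accomplish
            "short_term": ["educate people", "improve existing technology",
                           "build partnerships"],
            "intermediate": ["SRLs use the technology",
                             "efficiencies for litigants, courts, and LASO"],
            "long_term": ["reintegration: education, employment, housing"],
        },
    }

    # For example, a quick check that each outcome tier has something measurable listed.
    for tier, items in logic_model["outcomes"].items():
        print(tier, "->", len(items), "outcome(s) listed")

Writing the model down in a structured form like this makes it easy to hand to an evaluator and to check that every outcome you list can be traced back to an input or output.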
Okay, so the last thing you want to do is rob the budget you need to complete your project just to fund your evaluation. Obviously, evaluators have expertise and they should be paid accordingly, but you really want to strike a balance between investing in your evaluation and not compromising the budget for your project. So I think there are some things that evaluators really should do for you, but there are also some things that you can do yourself so you can keep your evaluation budget in check. Ideally, a PhD evaluator will design your evaluation using the logic model that you provide. They will determine appropriate sampling plans, they will create your instruments, they can teach you how to properly collect data, and they can analyze your data. For example, let's look at determining your sampling plan. You don't need to collect data from every person who comes in contact with your project, but you want to try to reach a sample of each group. So for LASO's expungement project, we did surveys at events, we sat in on two small information sessions and spoke with self-represented litigants, we interviewed a LiveHelp student navigator who worked on the chat program, and we did a focus group with staff attorneys and pro bono attorneys. We did not reach every person, but we had a lot of different types of people represented in the evaluation. And then in terms of your instruments: instruments refer to things like your surveys or your interview protocols, and it's really important for an evaluator to create those for you. PhD evaluators are trained to develop instruments that will collect valid data for your project. Really, your conclusions are only as good as the data you collect, so instruments are really important. A good instrument will collect only what you need, so remember, more data is not better. In fact, less data is often better, so that you're not overwhelming people with super long surveys that likely can't all be analyzed anyway.

So there are some ways that you can reduce your evaluation budget. A lot of data can be collected by your organization rather than by your evaluator, you can record that data in a format that works for your evaluator, which will make things much easier, and you can write your own final report. So let me delve into each one of these for a moment. Collecting data is a really time-consuming thing to do, which can cost a lot of money if you have your evaluator do it. So you may want to take on the burden of passing out surveys at your own events, or you may even want to do interviews on your own if your evaluator writes your interview instrument. If you do this on your own, you want to be really mindful that you're keeping track of all the data. If you interview someone, you need to record the interview, transcribe it, and give it to your evaluator. It's really not going to do you any good to interview someone and then just give your evaluator some notes; they need to see the actual data, and they can also train you to conduct an interview in a less biased way. So this will give your evaluator something to work with, and it can also save you money. One of the big problems I run into with my clients, and I'm not including Legal Aid of Oklahoma here, is that they'll collect data but they won't put it in a format that I can really use, or there will be so much missing data that I can't really analyze it. So that's something to keep in mind when you're collecting your own data. You want to show your evaluator what you're collecting throughout the process so she can intervene and say things like, oh, this is fine, but I'm missing a lot of demographic data, so I really need you to encourage people to fill out the last page of the survey, things like that.
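Just to make the "sample of each group" idea concrete, here is a rough, hypothetical sketch, not from Tara's materials, of drawing a small random sample from each stakeholder group; the group names and sizes are invented for illustration, and a real sampling plan should come from your evaluator.

    # A minimal sketch of per-group sampling: draw a manageable random sample
    # from each stakeholder group rather than trying to survey everyone.
    import random

    participants = {
        "expungement_seekers": [f"seeker_{i}" for i in range(240)],
        "pro_bono_attorneys": [f"attorney_{i}" for i in range(35)],
        "court_clerks": [f"clerk_{i}" for i in range(12)],
    }

    sample_size = 10
    random.seed(42)  # fixed seed so the draw can be reproduced and documented

    for group, people in participants.items():
        chosen = random.sample(people, min(sample_size, len(people)))
        print(group, "->", chosen)

The design choice here is simply that every group your project touches is represented, even the small ones, which mirrors the mix of surveys, interviews, and focus groups described above.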
You just want a lot of open communication with your evaluator to make sure that what you're collecting is going to be useful. And then in terms of writing the final report: an evaluator is unbiased. Like SimLab mentioned, you want someone who's neutral, and because of that, I think external reports carry some weight that an internal report really can't. But at the same time, having an evaluator write your whole final report can cost a lot of money. So I recommend that you collaborate with your evaluator and give them the final say in the editing of the report, to lend some credibility to it. A large chunk of a final report is going to be background information, and honestly, you can often write that better than your evaluator because you're living these projects. So you can save certain sections for your evaluator while you do some of the background sections. Your evaluator should definitely write your data analysis sections and the conclusion, but I think you can collaborate on the rest to reduce some of your budget. This is my contact information; I'd love to talk with you if you have any questions. Thank you so much for having me today. I really feel honored to speak with your group.

Okay, thank you. We have a few minutes for questions. I don't know if we have any right now in the chat box. I'm not seeing any right now, Claudia, but we can give it a minute, and while we're waiting for them to come in, we can turn it over to Brian or Keith to talk.

So there was a question that just came up: some organizations are hiring data analysts in-house to staff ongoing monitoring responsibilities, tracking what's going on. Do you see reasons to bring data analysts in-house, or, since much of the discussion today has been about monitoring, is this better done with an outside evaluator? What do you see as the differences between those?

This is Claudia, but I'll let somebody else jump in also. I'll just give you my perspective on data analysts: they're very expensive, very expensive. So if you manage to get funding to bring one in to help you with monitoring and evaluating, looking at the data that your systems and tools are producing to help you better understand the relationship between those tools and your program goals and maybe other tools, you won't get a lot of time. You're going to maybe get a couple of hours, and if you're lucky, you can keep that for a period of time. But to have a data analyst on staff, unless you're getting interns or people who are working on a PhD and want your data set, and then you have to worry about other things, it's not going to be a lot of time. So if we can partner with other legal aid groups to bring in that sort of expertise, maybe that's where we need to go in the future as a community: identifying ways we can partner in sharing data and sharing data analysts that we can train to understand the legal context in which a lot of our programs operate. I don't know, and maybe somebody in the audience does; if you have hired a data analyst to help you evaluate a project, it would be helpful if you could share that, because I'm not aware of any legal aid group building a technology tool that has hired one, but I don't know everything.
Yeah, we brought some people in, but they were students working on PhDs or other things, overseen through our local information school, here at Northwest Justice Project. Partnering with academia is often a less expensive way to get access to that expertise, but it has been very expensive when we've looked at external consultants. I wish we had the funds to bring somebody in-house to do it full time; that would be amazing.

And on that one, just to emphasize the difference between a data analyst and somebody who specializes in monitoring and evaluation: they're related, but I think monitoring and evaluation goes a bit beyond the kind of data training an analyst might have. And I might say that in terms of hiring a data analyst in-house, you need to be really confident that your organization can internalize the kinds of lessons the analyst might produce for you, and then be able to apply them elsewhere, or it's going to be a lot of money for not very much. When you're thinking in a practical way about an analyst versus monitoring and evaluation, hiring from the outside is, I think, best when you're trying to analyze the impact of a discrete project on your organization. If you want to bring somebody in-house, then the emphasis is really on letting that person run a little bit free within your organization to find insights and opportunities for improvement that may not be immediately obvious.

This is Tara Saylor. I'll just add that at Q2 Consulting, I guess we are both. We do the data analysis as part of the evaluation. My specialty is in qualitative research, but we also have statisticians at our firm. So hopefully you can find an evaluator to work with who can do that part for you too, as an external evaluator as well as a data cruncher. From my perspective, you need that in an evaluator: someone who can both establish your research design for you and also analyze your data.

We'll just pause another moment to see if there are any questions, but I'm not seeing any at the moment.

I have a question, just something that struck me. It has to do with the long-term outcomes issue, which I think a lot of funders expect. Let's say that you do a project and it takes you two years to get it done. Is there a rule of thumb in the research field? You know how, when you're doing technology, if you code for one hour, then you're supposed to test for two to three hours; the rule of thumb is that the testing is going to take at least twice as much, if not three to four times as much, depending on how complex what you build is. Is there a rule of thumb in the evaluation field that if a project takes you two years to implement, once you roll it out, this is how long you have to track it? Is there anything like that, any guidelines? Or is it really custom? Like, okay, if it's a custody case and the kid is going to be 18, the case is going to be open until the child is 18, and this technology could be used many times whenever there's a custody battle, so that would be a really custom type of question.

Yeah, I think you sort of answered it a little bit just there: it really depends on the type of thing that you're evaluating, not so much the tech piece, right?
So, for instance, something like a rule of law initiative might take five, 10, 15 years to actually show an outcome, versus something like, say, a child support modification in the US, where there might be an immediate outcome, and then you might expect that outcome to be sticky for maybe six or 12 months, and that might be all. I think what that really underlines is trying to set reasonable and defensible expectations in your program design. Those types of considerations are something that you can and should actively be thinking about as you're designing a project: not only what good looks like in terms of the stickiness of an outcome, or how long something is going to have to continue to be tested for it to be considered a success, but also how you're going to be able to evidence those outcomes. If you're going to have to go into court records, does that mean this technology tool is going to have to stand up for 15 years, which might be a challenge, or are you going to have to have some kind of transition planning in place? Is that your question? Yeah.

And then in terms of long term: I used to be in the public health field, and I have seen a longitudinal evaluation of high-volume users of emergency rooms funded to the tune of $10 million for five sites in the state of California. I don't think I've seen anything similar in the legal aid context. I'm wondering if you've worked on long-term evaluations of technology projects, or of legal or social services interventions. What kinds of budgets are you seeing for long-term work? Because I think a lot of funders may expect a long-term report, but the funding may be sufficient only to do monitoring and outputs tracking and maybe some intermediate evaluation, not sufficient to really get at outcomes through a long-term evaluation. I'm wondering if you have experience with long-term evaluations and costs.

This is Tara Saylor. I have not had experience with long-term evaluations, for the reason you mentioned: cost. It's generally very expensive to do that, and it's unfortunate, because you essentially would need to contract with an evaluation firm for five, 10, 15 years, depending on how long you wanted to observe the changes. Obviously, that can get very expensive over time. One thing you might want to consider is collaborating with an academic for long-term evaluation. I could see that being really appealing to someone in the academic field. Of course, that raises issues with IRBs and other research issues, but oftentimes academics have an interest in longitudinal data and would be willing to work with your organization.

This is Keith, just to jump in. We've done a few long-term projects, with a few long-term evaluations. I think the cost of a large evaluation really has more to do with the complexity of the project than with how many years it takes. An example might be trying to do a longitudinal study of court data from the last 25 years: in the U.S., most of that information is online, and it might not be very expensive to collect. It might just be a lot of data to process, but the variance in the type of data might not be very high.
And, to get into a little bit of what was said earlier, if you're doing some of your own data collection with these long-term outcomes in mind, then you can save time and make sure that you don't have to go back and collect 10 years' worth of data, or hire somebody to collect 10 years' worth of data. But in other contexts, especially if it requires travel, or a lot of days interviewing your staff members, or trying to track down staff members who have left, those are the things that really start to eat up time. We tend to say 10 to 20% of the budget for discrete projects that last a year or so. As you start getting up to five, 10, 15-year projects, it's not unusual, at least in international development, to see monitoring and evaluation take up 30 to 50% of a budget. And I would caution against taking that as an absolute rule of thumb, for two reasons. One is that working in developing-country contexts just tends, frankly, to be more expensive. And two is that monitoring and evaluation has very much been baked into international organizations through some quirks of the field. One thing that I think is an opportunity we really haven't pursued, compared to other fields, is to look harder at case outcomes and try to associate case outcomes and other social outcomes with the aid being provided, especially for organizations that have poverty-reduction missions. And that's something that I think is more of a cultural thing. In the U.S. it's very much a procedural justice type of mindset, so investigating outcomes and collecting that data is a bit anathema. But I think there's an opportunity to do that, and while those cases would be very complex, that's a good opportunity for academia to jump in as well, because you don't have to worry as much about confidentiality, since a lot of those records are public.