I am April Faith Slaker. I am with the Harvard Access to Justice Lab, and I have an overview for you of different evaluation and research options you might want to think about in evaluating your program, along with information on resources that can support those efforts. Then I'm going to turn it over to Aurora Martin, who's going to talk about a resource she has just developed.

I'm going to switch over to the internet for a second. If you go to the Access to Justice Lab website, a2jlab.org, there is a resource you might want to look at, whether that's following along with this presentation or referring to it later on your own. On that webpage, go to the blog and find a post called RCTs and Other Evaluation Methods. It's about a chart I created that presents a number of different evaluation methods; you can download the chart itself, or a version with different types of examples integrated in. So that's something to look at alongside this presentation, or afterward.

This presentation covers a few questions. First, of course: why even care about research and evaluation in the first place? Second, what are the various methods, and when should you deploy them? And third, where can you find helpful resources for evaluating your program?

I'm probably speaking to people who already know all of this, but it's worth one slide to say there is a really big need for civil legal services out there. We know that courts are in crisis and that there are funding cuts. Self-representation is a real problem, and it has been on the rise for a while. Legal services providers turn down about half of the people who come to them with needs, and approximately half of attorneys do pro bono. We also know from Rebecca Sandefur's research that a lot of people don't even identify that they have a legal issue in the first place. These are all things I think we all know. The question is how to distribute resources and design the most effective programs given this real crisis we're facing. Right now we don't actually know what works and what doesn't, and maybe more importantly, we don't know the best way to allocate resources so that the best types of projects get designed in the best ways. That's a reason to think seriously about evaluating what we're doing: programs that have been in place for a long time but may need to be updated, retooled, or approached differently, and certainly new projects and programs, where launch is a great time to build in evaluation so they're deployed in the best way.

This next slide is complicated, with a lot of content, but it's really an overview of what I'm going to go through today. There are different kinds of questions you might ask when you're thinking about program evaluation, and the different questions suggest different types of methodologies for evaluating them.
You might be asking: what do I know about the landscape within which I'm implementing this new tool or running this program I want to evaluate? You might want to know whether the project, program, or tech tool you've created is doing what it was designed to do. Is it working properly? You might want to know whether the program is cost effective and sustainable. You might want to know whether it is associated with positive outcomes for the clients. And finally, you might want to know whether the program, tool, or project is actually causing those positive outcomes. Each of those questions is slightly different and suggests a different methodology.

I'm going to go through those methods and touch briefly on what they are and on some of the pros and cons of each. It's a lot of content for a brief webinar, which is why I showed you where to find that chart. I recommend spending some time with it, and you're welcome to reach out to me if you have questions about any of these methodologies. Hopefully this gives you a good jump start to look further into them and understand them better. And just to let people know, I put a direct link to that chart in the chat. Perfect.

Before I launch into the methods, some big takeaways. First, be clear about what research or evaluation question you're asking in the first place, because, again, different questions suggest different methodologies for answering them. Second, feel free to use more than one type of evaluation. This is not meant to say you ask one question and then use one evaluation approach; approach evaluation holistically and combine methodologies, and I think you'll get a more complete answer to your questions. Third, and this is an important one: be aware of the limitations of each approach. One type of evaluation may get you some kinds of answers but will be limited in other ways, and being clear about what you can and cannot know given the approach you've chosen is essential to understanding what that information has gotten you, what additional evaluation you might need, and what you know versus what you don't. And finally, one that may be a little counterintuitive: what I see a lot in the field is a focus on getting a new project up and running or developing a new tool, with evaluation considered after the fact. What you really want is to plan evaluation right at the beginning, when you're developing something new or launching a project. The reason is that there are ways you might want to collect or structure data that won't occur to you until you've thought through the evaluation, and if you first look at your evaluation a year into a new project, it may be too late to have collected the data you wanted.

When I talk about the different approaches, I'm going to use a running example, which will hopefully help you all follow the different methodologies.
Here's the example: you've designed a new app to help self-represented litigants file for Chapter 7 bankruptcy. The tool has eligibility criteria: it is available to people with incomes up to 200% of the poverty level, it excludes people with a mortgage, and at this time it is only available in English. You want to know two things. First, will the tool reach the people who need it? And second, will it help people obtain those bankruptcies and improve their financial situations?

The first research question on the list from a few slides back was: what do we know about the landscape within which the program operates? We've developed this app, or we're thinking about developing it, and we'll probably want information about the clients the app is intended to reach, and about the landscape: the demographics of those clients, who is going to have access to the app, and whether the people we hope can access it actually can. There is a whole spectrum of methods you can use here, depending on what the tool or program is. Sometimes that involves focus groups or interviews with potential clients, or direct observation. If you've already put the app out into the world, you can start looking at the administrative data you're collecting, or you can look at external data sets, such as the Census Bureau's, or other data sets your state might have that describe the population.

With this example, you might want to know: how many people below that 200% cutoff have smartphones? How many would be able to access the app? How many are English speakers, since the app is only in English at this point? You might also want to think about it in terms of different geographies. Is there some part of your state where people don't have smartphones, or just don't have cell reception? Is there something different you need to do in that part of the state to reach them, maybe a web-based version made available at libraries or wherever in their area would enable them to access it? These are the kinds of things you want to collect data on before you throw a new tool into the field and assume it will reach everyone.

This approach is very effective at describing the landscape, guiding modifications, and checking whether things have changed in your population such that a program or project you already have in place needs updating. It's also great for describing the need for funding; especially if you have a project in one area and want to expand it, that kind of landscape information helps you make the case. The limitations are that it is really just descriptive. It won't tell you whether your app helps people file for bankruptcy or whether it improves their financial situations; it only describes the context into which you would be putting the project or program.
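Coming back to the reach question for a moment, here is a minimal sketch in Python of the kind of estimate described above. Everything in it, the regions, fields, and numbers, is hypothetical; a real version would draw on Census or state survey data.

    # Illustrative only: how much of the target population can the app reach?
    # All records and numbers below are invented for illustration.
    respondents = [
        # (region, income as % of poverty level, has smartphone, speaks English)
        ("urban", 150, True,  True),
        ("urban", 180, True,  False),
        ("rural", 120, False, True),
        ("rural", 190, True,  True),
        ("rural",  90, False, False),
    ]

    eligible  = [r for r in respondents if r[1] <= 200]   # income cutoff
    reachable = [r for r in eligible if r[2] and r[3]]    # smartphone + English

    print(f"Reachable: {len(reachable)} of {len(eligible)} eligible "
          f"({100 * len(reachable) / len(eligible):.0f}%)")

    # Break the estimate down by region to spot geographic gaps,
    # e.g. areas that might need a web version available at libraries.
    for region in ("urban", "rural"):
        elig  = [r for r in eligible if r[0] == region]
        reach = [r for r in elig if r[2] and r[3]]
        print(f"{region}: {len(reach)}/{len(elig)} eligible people reachable")

A real analysis would weight by survey design and population counts, but even a crude version like this can flag, say, a rural area where the app as designed reaches almost no one.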
There are also limitations that come from the specific methods you choose. If you're doing focus groups or surveys, think about who you've collected that information from. Are those focus groups representative of the population you're trying to reach, and if not, in what ways do they differ? You want to think very carefully about that. If it's data about the demographics of the population you're trying to reach, is it a good survey that has really captured your population, or some small sample? Think about how representative it would be.

Okay, next: process evaluation. This is for the question, does the program operate the way it was intended to? Is it running smoothly? Is it working as designed, essentially? Back to our app example, this means asking: does the app work? Are there tech glitches we haven't discovered that we need to test for? Methods here: if you've just launched a project, or you have a beta version of a technology tool you've developed, you can interview the people who have used it, perhaps your program staff who are testing it out. You can observe how it's working. If you've made a beta version available to some clients, you can survey them or ask them about their experience. And then there's administrative data: for example, if you've got something fillable that walks people through a process, do they get through the whole process? If it's a tool that asks a set of questions, do people complete them? That would probably be captured in some kind of administrative data set.

In terms of effectiveness, you want to do this right away, because it tells you whether there are problems or glitches you definitely want sorted out before you go live or expand statewide. With the bankruptcy app, it will also tell you whether anything in the language of the questions is confusing, which matters if you're trying to reach people for whom English is a second language and some of the language in the app is difficult. Testing early to iron out those glitches is important. Like the formative evaluation I previously talked about, though, this is still just descriptive. It won't tell you whether your app leads to people being able to file for bankruptcy or improving their financial situations; it gives you a sense of whether people are able to get through the process you've set up. One place to look is the app's own data; the sketch below shows what a completion check like that can look like.
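Here is a minimal sketch of that administrative-data check, again in Python, with invented step names and log entries: a simple funnel showing where users drop out of the app's question flow.

    # Illustrative only: where do users drop out of the app's question flow?
    # Hypothetical event log: user id -> last step they completed.
    log = {
        "u1": "eligibility", "u2": "income_details", "u3": "done",
        "u4": "eligibility", "u5": "debts", "u6": "done",
        "u7": "income_details",
    }

    steps = ["eligibility", "income_details", "debts", "done"]
    reached = {s: 0 for s in steps}
    for last in log.values():
        # A user who completed step i also passed every earlier step.
        for s in steps[: steps.index(last) + 1]:
            reached[s] += 1

    total = len(log)
    for s in steps:
        print(f"{s:15s} {reached[s]}/{total} users ({100 * reached[s] / total:.0f}%)")
    # A sharp drop between two adjacent steps flags a screen worth re-testing,
    # e.g. for confusing language.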
And, as with the last evaluation method, whatever method you choose, interviews, surveys, or data sets, ask how representative that information is of the population you're trying to reach. If you're surveying the clients who used the initial beta version, are they really representative of the client base you're trying to reach? Especially if you have program staff testing out your app: your program staff probably understand legal language better than the clients would. Think about whether the way you're testing matches the population you're trying to reach.

The next method is one I'm sure a lot of you are familiar with: economic analysis. The question is, what are the economic benefits of the program or project compared to its costs? Is it cost effective? Is it sustainable? There are plenty of reports out there on economic impact analyses and return on investment, and these are great for establishing the economic viability of a project: whether you're going to be able to continue to fund it, whether you need to apply for more grants to sustain this particular project or program, financial feasibility, and so forth.

Some limitations: these are difficult studies to do, especially in a human services environment like ours, where some of the things we care about are really difficult to quantify. Measuring them is hard, and comparing your evaluation to other evaluations that quantified things differently, or measured different things, is hard too. So this doesn't necessarily provide a very holistic understanding of a project or program. There are better and worse ways of doing it, of course, but it comes down to the fact that these very human impacts we're trying to have are difficult to quantify and measure.

Still, these analyses are certainly very effective for communicating, at least to some extent, the need for more funding. Going back to the bankruptcy app: say you got some seed funding to develop it in the first place. Getting a sense of what it will cost to maintain after development matters for your own knowledge and budgeting, of course, but also for applying for additional resources to sustain it. And think about expansion: maybe the app has seemed very successful in the bankruptcy arena and you want to develop it for another area of law. What will that cost to build and maintain? Those are questions to ask before applying for a grant to develop another app like that. Below is a very rough sketch of the arithmetic involved.
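As an illustration only, here is about the simplest possible cost-per-outcome calculation in Python. Every line item and number is invented, and a real cost-effectiveness study would be far more careful about what counts as a cost and as an outcome.

    # Illustrative only: a crude sustainability calculation for the app.
    annual_costs = {
        "hosting": 3_000,
        "maintenance_dev_hours": 20_000,
        "outreach": 5_000,
    }
    completed_filings_per_year = 400   # hypothetical usage figure

    total = sum(annual_costs.values())
    print(f"Annual cost: ${total:,}")
    print(f"Cost per completed filing: "
          f"${total / completed_filings_per_year:,.2f}")
    # Comparing this against a rough benchmark, such as staff time per
    # assisted filing, is one way to frame sustainability for a funder.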
Okay, the next method is observational. The question you'd be asking is: is my project, program, or tech tool associated with positive outcomes for the clients, recipients, or users? I'm deliberately italicizing "associated" on this slide, as opposed to "caused", and I'll get to why in a little bit. The question is whether you see an association between use of the project, program, or tool and positive outcomes for the clients. You may have heard of pre-post tests; that's an example of an observational approach that gets at this association question.

What that means is looking at some data, or survey results, or whatever specific measure you've chosen, from before you started your project or program, and then looking at that same data from after your program is in place, and drawing some conclusions. Say you looked at data on bankruptcy filings for the year before you developed this app; then you put the app out into the world, did all your outreach, and observed a real spike in bankruptcies being filed. One of the limitations is that it won't tell you for sure that it was your app that did that, but it might suggest something along those lines. So it's one way to evaluate the possibility that your program or project is making a difference, without getting causal information, and without having to do anything to affect who uses the app or moving toward an experimental approach. It's not invasive; it usually involves data that's already out there in the world.

As for limitations: with this pre-post example, you wouldn't necessarily know whether your app made the difference, because you've collected data from different time periods, and all kinds of things can have happened in your environment between them that you can't rule out. It can be difficult to say for sure that it was your app, project, or program that made the difference. So if you use this approach, really think through whether something else is going on. For instance, suppose that around the time you put this app out into the world, a bunch of self-help centers in your area beefed up their self-help materials for bankruptcy filings. That could affect the trends. Think through what those other factors could be, find out what they are, and really think it through. The sketch below shows the basic before-and-after comparison.
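A minimal sketch of that pre-post comparison, with invented monthly filing counts:

    # Illustrative only: average monthly bankruptcy filings before and after
    # the app launch. The counts are invented, and a jump here shows an
    # association, not proof that the app caused it.
    from statistics import mean

    pre_launch  = [42, 38, 45, 40, 41, 44]   # monthly filings, year before
    post_launch = [55, 61, 58, 63, 60, 57]   # monthly filings, after launch

    change = mean(post_launch) - mean(pre_launch)
    print(f"Pre-launch mean:  {mean(pre_launch):.1f} filings/month")
    print(f"Post-launch mean: {mean(post_launch):.1f} filings/month")
    print(f"Difference:       {change:+.1f} filings/month")
    # Before crediting the app, ask what else changed in the same window
    # (new self-help materials, court rule changes, economic shifts, ...).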
Another approach is quasi-experimental. This gets at a similar kind of question: you want to find out whether your project or program is related to positive outcomes for your clients, but this approach gets you a bit closer to answering whether the project or program caused those outcomes, as opposed to a mere association. It involves making some kind of comparison between the people who have used your project, program, or tool and the people who have not. In research terms, the group of people who have not benefited from the thing you're measuring is called a control group. In this method, you look for natural features of the environment that have created some kind of control group, and we all have these in our programs. All legal services organizations are turning away clients: people come to the organization with problems and are not able to receive services. That's a potential control group against which to compare an intervention.

The thing to think about with the quasi-experimental method is this: because you're relying on something in the world that naturally created the control group, you also have to ask whether that same thing is affecting the outcomes you're observing. Most obviously, if legal services agencies turn people away for specific reasons, income eligibility cutoffs, say, or not having an identified legal problem, those people are not the same as the population that does receive services. If the control group is a group that was turned away because they were over income, they would also naturally have somewhat more access to resources, and that might be what's driving the outcomes you're seeing. So it's a little tricky. You have to think about how the control group was created, and whether you can find one where something more arbitrary determined who didn't get the intervention.

One way to think about it with the app example: suppose you rolled the app out through one of your offices and advertised in one area, and you observe bankruptcy filings go up in that area versus another area where you haven't yet done outreach to tell people about the app. That could be one way of making a comparison. There are things out there in the world that let you compare a group that got services with a group that didn't. This is really effective in that, again, it isn't very invasive; you're looking at things that have naturally happened in the world. The limitation is that it can mimic a causal approach but not quite to the extent of ruling out all those other factors, and as with the last slide, it can be really difficult to figure out what all the other things in the world are that might be affecting the outcomes. Still, if you can find a naturally occurring control group where the reason people were turned away, or didn't get the intervention or the services, does not seem related to the outcome, that's a potentially good control group. One example I can think of is conflict cases: if you've got a group of people who were conflicted out of services, but who otherwise look the same as your intervention population, that gets you a little closer to a good control group. The sketch below shows one common way to formalize the two-area comparison.
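Here is a minimal sketch of that two-area comparison, framed as a simple difference-in-differences calculation, which is one standard way of using a comparison area; all the counts are invented.

    # Illustrative only: compare the change over time in the rollout area
    # against the change in the area with no outreach yet.
    from statistics import mean

    filings = {
        #            before launch       after launch
        "rollout": ([40, 42, 39, 41],  [58, 61, 57, 60]),
        "control": ([35, 37, 36, 38],  [39, 41, 38, 40]),
    }

    def change(area):
        before, after = filings[area]
        return mean(after) - mean(before)

    did = change("rollout") - change("control")
    print(f"Change in rollout area: {change('rollout'):+.1f} filings/month")
    print(f"Change in control area: {change('control'):+.1f} filings/month")
    print(f"Difference-in-differences estimate: {did:+.1f} filings/month")
    # The control area's change absorbs region-wide trends, but differences
    # between the two areas themselves can still bias the comparison.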
Okay, now the experimental approach. This is the question: does my project, program, or tool cause the positive outcomes for clients? I think this is what we all really want to know: are the services we're providing the reason for the positive outcomes we hope to see out there in the world? It is really difficult to tease out that causal pathway, and this is actually what the Access to Justice Lab specializes in: randomized control trials, which is what this method is called. This method actually creates a true control group against which to compare the group that got your services.

What we do is take a population that is eligible to receive some service or intervention, and then randomly assign them either to receive the service or not. It's just a lottery system. That makes the two groups, the people who got the services and the people who didn't, truly comparable, such that the only real difference between them is the effect of the service. Then you can tease out whether the tool or project made the difference for those people. Its effectiveness is that it gets you that causal information. The limitations are really implementation limitations: these are very resource-intensive evaluations, and you have to think carefully about whether it is something you can do within your environment. It also gives you a fairly narrow answer to the research question, for a specific case type, for example.

With the app example, you could do something like this by having some kind of initial page in your app that diverts people either into the set of questions you've designed or to other resources. That's often how we approach our research studies: there's some kind of intake process where we determine eligibility and then divert people either to receive an intervention or not, and then you follow up and see how the two groups turned out. The sketch below shows what the lottery itself can look like.
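For illustration, here is a minimal sketch of that kind of lottery assignment; the user IDs are hypothetical, and a real study would log assignments and handle consent and eligibility screening first.

    # Illustrative only: randomly assign eligible users to the app's question
    # flow (treatment) or to other resources (control).
    import random

    random.seed(42)  # fixed seed so the assignment is reproducible/auditable

    # Hypothetical pool of users who passed the eligibility screen.
    eligible_users = [f"user_{i:03d}" for i in range(1, 11)]

    random.shuffle(eligible_users)
    half = len(eligible_users) // 2
    treatment, control = eligible_users[:half], eligible_users[half:]

    print("App question flow:", sorted(treatment))
    print("Other resources:  ", sorted(control))
    # Because assignment ignores everything about the person, the two groups
    # differ only by chance; with enough people and adequate follow-up, later
    # outcome differences can be attributed to the intervention itself.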
The reason to think about this difference between associations and causal pathways is illustrated by a slide I love. I hope you're all laughing: it's actual data showing, over time, the number of films Nicolas Cage appeared in and the number of people who drowned by falling into a pool, from 1999 to 2009. The chart is great because intuitively we just know there is no causal relationship between the two. But if you're not thinking that way, and you're just pulling data and looking at it together, you might look at this and say, wow, Nicolas Cage appearing in films is causing people to fall into pools. It's a funny way of getting at the idea that seeing an association doesn't mean that one thing caused the other.

Here's an example from the research and evaluation world. There have been a bunch of studies, not using the randomized control trial approach, of lawyers in juvenile delinquency proceedings. Seven of these studies showed an increased likelihood of incarceration when a lawyer was involved in the case, and five showed no effect. That would seem to suggest that providing a lawyer to juveniles in delinquency proceedings probably increases incarceration rates, right? That's what those conclusions would suggest. I'm wondering, and I don't know if any of you are comfortable speaking up or typing in an answer: what do you think is the problem with these studies? Again, these were not randomized control trials; they were looking at naturally occurring things in the environment. Do you believe that lawyers provided to juveniles actually increase incarceration rates? If not, what else might be going on here?

My two guesses are either that the sample size was very small, or that the lawyers are taking cases that are more likely to have incarceration as an outcome, so there's a selection bias.

Yes, exactly, thank you. These cases weren't randomly assigned, so there would be a difference in the kinds of cases lawyers take: the more serious ones, where incarceration is more likely to be the outcome. This is an example of why, when you're not doing causal research, you want to be careful when you see an association. It might not be immediately obvious what is causing it, but you want to think it through so you're not drawing the wrong conclusions.

Okay, moving on to where to get data to measure outcomes. This is a quick list of the kinds of data sources out there. There are numbers that are already gathered: the administrative data sets your organizations all have. Think about how to pull that information out and draw conclusions from it. There are observations: if you're thinking about evaluating a self-help center, a great place to start is to go sit in that self-help center and watch what's happening. That in and of itself is data collection, and it also helps guide further evaluation and shows you what kinds of things you should evaluate in that setting. There are interviews: with your staff, with your actual clients, or focus groups in the community. And client or attorney surveys are also a great way to figure out what's going on. And again, back to one of my takeaways from the beginning, think about using more than one approach. You don't have to pick just one.

Some quick guidelines when you're using administrative data. It can be messy and it can be incomplete. Make sure you really understand what the data means: talk to the people who are doing the input to make sure everyone is putting in the same information the same way, and if not, you want to know that. Who keeps it, and how? Where do they keep it? Is it accurate? How is it shared? Think all of those questions through; the sketch below shows a few basic checks along those lines. And if you're thinking about analyzing your own organization's data, or accessing external data sets and connecting them up with your data, I'm happy to help you think that through.
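Here is a minimal sketch of that kind of sanity check; the field names and records are invented.

    # Illustrative only: basic checks before trusting administrative data.
    cases = [
        {"id": 1, "case_type": "bankruptcy", "income_pct_fpl": 150},
        {"id": 2, "case_type": "Bankruptcy", "income_pct_fpl": None},  # inconsistent label, missing value
        {"id": 3, "case_type": "bankruptcy", "income_pct_fpl": 1800},  # probable data-entry error
    ]

    # 1. Are categories coded consistently? (Different staff, different habits.)
    labels = {c["case_type"] for c in cases}
    print("Distinct case_type labels:", labels)

    # 2. How much is missing?
    missing = sum(1 for c in cases if c["income_pct_fpl"] is None)
    print(f"Missing income: {missing}/{len(cases)} records")

    # 3. Any implausible values?
    outliers = [c["id"] for c in cases if (c["income_pct_fpl"] or 0) > 1000]
    print("Records with implausible income values:", outliers)
    # Findings like these are exactly why you talk to the people doing the
    # data entry before drawing conclusions from the data set.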
So again, just to reiterate, because I think these are important takeaways. Really think through the question you're asking. What I often see is people answering the association question while thinking they're answering the causal question. Be clear on what question you're asking and whether the method you've chosen will actually answer it. Consider using more than one type of evaluation, especially the formative assessment at the beginning, the first one I talked about, about understanding the landscape and the context within which you're doing your program. That's a great approach to take not just once at the beginning of a project or intervention, but periodically, and in conjunction with other evaluation types, because the world is always changing around us, and keeping track of how things are changing, and how that might affect whether your tool, instrument, or project is reaching the people you want to reach, is really important. Be aware of the limitations; I hope I've pointed to the ones you want to consider, especially if you're going to make program or policy changes based on some piece of evaluation you've done. And if you've got a new project or program, it is never too early to start thinking about evaluation. Thinking about it early will ensure that when you're finally ready to evaluate, whether that's six months in, a year in, at the end of the project, or when your funder asks for information, you will have collected the data the way you needed to in order to answer those questions.

Okay, I know all of you are wondering about resources to help you do this. Some of these things you can do yourself internally with a little support. Bigger projects, like a full needs assessment, will likely need a consultant. And for other things, there are people in university settings you can connect with who may be interested and willing to help you do something more research-oriented, toward the causal end of the spectrum of questions, and who may collaborate with you at no cost. We'll talk about that.

You may recall this slide with the different types of questions. For the first four of them, there are resources out there, and if the evaluation is fairly small, an evaluation of a particular program as opposed to a statewide needs assessment, you may be able to do it internally without needing to hire someone. LSNTAP has a survey bank; I get questions all the time from projects and programs that are developing surveys and just want to see some examples, and I refer them over to the LSNTAP survey bank. Both SRLN and NLADA have reports of evaluations that have been done, which can help you think through the questions you need to ask. And sometimes you can look at which organization put out a report and reach out to ask whether they are willing to share the survey instrument they used, or some of their other evaluation tools. I think we could be doing a lot more of that kind of collaborating in this community, and it would be helpful for everyone.
And here at the A2J Lab, we have launched a project I think can be pretty helpful. If you have an evaluation tool you've been working on, whether that's a logic model, a client survey, a set of questions you ask your attorneys upon case closing, or a web survey, you can send it through this project and we will connect you with expert evaluators who will give you feedback. We're asking them to do this pro bono, so it's not super extensive feedback, but you get the benefit of someone who has been doing program evaluation out in the field, and is trained in it, advising you on how you're asking your questions and how you're thinking about your evaluation.

Toward the right on this set of questions, once you get to more rigorous research about the impact of your project or program, there are some resources too. I put SAM on there, and I know you don't know what that is yet; Aurora Martin is up next to talk about it, and it's a project that can really help with these bigger studies of the impact and effectiveness of your program. LSNTAP also has a list of universities with data analysis capability. I just want to say that universities are a really underused resource in our community. There are lots of researchers out there who I think would be interested in collaborating with our field, and some of those are graduate students looking for real-world data sets for their master's papers who don't have access to one, and who could provide our community with really intelligent evaluation and analysis capacity. Those opportunities are really underexplored, and LSNTAP does have that list. I don't have an easy way of telling you where to find it, but I'm sure our host can help with that.

So I just dropped a link to the survey bank in the chat, and I will also drop a link to the data analysis framework, which has a lot of that information on it; Rachel Perry put a lot of that together. Yeah, and I wanted to mention, for the feedback tool I'm about to talk about very briefly, Rachel Perry is one of our collaborators, along with Kelly Schoss Lutherland, the program evaluator at Legal Aid of Nebraska. These are some people in the community to think about when you need help with evaluation. And finally, that last question, whether your program or intervention causes positive outcomes, is what the A2J Lab does. We specialize in that. I can certainly direct you to resources for the other kinds of questions, but if you have a project you think is appropriate for a randomized controlled trial, please do reach out to us, because we're happy to collaborate on those sorts of things.

So I'm going to quickly talk about the evaluation feedback project, which I've mostly already mentioned. The project is on the A2J Lab web page, I think under Resources. How it works is you submit an evaluation tool: there's a submission button on the page, you upload your tool and tell us a little bit about it, and there's a form to fill out.
If it falls within the scope of our project, which tends to be smaller things like client surveys rather than big projects, since, again, we're asking evaluators to donate their time and we don't want to give them too much, we will match your submission with one to three evaluators, and within a couple of months they will return feedback on how to improve your tool. We hope you'll feel free to use this as you're developing your tools. And with that, I'm all set. I'm going to turn it over to Aurora Martin, who will talk about SAM.

Thanks, April, and thanks for allowing me to piggyback on your presentation. Sorry, are you going to send it over so that I can actually drive the presentation? Yes, definitely; let me see why you did not get that pop-up. One second, let me present it again. A quick question that was asked: the location for the expert feedback group, that's reaching out to you, correct, April? Yes, there's a portal on the A2J Lab web page to submit something, and an FAQ section so people can read a little more about it. I will grab that link and put it into the chat for people also. Thanks. Can you see the screen now? I can. Looks good. Okay, great. Should I just jump right in? Go for it.

All right, thanks for having me. My name is Aurora Martin, and April was generous enough to invite me along for this presentation. I think it builds well on April's presentation, in that the matchup between those who can help with evaluation, scholars and other kinds of experts, and advocates is an example of a great cross-sector collaboration to enhance our work for justice. That's the whole concept of the Scholar Advocate Match platform, which is right now in a kind of beta form; we're having a public dress rehearsal.

This slide, to me, represents the diversity of issues where legal services actually makes a difference, whether that's food security, family, education, housing, or racial justice. One day a couple of years ago, when I was executive director of Columbia Legal Services, a number of our advocates were reporting on how their advocacy partnerships with different researchers and scholars had really made a difference in our work, whether it was foster care, housing, or re-entry. I asked them, how did we meet these people? And really, it is very much like looking out into a crowd of random people, the hustle and bustle of strangers and people we might know on the street. It has been happenstance: conferences, panels, personal networks, professional networks, email introductions, six degrees of separation. So it got me thinking about how we could have a more effective and efficient meeting of the minds across sectors. That is where the idea behind SAM, the Scholar Advocate Match platform, was born. Essentially it is about coming up with collaborative solutions at any stage. This platform was certainly inspired by my work in legal services, but I think it has implications beyond legal services. What I would like to do is beta test whether reducing the transaction costs of finding each other, of finding your partner, makes a difference.
And I think there is a sense of urgency in the work we do, on a systemic level and perhaps even on an individual level, that gives a reason to actually use SAM. What we have is sort of a double market: on one side a population of scholars, let's say those in academia, and on the other side us advocates in the field providing direct services, and we have different needs. Whether you're a graduate student, a tenured professor, someone about to be tenured, or a clinical professor in whatever discipline, you have a need for data; and we have a need for expertise from the advocacy side, whether it's an expert for your piece of litigation, or a policy and practice question you want to explore, where you need to find the right question. April actually mentioned this important point: when you're evaluating a piece of advocacy or a project, know what the right question is. But what do you do when you know what the issue is, but you don't know what the right questions are?

So this is the website; I think it's sam.org. What it intends to do at this stage is pretty simple: bring thinkers and change agents together more quickly, for a quid pro quo that aims at collective impact on whatever it is you're working on.

Why SAM? First, strategic analysis of data and problems is often incomplete for policy and practice. I'm using "policy and practice" pretty broadly: whatever particular area of law you might be working on, but also, say, the treatment practices of mental health providers for a particular segment of the population, where through your long-running advocacy you may find there is a need to change that kind of practice. Second, research can be expensive. Although this platform is not intended to be totally pro bono, there are many people, from the researcher's perspective or even the advocate's perspective, who will say, yes, I want to work on X with you, let's do this together; and in the examples I'll talk about later, that's what happened. There are certainly expert witness databases already out there that a number of plaintiffs' attorneys especially use, but this is something different: broader and simpler. It says, if you, coming from these different sides, as a scholar or as an advocate, have an issue to work on and you want to do something together collaboratively, punch it in, and hopefully you can actually have a conversation with one or more potential partners. Third, field data is hard for scholars to access. Setting aside the questions of ethics, privacy, and who owns the data, all the things April also touched on, those are things you can discuss once you find your thought partner. For now, the beta version of SAM is essentially testing one question: does a platform for this meeting of the minds answer a critical need?
Finding a potential thought partner, as I was saying earlier, can be pretty time consuming. When I asked the advocates who were working on a variety of different issues over a long period of time how they met their research partners, they said it was through networks, through panel presentations; it was truly by happenstance, and there was often a side comment of, oh darn, had we met earlier, we could have actually done X, Y, or Z. SAM is a virtual community right now, with just under 150 people on it. After a couple of months away from it, I'm now having to recharge it, and at this stage I've called a time-out, wanting to see, first of all, how we get people to start interacting; I'm going to ask some of those questions by survey a little later. But right now SAM is the beginning of a virtual community of scholars and advocates, so that there is cross-sector expertise to address the urgency of untapped knowledge and the issues waiting to be worked on.

Here are some examples of actual collaborations between scholars and advocates that were intended to impact practice, policy, and the public narrative. I think SAM can have a pretty dynamic impact once it gets going, but even now, through happenstance, there are actual examples that made it into headlines or publications. These are very specific examples I am familiar with, most of them from my work with Columbia Legal Services; and mind you, I'm not taking credit for any of the advocates' work, I'm just relaying observations from the work the advocates engaged in.

One headline you see is about mass incarceration and civil rights: Merf Ehman, the new executive director of Columbia Legal Services, has been working with Professor Katherine Beckett of the Department of Sociology at the University of Washington. Another: farm workers win a case against a gun-toting foreman. That one was in litigation, and having connected with certain experts in farm worker matters helped in the traditional litigation sense; in farm worker cases, for example, labor economists can be very helpful as well. "Children are used like pawns by domestic abusers behind bars": that one is an example of a collaboration between the domestic violence unit at the King County Prosecutor's Office and Amy Bonomi of Michigan State University, who authored that particular piece; she and the DV unit at King County have had a long, happenstance working relationship, making a difference in both policy and practice with regard to domestic violence. Washington's three strikes law, and falling through the gaps: those are two examples of reports about complicated issues where advocates at Columbia paired up with, and were informed by, different professors at the University of Washington; and of course that was happenstance helped by proximity. And this other headline here, from the Seattle Weekly, about paying your debt to society, features Professor Alexes Harris.
Professor Harris came out with a book a couple of years ago called A Pound of Flesh, and our Columbia Legal Services attorney Nick Allen, who is now the Directing Attorney of the Institutions Project, had been collaborating with her for quite a long time, not only contributing his thoughts in review of the book, but also longer term, through different conferences, position papers, and studies about the impact of fines and fees on people trying to re-enter society.

What those examples show is that even by happenstance, the advocacy went a long way, in many different ways: by virtue of partnering with each other, the right set of questions could actually be developed. On the expert witness side, as attorneys we know there is periodically a need for expert witnesses, and on a broader level you can think about that need at different points of your case and at different points of an issue on a systemic level. As for expert researchers and analysts: with foster care reform, for example, and this last example is not Columbia Legal Services, it was happenstance that Dr. Eric Trupin of the University of Washington Department of Psychiatry, chair of the Evidence-Based Practice Institute, was very helpful and instrumental as a contributor to what was essentially twelve years of litigation for foster care reform. On barriers to housing and employment in re-entry, Katherine Beckett, as I mentioned, collaborated with Merf Ehman on a number of things, including authoring papers and giving presentations; because of that work together, she is now a familiar expert here in Washington State and has been invited by the State Supreme Court to provide her perspective at certain symposia. On economic barriers to re-entry, Dr. Alexes Harris of the University of Washington Department of Sociology is another representative scholar that CLS has worked with. And again, with regard to domestic violence treatment, rehabilitation, and prosecution, Amy Bonomi of Michigan State has collaborated with the DV unit of the King County Prosecutor's Office. These are only a handful of examples; I'm sure there are many, many more all over the country. But the big question of how people meet is, I think, left unanswered.

This is just a graphic of how I was thinking about SAM initially. Right now it really is just a meetup space, but I was thinking about it in these different ways, in terms of advocacy, community collaboration, and scholarship: the tools and the reach are enhanced, community impact can be enhanced, and so can scholarship. When I said earlier "policy and practice," that's really what this is about. It's essentially a match.com, on a strictly platonic level I'm assuming, between scholars and advocates.

So here are some beta test questions I wanted to throw out, which I'm hoping to send out in a survey if people are willing to answer. I'm also in the process of putting together an interdisciplinary roundtable of advisors; a number of folks from different institutions and programs have agreed to get together, not onerously, not fourteen times but a few times, to provide some feedback.
But here are the key questions about whether something like this would be useful: would connecting with a scholar or expert improve your advocacy? If so, how do you search for your mind match, and what functional features would you find useful in a matching platform? So that, real quickly, is SAM. Excuse me, I've got these terrible allergies. That's all I've got, and if there are questions I certainly welcome them. I can show you the website real quick; it's really super basic. I'm going through a phase where I really like comic book drawings, which explains the artistic slant here. As it develops, and hopefully it does, maybe the name will end up changing, because SAM doesn't necessarily roll off the tongue. I did go through a phase of playing with the name, but that's probably just me. So anyway, check it out.

So, Aurora, you mentioned that we're going to have a follow-up survey. We're going to send it out to the people who were at the webinar and also generally to the LSNTAP email list. Should people just wait for that survey, or is there somewhere for them to connect and set up a profile, that type of thing? Yes, you can go to SAM right now and just start tooling around there. The survey will be forthcoming in the next two weeks, and I hope to have it available on SAM for the members who are registered; it's all free. And I'll also send it your way for LSNTAP. Definitely, we would be happy to blog about that and share it, and try to get it out to the community for feedback. I like that in trying to connect people to do research, you're also going to do research on the people who will be using it. I appreciate it. Is this in the slide deck? But anyway, thank you. Thank you.