Welcome to Delhi Community Center. I'm Jackie Tramby from the Orange County Alliance for Community Health Research. And we're here today for our fourth in a series of five workshops focused on program evaluation. And today we get the pleasure of listening to and working with Michelle D'Arlo, who's from the Center for Community Collaboration at Cal State Fullerton. It's a lot of dense material, but really important material, and hopefully it will strengthen the work that we're doing within our organizations as well as in the community. So just to go over it again: the Alliance is providing a series of trainings to increase knowledge and awareness around community-based participatory research, with the intent to help build some of those partnerships in the community, bridging community and academic partnerships to address our health issues here in Orange County. And these are the partners, including Cal State Fullerton, UCI, the Lookout Foundation, the Enterprise Institute, the Children and Families Commission of Orange County, OCAPICA, and the County of Orange. And these are the objectives of the workshop today. We have in your packet all the handouts and worksheets that we'll go through today. The slides are just a little bit small, I apologize, but you do have them in your packet and hopefully they're large enough. And if not, let me know and we'll take care of that for you. So this morning we have the pleasure of having Michelle D'Arlo as our speaker. She's the director of the Center for Community Collaboration at Cal State Fullerton. The mission and goals of the Center are to provide professional development, applied research, and scholarship in community capacity building. Michelle is responsible for providing technical assistance to various agencies in strategic planning, program evaluation, program planning and implementation, collaborative development, and research and data collection and management. One of the primary products of the Center is the production of the annual report on the Conditions of Children in Orange County, which is utilized by community organizations and public agencies to track the well-being of children in Orange County. Michelle has been an adjunct faculty member in the Human Services Department at Fullerton since 1997 and has been appointed to a full-time lecturer position to teach courses in program evaluation, program design, proposal writing, and case management. Michelle received her master's of social welfare from UCLA in 1994 and her bachelor of science in human development with a minor in developmental psychology. In addition to the above-mentioned positions, Michelle has worked on behalf of children and families as a day counselor at a residential treatment facility, an eligibility worker and a social worker with CalWORKs, a special education teacher in Los Angeles, and a managing associate in community planning for the United Way of Orange County. So she seems like the perfect fit today for talking about program evaluation and program planning. So please join me in welcoming Michelle. Thanks. Thank you. Thanks, everybody. Great to be here this morning. I didn't expect the traffic coming down the 55. I'm really fortunate that I get to take side streets to Cal State Fullerton and get there in 20 minutes. And if I stop through Starbucks, it's 25.
So I'm driving down the freeway thinking, people do this every day? Like, really? So I appreciate you making the time to get here really early and get started on a four-hour training that usually I like to do in about 16. So we're going to try and go pretty quickly through it. But of course, if we get off track or we need to stop and slow down, if you're not understanding something I'm talking about, please just raise your hand and stop me to clarify. Sometimes I get going faster than I should or use acronyms that you might not be familiar with. So please don't hesitate. The agenda is set, but it's flexible. I really want to try and be as practically applicable as possible for you today. I don't want to stay up in the theory and all that. I want to get you down to the hands-on and give you a chance to do some exercises in between. You've got some worksheets that we'll be working on as we go. And please don't hesitate, I know some of you were leaning that way, to get up and go to the bathroom at any time, any of that stuff. For logistics, the bathroom is out towards the lobby on the right-hand side. So don't worry about that. That doesn't distract me. I'm used to teaching college students who do whatever they want to do. They get really good at texting and listening at the same time. And as a matter of fact, I have one of my former college students here. Lucy is here. So it's nice to see her again as well. So I've been teaching at Cal State Fullerton now since '97. And I've been there since '95 as the director of the center. And that's when we started working on evaluation and helping agencies develop their capacity to do evaluation. And that was when nobody understood the language of what's an outcome. Everyone was like, outcome? What's that? So it's been great to see the evolution over the years of nonprofits' capacity to understand the importance of evaluation. I mean, back then, everyone was like, oh, do we have to? And now everyone's like, OK, we have to. A little bit more enthusiastic. And where I find people get stuck in nonprofits now is not so much in understanding the importance of evaluation and understanding what outcomes and results are, but how to really practically apply that in the work that they're doing and make it simple enough. Keep it simple, right? Make it simple enough so you can get some effective measures, so that you can really understand what it is that you're doing within your agency and then communicate that to funders and others that need this information. So that's really, I think, where we find the challenge: not in knowing that you need to do outcomes, but how do I go about doing it in a simple way, in a direct way, so that I can manage it and my staff can manage it. So that's really the way that I like to try and teach program evaluation within nonprofits when we go out and do our work in the community, really trying to make it user-friendly and doable and feasible for you to accomplish. So hopefully we'll get a chance to show you some techniques today, or at least some things to get you started with that foundation. Then the other challenge that we've seen is people understanding the whole data management issue: what to use and how to use the data, what software to use and what analysis to do, and then, once you get that, the results. And that whole lesson would probably take a whole other eight hours.
So we're gonna touch on that a little bit, but I would really hope that maybe next year, when we're doing these series of workshops again, we actually go into that piece of it, connecting to the data and implementing data management kinds of systems, and how to do that more hands-on. So, but stop me at any time as we get going here. Now let me see if I can get going here. Some notable quotes. I always like to share these. "The measure of any society is the way we care for our children. They are a part of us, a reflection of our values, and they are our future." You could change this word, children, and you can put our homeless, you can put our vulnerable populations, you can put our disabled, whatever you wanna put in that category, and it still means the same. And the reason I like this quote is because of this one word: measure. How do we know if we're having an impact in our society and making a difference, and if we really actually are valuing those that are more vulnerable in our communities and helping them? And the only way we can do that is if we actually measure that we're making a difference. And so that leads to the next quote, which says, "Wanting to help is not the same as knowing how to help. Likewise, having the money to help is not the same as knowing how to spend the money in a helpful way." And that speaks specifically to evaluation in nonprofits: we're all out there wanting to help, wanting to do the right thing, feeling good about what it is we do, seeing some great smiles on our clients' faces as they leave the door, feeling like we made a difference in their lives. But do we really know that we're helping unless we measure it? It's really important for us to use evidence-based research to know that what we're doing is making a difference. It's not just good enough to receive money from a funder and say, yeah, we're doing a great job, and the funders realize they're not gonna let you do that anymore, that you can't just take money and not be able to report back whether or not you made a difference. Not that you served 10,000 people, but what was the change that occurred in the lives of those 10,000 people you served? What was the impact of your services? And so it's really our responsibility as human services providers, as health providers, and ethically we're responsible to our funders, to the clients, to everybody, to make sure they know that what we're doing is actually helpful, and that we know what we're doing is helpful, and that we've shown that through rigorous research and through the evaluation efforts of our programs specifically. So those are the quotes that I like to start with to get us on track to saying, yeah, this is why this is important. But before I go any further, I know I got to see the list of who's here, but an important part of this series of workshops is the networking part. So if you don't mind, I just wanna have you say your name and what organization you're with as we go around, so that others in the room, if you're wanting to hook up or collaborate with one of those agencies, can make sure: ah, I'm gonna have to talk to her during a break, or him during a break. So, Lucy, you wanna start? Thank you, Deuce DeBaird. Very good. Chris Tannis, I'm with Susan G. Komen for the Cure. Roxanne Garza, I'm coming back into the health education field and I just retook my CHES and passed it. Good, yay. Congrats.
I never let it go, and so I'm volunteering right now for the Orange County Capture Coalition and also UCI's Healthy Initiative under Dr. Hodant Del Culver. Good. Getting back into it. Very good. I'm from my own association; we're gonna be working with the public health and health care agency, and I'm also a student at Fullerton, in the master of public health program. Good, it's a good program there. I'm Ellen Del, I work as a volunteer for Build Futures. It helps homeless youth ages 18 to 20. I'm with Healthy Smiles for Kids of Orange County. We provide oral health prevention services for children throughout Orange County. I'm Rita Varshtad, I'm also with Healthy Smiles, and I work with the school part of the program. We go to the schools, we screen and apply fluoride, and we do a lot of assistance for the children that don't have a dental home. Have I met you at Healthy Smiles before, Rita? How long have you been there? I've been there for seven years. Yeah, might have met you up there before. Yeah, very good. It's been eight years, and my passion is keeping them mentally and physically fit and socially active. With what organization? I'm retired; I'm on a number of committees for the county and for the city, all of it as a volunteer. Great. Angela Sevedo, I am a community outreach nurse navigator for the Center for Cancer Prevention and Treatment at St. Joseph Hospital in Orange. Very good. I'm Maria Maza, I'm an assistant professor at Cal State. Hi. I met you. Yes, we did, didn't we? Yeah. My name is Amelia Ramos, I'm a community volunteer trying to find new resources for the community. Okay, good. My name is Sarah Sevedo, and I'm also a volunteer. Good morning, Ashley Sherry with OCAPICA, I'm a programs coordinator. And OCAPICA, everybody know what OCAPICA is? Go ahead. Yeah, go ahead, say it. It's the Orange County Asian and Pacific Islander Community Alliance. You notice I didn't say it, because I would have gotten it all mixed up. I can't even say what ours is, our alliance's, ah. Delhi Center: other than having beautiful rooms for people to have workshops, we also have over 50 programs in health, financial stability, education, and community engagement. I'm also with Delhi Center, building Health for Love. Hi, my name is Sarah Marr, I'm from UC Irvine, with the Institute for Clinical and Translational Science, in the community engagement unit. I'm Lauren White, and I'm with OCAPICA also. Hi, I'm Zahra, I'm the director for civic and community engagement at UC Irvine. Hi, I'm Amel Hedana, I'm from MECCA, the Multi-Ethnic Collaborative of Community Agencies, a collaborative of multi-ethnic nonprofits serving the multi-ethnic underserved community throughout Orange County. And my program is a socialization program where we integrate isolated adults back into the community by providing them with healthy activities. Hi, my name is Lorena, I'm from Orange County FAA. I'm Eric Wilson, and I'm a project worker right now. Good morning, my name is Sean Kavich, I'm the executive director of the Cancer Legal Resource Center, a national program that provides information and resources on any kind of cancer-related issues. Very good. I'm Maggie Durdha, I'm the program director for the Healthy Changes program. We have a community family here in Santa Barbara. Nice.
And, let's see, we had one last person join us. Maggie, introduce yourself, just where you're from. Hi, I'm Maggie Moreno from the Office of Center for Korea. Very good, thank you. All right, well, great. So you can see we have a wide variety of populations represented, from the elderly to children to smiling faces. So it's great to see and know that you're all here interested in evaluation. And we want to also give you a chance to hopefully network and talk with other agencies and find out what they're doing and where you might be able to collaborate. So hopefully we'll make some time for that as well in today's presentation. OK, so let's just start. First of all, I know that this isn't everybody's favorite topic. How many people really are so excited to do program evaluation whenever it's time to do that in your agency? I'm glad you're all honest. I get that same response, you can imagine, from my human services students when they come in and all they want to do is the touchy-feely stuff. And I say, that goes over at the counseling center. If you start crying in my class, it's like, OK, the counseling center is right across the way. So it's not our favorite topic. But if it's done and made to be a little bit more simple and a little bit more practical, it really doesn't have to be the burden that agencies see it as. And so we try and really make it so it fits into the system of your organization. And so the definition of it, one of the ones I like to use, is that it's really a systematic collection of information about your activities, the characteristics of your clients, and the outcomes, in order to make judgments about your program, how to improve your program's effectiveness, and then how to inform decisions in any of your future planning. So if you break that down, it's really not that hard. It's about collecting data in a systematic way so that you can then use that data. And that data is about who your clients are: are you serving the harder-to-serve clients or the easier-to-serve clients? What are your expected outcomes for those clients based on your knowledge of who they are? Then what are the activities, and how do they tie to it? What are you doing? What are you providing for them? And then what are your results, the actual outcomes of the work that you're doing? And then you're gonna use that information to make decisions about whether or not you're gonna continue the program, and how to improve the program: where is it that you missed? Where were the gaps in your services such that you didn't get to the outcomes you had anticipated? And then use that information and disseminate it to all your different audiences, including your staff, your board of directors, the public, your funders, city officials, people that really need to know how it is that you're meeting the needs of the clients you're trying to serve. So why do we do it? We do it to inform decisions, to help us make those hard decisions about what's most effective and what's least effective. In times where agencies are having to make decisions about where funding's gonna go, an executive director and a board of directors really need to have the information about what's working, where we are being most effective with our dollars, and how we might have to shift, unfortunately, our money from the least effective to the most effective programs. And if you don't have good solid evaluation data, that's gonna be really hard to do.
So it also allows us to leverage more dollars. If we are able to communicate our results to others and other funders, then you're able to leverage more dollars with other collaborative partners and begin to actually see more dollars flow into your services, providing you with a better foundation and basis for continuing your programs. It helps you set priorities when funding is scarce. When it is hard to get dollars, you actually have to set some priorities about what is most critical and where that dollar needs to go to be most effective. It improves morale. Now this one: how many of you, when you tell your staff, or you're in a meeting and you're doing evaluation, or you're hearing about the forms that you have to complete or any of that stuff, feel like your staff just feels so excited about doing evaluation and it really gets them motivated? Anybody? So then why did I put it up here that it improves morale? Why do you think evaluation would improve morale? Anybody? I don't think it's doing the evaluation. It's when you see the results and the outcomes, and then you're excited. So it improves morale because you can see the impact that you made. Okay, very good. Anybody else? Yes? I think they react to the fact that you're taking notice, and so at some point you're putting attention to what's going on, and it implies not only that you're measuring it but that you might even wanna change it as a result. So they buy in more. Exactly, there's a buy-in. Somebody else wanted to say something. I saw a hand flash. Well, I was just going to say, you're able to show your staff and your funders what is happening and the impact that you're making on the population that you're serving. Right. It's real power. Exactly, and that's exactly why, like you said, if an evaluation is done right, it's communicated back to the staff. Now, that's the problem: often it's not communicated back to the staff, and it's not communicated back to the board in a way that they can understand it and see that what they're doing is making a difference. And we're working with populations that are really hard to serve. It's stressful. It can be really emotionally draining on us time after time, and yet, anecdotally, we'll see clients leave our office and we'll feel really good about that. They'll tell us thank you, and you can see that there's been change. But like you say, Nikki, when as a staff member you feel somebody else is paying attention to the good work that you're doing, you feel more motivated. You feel better about the work you're doing. It's not just that you, one-on-one with your clients, are seeing a difference, but you know that your management and the board are paying attention that you're making a difference. So when an evaluation is communicated appropriately and disseminated appropriately, then you will improve the morale of your staff. I actually had a grant not that long ago that we worked on for an organization, and it was really an intense evaluation where we had to get four different surveys done. We had to get the teacher, the mentor, the mentee, and the parent to complete a pre-test and then a post-test nine months later. Okay, so that's a lot, right? The staff, you know, it was hard to get them to complete it. We came in in the second year; they had already started this process. So we came in and saw that in the first year, out of 500 mentees, they only had 37 with complete sets of all four surveys, pre and post. There were another 30 or so where they had three out of the four sets that they could match.
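(To make that matching bookkeeping concrete, here's a minimal sketch, in Python, of the kind of pre/post tracking that team needed: which mentees have complete pairs from all four respondent types, and which surveys are still missing. The IDs and data layout are hypothetical, not the agency's actual system.)

```python
# Minimal sketch (hypothetical layout): tracking which mentees have
# complete pre/post survey pairs from all four respondent types.
from collections import defaultdict

RESPONDENTS = {"teacher", "mentor", "mentee", "parent"}

# Each record: (mentee_id, respondent, wave), where wave is "pre" or "post".
completed = [
    ("M001", "teacher", "pre"), ("M001", "teacher", "post"),
    ("M001", "mentor", "pre"),  ("M001", "mentor", "post"),
    ("M001", "mentee", "pre"),  ("M001", "mentee", "post"),
    ("M001", "parent", "pre"),  # parent post-test still missing
]

def match_status(records):
    """Group surveys by mentee; report complete pre/post pairs and gaps."""
    by_mentee = defaultdict(set)
    for mentee_id, respondent, wave in records:
        by_mentee[mentee_id].add((respondent, wave))
    status = {}
    for mentee_id, surveys in by_mentee.items():
        pairs = sum(1 for r in RESPONDENTS
                    if (r, "pre") in surveys and (r, "post") in surveys)
        missing = [(r, w) for r in sorted(RESPONDENTS) for w in ("pre", "post")
                   if (r, w) not in surveys]
        status[mentee_id] = (pairs, missing)
    return status

for mentee, (pairs, missing) in match_status(completed).items():
    print(f"{mentee}: {pairs}/4 pairs complete, still needs {missing}")
```

A report like that is what lets somebody say: this family's got three, we just gotta get one more.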
So the data that they had been working hard on, and all, you know, the staff had just talked about: oh, these evaluations, these forms, having to do these, there's so many of them, trying to get the parents to do it. It just felt endless to them, that struggle of trying to get the teachers to complete these forms at the end of the year. A teacher wanting to do that at the end of the year? No way, right? So they were just frustrated. And so when we came in, we looked at the process and we tried to streamline it a little bit and get it down so the questionnaire wasn't five pages for the kids. We got it down to two pages for the kids, and that helped a lot. And then the next year, we got an intern from Cal State Fullerton to help them go out and make sure we got matches, and to track which ones matched. And at the end of the year, we were able to say, okay, this family's got three, we just gotta get one more. Go find this family, you know, go do something. So the interns were out there struggling and getting them. And out of the 500, we ended up with 129. So we felt pretty good about that, that we had better results, and we were able to use those results. But once we got those results and got them back to the staff, we had a staff meeting and showed them all the results from those 129 surveys, and the next year, there was this huge commitment on the part of those case management staff members to actually get those surveys completed. You could see this increased interest, and they themselves were tracking whether they had two or three or all four of the surveys completed. It was a three-year grant. So by the third year, we had had a tremendous increase. Again, I think we ended up with about 179 that year. I can't remember the numbers exactly. But we had really increased the ability to get those done. Plus, I kept throwing interns at them. Let's get those interns. They're good free work, you know? So that really helped a lot. And then they actually internalized it, and they hired on evaluation staff that they hadn't hired before. So it made a big difference too, realizing that, oh wow, we can really use this data going forward, and we'd better continue to do that so we continue to get some funding for it. So in their organization's system change, they actually hired a half-time staff member to help with that whole process of managing the evaluations. All right, so it does improve morale. It does help when you get to see that the work that you're doing is making a difference. It's our ethical responsibility; I mentioned that before. If we are telling clients in the community that we're here to help, and we open our arms up and we say what we're gonna provide you is helpful, then we'd better know that it is. That's our ethical responsibility: not to just put our shingle in the window and say, come on in, I can help you, without really truly knowing that what we're doing is helpful. It's like me taking my car to a mechanic, and he's got his mechanic sign out there, so I trust that, oh yeah, he knows what he's doing. He's got his little sign out there. He's got his licenses. So I'm gonna pull my car in there and ask him to fix it.
I get it back and it's no better than it was when I turned it in, because he really didn't know what he was doing. He didn't really know that that was the area he needed to focus on; maybe he focused on the wrong part of my car. If we as human services providers don't know that what we're doing is effective, then we shouldn't be doing it, and we can only know that through evaluation. It's our ethical responsibility to our vulnerable populations, to whom we say every day: I can help you, come to my door, come in, come see me. And so it's really our responsibility to be doing this work and to do evaluations. You get me on my soapbox too long. But anyway, and I start that in my class at the undergrad level: it's your responsibility to ask your supervisors, ask your staff, do we know what we're doing is effective? Are we making a difference, and how do we know it? It's not just good enough to feel like we're making a difference. We really have to measure that. So ultimately it will increase your long-term resources, both networking and collaborations and financial and funding resources, when what we're doing has been measured and shown effective. Anyone wanna stop me yet? Any questions while I take some water? Yes. Sometimes we don't see the outcomes for years. Right. Because especially in health, it takes a long time. So how do you measure in between, so to speak? Are you gonna address that one? Yeah, we'll talk about long-term outcomes and measuring over time, and what I call tracking beyond your doors: how to go about doing that in a manner that's not too costly and more effective, maybe with sampling and things like that that you could do. So there are techniques with which to do it. And also, if you're tracking beyond your doors, where did those clients go? Remember we talked about how it builds collaboration. So which collaborative partner did they go see, or where are they next, that you could get information back from? So we will talk about that. And that is one of the challenges. Especially if you're working with populations like the homeless or the mentally ill or other populations that are more transient, it becomes a little harder to track. But it doesn't mean you shouldn't try, and there are ways in which you can potentially do it anyway. Good question. Other questions as we're continuing on? Okay. So, program evaluation activities. These are the things that you do, not necessarily in this order, as a part of program evaluation. You're gonna analyze the problem: determine what the need or the issue is that you're trying to address. Identify your goals and outcomes to be evaluated: what is the outcome, the result, that you're actually trying to get to? And I think sometimes people go to measuring first, before they even think about, well, what is really the outcome that we want for our clients? What is that long-term outcome that we're trying to get to? And then develop a clear plan. When we work in the community, we help agencies develop an evaluation plan which has it all laid out very specifically: this is the background of why we're doing it, these are the tools that we're gonna be using, this is how it's gonna be used.
This is where the data's gonna be reported, and this is how the data's gonna be reported, so that there's a real plan in place, and all people within your organization have a clear understanding of that plan, and why and where and when the evaluation is gonna take place. So you need to have that laid out. You describe and standardize all activities so that you can then train your staff, so there's consistency in the way your evaluation and data collection activities are implemented. I've been in organizations before where they're doing parenting ed classes, and there are three or four different volunteers that will come in and do the parenting class in the 10-week sessions, and one volunteer doesn't like to do the evaluation, so she puts it off and puts it off and gives it to them in the third week. Another one gives it to them in the first week, the minute they walk in the door. So there's no consistency in the activity of actually collecting the data, and that will change the data that you get. So we really have to train staff on all the aspects of evaluation: why it's important, how to do it, and the techniques of doing and administering the evaluation. Then you collect the information, and this is a really interesting part. I went into an organization once and they had a big box of forms, surveys, just sitting there. Oh yeah, well, we did them, but we don't know what to do with them now. Where do we go from here? And that becomes daunting, the software question: do we get SPSS? Can we just use Excel? Do I have to use some sort of crazy software program to do this? So it just sat there. And then they were like, well, we don't have the staff or the time to do the data entry. So we set them up with an Excel spreadsheet, because they really didn't have a lot of complicated relationships to deal with; they just needed pre- and post-tests for the clients on each question. And they said, well, we don't have anybody; we're not gonna ask, and it was a counseling office, we're not gonna ask our counselors to enter data. I'm like, yeah, of course, counselors can't do that. They're the touchy-feely ones, and they don't wanna touch keypads; they just wanna touch your heart, right, your soul, I don't know. I have my master's in social work, remember, so I got a little bit of the touchy-feely, but not that much. So anyway, I sat with that organization, and many of the times we were there working with them, I realized the receptionist sat there a lot, that there was not a lot of action at the receptionist's desk. And I said, do you think that maybe we could train the receptionist to do the data entry while she's sitting there? Oh, do you think we could do that? Would that be in her job description? I said, you can change her job description. You have the ability to do that. Oh, that would probably be a good idea. I'm like, yeah, you know, you got it. So be creative in how you do that data collection and data entry, with good training, obviously, and oversight and quality, because I know Kurt is the quality assurance guy back there making sure the data is accurate. So make sure that the data is accurate, that it's entered appropriately, and that staff are trained, so that we have good data. Because the expression is garbage in, garbage out. You've heard that? Anybody not heard that? It means if you put bad data into a system, you're gonna get bad data out, and it was a whole waste of your time.
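(As an aside, here's a minimal sketch of what an entry-time "garbage in, garbage out" check could look like. It's illustrative only; the column names and the 1-to-5 scale are hypothetical, not the actual form that office used.)

```python
# Minimal sketch of an entry-time check, the "garbage in, garbage out"
# guard. Field names and the 1-5 scale are hypothetical.
VALID_RANGES = {
    "client_id": None,      # required, free text
    "pre_score": (1, 5),    # 5-point scale item
    "post_score": (1, 5),
}

def validate_row(row):
    """Return a list of problems found in one entered survey row."""
    problems = []
    for field, bounds in VALID_RANGES.items():
        value = row.get(field)
        if value in (None, ""):
            problems.append(f"{field} is missing")
        elif bounds is not None:
            try:
                number = float(value)
            except ValueError:
                problems.append(f"{field} is not a number: {value!r}")
                continue
            if not bounds[0] <= number <= bounds[1]:
                problems.append(f"{field} out of range: {value}")
    return problems

# A data-entry typo, 55 instead of 5, gets flagged before it pollutes the file.
print(validate_row({"client_id": "C014", "pre_score": "3", "post_score": "55"}))
```

Even a simple check like this, run as the receptionist enters each form, keeps the bad data from ever getting into the system.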
So you gotta make sure, remember the definition, a systematic collection of information. So there's gotta be a system in place, and it's gotta be an accurate system, in order for you to get accurate data out. Okay, so you collect the information, and then we're looking at measuring change. And everyone says, oh, statistics, oh my God, please don't make me redo my stats class. I can't remember that from grad school. Oh, really? Do I need to do that? Well, guess what? In Excel or any of those software programs, the t-test button: boom, results show up. Hey, wow, that was pretty easy. I don't have to know the numbers or the formula behind it. I just need to know which button to push. So it really isn't as hard as it seems to actually do the data analysis.
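(For the curious, here's roughly what that button is doing: a minimal sketch of the pre/post analysis step as a paired t-test. The scores are made-up illustration data, and scipy is just one common tool for it, assumed to be installed; Excel's T.TEST function is another.)

```python
# Minimal sketch of the pre/post analysis step as a paired t-test.
# Scores are made-up illustration data; scipy is assumed to be installed.
from scipy import stats

pre  = [2, 3, 2, 4, 3, 2, 3, 2, 3, 4]   # e.g., knowledge scores at intake
post = [4, 4, 3, 5, 4, 3, 4, 3, 4, 5]   # same ten clients, nine months later

result = stats.ttest_rel(post, pre)      # paired: same people measured twice
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
# A small p-value says the pre-to-post change is unlikely to be chance alone;
# it says nothing by itself about *why* the change happened.
```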
And then, to present that information, you really need to look at how to disseminate it. I skipped a step there: another thing that we do in evaluation is cost analysis, too, cost effectiveness. And that is a little harder to do. It usually takes the next step of looking at your budgets and determining, okay, based on the results we got, how many people we served, and how much money we put towards that program, what is the cost effectiveness of that program? A lot of agencies don't like to go to that step until they're forced to. So I really try to get them: let's just get to the effectiveness of your program first, and then look at the cost basis of it and measure the cost effectiveness. Reporting the findings, that's where it's really important. How do you make the data look sexy, look interesting, without changing it, right? Nobody likes looking at tables and tables and tables of data. Charts. I'm really visual. I have to see things in charts, and you'll see in a minute, everything's laid out in a matrix. So making pretty bar graphs really makes a difference in helping to communicate the effectiveness of your program, instead of just giving a bunch of numbers. And so really understand the audience to which you're presenting the data, and that there are different audiences and different ways in which you present your data. I'm gonna present my data to the staff at a much more micro level than to the board. The board of directors gets it at a more aggregated, macro level of the organization. The funders need to get it at a level in between. They don't need the details down to the dime. So each audience has a different need. Public relations: when you write your results up for public relations, it's very different from the way that you present them to your staff. When you're gonna give it to city officials and to organizations that have decision-making power, you wanna give them a level that's understandable to them and necessary for them, not to overwhelm, but to communicate your results. So that's something I think we don't do as well in evaluation: really understanding how to communicate the results so they're understandable and usable. And the next one: if you don't use an evaluation, there's no point doing it. Unfortunately, I have worked in the past with agencies where they spent their money hiring us to do their evaluations; we come set it up, we help them develop a tool, we set up the data collection, and we do the final report, because they got a grant. And that final report went to the funder and then went on their shelf. And that's where it stayed. So if you're not using it for program improvement, and if you're not using it to inform decisions about your organization and your agency, then don't do it. There's no point wasting your time. If it's only because the funders are requiring it, that's not a good enough reason either. Yeah, do it because the funders require it, obviously, but then take whatever they've required you to do and utilize it within your organization to make the improvements that the organization needs to make. Questions, thoughts? No? I have to watch the time, so I'll keep going. All right, so next steps: developing the outcomes. Remember I talked about early on that we really have to start with understanding the need or the issue in the community. But one of the things I want to do is make sure that we're using the same language and have the same definitions, starting with the term outcomes, because we use the word outcomes a lot, and I actually use it interchangeably with results. You've probably heard me do that already quite a bit, results and outcomes. And we really have many different levels of outcomes: we start with client outcomes, then program outcomes, agency outcomes, system outcomes, cross-system shared outcomes, and then community-wide outcomes or conditions. So most agencies will work with, what level do you think? What level do we look at most often? The bottom, right? The client. We're looking at the client outcomes. Actually, agencies will stay about here; most often we stay here. Not always do we get to the systems. Now, when I talk about systems, what am I talking about? Different service areas that are similar across the state or across the nation. Most of you in here are in the health system, because this is the health research alliance, coalition, whatever, that name that I can never say. And so most of you are in the health system. So what other system do we have? Say that again? Education is a system. What's another system? Social services is a system. Legal. Housing is a system. So basic needs, basic services. Criminal justice is a system. So we have those systems that have outcomes, right? Is anybody else, is the noise a little too loud from the hallway, or is that just me? Well, I can close it, but I'm just wondering if it didn't bother anybody else. You notice it? You're just used to being around kids, huh? All right, sorry. All right, so usually we stop down here. System outcomes are those where, in the health system, we have data and research in health; we have social services and child protective services, the Office on Aging and elder abuse and things like that; and we have the criminal justice system and probation, and the kids there. So we have all these systems in place, and previously those systems didn't communicate very well with each other, did they? They really were in silos: each of those systems had their own clients they worked with. They didn't really communicate well or collaborate well with each other. I guess fortunately, or unfortunately, with the lack of resources, there had to be this need of sharing, sharing information. Well, I can give you an example of cross-system shared outcomes. If I'm working in child protective services and I'm working with a family that's needing services because of substance abuse, who's the other organization that's probably most likely working with them?
What's the other system that's probably most likely? Mental health and substance abuse: the Orange County behavioral health system within the county and their substance abuse services. In the past, those two systems didn't know whether the same family was receiving services from both healthcare and social services. Now, fortunately, there's a better data match of who is in both systems together; we'll know that and we share that, and confidentiality issues have been addressed through releases of information and legal agreements and things like that. There's a better understanding of who's in both of the systems, so that we can better coordinate and work together. The children's system of care came into play, which really helped to make a difference. The county heads meet every month on children's issues to try and make sure that they are working together in a more collaborative and open way. So there's definitely been a move towards data sharing and data matching to make sure that we are doing a better job of sharing outcomes. Because ultimately, the shared outcome for the family in child protective services is improved family functioning, right? And the outcome in behavioral health, in the substance abuse work with the father, is better family functioning, right? That's the outcome we're getting at. So we're sharing that outcome. And so it's important that we look towards the bigger systems and the cross-systems of agencies that we're working with within the community. And then we have the community-wide indicators. So if I was looking at substance abuse as an issue, what would I look for? You can find, within many different county data sources, how many people are using substances, and how many people are receiving substance abuse treatment, at all ages. If you're looking at it for children, you could get it through the Conditions of Children report. So you're really looking at the community-wide indicators to see whether or not you're making an impact. Now, should your organization have an impact at that community-wide level? Can you take credit for it when that indicator drops? I see some heads shaking yes, and some not sure how. How do you take credit for that community-wide indicator? How is it that you can say, hey, my services, where I served 50 people, made a difference at that level? Because you can measure. So you know it made a difference with the 50, and that contributed to that number. Exactly: you can take credit for that movement of that community-wide indicator if you've been able to effectively measure that you've been making a difference with the 50 that you've been serving. Does that make sense? All right, onward. All right, so let's do some definitions. We talk about outcome definitions: process, short-term, and long-term. And Roxanne, this is where we get to your question about long-term. So process outcomes are what we call units of service: those measures that we're gonna track in order to know how many people we served, the logistical issues of our organization. That's the data that says, I served 100 kids last year, and next year I plan on serving 120. Or that's the data that says, I had 100 people in the program at the beginning, and at the end I ended up with 97; my attrition was three, okay? So those are the process outcomes. That's about the logistics of your organization, your staffing ratios; that's about your outreach efforts; that's about how many people you served.
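(The arithmetic here is simple enough to sketch. A minimal example of the process-outcome numbers just described, using the 100-enrolled, 97-finished figures; the function and field names are hypothetical.)

```python
# Minimal sketch of the process-outcome arithmetic: units of service,
# attrition, completion. Figures follow the example above (100 in, 97 out).
def process_outcomes(enrolled, completed):
    attrition = enrolled - completed
    return {
        "served": enrolled,
        "attrition_count": attrition,               # 3 people
        "attrition_rate": attrition / enrolled,     # 3/100 = 3%
        "completion_rate": completed / enrolled,    # 97%
    }

print(process_outcomes(enrolled=100, completed=97))
```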
But just because you served them doesn't mean that you actually made a difference, right? So this is where we used to always stop, at the units of service. Funders used to just ask us to report: how many clients did you serve this year? How many do you plan on serving next year? Did you meet your goal of serving the number of clients you planned on serving? And that's all they asked, right? They still ask that, correct? But now they want short-term and long-term outcomes. What was the impact of those services? And that's right, that's exactly right. Because just because they're giving money for you to, well, I remember, I see in the LA Times, or you see other places: give me $5 and I'll send a kid to camp, right? A kid gets to go to camp with $5, or nowadays it's $10 or $50. But okay, give me $10 and a kid can go to camp. Anyone seen those ads in the LA Times, right? Does that make you think, yeah, I've got to do that? Well, if it was my camp experience when I was a kid, please don't send me. I see that and I'm like, oh my God, I remember having the worst experience at camp. I was 10 years old and I hated going every day. And they only had outhouses. To this day, if I see an outhouse, I walk the other way. Just don't send me to camp. So unless you know that that camp is effective and is gonna make an impact, I'm not gonna send my $10. I wanna make sure that I know that camp is gonna be a positive experience for that kid. And yet we always used to stop at the units of service, the activities that we were providing, okay? Don't get me wrong, you still need to collect all that information, because it's critical to the logistics and operations of your organization. Then you get to move to the short-term outcomes and long-term outcomes. So short-term outcomes measure the extent to which we made a change or an impact on knowledge, attitudes, beliefs, behavior, and hopefully, at some point, condition. Condition is really the longer term. But I always say impact, change. Outcomes are the change that you should be measuring: in knowledge, attitudes, beliefs, behavior, and then condition. And we usually need to start with a change in knowledge, attitudes, and beliefs, because until we can impact those, you're not gonna get a change in behavior. It's gonna take some time to get that change in behavior: first knowledge, attitudes, and beliefs, then behavior, and then your condition. So as I was driving down here on the freeway, like I said, I was thinking, okay, I gotta text Jackie, tell her I'm running late. And I'm thinking, okay, I'm not supposed to be texting while I'm driving. I have the knowledge that that's against the law and that I could get a huge fine. I have a belief and an attitude that it's really important not to be doing that while driving. But I'm stopped in traffic; I can take the time to do this. So my behavior, well, Jackie, did I send you a text or an email? Yeah, I did. So I can't lie, I did. So my behavior didn't change even though I had all this knowledge, and these attitudes and beliefs, right? I still did what I wasn't supposed to do. So until I can get that knowledge and attitude change to turn into behavior change, I'm not gonna get to the behavior or the condition change, okay? It's kinda like, how many of you have ever had to sit through driver's ed? Not driver's ed, but when you got a ticket. Driver's, what is it called? Traffic school.
Traffic school. How many of you, raise your hand, be honest, how many of you had to do it more than once? Oh, I see some hands up. I've done that before, and I've seen people hold their hand up like, oh, I did it four times. I'm like, really? Oh my goodness. Miserable, right? All right, so the purpose of that meeting, their outcome, what they're trying to focus on, is to make you a better driver, right? To get you to be a better driver by putting you through that training. So that's their outcome. What is your outcome? What is it that you want? What is the result that you want? No more tickets. No more tickets. Not to have it show up on insurance, right? You're not going there because you want to learn how to be a better driver. You become more aware. You become more aware, yes. Your outcome is not to go there and be like, oh yes, I'm gonna learn how to be Mario Andretti. You're there just to keep it off your record so that you don't have to pay as much money. So you go and you sit there for an hour, and you learn, and you see some really gory, yucky images of people texting and getting hit or hitting somebody, and you think, okay, this was probably not a good idea. So that day when you're leaving the class, you drive in the slower lane, you stay at 65 miles an hour, you're paying attention; you go to the store, you buy a Bluetooth, and you think, okay, I won't do that again, and I'm driving along. All right, well, the next morning, and if it was like me this morning, running late because the kids couldn't get up when they needed to get up and I hit that alarm one too many times, where am I again? Back in the fast lane, trying to get to where I'm going, 75 miles an hour, texting Jackie saying I'm late. Unfortunately, I was only going five miles an hour, but running late. So it didn't change. I have the knowledge, attitude, and belief that traffic laws are important. If you've ever driven in Italy, you really know that traffic laws are important. Yeah, oh my gosh, it's scary. And so you know that these things are important, and you believe in them, but it didn't change your behavior any. And until we can get to the point of measuring whether or not we changed our clients' behavior, we really do not know whether or not we've had the impact we intended to have, okay? Now, hopefully with rewards and incentives you change their behavior, you know? Brush your teeth, or they'll come after you. If they don't brush their teeth, don't give them anesthesia the next time you drill their tooth. My son had a cavity the other day. Look, I got you all there. What's up with these possible rewards, yeah? I have social services in the room, right? I can't say that. I told my dentist to do that to my son. Now you can't. I can't say that. I was tempted, I was really tempted, because he was refusing to brush his teeth and cavities just kept showing up. I'm like, can you just drill once without that? He'll start brushing his teeth. Or give him the shot right where it really hurts, you know? Nowadays the shots don't hurt like they used to. It used to be punishment to go to the dentist. Now they get sugarless candy and they get a movie, yeah. I'm never gonna win that battle, am I? All right, that's short-term. And then long-term really measures the same things, but the longer-term condition of the client or the family and the community that we're trying to impact, okay?
So long term, short term: what's the time frame for that? What time frame should there be between the short term and the long term? Very good answer, good. I was trying to trick you there, very good. Exactly, very good. It really does depend. People ask me all the time, well, what is my short term, and how much longer should my long term be? Is it six months? Is that the standard? Is it 18 months? I say it depends on the services and the clients that you're providing services to. It will depend. We worked with a nonprofit that was providing teen pregnancy prevention programs to 12-year-old girls in middle school. No, yeah, it was middle school. In middle school. And they were measuring their long term when the girls graduated eighth grade. That was their long-term measure. I said, wait a minute, you're doing teen pregnancy prevention; aren't they 13 when they graduate from middle school? Yeah. I said, well, teen pregnancy prevention is till 19. If you really wanna know whether or not you're preventing teen pregnancy, you have to track them until they're 19 to really know whether or not you've made an impact. How can we do that? That's too expensive. We can't do that. We don't know where they, well, your middle school feeds into what high school? Oh, right over there. Oh, yeah. Okay. Do you think we can make a relationship with the high school to find out whether or not at least they get through 18 in high school? Yeah, we probably could do that. I think, yeah, we could probably do that. But you need to have that collaborative relationship with the high school to make that happen. And that's where, as we talked about earlier, program evaluation builds collaboration. They have the same outcome that you had: they wanna get those girls through high school without getting pregnant as well. So let's work together to help each other in measuring our effectiveness. So that's the long term for them. If you're working with a 12-year-old, you've got a long term of seven years if you're doing teen pregnancy prevention. If you're working with a homeless family and you're trying to get them into shelters and get them stabilized, that generally is usually 18 months to two years. So like you said, Melinda, right, or Melissa? Melinda. Like Melinda said, it could be that by the end of this workshop, I'm hoping to change your knowledge, attitudes, and beliefs. I don't know that I've changed any behaviors yet, but in the short term, we wanna change some of your knowledge, attitudes, and beliefs about evaluation. In the longer term, maybe a month from now, if we touch base with you again, hopefully you may have gone to a staff meeting, talked to somebody about what you learned today, and begun to think about how to implement your evaluations. Longer term than that, in a year's time, you would have designed and implemented an evaluation plan. Those would be the outcomes for this workshop, in addition to hooking you up with community research on health issues, okay? All right, everybody understand the distinctions, then, when we talk about short term and long term? Any questions about that? Because sometimes the program doesn't last as long as the data, especially in health. It's gonna take years before you see a huge change in a service or something. So when do you go to government statistics, or larger statistics, to really see the change?
For example, in breast cancer, let's say for mammography screening, it took almost 15 years before that statistic for mortality went down, I mean, compared to 15 years ago. So when do you know to use a larger statistic that you can't possibly gather yourself? You should be using it right from the beginning, honestly, as a benchmark of where you started and where the community started. And even though you might only be serving, like, 50 teens in your program to try and prevent teenage pregnancy, we would still look at the community indicator on teen births and teen pregnancies. Well, Orange County doesn't collect teen pregnancies, that's a whole other story; we only do teen births. But anyway, we won't go there. I work on the Conditions of Children report every year, and there's some data in there that just drives me crazy, and that's one of them. But anyway, onward, and don't get me on a tangent, okay, because I like going on tangents. To answer your question: you should be taking community-wide indicators as a benchmark for the work that you do right from the beginning. And then, like you said, you might not have an impact on that the first year of your programming, or the second year, but you continue watching it. And that's like the coalitions that have gotten together; that's why a lot of agencies form coalitions, because they realize, I'm only one agency serving 50 homeless people, I can't move that homeless number alone. And that's really, I think, where the impetus for the homeless coalition started about 15, 20 years ago: we've gotta come together as a coalition to move that community indicator number. And again, evaluation starts with: we've gotta work in collaboration, we've gotta work together, we've gotta be in partnership with other agencies that are serving our clients, whether they're in our system or out of our system, whether they're in health or in social services. We still need to make those connections in order to track that indicator. I mean, an indicator that had an immediate change right away was the law on 16-year-olds driving. They immediately saw an impact on the community data: teen death rates in cars and accidents dropped significantly. It was actually only supposed to be, like, a temporary law, to see if it made a difference. And immediately they saw the impact, so they decided that that was gonna be a long-term change, where kids couldn't have friends in the car and had to be 16 and a half, yeah. That brings up a question, because at some point, when you're looking for external validity, say you're starting a research study, and this could be on anything, homelessness or whatever, and then you end up doing this cross-coalition kind of thing, and then you're absolutely changing your data, I mean, halfway through. And that almost spoils the research at one point, because you've now changed the parameters of it. You've changed the perimeter, what's influencing it. I mean, it's a great idea, but if you have a study going, let's say, three to five years or something like that, and then halfway through you decide to build up the, like, for example, the teen project. Oh, to bring in new people, or?
You find out that you're not being effective just doing the thing in the middle school, and then you find out that having the Girl Scouts do a thing gets you a better data result. But you've actually sort of polluted your data. Oh, you've polluted it. Yeah. And so, generally, with those kinds of circumstances, does everybody hear what the situation was? She was saying that you started along a path of doing research and then realized that you needed to bring on other coalitions or collaboratives, and they're doing something else, and that contributes towards the data that you're collecting. So it changes the data a little bit. Basically what you have to do is say: this was a point in time, and now we're moving on to this point in time, and we're gonna measure from here forward. So your benchmark might have changed, potentially, if that's what you're saying. If it's gonna improve your research, always. If it's gonna improve the effectiveness of your program, always. If it's gonna improve the knowledge that you have about the effectiveness of your program, always. You can show the effect. Then you just have to talk about the limitations of what you were doing previously and why you changed it, and justify the change. Data can always be explained. And don't be afraid to explain bad data: why was it like that? What are the limitations, and where can we improve it in the future? It's a dynamic process. I never find evaluation to be that simple. I never find that we go into an organization, and this is where we start, and we end up exactly where we thought we were going to. Yeah. What about when the only data that you can get is actually lagging behind? When you start with your community-based "where are we now," but the most recent data you can get is from 2008, and we're in 2012. So you're starting your program now, but on data that's old. How do you measure the impact that you're having? Because when the 2009 data comes, you haven't really had any effect on it. Yeah, it is hard when the data isn't as current. But I'm finding that most data now lags a year, maybe two years at the most, because of the internet and the technology that we have now; you really can access pretty current data if you get to the right source of that data. Well, cancer registry data is about three years. About three years. Cancer is really hard. Yeah. So, anyone else have a thought on that, what you'd do? Yeah, April. Keep it simple? Yeah, and we'll talk about that for sure: in all of our own programs, you really need to just measure the benchmark from where your clients are when they start to where they end. But as an answer to how you measure against your own community, against community-wide data, that's really a lot harder if it does lag that far behind. But ultimately, like you said, you'll see a trend, like you were saying, a 15-year trend on mammograms; it didn't show up for a while that there was an impact. I mean, on the mortality data for breast cancer. Right. Populations that are underserved, or have disparities. But so now, because of that, you know where to target your resources and your services; if you're seeing that this one's improving and this one's not, then that's where you're gonna target your resources, hopefully, in a community. And set those priorities, right? Around the disparities.
So I don't think I really answered your question very well, but sometimes it's just a limitation. You have to say it: there's no easy answer for it. Right. Yes. Yeah. I'd recommend that, and I'd also try to go to your local agencies and local health care departments and get as much local data as possible, because, like you said, national cancer data is gonna be national, so it'll be delayed. It's California. If you went to UCI and worked with them, you'd get the most current data on Orange County specifically, for the most part. In terms of the cancer registry, that was three years behind. Still three years, even when they got the local data. And UCI controls the cancer registry for Orange County, so they had access to that information, but even trying to pull it together, it was still three years behind. Yeah. Kurt, do you have any thoughts on that? Thoughts on the problem of vital statistics being in arrears because the state has to process everything? I don't know what's going on with the cancer registry, why it's that far back, I guess. Yeah, that's really a surprise, because usually even the data that we get from the health care agency or social services to put into the Conditions report, I think our latest is about a year and a half back. Even our birth data is about a year and a half back, and everyone looks at that as pretty current. All right, so: a logic model. Now I'm trying to get into some of the practical ways of doing evaluation. And the thing that I start with, with all organizations, is having them set up a logic model. Anybody unfamiliar with what a logic model is? Everybody pretty familiar with that? That term actually came from business practices, and we in nonprofits have begun to take on many of the things that business was doing in order to make ourselves more effective in looking at how we serve our clients. So we use a logic model to visually represent what it is you're doing and how it all connects. It really is a roadmap from what your issue or need in the community is to what your outcomes are gonna be. And I really find that until you sit down and create one of these logic models, organizations don't have a good visual of what it is they're trying to impact. They don't have a good understanding of how their services connect to the need they're trying to address, based upon the resources they have and what they're trying to accomplish with their outcomes. It really is important, I think, for all organizations to have a logic model in place and have it communicated to their staff, so that everybody understands where they fit in the system towards outcomes. So the very first thing I ever do when I go in and work with an agency is start there: what's your logic model? What does it look like? And when you're writing grants, almost all grants now ask for logic models. They all are asking for them. There are many different types of logic models out there. This is just one that we've used over time. We call it the NPGRSSO, and if I say that with my Spanish accent, it sounds like "en progreso," which means "in progress," because all of this is always dynamic. It's always in progress, always moving forward, always improving upon itself. We always start with the needs.
And actually, I would love to change this, even though the common language is to use "needs." I'd like to call it just the issue. So I put an I in there for the issue. Because sometimes when you put this word, the need, people will say, oh, they need counseling. Oh, they need shelter. Oh, they need pregnancy prevention. Oh, they need dental care. That's the need. No, that's not the need. Those are the services. The need is that we have an increased number of children with dental cavities. The need is that we have an increased number of elderly falling into poverty. The need is that we have an increased number of children in child abuse. See where I'm going with this? That's the need. Not that they need dental care. Not that they need a shelter. Not that they need counseling. So remember, this is always the issue that we're focusing on, okay? What is the issue? Yet in all grants and all places, it's always called the need statement, right? Write your need statement or your problem statement. Well, I like to call it the issue instead, just so we understand that language. This is the issue that's trying to be addressed, and we have to start there. What is the issue you're trying to address? Where are the gaps in the services? An issue could be that there's a lack of homeless shelters. The issue could be that there's a lack of medical insurance for children to get dental care. So that's the issue we're trying to address, okay? So then we set our priorities and goals for our program based on that. And I really try to get organizations to narrow down their goals to about three to four per program at the most. You could come up with a litany of goals, a whole long list of things you wanna accomplish, but if you really look at it, many of those will fold into three or four broad goals. Goals really need to be broad statements of what it is you wanna accomplish. And everybody used to say, well, how is a goal different from an outcome? Because an outcome is what we wanna accomplish as well. When I first started doing this many years ago, the goal and the outcome looked almost the same. We wanna improve family functioning, that's our goal. Or we want to decrease teen pregnancy; well, that's the same as our outcome, right? So what we've come to realize is that the goal needs to stay broad, and the outcome gets written in an objective format so that it's very specific. We'll get to the difference between how we write goals and how we write outcomes in a minute, okay? So we start with goals. Then we also have to take into account our resources, meaning human and financial. Volunteers, and we've got volunteers here in the room, those are resources. Any in-kind resources, any funders, any collaborative partners: those are all the resources that come into play in providing these supports and services. And supports and services are exactly that: what is it that you're providing, services and support? So you're starting to see where my NPGRSSO comes from: Needs, Priorities and Goals, Resources, Supports and Services, and the O is your Outcomes, okay? And we have to start with collecting process outcomes, with understanding what the process outcomes are: how many people you're gonna serve, where you're gonna serve them, when you're gonna serve them, what the staff ratio is, and what the qualifications are of the staff who will be serving them. All that information gets collected here.
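To make the columns concrete, here is a minimal sketch, not from the workshop handout, of the NPGRSSO laid out as a simple data structure, using a hypothetical dental-health program; every field value is invented for illustration.

```python
# A sketch of the NPGRSSO logic model as a plain data structure. The keys
# mirror the columns described above: Needs/issue, Priorities and Goals,
# Resources, Supports and Services, and Outcomes (process, short, long).
logic_model = {
    "issue": "Increased number of low-income children with dental caries; "
             "lack of dental providers who accept Medi-Cal",
    "goals": [
        "To increase knowledge of the need for regular dental checkups "
        "among low-income families",
        "To decrease the number of dental caries among high-risk youth",
    ],
    "resources": ["clinic staff", "volunteer dentists", "school partners",
                  "county funding"],
    "supports_and_services": ["school-based screenings",
                              "oral-health education workshops"],
    "process_outcomes": ["number of children screened",
                         "number of workshops delivered"],
    "short_term_outcomes": ["improved brushing knowledge (pre/post test)"],
    "long_term_outcomes": ["fewer caries found at annual screening"],
}

for column, contents in logic_model.items():
    print(column, "->", contents)
```

The point of the structure is the roadmap she describes: each column should connect logically to the one before it, from the issue all the way through to outcomes.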
But that contributes then to our short-term outcomes and long-term outcomes: the change in, anyone? Knowledge, attitudes, beliefs, behaviors, and conditions. That's what this is about, and that's where we want to get to. And then, did I leave the nice slide in there, the one with the feedback loop? Maybe that's at the very end. Yes, it's at the very end, so I'll leave it there. All right, so once we're able to do that, then we begin to identify and consider what outcomes we want to collect. And I kinda wanna get through this so that you get a chance to do a little exercise here this morning. We've talked a little about most of this already, but these are things you need to consider when you're identifying your outcomes. Your outcomes need to be client-centered. Remember we talked about that? Your outcomes have to be about what change the client is gonna see, not what you as an agency are gonna do. You're measuring how clients' lives improve, and you're gonna track them beyond your own door. It takes time and a series of meetings. It takes effort. It takes a commitment from the board level down to the line staff. It asks the question: who else do you need in order to succeed? What in the community do you need in order to get to those outcomes? And it talks about fair measures of your work. Now this is an important one. What do you think that means, anybody? Realistic, yeah? Yes, along those lines. It means that as an organization, you need to determine what your outcomes should be, what level of success you should expect or anticipate from your clients, based on your knowledge of the client. Remember, in the beginning, the definition of evaluation included collecting information about the characteristics of the client? So if you're serving the harder-to-serve client, you as an organization need to determine what the fair measure of success is for that client. If I'm serving a family in a family resource center and they're coming in after a small glitch, they lost a job recently, there's no substance abuse in the family, you could probably get them a referral, maybe to some job retraining, and they can get back on track pretty quickly, hopefully. But if you have a family coming into your family resource center and they have substance abuse in the family, they have domestic violence in the family, they've got child protective services working with them, and they've lost their job and now they're homeless: that's a harder-to-serve client than the first one, right? So your outcomes for the first client are gonna be different than your expected outcomes for the second client. You have to understand the characteristics of your client so that you can communicate to your funder: I am serving the hardest-to-serve client here. I'm not creaming off the top, taking the easy clients and saying, well, I'm only gonna accept these clients into my program and refer out the harder ones, because if I refer out the harder ones, then my outcomes are gonna look better.
But if you as an organization generally take the harder-to-serve clients, you have to understand the characteristics of those clients, and the evidence behind why they're harder to serve, so that you can justify what a fair measure of your outcomes is. So for example, say I'm running a residential substance abuse treatment program, and I write to a funder and say, okay, I'm gonna have a 100% success rate in six months with my clients. Is that realistic? No, because we have evidence, a base of knowledge, that says it takes a person, on average, three times through treatment before they become sober. And if the average client needs about three episodes of treatment, then roughly one in three will succeed in any single episode, so you're really only gonna have maybe a 30% success rate. So if you're out tooting your own horn saying, I'm gonna have 100%, that's not a fair measure. And if a funder says to you, I want you to get 100% of your clients sober in six months, you're gonna have to say, sorry, I can't take your money, because I'm just not gonna make that. It's not realistic. A fair measure of my work would be 30% of my clients getting sober. You have to be the one driving the decision as to what the fair measure of your success is, okay? Of your impact. And ultimately, it often brings the mission of your organization into question: are the services you're providing, the effectiveness of your work, really feeding back into the mission of your organization? We worked with an organization that was providing food on Friday afternoons for the homeless and the needy. They had a food line, and the clients would come through, get food, and move on. And we asked them, well, are you tracking anything? Is there an impact to this? Oh yeah, they're not hungry for the weekend. It's hard for them to get food over the weekend, so we've given them a good meal. That's our outcome. Our only goal is to make sure they get a nutritious meal that day. I'm like, really, that's your only goal? And how often are they coming back? Every Friday? Are you seeing the same people? Yeah. Okay, so are you really making an impact? Is that really what you wanna be doing, or is there some longer-term impact you wanna have? Well, we really can't impact anything; we just wanna give them food for that night. I'm like, well, is that really all you wanna be doing? No. Do you wanna see them again the next week? No. Okay, so then what can you do to make an impact so that you aren't seeing the same people every week? Well, you know, it's the economy, it's the, well, what could you do? So they brought their board together, they brought their whole staff together, and started to look at this and discuss it, because we said, you've got a short-term outcome: you're feeding a family every Friday night. Okay, what's your long-term outcome? Trying to get them more stabilized. So they started to really look at that and think about their mission. And they changed the program so much that they hired a case manager and put that case manager at the end of the table every Friday night. It wasn't required, it wasn't mandatory to go see them, but they had somebody there with a resource table, a case manager, to begin to look at: okay, we can't just feed them. We need to look at why they keep coming back and how we might be able to impact that.
So it really made them look at the core mission of their organization and the program they were providing, and whether or not they needed to make some changes. Doing your evaluations appropriately can do that for your organization too. All right, and ultimately, sometimes you might have to reallocate funds to make sure there are enough resources to put the evaluation in place. Questions about that? Thoughts? So in another program where they were talking about logic models, it was recommended that you start with a SWOT analysis. Strengths, weaknesses, opportunities, and threats. So that you start with that and then start to plug things into your logic model. What do you think about that? If you're doing an organizational strategic plan, yeah, I would say start with a SWOT analysis. If that's where you're at within the organization, definitely; it never hurts to do a SWOT analysis, just to see where you are as an organization. But I think the logic model is actually more programmatically based than agency-wide, and I see the SWOT analysis generally as a broader strategic planning tool rather than specifically a program tool. But yeah, it never hurts to do one. Okay, so let's do this. In your handout, we have an NPGRSSO. Thanks to Jackie and her team for putting all this stuff together and making it so well organized and colorful, because that's the way I love it; it's easier to find. I have it in pink, pretty in pink today. You have a blank one, and you have one that's kind of filled out as an example. All right, again, remember, I like things logical and visual. So what we're gonna do now is an exercise, and spend some time on it so that I'm not always just chatting at you. You can work in pairs next to each other, you can pick your own program, you can do it on your own, however you wanna do that. But I want you to begin to quickly try and fill out a logic model for your program. Oh, well, yeah, you have to work today, you know? It's not just about me doing it. We're trying to get you some hands-on interaction with what it is we're doing. So I'm gonna have you complete a logic model, and this is, again, what's included in it; I think I kind of went over it already. So we've got the needs, the problems that you wanna address, and of course, coming from a social work perspective, we always wanna look at strengths too. So sometimes we wanna look at what assets, what strengths the clients bring, or what other assets there are in the community, like agencies you might wanna collaborate with that you could then put into your resource column at some point. So I'm always looking not just at what's wrong but also at what's good and working. Your assets column could also include other agencies doing this work that you might be able to copy or duplicate. Then you have your priorities and goals, okay? Always client-focused: your goals need to be client-focused, to improve, to enhance, to increase, to decrease whatever it is about the client, okay? Then you list your resources, and then the supports and services you're providing, like an after-school program, a Meals on Wheels program, a senior day center. Those kinds of things would be your supports and services.
And then, again, process outcomes would be the units of service: what is the participation rate, how much time you anticipate them receiving services. If you're in the medical model, they talk a lot about dosage hours: how many dosage hours is somebody gonna get? And then short-term and long-term outcomes: again, change in knowledge, attitudes, beliefs, behavior, and then, of course, condition on this side. Short-term would include the intermediate behaviors you're gonna see change, and then the long-term behavior changes come later on, okay? So, in front of you, this is your little exercise. You have a blank one; you don't wanna mess it up. I didn't. You're gonna have an extra, I'll give you mine. All right, it's about a quarter till 10; at 10 o'clock we're gonna come back and talk. You can take some time to get up and move around if you need to go to the restroom. Try and fill this out with your partner, or by yourselves, either way. This is a chance to do some work. Okay, so, you had a chance to try and fill in the logic model. What was that experience like for you? Easy, hard? Where did you get stuck? Everyone was getting stuck at the issue, I noticed. What is the issue? Say that again, Rita. And the priorities, and the goals. And the goals. Okay, so let's run through a couple of examples to help, and we can run through the example that I gave you as well. So Rita, what did you put as your issue? Well, we have it here. Okay. So, the goal is to educate and improve oral health? But is that the same thing as the issue? No, not really. So, the need is that there's an increased number of children with dental cavities, or dental caries, they call it now, right? Yes, I learned that a few years ago. I'm like, what's a dental carie? When did it change from cavity to caries? Really, okay. So, an increased number of kids, high-risk youth, low-income kids, and even other kids, with dental caries, correct? In addition, there's a lack of dental providers who take Medi-Cal, right? So that could be part of the issue, correct? That there are not enough dental providers that take Medi-Cal? Yeah, that would be one of the issues. So you line those issues up. Then, your goal, and it focuses on a client: what is your goal? To give them access to dental care. Okay, so to increase knowledge of the issue, of the need for regular dental checkups, potentially. And then, what would your other goal be? What's another ultimate goal? Isn't there another goal beyond that? Because remember: knowledge, attitudes, beliefs. So, you've increased their knowledge, their attitudes, their beliefs, and their access, but what is it that you're really trying to change? What's the goal? So, yeah: you're trying to decrease the number of children with dental caries. Ultimately, that's your goal, to decrease the number of dental caries among high-risk youth, okay? So, that goal focuses on the client, and you don't need to add "through outreach activities" in your goal, all the "throughs," because where's that listed? That's listed over here in supports and services, okay? Your goal just needs to be client-focused. It needs a verb that shows whether you're increasing, decreasing, improving, or enhancing, what it is you're changing, and for what population. Okay?
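A small sketch of that goal-statement formula follows; the helper function and the example values are purely illustrative, not from any handout.

```python
# Sketch of the goal-statement recipe just described: a direction verb,
# the change you want, and the target population. And never "to provide."
def goal_statement(verb, change, population):
    assert verb != "provide", "goals are never 'to provide' something"
    return f"To {verb} {change} among {population}."

print(goal_statement("decrease", "the number of dental caries",
                     "low-income, high-risk youth"))
print(goal_statement("improve", "emotional well-being",
                     "caregivers of Alzheimer's patients"))
```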
Descriptive verbs, again: enhance, improve, decrease, eliminate, prevent. Okay, so someone else, give me your goal. Go ahead. Okay, so you're trying to improve the safety in the community, it sounds like. To increase the community well-being, improve the community well-being, and that's a real broad one, right? That's really pretty broad. So you could break that down: to improve the health among children in the community, to improve the safety in the community neighborhoods, to decrease crime among youth in the neighborhood. You could actually make those a little more specific than as broad as you had it, okay? Others? Ours was to improve the early detection of cancer for adults over 50. To improve early detection of colorectal cancer. For adults over 50. And by improving the detection, what are you ultimately trying to do? Save lives. Okay, so you're gonna decrease the amount of colorectal cancer. Sometimes we stop at the intermediate goal, when really a goal is the broad, ambitious statement that in and of itself is not measurable. The way you write a goal statement, it's not measurable. It's not specific enough to be measurable. It is a broad, ambitious statement. Yes? Increasing the well-being of the caregiver. Increasing the well-being of caregivers of, who are they caregiving for? For the Alzheimer's patient. For Alzheimer's patients. So you've gotta add that little bit in there so I know a little more, because it could be a caregiver for a disabled child. It could be a caregiver for the elderly. It could be a caregiver for foster youth. A lot of different things. So make sure you define in that goal statement the population you're targeting. So: improve the emotional well-being of the caregiver. Decrease stress among caregivers of Alzheimer's patients. Yes? I have a question. In regards to what she had added about education and access to providers: what if one goal is to increase oral health education, and another is to increase access to dental providers? Those are two different goals. You can definitely have those too. Yeah. As long as it's client-focused: to increase the knowledge of whomever about whatever. That's your client. If you're trying to increase the knowledge of dentists about whatever it is, then the dentists are your clients in that regard. Okay. Others that wanna share theirs? Yes, Angela. Okay. So, by completing the cancer treatment plan, what's gonna happen? And then what's gonna happen? Okay, so really, ultimately, you're trying to increase the length of remission, or their health. So stay focused on the client there, on what it is you're trying to do for the client in the long term. Enhance their quality of life, improve their quality of life. To increase the length of time in remission for clients with cancer. Okay? Increase, decrease, what did you say? Increase disease-free survival. Okay. So you guys are using language, again, that I learn all the time; new language every time. Others wanna share theirs? Yes, Zahra. Okay. International freshmen. Okay. Belonging and preparedness among international freshmen. Okay, stop right there. Everybody like that? Say yes. Yeah, we like that one. It's specific, but I mean, you could go broader: to increase, to improve, to enhance the socialization of international students.
That's broader, but you went more specific, which is okay. You could leave it broader, because, like I said, remember you could come up with a whole long list of goals and they would all kind of go under an umbrella. And then when we get to our outcomes, the outcomes are gonna be really specific about what aspect of socialization we're gonna measure. Okay? Does that help? So now go on to the next one. What do we think? Yes? Yeah, I think it's "to improve" or "to enhance," maybe; "to facilitate" would be focused on the agency again. So what is it you're trying to ease about the transition? Is it the academic transition? The social transition? The emotional transition? Define what that transition is that you want to improve, what it is about the transition that's challenging and needs to be improved. Is that making sense? "Transition" alone is a little too vague as your goal. Okay? And remember, goals need to always stay client-focused. Always, ultimately, on the client. In my program design and proposal writing class, if I see anyone write a goal statement that says "to provide," they get a zero. Your goal is never to provide something. In providing that, you will be enhancing somebody's life, improving something, but providing is really not your goal, because if I'm going to provide you with counseling services, that's about me, the agency. That's not about what you're going to get out of the counseling, okay? So really focus your goals on the client. You have an opportunity to write what it is you're going to do in your supports and services column to meet that need, that goal, okay? Yes? But what you're defining... Correct, correct, very good, yes. We'll talk in a minute about client-level outcomes and agency-level outcomes, but there are also different clients within your program, and you're going to track those. Even mentoring programs: you have a mentoring program that you're trying to run in the back there. Are you planning on measuring the impact on the mentor? Because the mentors themselves are among your clients. They are, hopefully, going to benefit from the program as well, and you need to be measuring what the outcome is for them, being a part of this. Even all of you from agencies that have volunteers: you should be measuring the impact of the volunteer work on your volunteers, in order to know whether or not you're being effective at meeting their need, their issue, for being there. So yes, good point. You may have multiple levels of clients within the same program. A parenting education program has the parent outcomes and the child outcomes. Okay? Good. So we understand goals now. Resources, good with that? Everybody kind of understands that. Just list everybody you can think of that you might want to work with, partner with, get money from. Yes. They can be, yeah, sure. Now you've got to tap into them. And if you're working with preschool kids, the elderly could be a resource, designing those intergenerational programs with preschoolers. They just did that on campus, and it was a great experience. We have the gerontology learning center on campus at Cal State Fullerton; I never remember what the initials are. Does anybody know? CLE? Yeah, it's a great program.
I mean, it operates as its own thing on campus, and a huge number of older adults come there for a lot of different things. What's that? OLLI, thank you. That's it. They come there for OLLI, O-L-L-I, the Osher Lifelong Learning Institute. Yes. It's a great center on campus, right near my building. And just recently, in May, they took a group of them over to our children's center and had them reading to the children, because on campus we just built this beautiful children's center for the children of faculty and, mainly, students. And it's great. One of our faculty members initiated that, because she studies gerontology and she also has a kid over there, and she thought, what a great resource we have on campus here. So that happened for the first time in May, and now it's gonna happen on a monthly basis. It's really exciting to join those two. So you're right: there are a lot of different ways to collaborate if you're creative. Yeah. Everybody's your resource. All right. So we understand supports and services: basically the activities you're providing, the treatment you're providing, the intervention you're providing, all of that, right? Outcomes. How many of you got to this part? Okay. Someone read me your process outcome. Okay, Angela. Okay. So they were enrolled in your program. Okay. Very good. Well done, you're following along with the example, huh? You're doing great. The two easiest process outcomes to create are "enroll" and "complete," and right there, you're tracking your attrition, your dropouts, all of that. You're gonna have an estimated number that you're gonna enroll at the beginning of the program, and then you're gonna hope to minimize your attrition, while being realistic about how many complete. So whenever you're asked to write a process objective, or outcome, and I call them outcome objectives, you really want to start with how many you're gonna enroll and then how many you're gonna complete. Okay. And just because they completed it doesn't mean there was a change in knowledge, attitudes, beliefs, behavior, or condition. At some point we'll get this into a nice little rhyme. All right, so you need that, but the heart of evaluation is in your outcomes. So Angela, since you did so well, we'll let you keep going. Okay. By what date? By June 13th? No, every month. On a monthly basis. Okay. As measured by? By the... Okay, very good. I say, by the end of 2019, 10 of the 15 patients will be surviving or in remission. But to summarize it, we just say increased disease-free survival. As measured by? By survival. Okay. But you can do something like a self-report questionnaire, a case manager contact, a phone call, okay. So very good, Angela, you get another sucker. But I'm not gonna throw it across the room. Here, pass this back. She gets another sucker; she gets a prize. Okay, so that was easy. I have my "that was easy" button somewhere too. Zahra, very good question. That came up already, up here too. So let's talk about that, but let me move on for a sec, and don't let me forget that question, because I wanna get to it. All right. So, what you saw in the example that Angela read, and what you see in the example that I provide here, is that outcomes are written in what we call an objective format.
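Before getting to the template itself, here is a minimal sketch of the enroll/complete bookkeeping just described; the counts are invented. The point is that tracking both numbers gives you your attrition rate for free.

```python
# Sketch: the two easiest process outcomes, "enroll" and "complete",
# automatically give you your attrition (dropout) rate. Hypothetical counts.
enrolled = 50
completed = 42

dropped = enrolled - completed
attrition_rate = dropped / enrolled
print(f"Attrition: {attrition_rate:.0%} ({dropped} of {enrolled} dropped out)")
```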
Remember I told you outcomes sounded too much like goals, and so we had to really make a distinction there? The outcomes really need to give us detailed, specific information. That's right. Oh, you're right in front of the screen. Yeah. That's all right. Well, welcome. No worries. Hi, what's your name? Yolanda. Yolanda, I'm Michelle. Nice to meet you. Sorry for the interruption. It's all right. Not a problem, but you get a sucker too, just for making the effort to be here today. Thank you. Okay. So, our objectives are written... you've heard this before, right? Anybody not heard of SMART? Oh, you guys are so good. Let's keep moving then. Specific, measurable, attainable, realistic, and time-limited. So whenever you're writing an outcome, it needs to be specific, measurable, attainable, realistic, and time-limited. Notice that word realistic, okay? We'll get to that. So we came up with a template for writing outcome objectives. The first one, the process outcome objective, is a straight template, always. Just fill this in. Whenever you're writing grants, do it like this: By this date. This many people will participate in what? What do you expect them to participate in? As measured by what? Okay. Stick to that rhythm all the time and you won't go wrong. The thing people often forget is the "as measured by" piece, okay? So: they participated, they enrolled, they completed, they graduated, all those things. All right, got that? Okay, so that's the process objective. Now the outcome objective, short-term and long-term. Remember, outcomes are about a change in knowledge, attitudes, beliefs, behavior, and condition. Okay, you're gonna be so tired of hearing that. So now we add to the template so that it's a little more detailed about the impact. We're gonna say: by a certain date, a certain percent of the 50 participants. Now, what if I just put "75% of the participants" and left out this 50? What's missing there? Of what, right? You see that done all the time. Oh, we had 75% improve. 75% of how many? Of 10 kids? And actually, this 50 should be of those that complete. So, for example, I believe Angela had it right. Say in the beginning I write: by a certain date, 50 will enroll in the program, as measured by registration, whatever. Then the next process objective says: by a certain date, 25 of the 50 will complete the program. Wow, my completion rate is not very good. 25 of the 50 completed the program, okay? As measured by attendance records, all right? Now, when I do my outcomes, I have to do them out of those 25 that completed. My denominator is no longer the ones that enrolled, but the ones that completed, okay? Because you can't do an evaluation on those that didn't complete. So you're gonna be showing 75% of the 25 that completed the program, not of those that enrolled in the program. So my question here is: a lot of times we have attrition, and of course, depending on the population, we have higher attrition rates. If we have, let's say, an attrition rate of 10%, which is normal, then writing something like "75% of the participants" would take that into consideration; that's a normal attrition rate, no big deal.
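The fill-in-the-blank rhythm lends itself to a sketch like the one below; the dates, counts, and measures are all hypothetical, and the helper functions are just illustrative.

```python
# Sketch of the two objective templates. The fixed rhythm is:
# "By <date>, <who> will <what>, as measured by <how>."
def process_objective(date, count, activity, measure):
    return (f"By {date}, {count} participants will {activity}, "
            f"as measured by {measure}.")

def outcome_objective(date, pct, completers, change, measure):
    # Note the denominator: percent OF THOSE WHO COMPLETED, not enrolled.
    return (f"By {date}, {pct}% of the {completers} participants who "
            f"completed the program will {change}, as measured by {measure}.")

print(process_objective("June 30", 50, "enroll in the parenting program",
                        "registration records"))
print(process_objective("March 30", 25, "complete the program",
                        "attendance records"))
print(outcome_objective("March 30", 75, 25,
                        "improve their use of appropriate discipline techniques",
                        "a two-point gain on the pre/post test"))
```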
However, what happens if the normal attrition rate is 10% and yours is only 2%? That becomes part of what you're showing: not only do you have a program that's effective, but your attrition rate is so much lower. Yeah, that's back here. Remember I told you process outcomes are as important as short-term and long-term outcomes; they're just reported as such. You address that in your process outcomes: you were able to retain your participants in the program, your outreach efforts were great, you were able to recruit enough people, and the program obviously must be engaging because your attrition rate is so low that they're not dropping out. Or maybe they're just mandated and can't do anything about it, though your attrition rate won't necessarily be low even if they're all mandated. Your attrition rate depends, again, on the characteristics of your clients and the issue you're addressing. But when you do your evaluation report, you talk about that in the process section of your results, because you're tracking it: how many came, how many left, and how many were happy. You'll actually do a satisfaction survey with your clients, but that's not an outcome. That's not an evaluation. A lot of times agencies get that confused, and we'll talk about that in a minute too. Don't let me forget about that, and I haven't forgotten your question either. Okay, so back to outcomes. It's critical that you put 75% of the number that completed, because otherwise it's out of context. I don't know if it's 75% of 100 people or 75% of 1,000 people, and those are very different numbers. So you need to make sure it's in context, okay? And then: "will improve," okay? Now I've identified even more specifically that they will improve their use of appropriate discipline techniques, because I'm trying to improve parenting skills, right? That was one of my goals. So: they've improved their discipline techniques, as measured by an increase of two points on their pre- and post-test scores, okay? So here we go again: I need to state what my standard of improvement is, what I need to reach in order to be considered effective. If I eliminated that and just said they improved their discipline techniques as measured by a pre- and post-test, maybe they got a 69 out of 100 the first time, and at the end of the program they got a 70 out of 100. So they improved. I could count them, say I was effective, I did some great work, but they're still not doing well. So I've got to set a standard, for example that they had to score at least an 80 out of 100, in order for it to count as an improvement, as an impact. You're gonna set that standard of improvement, okay? It can't just be "they improved," because then we can all claim, hey, I was so effective, I improved everybody. But by how much did you improve them, right? Or: oh yeah, our substance abuse program was so effective, they were able to remain sober. Well, they only remained sober for an hour, but we're not gonna say anything about that. Did they remain sober for a month? For six months? So you have to have a measure in there of whatever it is you need to get to. Now, back to your question about arbitrariness.
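Here is a minimal sketch of applying that standard of improvement, assuming the two-point threshold from the template example; the pre/post scores are invented.

```python
# Sketch: a participant counts as "improved" only if their score rose by
# at least the stated threshold. A 69 -> 70 gain, as in the example above,
# does not clear a two-point bar. Scores are hypothetical.
pre_post_scores = [(69, 70), (60, 75), (55, 58), (72, 80), (65, 66)]
THRESHOLD = 2  # minimum gain that counts as a real improvement

improved = sum(1 for pre, post in pre_post_scores
               if post - pre >= THRESHOLD)
print(f"{improved} of {len(pre_post_scores)} completers improved "
      f"by at least {THRESHOLD} points "
      f"({improved / len(pre_post_scores):.0%})")
```

The same pattern works for an absolute standard (for example, requiring a post-test of at least 80 out of 100) instead of a gain.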
We did a grant for three years with an after-school program at Richmond Elementary School. It was a national grant focused on minority health, looking at family violence in the community. So we got a grant for fourth and fifth graders: an after-school program with intensive family support work, and we had about 15 college students there every day working with the kids, doing all kinds of things. When we wrote the grant, we had looked at the demographics: high incidence of crime in the neighborhood, high incidence of low-income families based on free and reduced lunch, and academic achievement that wasn't great. So we looked at all those scores and we picked Richmond Elementary School. And when we picked them, we said, okay, we're gonna get these fourth and fifth graders reading at grade level. Our objective said that we would get them to grade level in reading and math by the end of the nine months. That was our goal and outcome: by the end of nine months, they'd all be at least at grade level, okay? So we wrote that, we got the grant, and oh, it was all tremendous, we all had such high hopes. Well, our selection process for the kids in our program, because there was already an after-school program at the Richmond site, was that they had to be identified by the teachers and the principal as needing extra help. So they were the harder to serve: not just the regular kids in after-school, but the kids who had been identified as not doing well in school and really needing extra support. And we were only serving 40. So we brought them on board and did our first sets of pretests. We did the RAT, which is the reading achievement test, whatever the acronym is, we did a math test, and a few others, and these kids were way below grade level. Most of them were at a third-grade reading level, not even at fourth grade. So here we are, like, oh my gosh, we just killed ourselves. There's no way we're getting this fifth grader who reads at a third-grade level all the way up to fifth grade by the end of our nine months. That was a little more ambitious than we could pull off, right? So at the end, we wrote our report, showed our data, showed what improvement we did make, and then the next year we rewrote our objectives: that they would improve by so much on their RAT scores. So you kind of learn as you go what an appropriate measure of the work is. If you're a new program starting out, and a couple of you said that you're new programs, you have to make your best educated, research-based estimate of what that improvement should be for the population you're serving, and then you will more than likely adjust it within the next year or so. But it can't be a random, arbitrary number. It really needs to be as research-based as possible: what other people in the field have been doing in programs similar to yours, and what the expectations have been for that same population in work that's been done before. Does that help answer your question about the arbitrariness? I just wanna make sure. Anybody have any other thoughts, a better way? If you look at this example, and let's say you had planned one intervention, but you were able to meet the outcome with three interventions, would you just put that into the report, that you were able to do that?
You were able to meet it, but not the way you had planned it. Yeah, do you mean that you had outcomes that weren't expected, results or improvement in some area that wasn't expected? Like, this particular one where you had to increase by two points: let's say you had two programs for the adults, and then you realize, wow, we really need one more intervention, and with that you're able to achieve it. Okay, that would be exactly what you need to be doing. Would it be a process thing? No, that would be utilizing your evaluation to do program improvement. Okay, but before you end the program? Yes, even then. We talk about the difference between formative evaluation and summative evaluation. Formative evaluation is ongoing: it informs the program as you go, using the information you're gathering while you're doing it. Summative is at the end: you make those changes just at the very end. More often than not, agencies are doing formative evaluations with a summative at the end. So yes, any time you can make those program improvements as you're seeing things change, you should, and funders, or whoever is giving you the funds, will usually like to see you make those changes. Program improvement, yes. For example, we were saying "drug-free community" as our goal. But with all the things that are going on, the new dispensaries and all that stuff, we end up with an unrealistic goal, so we can change it to say "less." Probably, yeah. Because now it's something like 64 dispensaries in Garden Grove alone. So it will be hard for us to say "drug-free community" while the number of dispensaries is increasing. But different drugs, hopefully. Hopefully, but yeah, like you said, you usually can't eliminate something. You change it, right? Yeah, you improve it. And by how much you wanna improve it is what you're looking at in the community, and you take little steps. I mean, I don't think anybody 20, 30 years ago would have thought there would be as much of a decrease in smoking as there is right now. Nobody would have predicted that, right? It was on the rise, it kept going and going, and all of a sudden now you see a decrease. And I was down in San Clemente over the weekend, and I couldn't believe how many young people I saw smoking. I'm like, what are you guys doing? Really? You look, wow, you look horrible, first of all. But, sorry, are there any smokers here? I would have thought the education had gotten through. They're young, you know? Like, wow, where'd that come from? Sorry if I offended any smokers, but by now the education is out there about how detrimental it can be, so I'm just surprised to see it. So yeah, eliminating things is pretty tough in general. You can't eliminate teen pregnancy, I'm sorry, that's not gonna happen. You can hopefully decrease it, but not eliminate it. Yes, Ashley, you had a question? You said 50 parents participate in the program. Say you only have 25 parents participate: is it still 75% of that 25? Improved? Improved, yeah.
So that's what you would reflect in the report: in your process outcomes, we actually only had 25 participants complete. And you're saying, going forward, we'll only expect 25 to complete. Or, like in the grant we had, we called it Conéctate, because it was supposed to be connecting families and communities: that grant required us to have 40 kids at 240 dosage hours per year. It was a medical model, dosage hours. So in order to be funded again, in order to have an official evaluation, you had to have 40 kids at 240 hours. We did everything to incentivize the kids to get them to those 240 hours over the nine months, but we enrolled 55 kids, knowing that out of those 55 we would probably only get 40 to complete. So depending on what the funder requires and what you're really trying to shoot for, you have to understand that you're gonna have attrition. Depending on how many completers will make your program viable or appropriate for the funder, or for your agency, whatever the design is, you've got to think about, well, how many do I actually have to start with to get there? And we only had about five drop out completely, but we ended up with those 40 at 240 hours. In the last month, when some kids were close to not making it, we kept adding Saturday programming so they could come and get more hours in if they hadn't made it. And we gave huge incentives: if you came so many times throughout the semester, every month as you were coming you got little incentives and got to do things. And the big incentive was that if you got to the 240, you got to go on a trip. At that time we were gonna go to SeaWorld, but then we realized, and here's another design thing, unfortunately: we were gonna bus all the kids to SeaWorld, but there was a checkpoint on the way south. So we couldn't; we had to go north instead of south. We ended up at Universal Studios instead. It was really interesting that first year to find out why the families weren't coming. Wow, we were so ignorant not to realize that that was the barrier for them going down there. Then we thought, well, we'll put them on the train, but evidently the threat is there on the train too, that they would still be ID'd. So we went north from then on: Universal Studios, Disneyland. That was the incentive for the next two years. All right, so is everybody good on this? Let's keep moving. And again, if you need to stretch or get coffee, please feel free to do that while we're going. Does anybody feel like we need a break, or should we just keep moving through? Keep moving. "Ever forward" is my expression lately. Okay, so again, I left you the templates there in your PowerPoint so you can see them, and it really is just filling in the blanks. We're really good at filling in blanks. It doesn't have to be more complicated than that. Keep it straightforward and simple; it tells the story perfectly about what you wanna do and how you're gonna measure it, okay? Then you say, well, how do we measure it? How do we go about measuring it? What's that piece? How do we know how to do that? So, of course, I like little charts and graphs again.
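One aside on the enrollment story above: working backwards from a required number of completers to an enrollment target is simple arithmetic. A sketch, assuming a hypothetical 25% expected attrition (the 40-completer requirement echoes the grant example; the attrition estimate is invented):

```python
import math

# Sketch: to end with a required number of completers, divide by the
# expected retention rate and round up. Hypothetical attrition estimate.
required_completers = 40
expected_attrition = 0.25   # e.g., you expect to lose about a quarter

to_enroll = math.ceil(required_completers / (1 - expected_attrition))
print(f"Enroll at least {to_enroll} to end with {required_completers}")
# 40 / 0.75 -> enroll at least 54 (they enrolled 55)
```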
So, we're coming now to what's called the measurement framework, which actually, can I borrow this a second? Of course, because I gave mine up. It is the continuation of this logic model. So we're looking for the blue measurement framework handout, and if you see this, the short-term outcomes and long-term outcomes fold over the top here. See that? If I had a long piece of paper, they'd connect, but it doesn't work that way, so they just overlap there at the end. Once you've identified your outcomes, you're moving toward: what are the indicators, okay? What's the data source? What's the type and the collection methodology? This is a better description of what you're actually doing to measure those outcomes. It takes it that next step. Okay, everybody see that? So, let's define this measurement framework and walk through it. So, here's my fancy stuff that I was looking for earlier. Now, from the short-term and long-term outcomes, and I took out the process column here just for room, we move to the measurement framework: how are you gonna measure these outcomes? The things you have to take into consideration are the indicators, the data source, the collection methodology, data and database management, and analyzing and reporting the information. And then, oh, see, that's still not the right slide. We'll get there next; eventually we get to a feedback loop. So the next step in this logic model is the measurement side, okay? And this is the part where a lot of people start to have more challenges. How are we gonna measure it? How are we gonna make it doable, feasible, realistic for us to implement an evaluation in the little small agency that we are? So, first of all, let's start with the definition of an indicator, because I think that oftentimes people are confused about what an indicator is. It truly is just a data element: an observable, measurable data element that you can collect, okay? It is the number, the score on the test. It's the positive or negative on a drug test. It's the number of dental caries they have. It's the score they gave themselves on the social-emotional rating scale, okay? It is the data. It is not whether the data improved or decreased. That's the outcome. So it's the data at one point in time here, and at another point in time here, and the outcome is the difference between them, okay? A lot of times people forget that the indicator is actually just the one data element. It's not the improvement in the data; it's just that score, at that one point in time. Rita? My question is, we're trying to measure the improvement in the data, but each year you have a new cohort, like new kindergartners coming in, so they're coming into new programs. How do you figure out how to measure this? So, the question was: if I have a new cohort of kids, or a new cohort of elderly, or a new cohort of groups coming in, how do I measure the improvement? As April said before, what we're measuring in our evaluation is the clients that you're working with at that point in time. So, you have to follow that group. You have to follow that group.
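Before following the kindergarten example, here is a minimal sketch of one row of a measurement framework and of the indicator-versus-outcome distinction; all contents are invented for illustration.

```python
# Sketch of one measurement framework row: the outcome it serves, the
# indicator (the single observable data element), the data source, and
# the collection methodology. Everything here is hypothetical.
framework_row = {
    "outcome": "Improved brushing habits among kindergartners",
    "indicator": "self-reported times brushing per week",  # the raw number
    "data_source": "child/parent questionnaire",
    "collection_methodology": "administered at intake and again at 9 months",
}

# The indicator is just the value at one point in time; the OUTCOME is the
# change between two collections of that same indicator.
baseline, followup = 3, 9   # times per week, invented values
print(f"Change in indicator: {followup - baseline:+d} times per week")
```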
So, if you're working with your kindergartners, you give a pre-test when they come in, you work with them all through kindergarten, and you do a post-test at the end to see if they're improving the amount of time that they're brushing. That would be an indicator: number of minutes brushing, number of times per week brushing. So you ask them when they first come in, how often do you brush your teeth? Once a week, maybe? Hopefully not. They're kids. They're kids, yeah. And my mom may or may not send me up to brush my teeth; I caught mine the other day turning on the water and pretending. I go up there and I'm like, okay, this toothbrush is dry. Fortunately, now he's getting into pre-adolescence, so he's got mouthwash and all kinds of stuff going on, but whatever. That's another problem we're having to deal with. So, you measure that, okay? And then after the nine months, you measure it again, ask the same questions, and compare whether there's an improvement between those two scores, okay? But in your case, again, if you really want long-term impact, you're gonna have to measure them beyond your doors. Do they continue to brush their teeth and keep it up after they stop receiving services, okay? We all need to track beyond our doors. It's not enough to say that while they're receiving services with us, they're doing well. We need to know that when they stop receiving services from us, they're still maintaining and doing well, okay? I always like to use the example that when I go to Weight Watchers, I do really well. Unfortunately, I haven't been to Weight Watchers in two years, so, you know, where that's gone. They haven't tracked me beyond their doors, because they would have bad outcomes right now. But they need to. It's like counseling services: if we're going to counseling on a regular basis, we're feeling better, we're doing well. But as soon as I stop going to counseling, am I feeling bad? Do I slip back? So we need to make sure clients are maintaining beyond your doors. If you're doing a transitional program for international students for a year, and they're staying for three, but they're only in your program for one, how do they do the rest of the three years to graduation without getting your mentoring services? Was the dosage enough to keep them going? So you need to track them beyond your doors as well. Now, that can get complicated and hard, and everybody says, well, how do I do that? Your kindergartners are going on to some school, so you've got to build a relationship with the next agency that you're handing off to. That's the whole collaborative piece: networks, leverage, collaborative partnerships, because you've got to establish that relationship with the next step. Most of our clients don't just drop off and receive services from nobody; there's usually a handoff to somewhere or somebody. Or you rely on county data and community data to help you get that information. Yes, April, you had a question? What about privacy rights and everything? The school districts often aren't going to share data, so what happens in that situation?
Well, with school districts, first, I've tried to teach organizations: never, ever, please don't ever write in your grants that you're going to track report cards. Oh my God, please don't write report cards, because report cards are the worst thing to have to collect and track, especially if you're doing it across different schools and across different districts; for elementary school, they're totally different levels. Oh my gosh, I went through a nightmare with an organization, and they still insisted: well, we wrote it in our grant, and we're going to write it again because our board likes to hear it. And I'm like, well, teach your board that it's not feasible for you to do. For example, in third grade and below, and fourth grade maybe they do letter grades, they don't get A's, B's, C's, D's, and F's. They get ones, twos, and threes, or fives. Some schools do a five-point scale and some schools do a three-point scale. Some schools break math down into 10 categories and some break math down into three categories. So if you're trying to compare a five-point scale against a three-point scale and then aggregate that across your program, it doesn't work. Oh my gosh, it was a nightmare. So don't do report cards for the lower grades. You can do it for the higher grades with GPAs, but don't do it at the low elementary level. But how did I get off track on that? What was the question? Beyond your doors. Beyond your doors, oh, privacy. Schools. What we've found, though, is that if a parent signs a release of information, and you give it to the school, and you have staff willing to go get the records from the school, the schools are generally amenable to that. Schools just don't like to do it themselves, because they don't have the staffing to do it. And they do worry about confidentiality, but with a release of information from the parents, they can release it; it's up to the parents to decide who can access that information. Where you get resistance is when you're asking the school staff to pull it themselves. That's really hard. Some districts will allow your staff to come in and do it; some won't, though, because they say, okay, well, you're shuffling through everybody else's records while you're trying to find your kid's records, so they won't let you do that. But there's a lot of good legal support now, and maybe, Sean, you're from the legal department? Legal support showing that confidentiality agreements and releases of information will hold up in court, and that gets you past a lot of these issues that we used to think were a problem. Correct; Sean's shaking his head. Good. Yes? We're talking about those short-term and long-term outcomes: how important do you typically see it being to have a control group? Somebody recently said to me... Oh, good question. Somebody recently said to me, well, we need to have a control group, which is so hard to do in social services. Yeah, really good question. I'd have to choose a group of families not to serve. Right, no, very good question. Actually, in program evaluation, I like to say program evaluation is different from research. Now, my faculty colleagues who are here, and others who go for that empirical research, will say yes: at each and every opportunity you can do a control group, do it. But in human services, I usually say it's pretty unethical to do a control group.
You're not going to randomly decide that whoever comes in the door this day doesn't get treatment and this one does. You're not going to be able to do that. Generally, the only time a control group works is if there's a huge, long waiting list for your program: those people on the waiting list would not have received services anyway, and they provide you with a group of participants demographically very similar to those in your program. I have a long story that I can't tell, because I don't have time today, about how I battled with the National Head Start Association about that particular issue. We didn't win, but that's a whole nother story. It was because they determined that you could take the kids from a waiting list. So yes, if you have a long waiting list, you could potentially do that. But our situation with Head Start was that the waiting list usually gets into the program by November, December, or January, and they were saying those kids would never be allowed into the program for the whole year. And we were saying, but even if these kids only get something from December to June, it's an impact that we don't want to deny them. They said, well, you'll have to. But anyway, that's another long story.

So control groups are generally not used in human services, although there can be studies and designs that do it. On our Connect2Take grant, the feds actually required us to do a comparison group study, which is a really good way of getting at something in between a control group and nothing. We identified a sister elementary school — I think it's called Crestia, I can't remember now — right next door to Richmond, literally not even a mile away, with kids of the same demographic. We recruited them to take our pre-tests and then our post-tests without having gotten our intervention, and then did a comparison between our kids and their kids. We incentivized them to be in it by giving them Target cards and a raffle for a bicycle, things like that. So you can do comparison groups, as long as the demographics of the comparison group are as close as possible to yours. But like you said, unless that's written into the plan and the funding, it can be much more expensive and a lot more effort on your part. Generally, if you are looking at your population specifically and see an improvement within your population, funders are going to accept that as a valid, accurate evaluation. Granted, it's beautiful if you can go that extra mile and get the comparison group or a control group, but it's not very likely that it's going to be able to be done. Kurt, would you want to add to that at all? Kurt did a great study with substance-exposed infants with a comparison group — you didn't really have a control group though, did you? Yeah. So it's usually not something that is expected, because of that ethical issue. Okay, good questions. All right, next.

Okay, so: things to consider when you are identifying your indicators. What's the availability of the data? How easily accessible is it? Who's going to be collecting it? Can it be tracked over time — is there a trend line you can use? Is it comparable to other data from other communities? That kind of helps as well.
Like, if you're wanting to look at cancer rates in your community compared to other communities, can you do that? Can you make those comparisons? But when you do those comparisons, you've got to make sure you're comparing like communities. I really get frustrated when they say California schools are 49th in the nation, or whatever number we are right now, and I say, well, excuse me, Montana has 500,000 people living in it and Orange County has 500,000 kids in its public school system. You want to compare those two states? It just doesn't make sense to me. So you've got to have comparable locations when you're doing that kind of comparability and ranking. If you're going to rank anybody, anywhere, you've got to make sure it's comparable, okay?

"Indicator skewed by the denominator" — that's kind of a funny way of saying that sometimes there are other influences within the collection of your data that might be skewing your results. For example — we always like to use this one — we were looking at birth rates among teens in different communities, and all of a sudden there seemed to be a really large influx of teens who had given birth in Anaheim. We're like, wow, what went on? Well, they put in a teen pregnancy shelter there. That'll do it. So you've got 12 girls in a house with babies, and that just skewed your results. So you've got to make sure you're looking at those kinds of influences within your data that might impact it.

I didn't include the additional slide, because it's just a handout in your packets; I didn't think you really needed to see all the additional points. It's basically things we've already talked about. Some of it is in the realm of data being political: what indicator are you going to track for your community-wide indicator? Data can definitely be taken out of context; it can be in a political context. So you just have to be cognizant of what you're going to be tracking over time. Yes? As you say, can California even be compared to other states, given the quantity of students? And then there's the STAR test, which really just gets kids prepared for a test, and that can get read as an outcome of what they did, for example. Yeah — I mean, I shouldn't say you shouldn't rank. Obviously we rank all the time: we rank our counties, we rank our cities, we rank things all the time. But if you are ranking, look at the comparable nature of the demographics behind it. If I'm comparing Orange County, I want to do it among counties in California — and I'm not going to compare Orange County to Inyo County or Siskiyou County. I'm going to compare Orange County to San Diego County or San Bernardino or LA County, because we have more demographic similarities to those counties. That's what you want to compare to. Negative press, yeah. And people will say, where is California in the nation in education — 42nd, or some darn thing? Okay, but what are they basing that on? You have to look at what they base it on when they cite that ranking. You have to look deeper to determine: is it based on the STAR testing? Is it based on the number of kids graduating from high school? What is it based upon? Then you can do a better analysis of what that ranking is really saying. So it's not always the best to do rankings.
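The Anaheim shelter example can be made concrete with a little arithmetic. All the population and birth counts below are made up; the point is only how much a small facility can move a community-level rate.

```python
# Invented baseline numbers for one community's teen birth rate.
teen_girls = 1500      # assumed teen female population
births = 15            # assumed births to teens in a year

rate_before = births / teen_girls * 1000

# A teen pregnancy shelter opens, adding 12 residents who have all
# given birth: the numerator jumps far more than the denominator.
rate_after = (births + 12) / (teen_girls + 12) * 1000

print(f"Before the shelter: {rate_before:.1f} births per 1,000 teen girls")
print(f"After the shelter:  {rate_after:.1f} births per 1,000 teen girls")
```

Nothing about local teens' behavior changed, but the indicator nearly doubles — which is why you check what's sitting inside the numerator and the denominator before interpreting a jump.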
Okay, onward. All right, so you have the in-progress worksheet in front of you. What time is it — 11? What's next on our agenda? Okay, I think that's pretty close to where we need to be. We're on schedule, which is really amazing. For me, that's really amazing. Okay, so I'm going to give you about 10 minutes. You can take a break during those 10 minutes, and/or you can complete your measurement framework. In other words, I want you to take just one of your outcomes. You can see in your chart — it's a little different, in that sometimes we have one outcome but multiple indicators for that outcome. That's why there are multiple lines going across in your chart. So you're going to take your outcome, and you're going to tell us what your indicator is going to be, okay?

Then you're going to say what your data source is. The data source is the entity from which you're going to get the data. So if you're giving a pre- and post-test from your own agency, the entity is your agency. If you're collecting drug-testing results, it's probably the healthcare agency, wherever they do the drug testing. If you're collecting crime rates in the community, your entity is going to be the police department. If you're collecting academic achievement scores, the data source is going to be the school district. You see where I'm going with that? Where's the source? If you're doing mostly pre- and post-tests within your own agency, the source is going to be your own agency. Okay, that's the data source: the entity from which you get the data.

Data type then describes: what type of data is this? Is it qualitative or quantitative? Is it archival data? Is it a self-report questionnaire? Is it an observational checklist? How are we getting that data — is it a questionnaire? Are we doing focus groups? What are we going to do to collect that indicator? Archival data means public records, public data, things like that. Qualitative means open-ended, anecdotal, story-type data — interviews, for instance. Quantitative means it's a number, okay? And sometimes on your self-reports you're going to have both, qualitative and quantitative.

All right, and then finally, collection methodology. This is important because it tells us when it's being done, who's doing it, how they're doing it, and how often they're doing it. It's not the why; it's just the mechanism for it to be done. And remember we talked about the systematic collection of information? This needs to describe that systematic collection: who's doing it, when they're doing it, how often they're doing it, what training they're going to get to do it. Is it staff doing it the first day people come in and the last day? Is it the first day and 10 weeks later? Where is it going to be done — at the office, or in their home? Describe all aspects of that data collection methodology so it's very clear. And usually, like I said, you've really got to train your staff on this as well, okay? So I'm going to give you 10 minutes to fill this out for just one outcome. Just pick one of your outcomes and go across the chart. Okay? And you get a little break in the meantime.
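While you work, here is one hypothetical row of the measurement framework written out as a small Python record, just to show all the pieces side by side. The outcome, indicator, and field values are examples, not a template you have to follow.

```python
# One illustrative measurement-framework row for a single outcome.
framework_row = {
    "outcome": "Kindergarteners brush their teeth more often",
    "indicator": "Number of times per week the child brushes",
    "data_source": "Our own agency (we administer the pre/post survey)",
    "data_type": "Quantitative self-report questionnaire",
    "collection_methodology": (
        "Trained program staff administer the survey at the first "
        "session and again nine months later at the final session, "
        "on site at the school."
    ),
}

for field, value in framework_row.items():
    print(f"{field:>24}: {value}")
```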
Okay, so let's get back together and talk about your measurement frameworks. We've got about 40 minutes left, so I'm going to use all the time up; don't think you're going to get out early. So how was the process of doing this? How did that work for you? Was it fun? Wow — earlier today I heard everybody say they hated evaluation, and now I have someone saying it's fun. That's a good change. That's my outcome right there. All right, so what else? Anybody else — what was the experience like? Yes, exactly. Very good. That's exactly right: the first part was about what it is you want to measure, and this one is about how you're going to measure it. This takes it that step further into, well, how are we going to now implement and measure these? Anybody else? Okay, do we want to go through somebody's example? Anybody want to share? No one's brave. Remember, I give out — oh well, they're all gone now. I turned to look for my suckers and they're all gone. Never mind. Okay, okay.

And let me stop you right there, because that's a perfect example. If you just listen to the beginning of that, it sounds like counting the number of times they go to a socialization activity. Doesn't that sound like a process objective? How often they're going to a socialization activity sounds like process. But with the knowledge that she has harder-to-serve, isolated clients, going to an activity is in itself a change in what? Behavior. Exactly. So there are times when somebody showing up to something, somebody participating in something, is a change in behavior. And you need to know the characteristics of your clients to know whether or not that's true.

For example, we saw this with our Connect2Take kids. There was a mom we kept trying and trying to connect with. Her son kept getting into a little bit of trouble, and our policy had been that after three times we had to excuse him from the program until the parent came in and wrote a behavioral contract with us. We could not get that parent to come. We called, we sent notes home, we would try to stop her — and he would be on the periphery watching us with all the kids, wanting to be a part of it; he was obviously one of the kids that needed to be a part of our program. But based on the behavioral plan and all that, he couldn't be there. We tried everything we could, and not until about the fifth time did she actually show up. So that showing up is a change in behavior right there.

So, not to stop you, but that's exactly right: the fact that they've come to a socialization activity is a short-term outcome. But now the next step — just because they came to the socialization, what do you want for them? To improve their quality of life. Okay, yes, very good. That sounds exactly right. And so your data element is the score on that survey — what was the name of it again? Okay, the World Health Organization Quality of Life instrument; the score on that. And the data source is going to be your own agency. Yes. And the data type is? Qualitative and quantitative? Yes, there's room for both. Is it a self-report, or is it administered by staff? It's self-report. Okay.
Okay, so it's still self-report then — it's them telling you what they're feeling. Okay, good. And it's a questionnaire, a self-report questionnaire. And the collection methodology — you said who's doing it and how often. Very good. And in their home or in the office? Okay, that's part of the socialization, so it's either in home or in office. Sounds right, doesn't it? Sounds great. So you've got it all laid out, and we all have a very good understanding of what it is she's measuring.

Now, this indicator and this data element can come from a score on a standardized instrument that's already being used in the community — like the WRAT test I mentioned, which is often used for academic achievement. You might have a standardized depression index, a stress index, things like that you can utilize that have been standardized. If you can use those, those are great. But what's the challenge in using them? Anybody tried? No one's used a standardized test? Like assessment tools out there that you might be using to assess your clients' well-being at the beginning — anybody using any of those? That one drops her score, so it looks like they're getting worse — but in essence they're getting better; they're getting more normal. Yeah, interesting. So the instrument isn't even geared for that kind of change. It's like, weird.

Well, if you go online — you can go to assessments.com — you can find a ton of instruments. There's a plethora of instruments out there that could be utilized for assessment and/or for evaluation. Is anyone getting hot in here, or is it just me right now? Jackie? It was kind of cold before — it goes up and down. Thanks, Jackie. I thought maybe it was just my hot flash. That's another topic for another day. All right, so anyway: you want to utilize standardized instruments if you can. But that's not always feasible, because they're not cheap. If you look online and find an instrument that measures depression, or stress, or any of those social well-being areas — parent conflict, parenting skills, any of those things — they're not cheap, they're usually hard to get, and they're a little more complicated to administer. So I don't find nonprofit agencies using them all that much.

So what we've done a lot with agencies is help them design their own instrument, their own pre/post-test questionnaires. And that's, again, a whole nother training we've done in the community: how to develop your own pre/post-test questionnaire. There really is a skill to it — there's a need for it to be standardized and accurate, and you have to look at its validity and reliability and all of that. But don't let that keep you from designing one and using one. Agencies often don't realize that at the time of assessment, you already have this whole intake form full of questions you're asking clients. If you added five questions to it, right from the beginning, that have to do with your outcomes, you've already built evaluation into a part of your system, your structure, your organizational practices — you've just added to it very slightly, instead of creating this huge, big add-on evaluation.
And then you ask those same five questions again at the end. You now have two points in time, two data elements you can compare to determine whether or not you've improved things with your programming.

The Children and Families Commission did this really well. How many of you have received Children and Families Commission funds? You have? So the Children and Families Commission determined early on, when they wrote their proposal in 1998, that what they wanted to do was make kids ready for school. That was their ultimate outcome: getting kids ready for school. And one of the measures they determined was an effective indicator of whether kids were ready for school was whether they were read to 15 minutes a day by their parents — 15 minutes a day of reading would make a difference in having a kid ready for school. So every single agency that ever received funding from the Children and Families Commission had to ask that question on their intake forms, correct? You still do, yeah. No matter if you're doing dental work or autism interventions, "Do you read to your kid for 15 minutes a day?" went on every single intake form for every single agency that was ever funded, and they've used that to collect data toward measuring their outcomes — whether or not the Prop 10 money made a difference.

So think about your intake forms: what could I add to my intake form that would be appropriate questions I could then measure later as an indicator of my success, an indicator of impact, an indicator of whether or not we met our outcomes with our clients? And just ask it again at exit, when they leave — or at three months, six months, nine months, however often you determine you need to ask those questions. And a lot of people don't realize that their assessment tools are already asking many of the questions they'd want to track for improvement at the end, and they're not using them that way. They're just using them for assessment, for determining what treatment or intervention a client gets, but not using them again later to determine whether they had that impact or change — because they think of assessment as a beginning; they don't think of the end as well. So think about ways you can incorporate your evaluation into the system and structure you already have in place within your organization, instead of making evaluation this big add-on to everything else you're doing. I think that's where agencies get hung up sometimes: it feels like too much of a burden for staff. Okay?
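One way to picture that: the same outcome questions live on both the intake and exit forms, keyed identically, so the two points in time line up later without any extra paperwork. A minimal sketch, with invented questions and client data:

```python
# Hypothetical outcome questions embedded in an existing intake form
# and repeated word-for-word on the exit form.
OUTCOME_QUESTIONS = [
    "How many minutes a day do you read to your child?",
    "How many times a week does your child brush their teeth?",
]

# Each client's answers are stored under the same question text at
# both points in time, so intake and exit compare directly.
client = {
    "client_id": "c042",
    "intake": {OUTCOME_QUESTIONS[0]: 5, OUTCOME_QUESTIONS[1]: 3},
    "exit": {OUTCOME_QUESTIONS[0]: 15, OUTCOME_QUESTIONS[1]: 10},
}

for q in OUTCOME_QUESTIONS:
    change = client["exit"][q] - client["intake"][q]
    print(f"{q} change: {change:+d}")
```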
Questions on how to complete this? All right, so: interpreting the data. Now that we've got the data collected, decided what software we're going to use, and got it all entered — I just had a question about this: okay, I've got all these people that I did an assessment on three months ago, and then I did an assessment on them again. Do I have to match them up, pre-test to post-test, to be able to compare and get appropriate results? And my answer is yes, you really want to do that. You can aggregate all the pre-test data, aggregate all the post-test data, and get results, but it's not going to give you the analysis you really need to see whether there's been an improvement.

So the best thing to do is give each individual client an ID number, put in their pre-test result, put in their post-test result, and then run a t-test on it to see whether the change was statistically significant, okay? So you've got your data entered into a spreadsheet and you run the statistical analysis on it. But sometimes — and this is what we call programmatic significance — you might not hit statistical significance, that p less than 0.05 on a t-test (if I say that one more time I'm going to stutter all over it). Even though you didn't get to that level, you might have seen enough improvement in your clients to say, wow, we did make a change, we did make a difference. Statistically you didn't, maybe because you didn't have a big enough sample size, or because the validity or reliability of your questions wasn't right. You've got to look at that, too, when you go back to analyze why you didn't get statistical significance. Sometimes just enough of an improvement can say, wow, we are making a difference — now how can we do it even better? So don't let missing that statistical value make you say we're not effective, we're not doing well.

Okay — now I've been talking too long for sure, and I'm going to have to stop soon, right? Good. All right: putting it in context is really important. Make sure you show the total number it represents and put it in percentages; make it relative to the results. It's not always good to just say, well, the score was 12 out of 50. Let's do a percent, let's do a relative frequency on that. It really makes it easier for people to understand and interpret when you put it in context like that.

Disaggregating the data and disproportionate outcomes — what do those two big D's mean? What does disaggregating mean? Anybody? Comparing California to somewhere else? That could be disproportionate outcomes, but disaggregating means pulling it apart: aggregate means to put together, disaggregate means to pull apart. So sorting it by ethnicity, by gender, by age — by geographic region, potentially, if you're serving multiple regions. You want to disaggregate the numbers so you can really see where you had the most impact, and where you might need to put more effort into a particular group that you didn't have an impact on. Disproportionate outcomes means looking at whether one group has a higher rate of success than another — whether, when you look at your outcomes, one group is represented more or less than they normally would be in the general population of who you served. Is that making sense? Kinda? I've got a "not so sure."

So say, for example, 10% of the population was African-American, and 10% of the population in my program was African-American. But when I saw improvements in my program, only 1% of that group improved, compared to another ethnic group where 60% improved. There's a disproportionate representation of outcomes within a particular group — so maybe I'm not doing something culturally appropriate or acceptable for getting that group to those outcomes. Whether it's ethnicity, age, gender, any of those kinds of things, you disaggregate the numbers so you can analyze whether there was a disproportionate outcome. Does that make sense now? Okay.
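Here is a minimal sketch of that matched analysis, assuming pandas and SciPy are available; `ttest_rel` is SciPy's paired t-test. The client IDs, scores, and group labels are all invented.

```python
import pandas as pd
from scipy.stats import ttest_rel

# Invented matched records: one row per client ID, with pre- and
# post-test scores and one demographic field to disaggregate by.
df = pd.DataFrame({
    "client_id": ["c01", "c02", "c03", "c04", "c05", "c06"],
    "group":     ["A",   "A",   "A",   "B",   "B",   "B"],
    "pre":       [10,    12,    9,     11,    8,     13],
    "post":      [14,    15,    10,    16,    12,    13],
})

# Paired t-test on the matched pre/post scores.
t_stat, p_value = ttest_rel(df["post"], df["pre"])
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p < 0.05 => statistically significant

# Programmatic significance: report the practical change even when
# the p-value misses the 0.05 cutoff (e.g., small sample size).
df["change"] = df["post"] - df["pre"]
print(f"Mean improvement: {df['change'].mean():.1f} points; "
      f"{(df['change'] > 0).mean():.0%} of clients improved")

# Disaggregate: compare improvement across groups to spot
# disproportionate outcomes.
print(df.groupby("group")["change"].agg(["mean", "count"]))
```

With only six invented rows, the p-value itself means little; the structure — one ID per row, matched pre/post columns, a groupby at the end — is the part that carries over to real data.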
The political process will interpret the data in many different ways. So when you're interpreting data and putting it in context, it's really important to understand the political context in which you communicate and represent that data. As I mentioned before, Orange County doesn't track teen pregnancies. We have no idea how many teens are pregnant; we know how many gave birth, but we don't know how many were actually pregnant. And that's a political determination in this county, not to collect that data. So the data we do collect and the results we do get sit in a political context, as does how we report them — and we have a political representative here with us who probably has a lot of thoughts and questions on that. If you didn't hear, what was your name? Maggie — Maggie's from Lou Correa's office, and I'm really glad you're here. Anyway, we work in a political environment, and we have to be cognizant of that when we're reporting our data and how we report it. Not that you change your data, but you remember the audience you're presenting it to. And if you want to have an impact and increase your resources, increase your leverage, you need to present the data in a way that's understandable and appropriate for them to hear. You know what I mean? Yes — now they're focusing on what's improved in the job situation and everything. Right — we'd call that spin in the PR context, but it's what you focus on, and how you use the data, and what the data says to different groups. Yeah.

An example of that: I've been working on the Conditions of Children report now for too many years — 15 years, as a matter of fact. And we track, still track, immunization of children in Orange County. When we first started, it was down around 67%, 69% of children adequately immunized by the age of two in Orange County. Then the immunization coalition came about, saying, wow, that's a really low number, we need to do something about that, that's horrible. And at one point we got up to 77%, and everybody said, oh, look at how much we've improved — we're at 77%. And I'm like, yes, but you could turn that the other way around: immunization is free in Orange County, and we have 23% who are not immunized. So now what? It's now up to — Kurt, what's it up to now? Up to 90, almost — exactly. And it's been up there for quite a while. But this was way back: if you look back 20 years, immunization was not that high, and it was, we need to do something about this. But you could turn it around and say, okay, at that time, 23% are not immunized, and immunization is free in Orange County.
So what are we not doing to reach those families? And of course now it's at — I think it's 92, almost 95, somewhere really close to that — because there's just that percentage who waive out, and that's the issue they're looking at now. But again, now they're reporting immunization when kids get to kindergarten: 98% are immunized at kindergarten. Well, they all have to be immunized for kindergarten! Let's go back to the two-year-old rate; let's look at the two-year-olds. So like I said, you've got to pay attention to the way you present the information. We're not changing the data any; it's what you focus on in the data. Okay, we talked about ranking already, right? I was just curious — did they ever take into account that movement of not getting immunized? You know, how they were going to have parties to get chicken pox and measles? They did see a drop in immunization there for a while because of the bad press around autism. I think it's starting to tick back up, though — coming back up, yeah.

All right, so here's the one I wanted to get to. We did our in-progress worksheet, we did our measurement framework. The feedback piece — reporting results and program redesign — is what's really important to this whole process. It loops back up to the top: did you actually meet your issue or need? If you don't close this feedback loop, there's no point, okay? If you don't utilize the information for program improvement and reporting results, there's no point doing the evaluation.

Okay, we talked about this a little already: you have many different audiences when you're presenting your data. You have both an internal audience and an external audience, and they need different levels of information and different ways of seeing the data. A public relations statement should sound very different than a report to your line staff, because the line staff needs to know the details and the numbers, and the public relations statement needs to say that you made an impact, based on the evidence and research from your evaluation. So we can communicate results in graphic representation — charts and graphs, which we like. Then there's what we call scientific writing, which is "just give me the facts, man." Again, on the Conditions of Children report, we at the Center for Community Collaboration are kind of the editors, and we're always saying, okay, the book is supposed to be about the facts; we're not allowed to editorialize in it — too much, I should say. Perley knows this more than anyone — she edits it for the Children and Families Commission, and Kurt also contributes to it — and we always have to take out some of those editorial comments and statements, because the report is just supposed to be the facts. That's the scientific writing: just give us the data, give us the numbers, don't sugar-coat it, don't editorialize it or explain it away. Then, in informative writing, you're really trying to inform people about what the results are without getting too stuck in the data. You give them just enough of the data that they get a taste of the fact that you know what you did and that you measured it.
Fund development writing: when you're writing for funders, you want to show them that you did an accurate evaluation with appropriate measures, that you got results back, and that they showed an improvement — but they don't need all the layers of detail. And then of course PR writing is always about making it pretty: getting somebody in the public to understand it and care about it, tugging on those heartstrings and making them care that you made a difference — but using your evaluation evidence to do that as well.

I have a three-minute drill before we get to the final thoughts, because we've got 13 minutes according to my watch. The three-minute drill is in your packet; it's the light yellow sheet. I always call this the elevator drill, right? If you got into an elevator with Bill Gates, what would you want to say to him? Oh Lord, yes. I want to make sure that everybody in my organization can do this three-minute drill. I want line staff to be able to do it; I want to know the board of directors can do it. That in three minutes they can tell me: who are you? What is the need or issue you're addressing? What is the program you're running to address it? And what are some of the outcomes and results you've been able to measure to prove you've been effective? Most people, if you ask what they do, will say, oh, I work for such-and-such agency. Oh, good, thanks. But they don't take it to the next level: this is the issue I'm addressing, this is the program and the way we address it, and these are the measured outcomes we've been able to have success at. If I had Bill Gates, I'd hit the stop button on that elevator and take at least 10 minutes, forget the three, and say, you know, you've got to do something here. So: be able to communicate that.

I'm going to give you a minute to write it down, and then we're going to see who can spiel it for us. Who would like to stand up and earn a lollipop? Oh, you're bringing my lollipops back — good, so I can pass them out to whoever earns one. They're getting a lot of video in here; I have to be careful what I say. Is there any more chocolate, they ask? A Milky Way. Who wants bubblegum? Anyone else? Sugar Mamas, Sugar Daddies — you guys went through a lot of candy today, for a health research institute. And a dental program! Yes, we're going to be coming to see you, Rita, with my healthy smile. One Snickers left — you want the Snickers too? This blue raspberry? Anyone finished, ready to do their little spiel? You were too busy eating health food, that's right. I'm on video. "We need 10 minutes to write a three-minute drill."

If you have to step out early, no worries — just please fill out the evaluation form before you go. We're going to wrap up in just a few minutes. Okay, so are we ready? Who's going to stand up and do their drill? Who wants a lollipop — or actually, I'll give you the whole bag; I don't want to take these home, it would be dangerous in my house. All right, up you go, stand up. We're doing our three-minute drill. Introduce yourself again. We're one of the chapters in the whole United States that's doing this, and we're doing five lessons in four sessions.
We're going to be doing this summer program with migrant education. At one of the elementary schools, we're going to work with fourth and fifth graders and their parents. We're going to incorporate therapeutic dance — we've got a choreography teacher there. We're going to use nursing students and members of the Hispanic Nurses Association to teach the nutrition and also to incorporate self-esteem-building messages into each lesson plan. For short-term outcomes, we're going to be pre- and post-testing. And then long-term, we want to make this sustainable. So we have collaborative partners: we have the Coleman Foundation, we have Northgate Market, we have CalOptima — we have a whole bunch of partners helping us. And what we're going to do is make this a sustainable program that continues every year, throughout the year and into the summer, and build on it, make it bigger and bigger in the community. Nice. And improve the health of the children we're serving. Yes, that's right. Exactly. Very good. Excellent. Should I throw her a lollipop? "I need oxygen." She needs oxygen — you did that whole three minutes without breathing. Just hand it to her. Well done. Excellent.

So you see how she could communicate that to us. She talked about the outcomes, and she talked about how she was measuring them — she didn't forget that part. And it made it sound so much more credible and accountable to the funder and the dollars she's going to try to receive, correct? Okay. Anyone else? I don't know that we have three more minutes, so I'm going to have to move us along. But that's the idea: that you're able to communicate that you are effective in the work that you do, right?

Okay, so some final thoughts. Developing and measuring outcomes is not about evaluation. It's really about the basic mission of your organization: why are we doing this? A lot of people think, oh, I have to measure and do this for my evaluation because the funder requires me to do it. But ultimately, it's about whether or not you're actually accomplishing what you set out to accomplish, and being able to know whether you did or not. Bless you.

Okay, so in summary — these are basically the questions you have on your evaluation form. We do take these seriously. Of course it's an evaluation: some of it is satisfaction questions, but some of it is also outcome-measure questions, so we do both. We like to hear from you, because we are continuing this series and we'll be doing it again next year. I want to thank, again, UCI and our partners and Jackie for putting this together for all of us, and all of you for coming today and being here for this workshop. People are asking how to get in touch with me: the best way is by email, and I'm at Cal State Fullerton.edu. The phone number is not the best because, to be honest with you, I've had a broken phone line for months and months and months, so I can never get messages — if you call me, it hangs up on you, all kinds of fun things. So email is really the best way. And I'll give you my cell phone number now, ready? 714-318-7681. That's 714-318-7681. One of my practices is that any time I give a workshop, anybody who was a participant from the community is more than welcome to call me at any time.
Email me with questions when you get back to your agency and you're trying to put this into place and not understanding something. I heard someone say earlier today, gosh, I'm so excited and engaged, but I'm so overwhelmed at the same time — this was a lot to fit into four hours, and I get that. So any of you who'd like to get in touch with me later on, just put this training in the subject heading so it reminds me where you came in contact with me — and hopefully the university won't filter it out as spam; it shouldn't. And if you call my cell, you might find me on a baseball field and I'll have to say, ah, I can't do that right now — but that's all right. So thank you all very much. And Jackie, do you have some closing remarks?

We do take your comments and feedback seriously. And some reminders for those of you who have participated in the series: if you complete five courses — whether it's the same individual or multiple individuals from the same organization — there is a completion incentive. So if you haven't filled out an evaluation, please do, and we'll be in touch. It was a lot of information, and I think Michelle shared it beautifully and really made it much more friendly. And I really do love her last point: evaluation is about your mission — why are we doing this? We believe in what we do, and we should be able to show it. So I want to thank all of you for participating and for coming to Dalhite. Again, thank you so much for working to make ours a healthy community.

Thank you, that's great. I'm going to put these outside, so when you turn in your evaluation, you get a candy. I'm not taking these home. Where's the box? Right here.