dropouts affect both the poor and the wealthy alike. Number five, much money is spent, but not necessarily on what works. There are many programs and people are excited, but they usually are the result of marketing, not outcomes. We need more tools in the toolbox. We need more things that work. We need to build tools that have wraparound features, and we need to have tools that have mentors.

Now, I think this was a fascinating discussion because it wasn't necessarily focused on the Youth Challenge program, but it all speaks to the National Guard Youth Challenge. So I thought that was a fascinating presentation by a number of learned people who end up drawing our focus squarely onto the kind of program that we're very proud to be associated with. Let me say that we're going to talk about analysis that helps us identify what works. We've heard the problem: we need more tools that really work. So how do we measure that? What are the strategies? What are the histories? Do we actually have the means and methods to be able to do this? Do people actually use them? How do they use them? So today, I would like to introduce our three panelists: Kathy Stack with OMB, Dr. Lynn Karoly with RAND, and Gary VanLandingham with The Pew Charitable Trusts. Let me start by introducing Kathy. She's the Deputy Associate Director for Education, Income Maintenance, and Labor at the U.S. Office of Management and Budget. In part, her division oversees budget, policy, legislation, regulations, and management issues concerning a number of departments, but particularly the U.S. Departments of Education and Labor. She's currently working on OMB's government-wide efforts to advance the use of evidence and evaluation in policy, budget, and management decisions. We'll hear more about that in her presentation. Next to her, to her left, your right from the room, is Lynn Karoly with RAND. She is a senior economist there, a labor economist, actually.
She joined RAND in 1988, and her recent research has focused on human capital investments, social welfare policy, child and family well-being, and U.S. labor markets. She'll talk about her studies and the analysis that she's done in a number of areas, but also in the National Guard Youth Challenge program. And then our final panelist, last but not least, is Gary VanLandingham. He is with The Pew Charitable Trusts, where he works on the Pew-MacArthur Results First Initiative, a joint initiative of Pew and the John D. and Catherine T. MacArthur Foundation. He manages Pew's work to advance the use of cost-benefit analysis and to cultivate a climate for evidence-based decision-making that can enable states to eliminate ineffective programs and shift resources to those that generate the best outcomes. He has a long background in state government and in associations dealing with and advising state governments on methods and ways of investing successfully. As most of us know, in the years to come, budgets will be challenged, and there will probably not be new money. In fact, there may be less money. So it becomes all the more important for us to get the most from our money. We're going to hear from our panelists today on how we might get the most for our money in dealing with the education crisis that Senator Landrieu spoke of earlier this morning. So let me turn it over first to Kathy.

Hi, so there's lots of new faces here. This is great. So I am a career civil servant. I've worked at the Office of Management and Budget under five presidents, three of them Republican, and I've seen many administrations come in with their ideological biases. But I've also seen the power of really strong evidence and evaluation to make those biases melt away and make people look at the data and say, this tells us something that we can't ignore. We need to act on it.
And I think the National Guard Youth Challenge Program is a terrific example of how aligned the last two administrations have been in their emphasis on, and how much they valued, evidence and rigorous evaluation. Back in the middle, I think, of the Bush administration, my part of OMB, which is on the domestic side, I do education, income maintenance, and labor programs, I don't oversee National Guard Youth Challenge, but we had been told that MDRC was beginning work on a study of the Youth Challenge Program and had run into some serious concerns from a DOD general who believed that random assignment studies were unethical and we probably shouldn't be doing this. An OMB policy official named Robert Shea, I don't know if any of you know him, intervened. He called up DOD and explained that OMB had in fact issued guidance saying that random assignment constituted the strongest method for scientifically validating impact, and that there were ways of doing it that were not unethical. In fact, if you have more people who would like to be in a program than you have slots, you can use a lottery system, which is very fair, and create a control and a treatment group. That study was underway, and I frankly lost track of it until the beginning of this administration, when MDRC had wrapped up its results and was able to come in and brief us. They briefed senior policy officials in OMB and the White House and were able to show definitive results that this program had significant impacts in terms of GED attainment and earnings for participants. And with that data, it was on everybody's radar screen. And since then, this administration has protected that program, kept it going in DOD, but also begun to talk about it with other agencies who support disconnected youth.
As one example, the Department of Labor in their budget this year has a proposal to partner with DOD, essentially take the Youth Challenge model and see if it can be applied to a slightly different population, basically adjudicated nonviolent youth, who are not currently allowed to participate in the current program. Under this demonstration, we'd see whether it would work for them as well. There's a pretty strong hypothesis that it may, but we will test it with a rigorous evaluation. So I think it's just been fascinating to watch the layers of this evidence-building effort, building on each other. And one thing that this administration has done lots more than the prior one did was think about the potential for grant programs in the federal government to be run in more innovative ways that are focused on evidence. You can't do it everywhere, but there are certain areas where we think there are real opportunities to turn grant programs into engines of learning. Let me contrast that with the way most agency grant programs are structured and run right now. We have formula grants and discretionary grants. Many of them touch this population we're talking about. And they tend to be run on an annual cycle. The incentives in the system at the federal level are to get the dollars out the door and to pull back the required reporting that shows that the dollars are being spent in compliance with the legislated activities. Agencies do not have tools to analyze that data and tell us whether the programs are working, or whether any particular strategies within the program are working better than others. We also have research operations in federal agencies that make grants to academic institutions. And the theory is that those grants in many cases should be informing our policies and how we run our programs. We haven't set the processes up for that at the federal level, nor have those processes come into place at the local level.
So we have a lot of research that's going underutilized. Similarly, we have evaluation offices in agencies where I think there has been a fear of evaluation. And I credit this particular National Guard Youth Challenge Program for taking the risk of saying we want a rigorous evaluation, but often evaluation offices are shunned as threats to the program rather than being seen as tools for learning. So briefly, some of the new models that we are talking about in the grant space: we have something we call tiered evidence models. Does anybody here know about the Investing in Innovation program at the Education Department? Is that foreign to you? I see a few hands going up. Well, we basically said, what can we do at the federal level to create the incentives for local grantees and applicants to partner with researchers on the front end to ask, what does the evidence tell us about what works for a given population? And then, for whatever we're going to do in terms of testing out a new strategy, how would we construct an evaluation that would tell us whether or not we had a positive impact? So the Investing in Innovation program, we call it the i3 program, said why don't we create three tiers, and we can ask applicants to choose what tier they wanna operate in. The biggest grants are in the scale-up tier, where you have to have very, very strong evidence, from random assignment or strong quasi-experiments, that your strategy has been tried before and is likely to be successful in a new setting. The next tier down is a validation grant, where the evidence is somewhat weaker but you're trying to take something that's been tested before and found to have some positive impacts, and you really wanna validate whether it works. And that too has to have a strong evaluation that goes with it so that we can learn coming out the back end.
And then down at the lowest level, we have development grants, which are really a place for innovation, for proof of concept: you've got a really good idea but it needs to be tested rigorously before we know we should be investing more money in it. So that model has gotten a lot of attention. It's really, really energized the field in terms of bringing researchers and practitioners together. And we have a number of other agencies that are adopting it. Labor has the Workforce Innovation Fund, and CNCS, the Corporation for National and Community Service, has the Social Innovation Fund. And there are probably about six or eight others. Another model is Pay for Success, or social impact bonds. Who here knows about social impact bonds? A few different hands, okay. National Guard Youth Challenge has a business case to become an intervention supported with social impact bonds. This is something that's starting to gain traction at the state and local level, and we have a new federal program as well that we're trying to expand. This is a model that says: if we can demonstrate that certain effective interventions produce better outcomes for individuals and families and also save the government money downstream, by reducing the need for other federal, state, or local government services, then we should be able to attract investment from the private sector to support those effective interventions that yield savings. And then a portion of those savings will be used to pay back the investors. So we have Massachusetts and New York City that have both launched these Pay for Success models, or social impact bond models, with this at-risk youth population. At the federal level, in the Department of Labor, we are wrapping up a $20 million Pay for Success competition in the Workforce Innovation Fund that is seeking applications from state and local governments that wanna enter into these agreements with local providers and investors.
And a key part of this is that there has to be a rigorous evaluation associated with it. So while we are trying out this model and testing it as a financing mechanism, we're also using rigorous evaluations to try to test whether or not the interventions that they are using have impact and produce savings. The third model, and it's very relevant to this group, is performance partnerships. About two years ago, we engaged in an open dialogue with state and local communities. This began with a presidential memo to agencies on using administrative flexibility to get better results and reduce costs. We essentially asked the question, where are our federal rules getting in the way of you using dollars more effectively to have impact? And one of the ideas that came back to us, it wasn't so much an idea, it was a vast collection of facts about how difficult it is to serve disconnected youth because of the difficulty of coordinating all the different federal programs, each with its own rules, eligibility criteria, and activities that can be supported. A group called the Forum for Youth Investment worked with a number of states to bring forward to us some very compelling arguments for why there needed to be increased flexibility. And the administration was pretty convinced by it. It was a very ugly picture. And so we put into the 2013 budget, and it reappeared in this year's budget, a proposal to allow up to 13 communities across the country to blend their funding from discretionary grant programs, whether they be competitive or formula. We were looking at programs in the Departments of Labor, HHS, Education, Justice, and Housing and Urban Development. A lot of people said it was impossible, that we'd never get that. Last year, the Senate Labor-H Committee was persuaded by the states, who had made the same pitch to us, that this was an important policy to try, and they put it into the 2013 appropriations bill.
Unfortunately, we didn't get a bill last year, but we are reproposing it and keeping our fingers crossed that we get it. And I think it's a really, really interesting opportunity to test out this notion of collective impact. But a key requirement of this is that we've got to do a strong evaluation of these pilots so that we can really make the case that this kind of flexibility is in fact a tool that enables communities to get better results from the same level of federal investment. So that's an overview of some of the stuff that we've been working on, but I'll turn it over.

Terrific, thank you so much, Kathy. I'll turn it over to Lynn. Lynn will talk in a little bit more detail about the MDRC study and about other evidence-based strategies that she's working on, and also the economic benefits thereof. Lynn?

Great, thank you. It's great to be here today, and I will say that I am also filling in for Dan Bloom from MDRC, who couldn't be here today after all, so I will be putting on an MDRC hat for a few moments in addition to my RAND hat. Basically, I'm increasingly seeing in my work this growing interest in using evidence-based policy, the whole results-based accountability movement, whether that's coming from the public sector at the federal, state, and local level, or, increasingly, from the private sector; it's part of the social impact bond movement, and it's part of what foundations are interested in seeing. And there are two components that potentially go with building that evidence base. One is doing rigorous evaluation, ideally an experimental evaluation or another rigorous design that allows you to identify with confidence what the causal impact of a particular program or intervention or policy is. And then secondly, taking those results that have measured a program and its impact and translating them into economic terms, to be able to look at, if you will, the return on making that investment in a program or policy or intervention.
So being able to compare the costs of the policy, intervention, or program against the potential benefits. Sometimes those benefits may be positive gains. In other cases, they may be costs that are averted, or savings, particularly in the public sector. So what I wanted to do, in the few minutes that I have in this presentation part of the discussion, is to use the National Guard Challenge model as an example of how that kind of strategy of building the evidence base came about, what the results are, and what lessons we can take from that particular case. So illustrating with the Challenge model, what I want to talk about first is the evaluation that was done by MDRC. This is what I know Dan would have been here to talk about. There were some copies out on a table of the study that MDRC did, and it's all available online for those of you who want to see the details. It was published in a series of reports on the results of a randomized controlled trial of the Youth Challenge Program. And I think there are a couple of points to make about the intervention. First of all, we're here today because it's the 20th anniversary of the program. This study actually involved analysis of participants in the Challenge Program in a cohort starting in 2005 and 2006. So it's almost a decade ago that this study first began, and I'm sure the planning goes back at least a decade. So you can see that it takes time to put these kinds of studies in place and then eventually to see the results and to see the results have an impact. The study was done across 10 of the sites of the program. And in particular, those 10 sites were in the study because they were oversubscribed.
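Oversubscription is what makes a fair, lottery-based evaluation design feasible. As a purely illustrative sketch of how such a lottery might work (the applicant labels, the 120 applicants and 80 slots, and the function name are all invented for this example, not details of the MDRC study):

```python
import random

def lottery_assignment(applicants, slots, seed=None):
    """Randomly assign oversubscribed applicants to treatment (admitted)
    or control (not admitted). Every applicant has the same chance of
    getting a slot, which is what makes the lottery fair."""
    rng = random.Random(seed)
    shuffled = applicants[:]          # copy so the input list is untouched
    rng.shuffle(shuffled)
    treatment = shuffled[:slots]      # lucky draws: admitted to the program
    control = shuffled[slots:]        # everyone else: the comparison group
    return treatment, control

# Hypothetical site: 120 applicants but only 80 program slots.
applicants = [f"applicant_{i}" for i in range(120)]
treatment, control = lottery_assignment(applicants, slots=80, seed=42)
```

Because chance alone decides who lands in each group, the two groups are statistically comparable at the outset, so later differences in outcomes can be attributed to the program.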
So you heard Kathy talk about the fact that some will object to randomized controlled trials because they think it's unfair that some individuals are randomly assigned to a control group that doesn't get the program, whereas those who get the lucky draw and are in the program get the benefit of the services. But when you have a program that's oversubscribed, not everyone's gonna be able to participate regardless, so why not use a lottery as the fairest way to allocate who gets the program and who doesn't? So essentially, within the Youth Challenge program, the fact that there were enough sites around the country with more applicants coming to the program than the programs could serve allowed the evaluators to create these randomly assigned groups, one participating in the program and one not. And I think one other thing to point out about this study, for those of you who go to look at the results more closely and at the numbers that I'll talk about in a moment, is that the evaluation looked at the difference between those who were admitted to the program versus those who applied and were randomized into the control group. So this is a group of admittees. It's different from the group who actually participated and stayed in the program for the full length of services, which amounts to the 22 weeks of the residential program combined with the year of additional mentoring and follow-up services. So the impacts that I'll talk about are average impacts over a group that was admitted to the program. In fact, only about half of those admitted actually partook of the full range of services.
So these impacts are attenuated to some extent by the fact that we're not only looking at those who actually enrolled and received the full dosage, or treatment, of the program. But that's the way this kind of random assignment study works: you randomize at the point at which individuals are admitted, not at the point at which those who choose to participate are already enrolled and in the program. So the MDRC study compared those who were admitted to the program and randomized into the study group with those who were randomized into the control group at three points in time: nine months, 21 months, and 36 months. I'm just gonna talk briefly about the results three years after the point of randomization. What the study found was that those who had been admitted to the program and were in the study group were 22 percentage points more likely to have a GED, and of course one of the goals of the Challenge program is to see that participants obtain a GED, or a high school diploma if that's possible. There was a four percentage point increase in the percentage that had a high school diploma; this is relative to the control group. There was a 16 percentage point increase in some college attendance, a seven percentage point increase in participation in vocational training, a seven percentage point increase at the 36-month point in the probability of actually being employed, and at that 36-month point, a 20% increase in annual earnings. So these are all impacts that, if you were to look at the array of other interventions out there that have been tried and carefully evaluated, would be considered very meaningful, very significant impacts, to have these kinds of differences on education- and employment-related outcomes. I will mention that the study also did look to see whether there were impacts in areas such as crime and delinquency, health, and some other areas of lifestyle.
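As a rough, back-of-the-envelope illustration of the attenuation point (my own arithmetic, not a figure reported by MDRC or RAND): the standard Bloom adjustment divides an intent-to-treat impact by the take-up rate to approximate the effect on those who actually received the full program, under the strong assumption that admittees who never participated were unaffected by admission.

```python
def bloom_adjustment(itt_impact, takeup_rate):
    """Approximate the treatment-on-the-treated effect from an
    intent-to-treat (ITT) impact, assuming admittees who never took up
    the program were completely unaffected by being admitted."""
    return itt_impact / takeup_rate

# Illustrative only: a 22-percentage-point ITT impact on GED attainment,
# with roughly half of admittees completing the full program, would imply
# an effect on completers of about twice the ITT figure.
tot = bloom_adjustment(itt_impact=22.0, takeup_rate=0.5)
```

Treat this as a heuristic: the no-effect-on-non-participants assumption is strong, and the study's reported impacts are the intent-to-treat figures quoted above.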
Those effects, on crime, delinquency, and health, were generally smaller and not significant. In some cases they appeared at one of those follow-up points but were not lasting effects. But even to have those kinds of impacts on the dimension of human capital, on the investments in individuals' education, and to see them manifested in their labor market outcomes at that three-year point, is a significant finding. So that was the MDRC study. We were then asked at RAND to extend the work that MDRC did and to undertake the economic analysis, to be able to look at the costs of the program relative to the downstream benefits. So essentially we relied upon the results from the MDRC study. Those impacts that I just referenced were the figures we used to translate those impacts into dollar benefits. One of the things that had not been done in the MDRC study was to measure the cost of the program, and I will say that this is quite typical. When somebody comes to RAND or another organization and says, well, we really want a benefit-cost study done, we've done the evaluation, we've got all those results, we'll say, well, did you measure the cost of the program? And they'll say, no, we didn't do that. We didn't think about doing that. And so there's often a process of going back and trying to reconstruct what the program costs were. That was actually part of the RAND study: to go back and look at the program in 2005 and 2006, the cohorts that were part of the evaluation, and ask what the program cost at that time. So we developed those figures, and then we translated the impacts, particularly around educational attainment, into what those benefits would mean in terms of lifetime earnings differentials for those who were admitted to the program and were in the treatment group versus the control group.
Our analysis showed that, when we accounted for a comprehensive measure of economic costs, which includes operating costs as well as the cost of participants' time, and when we valued the cost of the mentors' time, which is a kind of donated time but one we also need to account for, the cost per admittee in the Challenge program as it was evaluated by MDRC was about $15,000. Then we looked at the benefits, particularly the stream of lifetime earnings, all discounted into present value terms to account for the fact that some of those earnings benefits are far into the future. We also netted out the added costs of education, because one thing about a program like Challenge, which encourages individuals not only to obtain a GED but to have broader educational goals, staying in school, continuing on to post-secondary education, is that additional education amounts to added costs. So we need to account for the fact that not only are there downstream benefits, but those benefits come with additional investments in education, the post-secondary education costs. So on net, the cumulative present discounted value of benefits was about $41,000 per admittee, for a net benefit figure of nearly $26,000 per admittee, or a ratio of $2.66 in benefits for every dollar invested. And again I will say, from having looked at an array of other social programs and their impacts, that is a favorable estimate of returns. Of course, with studies like this there's always uncertainty about how you value the stream of benefits in particular. For those of you who want to look at the RAND study, which is also online in both longer and shorter forms, you'll see that our analysis included a variety of sensitivity analyses. Our estimates were that, depending upon those assumptions about the downstream earnings gains, the ratio of benefits to costs was at a minimum $1.54 and, at a maximum upper bound, we estimated, $4.98.
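The headline figures come down to simple arithmetic once the earnings stream has been discounted. In the sketch below, only the roughly $15,000 cost (written as $15,400 so the arithmetic reproduces the $2.66 ratio) and the $41,000 present-value benefit come from the talk; the annual earnings gain, 40-year horizon, and 3% discount rate in `present_value` are purely illustrative assumptions, not the RAND study's actual parameters.

```python
def present_value(annual_gain, years, rate):
    """Discount a constant annual benefit stream back to today's dollars."""
    return sum(annual_gain / (1 + rate) ** t for t in range(1, years + 1))

# Figures cited in the talk (cost rounded so the ratio works out exactly):
cost_per_admittee = 15_400
benefits_per_admittee = 41_000

net_benefit = benefits_per_admittee - cost_per_admittee  # 25,600 -- "nearly $26,000"
bc_ratio = benefits_per_admittee / cost_per_admittee     # about 2.66 per dollar invested

# Hypothetical discounting example: a $2,000-per-year earnings gain over
# 40 working years, discounted at 3% per year.
pv = present_value(annual_gain=2_000, years=40, rate=0.03)
```

The study's actual benefit stream was built from estimated lifetime earnings differentials rather than a constant annuity; the point of the `present_value` helper is just to show why dollars earned far in the future count for less today.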
So that $2.66 figure I gave you falls within that range. I do want to say that it's important to state that the figures I've given you, both the impact estimates from MDRC and our analysis of economic returns, are all specific to the cohorts of youth being served by the Challenge program. The Challenge program has very specific requirements about which youth are admitted, so it's important to recognize that that is a subset of all the dropout youth we've been talking about today. Ultimately, to the extent that the program is expanded to serve a broader demographic or uses different criteria, ideally one would evaluate the impact of the program for that group of participants, to the extent that it differs from the cohorts served in this evaluation. In some cases we might expect that with a more disadvantaged group, perhaps the benefits would be even greater. In other cases, with youth that have more challenges, you may need to tailor the program in a way that also makes it more costly in order to get a similar set of benefits. I wanna conclude by just saying a few words about some of the challenges of undertaking this kind of analysis, both in terms of the evaluation itself of the intervention and the kind of economic analysis we did. So first of all, there's the challenge of just doing good evaluations. It is a process that is time consuming and costly: time consuming in the planning, in the implementation, and then in the follow-up. In most of these kinds of social programs and interventions you're interested not only in short-term impacts but also long-term impacts, and so there's the time that it takes to measure and observe those impacts, in this case following participants until three years after the study began to see what those longer-term gains were.
Because in many cases, if you had stopped after the 22 weeks of the residential program, you might have seen some gains, but you're not gonna capture the full impact. Likewise, if you waited until, let's say, after the additional year of mentoring and follow-up services, that still might not be long enough. And ideally, we'd even follow these participants 10 years into the future to be able to say what we see in terms of long-term impacts. In other areas that I know of, in some cases you'll see impacts at that 10-year point that weren't evident even at the three-year point, because some of these changes take time to manifest themselves in the way that they affect people's lives. Kathy mentioned some of the objections to randomized controlled trials, but they are the gold standard. There are alternatives, but I would say wherever it's possible to use randomized experimental designs, that's the ideal, and the kinds of examples like the Challenge program, where you have oversubscribed programs, are certainly an opportunity to use randomization as a way of determining a treatment and a control group. There are cases where you may not have a program at large scale, but you still know that you're not able to serve all of those who would qualify or potentially be eligible, where you can use a similar strategy. It is important to also note that not all randomized controlled trials are executed well, and so it's important to look at each evaluation and see whether or not there were any violations of or deviations from the randomized design. Another challenge with such studies is that it's important to understand what the control group received in terms of services. In many cases, the control group has access to other programs out in the community that are an alternative to the program you're trying to evaluate.
So in essence, you're testing your program against the status quo, the alternatives in the community, treatment as usual, and that may be something other than a no-program group. A great example of that in another area I work in, early childhood interventions, is the Head Start National Evaluation, which randomized a group of children into Head Start programs and out of Head Start. Well, in the control group, many of the children were in other kinds of early childhood programs, a state preschool program, something else very similar to Head Start, and in fact some kids were actually in another Head Start program, just not the one that they were randomized out of. So you have to look carefully at the conditions of that control or comparison group, at what services they are receiving, so that you understand what impacts you're actually measuring. In terms of the benefit-cost analysis that we undertook, again, there are challenges there. I mentioned the issue of having a careful measurement of program costs. Ideally, that's done at the time an evaluation is undertaken, so that you're measuring the resources used in real time. A cost study can be extremely valuable on its own, even if you don't intend later to do the full economic analysis. But often that's an afterthought, and we have to go back and try to reconstruct what the cost data looked like. Ideally, with such a study, you want a comprehensive valuation of the full range of benefits that a program generates. A study like this one is relatively straightforward because we're looking at outcomes closer to adulthood, in later adolescence and early adulthood, that can readily be translated into dollar terms. So the impacts on educational outcomes, on the labor market, on crime and the criminal justice system, those are areas where we can more readily place dollar values and attach them to potentially the full range of impacts that an intervention may have.
In other cases, when we're looking at interventions that may start earlier with children and youth, we measure outcomes that aren't as readily translated into dollars, and so we may have a challenge being able to say what the dollar equivalent is of whatever range of impacts we measure. That's true when we're measuring academic achievement as an outcome, for example, or measures of social-emotional learning. Those are all areas where we're still building the methods to be able to place dollar values on those benefits. It's also important to capture the uncertainty associated with doing these kinds of analyses and to be able to reflect that in a range of estimates, like the sensitivity analyses I mentioned. I'll just conclude by saying that I think the National Guard Youth Challenge Program is a fabulous model for seeing how you can take a program, in this case one that was operating at scale and had been in place for a number of years, and say we really want that rigorous evidence base to demonstrate that this program we're operating is as effective as it can be, and to use that information to then guide subsequent resource investments. And I think Kathy is right, it's a risk. We all like to think that the programs we're running are great programs and effective. That won't always be the case, but it's important that we learn about what works and what doesn't. These kinds of evaluations can be a way of refining and improving programs, as well as learning about the programs that are proven models that can then be expanded and brought to more individuals who can benefit from them. And I think that evaluations that capture both the impact analysis and the economic analysis provide the full picture of the value of investing in these programs. I look forward to any comments or questions you may have specifically about the Challenge program in this regard.

Terrific. Very thoughtful and thorough presentation, Lynn.
We've heard the federal view around the evidence base. We've heard a detailed view of the National Guard Youth Challenge Program and how that evidence-based analysis was done. Gary Van Landingham has a different perspective from the state view and a great deal of experience in that regard. Gary, let's hear from you. Well, thank you. The Pew-MacArthur Results First Initiative is really designed to look at and deal with one of the issues that came up in the last panel, which is what I would call the shiny object problem. Recognizing that on one hand, we have huge problems that we all recognize. I mean, there's an educational crisis in the US. There's a criminal justice crisis in the US, where we have 6% of the world's population but somewhere north of 25% of the world's prisoners. There is an unemployment crisis in the country. There are a lot of big problems out there. We also recognize that as a nation, we spend a lot of money trying to deal with those things. I mean, educational spending in the US is higher than in most other countries. We spend a fortune locking people up. We spend a lot of money trying to deal with other problems out there, and we haven't really fixed those problems. And looking at the big picture and trying to figure out why that is, we recognized a couple of problems. One is that we spend a lot of money on things that don't work, and we do that in a lot of different policy areas, and we do that for a variety of reasons. From a state's perspective, states do policy pretty incrementally. They put money into a program, and unless something dramatic happens, that money stays in that program pretty much forever, to some extent. There are changes out there, but the amount of money that goes to various programs operates largely on inertia, and the only thing that really differs on that is how good a lobbyist people have. 
So if you've got a compelling story and a good lobbyist, maybe you can get your program funded in this state, at least until another lobbyist with another story comes along and bumps your money down a little bit. But simply put, we spend a lot of money on things that don't work very well. And in other cases, we spend money on good programs, but we implement them so badly that they don't really produce a lot of outcomes. So what we're trying to do with Results First is to deal with that and recognize that we have a lot of information out there in terms of what effective programs are. As Lynn was talking about, there have been a lot of really good evaluations out there that have proven a lot of programs work, and have proven a lot of programs don't work very well. So how do we get that information to states? How do we bridge the gap between the growing knowledge we have about what works and the policy process, which tends to run on incrementalism or today's shiny object, which may be displaced by tomorrow's shiny object? What we're doing with Results First is really trying to build a bridge between what we know about what works and what gets done, and to do so in a way that brings this information to policy makers in a way that can be pretty compelling to them. And what we're doing really is starting with a cost-benefit perspective but possibly growing beyond that. We're building from an approach that Washington State's been using for about the last 15 years. There's a relatively small research office out there that was asked some fairly basic questions by the legislature: we wanna do something on crime in Washington State rather than just putting more money into the existing programs. Tell us what works. Go out, do a comprehensive research review, and tell us what the research shows are the most effective ways of dealing with crime. Is it just locking everybody up? Is it educational programs? Is it different types of criminal justice intervention programs? 
And they went out and did a big meta-analysis, looking through tens of thousands of studies and identifying the thousand or so that are actually pretty strong, because there are a lot of bad studies out there. And they came back to the legislature and said, based on the collective knowledge of the universe, here is what we know about a large variety, a portfolio, of investment opportunities in criminal justice. And the legislature said, good to know; can you tell us in cost-benefit terms what would happen if we did each of these, because we only wanna invest in the programs that are really gonna be the most effective? So they retired back to their small conference room for a couple of years and built a cost-benefit model that is able to take the information out of an evaluation like Lynn was talking about and do a cost-benefit analysis on it. And then to do that analysis using the same assumptions, the same approach, on that portfolio of programs, to say, well, some of these programs really seem to work a lot better, based on our population characteristics, than other programs do. Because it's one thing to say, here's what this program is, it's a good program. It's more important to know what would happen if I did that in my state, based on my population characteristics, and to compare that to other things that we might possibly be doing or might want to do, to be able to give us an apples-to-apples comparison and say, yeah, this program would give us $2 back for every dollar we put into it. This program really looks like a best buy for us; it's about a 10-to-one return on investment. Oh, and this one, wow, this doesn't work at all. The best research shows this is a totally ineffective program; it sounds good, but once you start getting into it, the research shows it's a bad buy. 
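The apples-to-apples comparison described here can be sketched in a few lines. This is a minimal illustration only; the program names and dollar figures below are made up, and a real model like Washington State's adjusts benefits for local population characteristics, discounting, and uncertainty.

```python
# Illustrative sketch (hypothetical numbers) of ranking a portfolio of
# programs by benefit-cost ratio, the same way for every program so the
# comparison is apples-to-apples.

def benefit_cost_ratio(benefits_per_participant, cost_per_participant):
    """Dollars of projected benefit returned per dollar spent."""
    return benefits_per_participant / cost_per_participant

# Hypothetical portfolio: (projected benefits, cost) per participant.
portfolio = {
    "Program A": (28_000, 14_000),  # roughly the "$2 back per $1" case
    "Program B": (50_000, 5_000),   # roughly the "10-to-one best buy"
    "Program C": (1_000, 8_000),    # roughly the "bad buy"
}

ranked = sorted(
    ((name, benefit_cost_ratio(b, c)) for name, (b, c) in portfolio.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, ratio in ranked:
    print(f"{name}: ${ratio:.2f} back per $1 invested")
```

The point of the single shared formula is the discipline Gary describes: every program is scored with the same assumptions, so a legislature can compare a prison bed against a therapy program directly.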
So they built this model, took it to the Washington legislature, and they've been using it now for about 10 years to move money away from things which the analysis, all the best evidence, shows don't work, to programs which are more effective. They started doing this with criminal justice and then increasingly in other policy areas. And they have achieved some fairly dramatic gains with it: they've taken their arrest rate, which was above the national average, and driven it substantially below. They've taken their incarceration rate, which is now lower than the national average, and they're saving around a billion dollars per biennium because they've started to close facilities rather than continuing to lock up more and more people. So we saw that this was happening in Washington State and said, you know, is this something that we think other states would be able to do? We started this work about a year and a half ago; we're now working in 14 states to bring this approach to them, to start implementing it and get this model operational in those states. And that is working fairly well so far. We think that all of those 14 states are making good progress on it. They're all starting with criminal justice, but we're starting to build this model out into other policy areas, because we recognize we know a lot about what works in a lot of different policy areas. And we also recognize that, as Kathy said, there are tiers of evidence out there, and there are other folks besides Washington State who are doing this meta-analysis work; there are a lot of clearinghouses out there now who are starting to go through the research in a lot of different policy areas. They're going through this research in areas such as education and criminal justice and job training programs and child welfare programs. 
And what we wanna do is to expand this approach, to capture more and more of that knowledge about what works, to bring that to states, and to have them use this information to inform their policy choices in a much more comprehensive way than they've been able to do before. And this has a lot of implications for the field. I think this is something that is going to gain increasing traction, because it recognizes that we know things that work. It also recognizes that we don't have unlimited money out there. I agree with the assessment: there's probably gonna be less money in the system than there's been in the past. So the only way that we're gonna have better outcomes for our citizens is to start investing that money better. And that means we have to start using research, we have to start using some hard-nosed logic to say, we can't afford to do everything. We can't afford to do programs badly. We have to start identifying the programs that work the best, ensure that they're implemented well, and then start tracking the outcomes to see if we're getting what we're paying for. This is an approach which I think can get us past the political stalemate that we are in. And if you look at where the problem is, you understand where the political dynamics are coming from. I mean, conservatives are sitting there saying, we're spending a fortune, nothing works, let's kill everything and then start over. Progressives, on the other hand, say, well, the problem is we're just not spending enough money; let's spend twice as much money and maybe things will work out better. I think the reality is we have to start spending the money that we have better. And I think that's something that both sides can agree on: that if we are able to do this, we can move money away from programs that fail. And a lot of programs out there fail. 
We can invest that money into programs that are much more successful, because there are programs out there that are proven to be successful, like the National Guard Challenge Program and others. And I think if we do that, we will start seeing the results that we all want to have. As was pointed out in the last presentation, this is something that's going to require a community effort. All the people in this room have a stake in these outcomes. And I think we all have to work together: if we're operating programs that we believe in, then we have to agree to do the evaluations to prove what the outcomes are. And we have to be willing to run the risk that, hey, maybe my program doesn't look that good compared to other programs. And to then either say, well, let's change this program so it works better, or to step back and say, we tried something, we tried hard, it didn't work. Why do we want to continue having the sector put money into this program if it's not effective? I think we have to be willing to do that. I think that we have to be willing to bring this information to policymakers and to help support this concept. If we want problems to be solved, we have to work together to send the message to policymakers that we collectively agree that this problem has to be solved, that we think there are better ways of doing it, and that we can be part of that solution. From our perspective, as we build this model out and start capturing the information from all these different clearinghouses about what works, which is our goal, one of the challenges that we're going to run into is that my relatively small team is not an expert on every program that we're going to identify as promising or proven. But you guys are in these program areas. So when we're able to bring states a compendium that shows here's 30 promising educational programs, here's 15 very proven educational programs. 
Oh, and here's the other 65 things that you're doing which really don't work very well. And they say, great, what can you tell us about the National Guard Challenge program? We would like to be able to point to that program and say, here are some people who can come and talk to you in detail about this program and how it works, and then also help you ensure that you do this program appropriately, because that's another critical part of this. Too many times we're seeing that programs which are designed to operate a certain way are in fact being operated in a totally different way, and we're not getting results for it. We have to start paying attention to making sure that these programs are implemented on the ground in a way that respects their core elements. And Washington State learned this the hard way. They started investing in a functional family therapy program, which the best research showed would reduce recidivism by about 22% for kids who were in that program, because the research was so strong. A couple of years later, the legislature said, tell us whether we're getting what we're paying for. You told us that we're gonna get this 22% reduction in recidivism. We wanna do an outcome evaluation; tell us if we are getting that 22% reduction, because if not, then maybe this program simply can't work here. So they did a good evaluation of this program. They went out and reported back and said, you know, in about half the state we're getting almost exactly the 22% reduction that the research showed we would get. Unfortunately, in the other half of the state we're getting nowhere near that, and when we looked and asked why, it's because these people are not doing this program. They're hiring therapists who aren't trained in doing this program. They're doing their own thing. And in many cases they're making the kids worse. Recidivism is going up instead of down because they're not doing this program. They're doing something else. 
And the legislature said, we're gonna have to start investing in quality assurance to make sure that this program, which is proven to be effective, and proven in our own state to be effective, can be effective everywhere by ensuring it's being done well. We have to have that type of discipline in program delivery. We have to have that type of discipline in the decision of what to fund. If we do that, I think we can start solving the problems out there in a much more deliberate way. But that's gonna take the involvement of everybody here. Everybody needs to be able to go to policymakers and say, we can do better than what we're doing now. We can use the evidence in a way to better inform our decisions. But there's a discipline in doing this, and we all have to be willing to support that and to say that, yeah, we're gonna find that some things work better than others. There needs to be continual experimentation. There needs to be continual evaluation. If we do those things and do them well, we can solve the problems that are there. We have the resources in the system in a lot of areas to do this. We have the knowledge now of what can work. I think it's a problem of getting that knowledge into the system and using that to drive performance. That's what we're trying to do now in 14 states, and we'd be happy to talk to other folks in other states or in other policy areas and share what we're doing. I think it is the approach that we need to start moving the needle on these problems. Terrific, Gary. Most informative. You and Kathy have both alluded to connecting the evidence to policy and spending. This idea of evidence-based analysis has been around for a long time. Lynn, you've been in the business for a number of years. Rand has been in the business for a very long period of time. So research and analysis is not something new. Trying to find out what is working is not something new. 
But can you tell me, any one of the three of you, where are we on the spectrum of actually attaching evidence-based results to public policy decisions about spending? It sounds like we're in the early stages of this, with perhaps the exception of Washington, both on the federal side and on the state side. Would that be a fair assessment? Can you elaborate on that? Kathy? So I'll share some perspectives, having watched two administrations take somewhat different approaches. In the Bush administration, they were quite committed to the idea of having consistent standards for looking across similar programs, looking at evaluations, and making choices about whether programs were performing or not. We had something called the Program Assessment Rating Tool, or PART, for all the programs. And we valued, as part of that, that there should be independent rigorous evaluations. But the unit of analysis was the program as a whole. And what we found was that the incentive structure was very, very threatening to many programs, because the evaluation was done as an up-or-down vote on the whole program: was the whole program working? And frankly, most programs are not like National Guard Youth Challenge. National Guard Youth Challenge is a model that is tight. It's been replicated in similar ways across the country, and that's very intentional. Many of our programs are grant programs where there's lots and lots of variation. And when you do a rigorous study of the whole program where there's lots of variation, you're very likely to come up with a result that says no impact, because it's unable to discern the high-performing models versus the low-performing models. So we found that in the last administration, a lot of programs just got their backs up, and policy officials in agencies and OMB would fight over the PART ratings. And we didn't make a lot of progress. 
What this administration has tried to do is say, let's think more about program strategies and interventions, where within a given program there may be many, many opportunities to try out different strategies. And if we can see evaluation as a tool for going inside the program, we have an opportunity to bring the program directors, the policy-level senior officers in the agencies, and the research offices all together, collaborating to say, what are our strongest hypotheses about what works within this program, and how do we structure our activities so we can go and evaluate them? And I've seen a huge change in attitude in terms of the agency folks recognizing that evaluation is a key tool. It turns the whole process into something that's fun. It allows you to bring partners in from the outside. Granted, there are gonna be things that lose, but the evaluations provide actionable information for program directors to say, ah, with this information I can make choices about where to invest more of my resources and create incentives for doing more of some things, versus what I should be shedding because it's not working. I think it's also worth mentioning that in several areas of policy, federal legislation is now indicating that evidence-based programs are the ones that are gonna get funded. That's happening in the home visiting area, for example: there is now a list of programs that have the evidence base, and those are the ones that the resources are gonna go to, which now gives you an incentive to have that evidence base backing your program in order to see that it gets that kind of support, and there are opportunities for expanding. 
Another change I think is coming into play that is relevant is that in many cases, when Congress is making decisions about funding, and this is true at the state level too, there's a process of scoring the cost of a particular program or piece of legislation. One issue is, can that score account for the fact that if we know something is proven to generate savings down the road, that'll be reflected in the scoring? So it's not only the upfront cost, this is what I pay to have the program, but there are gonna be these downstream benefits. And as more and more of that kind of revision in the way we do the scoring comes into play, that's an opportunity, again, for this kind of evidence to shape where those investments get made. So I think those are two examples of how this is starting to come into play more and more, and particularly I think it's starting in those areas where we have a lot of evidence first, because there's that base of evidence to build from. But with i3 and other areas, the notion that we have these tiers of evidence and we're trying to build up the evidence base as we go is part of the strategy as well. I'd say that we've come a long way, but we still have a long way to go, and here are a couple of examples of that from my misspent youth. My first job out of grad school, I was an evaluation specialist for a school district, and I was doing Title I evaluations and other things, and I quickly ran into what I saw as a timing issue. The programs had to have an annual evaluation, and then they also had to submit a program plan of what they were gonna do the next year. Almost uniformly, they had to submit the program plan before the evaluation was going to be done, which sort of told you that there was going to be no impact from your evaluation for at least a year, maybe two years at best. 
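The scoring revision described here, counting downstream savings against upfront cost, amounts to a present-value calculation. Below is a minimal sketch with made-up numbers; real budget scoring rules (and choices like the discount rate) vary by jurisdiction, so this only illustrates the arithmetic of the idea.

```python
# Hypothetical sketch of scoring a program by netting discounted future
# savings against its upfront cost, rather than counting the upfront
# cost alone. All figures below are invented for illustration.

def present_value(savings_by_year, discount_rate):
    """Discount each year's projected savings back to year zero."""
    return sum(
        savings / (1 + discount_rate) ** year
        for year, savings in enumerate(savings_by_year, start=1)
    )

upfront_cost = 10_000            # made-up upfront appropriation
future_savings = [3_000] * 5     # made-up projected savings, years 1-5

# A score that ignores downstream benefits sees only -10,000; one that
# counts them sees a positive net value at a 3% discount rate.
net_score = present_value(future_savings, 0.03) - upfront_cost
```

Under these assumptions the program scores positively even though it costs money up front, which is exactly the case the speaker says traditional scoring can miss.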
So just the way that the system is designed is hugely important, and frequently we don't design the system to use the information that is being produced in the system. The second thing is that after that I spent many years in Florida working with the legislature, doing evaluations of a large number of programs, and some of those had a lot of impact on the system. The challenge was that those evaluations, like all evaluations, are essentially rifle shots. You're looking intensely at a program and doing an evaluation of that program, and that can help inform either a funding choice or a policy choice. The challenge from a policymaker perspective at the legislature is that they've got to deal with 2,000 programs that they're spending money on every year, in a fairly compressed time period. So you may say that this program is effective or not effective, but that is only gonna be one quarter of 1% of the state budget. So if you're really trying to deal with the big picture of how can I improve system performance, you've got to deal with bringing more portfolio-based information to the system. And that's where I think clearinghouses are gonna be very important. I think that the long-term solution for this is, one, yes, we need to build an evaluation perspective into the system and say that it's not just enough to be dealing with your daily service provision and putting out today's fires and everything else that managers and agencies have to deal with. It's hard to be able to step back and say, how are we doing? What are we really trying to accomplish? But to an extent this is where I think the private sector folks have a valid point, which is you wouldn't run a company on the basis of we're just gonna continue doing what we're doing year after year after year without paying any attention to things like sales or profitability. 
We tend to run government that way, assuming that everything's going to continue on in an inertial way, and not surprisingly, our enterprise is not performing that well. So we need to build this evaluation component, continually asking whether there is a better way of doing things, into the system. But then we need to capture that information and make it available to policy makers throughout the system. Because we need to bring this evidence in on a portfolio basis, to say, here are your investment choices, the same way you would when you're thinking about how you wanna invest your retirement funds. I mean, you don't just talk to one person who says, I've got a stock, you need to invest in my stock. You usually look at mutual funds, and you wanna look at the whole range of mutual funds and look at past performance, which of course is no guarantee of future results. But that's the way that we make decisions. That's the way that legislators make decisions. That's the way that governors make decisions. We have to bring this information together in a way that matches the way that they make decisions, which is on a portfolio basis. If we're able to do that, then I think this information becomes much more useful to them, and I think we'll actually drive the system. I'll ask one or two more questions and then open it up to the audience. So prepare yourself, audience. One of my pet peeves when I was state superintendent was Title I and special education dollars. Huge dollars coming from the federal government to states. And as a state administrator, state chief of schools, you know, I'm answering to the legislature on why schools are or are not performing, and I'm wanting to try to get more out of the money that we spend. Now this leads to a non-personal question, and the question is: we've had evaluation systems in place for a long time on Title I, special education dollars, early childhood dollars. 
But these evaluations haven't resulted in much change. Is it because the evaluations are not good evaluations, they're not giving us the right data? Is it because the players don't want to listen to the results of the evaluations? I mean, what do the evaluations show of these big spending dollars? Do they show that we're getting the bang for the buck in Title I? All of the people that I talked to out in the field seemed to think that's not the case, but yet we continue to spend multiple billions of dollars of Title I monies, and we know, because it's mostly a grant, that Senator Landrieu talked about this. This is why I really want to hone in on it here. Is the problem the way we evaluate, that we don't really get good evaluations out? You've talked about high-quality evaluations today, right? Is that the problem, or is it that the system isn't right for even the results that we're giving them? I think there are several challenges there. One is that there tends to be a major disconnect, way too often, between the evaluators and the policy makers, so that the evaluations don't answer the questions that the policy makers need. The evaluators sit in their ivory tower and they do something, and you can't really do anything with it. I've seen cases where the legislature in Florida would set up an evaluation requirement for a new pilot program to say, in two years come back with an evaluation, and it was obvious that they were looking for an outcome evaluation: does this deserve funding? And what they got back was a process evaluation that laid out the steps that they were following, sort of a book report of how I spent my last two years. So in some cases, the evaluations aren't that useful in terms of making policy. In other cases, I think it's a communication problem, that the information is not being presented to them in a way that's actionable. 
And, you know, evaluators love to give out big thick reports, and a big thick report is about the least effective way of communicating to a policy maker, because we all have reading piles on our desks which slowly move to the bookcase, which then once a year we throw out because we haven't had time to read them. We have to find a better way of capturing that knowledge and making it useful. So I think there are a number of systemic problems there. But, you know, there are great evaluations out there. It's just breaking through the noise and becoming useful to them, I think, is what we need to do. I'll just add quickly that in the examples you gave of Title I dollars and special education, I think one of the challenges with evaluation is that there is not just one way those resources are used. By giving local education agencies the flexibility to decide how to spend those dollars, it means they're being used in a wide variety of ways, and there generally isn't a constraint that says you can only use these in proven areas. And so with that flexibility, there are a lot of different things happening, and if you were to try to evaluate, say, the national spending in those areas, it would be this kind of mixture of things: some things being done that are probably working and other things that aren't. And so I think, again, it's the notion of, particularly when you have those large funding streams, thinking about ways in which we're more prescriptive about how those resources get used. Either it's spent on a proven model, or if it's not a proven model, you're evaluating it in a rigorous way to see that it's actually having the intended impacts. And if you can't do that, the resources aren't there to use. So I think it's bringing that accountability to how those dollars get used that would be the change that we'd need. 
So just to add, in the last administration, Russ Whitehurst, when he came in to run the Institute of Education Sciences, basically made a point of saying that those big, expensive evaluations of Title I that we'd done for decades were so loosey-goosey that we never really learned very much from them. And he said, you know, if you really want to learn, you've got to do rigorous evaluations. They don't always have to be random assignment, but they have to be very rigorous in terms of what's your hypothesis and then what's the most rigorous scientific methodology you can use to test it. That work has been building a body of knowledge at the federal level, and now a lot of it sits in the What Works Clearinghouse at Education. That's a start. But I think what we're finding is that state and local governments look at What Works and some of them kind of say, eh. There's more and more traffic, but it's not driving the kind of revolution in how we think of evaluation. I want to credit Ron Haskins, who's sitting here, for introducing some of the folks at OMB last year to a guy named Jim Manzi, who wrote a book called Uncontrolled. He made a fortune in the private sector starting and running a company called Applied Predictive Technologies, where they used random assignment trials for Capital One, Kmart, the hotel industry, the financial industry, to constantly test and iterate different hypotheses about what would get a better impact. And he saw huge potential to apply those techniques in the school system, not at the federal level by doing long, expensive trials, but by imagining state superintendents or school superintendents for large cities taking advantage of the data infrastructure that they've put in place, which is standardized data that tracks student outcomes and teacher characteristics, lots of different things. 
And essentially imagining a school superintendent at the beginning of every year deciding what are the six or 12 or 15 hypotheses that they want to test and bringing a researcher in so that they can do their own evaluation and research on what are high impact practices that made sense in that school district. And that would then provide a huge source of information that could be bubbled up and aggregated at the federal level so that we can learn that way. Very interesting. Let's open it up for questions in the audience. We have about 20 minutes to go. We have a lady over in the corner in the green jacket. You could stand up. Okay. Hi, I'm Mindy Reiser. I'm a sociologist. I've evaluated programs for the Department of Education and internationally. I wanted to hear your thoughts, particularly Kathy, on the education labs that were at least mandated to work very pragmatically with school systems in their respective states. And it looked like theoretically it had this combination of really good methodological folks and real world problems. Where has that gone? It was re-competed and I believe new people got those contracts. And where is that? Also, the What Works Clearinghouse, I know something about that. That has had a bit of a checkered career. And maybe you could talk a little bit about how that has really been perceived and how helpful it's really been. So I'm not gonna pretend to be a total expert on this, but so first on the labs that has been re-competed, I think my sense is that they are, the methodological rigor is much higher, but they are out in the regions and they're there to support state and local needs. So in large part, what they are doing is responding to the demand. And I think there has not been a lot of demand for rigorous impact evaluations. And with the result that some of the work that they're doing, I think it probably tends to have more of an implementation focus and some process focus. 
And so someone threw out an idea to me recently, and this was in a different field besides education, but the idea of why don't we have prizes for states and localities who embrace rigorous evaluation and impact studies to try things out, who then partner with academics and researchers or things like the regional labs to come in and do that testing. So this sort of Jim Manzi notion of what school superintendents could be doing. The regional labs could be a great resource to work with school superintendents to take that administrative data and set up all kinds of trials. The What Works Clearinghouse? You know, the standing joke when it first started was that it was the What Doesn't Work Clearinghouse. I think they're hoping to, and they have some plans under the next contract, I think, to make some improvements that they feel pretty confident about. One thing that I do know is that when we were setting up the Investing in Innovation program at the Department of Education, where as an applicant you were going to be more likely to get a grant if you came in knowledgeable about the research and it showed in your application, well, you know what happened? The hits on the What Works website went like this. And so I think we have to recognize that there's a supply side and a demand side to getting this right, and the clearinghouses are very important on the supply side, but they don't necessarily have all the leverage they need to drive the demand, and that has to come from the policy side, to figure out what are the incentives to get people to look at the research. In fact, if I could add, in my role as state superintendent we worked with the regional lab that covered Louisiana. And I think, you know, you had some states that were very involved in it and some that were not so much, in this vein of what's the demand. 
I think one of the really interesting questions, from a policy perspective, is what should the demand be by state government on the education system to use these kinds of research-based analyses, to use strong, rigorous, evidence-based analyses. And the same could be said for the federal government. You know, rather than spend time in Title I telling people exactly what they can and cannot spend money on, say, here are the outcomes that we want. You demonstrate to me that you reach those outcomes and we'll continue to give you money. That, to me, would be a little bit better use, and then it would cause people to go to the What Works Clearinghouse and it would cause them to go to the research labs and actually make it work in a system, rather than the way it is now, which is, if you're a state chief or a district superintendent, if you happen to be interested in research and getting results and somebody's pressuring you, you'll go, but if not, you're probably gonna be out of the zone. And I think that's really the issue that Results First is trying to deal with, because we take this model and we put it into the state budget office, and these are the folks who have to make those decisions on a daily basis, and we'll bring it to the commissioner of education and they can have it in their staff office so that this information, you know, gets to the attention of the people who actually have to make those daily decisions, and that, I think, helps to deal with the supply and demand balance. Yeah, there just has to be a consequence to not using it, and when organizations like yours bring the information in and people don't use it, what's the consequence, and what should the federal government's consequence be? Should we just keep giving you the Title I dollars? Keep shoveling those millions of dollars into your state or not, you know? 
As Kathy mentioned, I think there have been some recent efforts at the federal level to start mandating that money be pushed toward evidence-based, proven programs. We're seeing a number of states trying to do that. In both Washington State and Oregon, there are now mandates that a certain percentage of funds for criminal justice programs have to be invested in evidence-based programs, and that needle goes up over time. Other states are starting to do that. So yeah, I think it's both of these. I think you have to get the information to the people who are there, but you also have to have an incentive, whether that's a mandate that the money can only be spent on certain things, or, at the federal level, scoring that rewards states that are using an evidence-based model to spend the federal money, because then the feds have more assurance the money will be well spent. So I think that we need to do it both ways. Yeah, and I have to credit particularly the Obama administration with the I-3 grant. I think this idea is a tremendous idea, because people want to get the money and they're willing to respond, and there is a consequence to the action. The same is true for the Race to the Top grants. It's been a tremendous stimulant to change behavior because you're connecting the whole system together. Another question from the audience? Yes, sir. Hi, my name is Benjamin Robinson. I'm with the Metropolitan Policy Program at the Brookings Institution, and I've had some background, not as much in education but more in workforce issues, so I'm kind of trying to shift the conversation a little bit away from education. A lot of the time at the state level and at the federal level, in what we've been talking about, there's a lot of capacity there, but in my work, where I've been working with workforce groups at the local level, kind of seeing the culture there, it's not really there yet. It's more in education. So how do we move across sectors? 
Because education and workforce go together. Education, workforce, criminal justice, they're all tied together in these large, intricate systems. So a lot of the time my worry is that we're siloing education. What works in education? Well, it's also nice to know what works in workforce and what works in criminal justice. In addition, one of the things that you brought up is that it's also important to know what doesn't work, what things are insignificant. If I go to the database and I only see what works, well, maybe I'm looking for something and I can't find it. So there's that culture of working across fields and across disciplines. Where do you see that going? Not just at the federal level, where things are more easily networked, and even a little bit more at the state level, but also at the local level. Thank you. So, a few things. Under this administration, the Labor Department has made a big push to think about evidence and evaluation. They created a chief evaluation officer who works very closely with the Employment and Training Administration on workforce. Unfortunately, we haven't had a reauthorized Workforce Investment Act in a very long time, so the incentive structures in the current system are largely unchanged in the way the formula funds flow. So we've had to work around the margins. The Workforce Innovation Fund at the Labor Department borrowed the I-3 model, the three-tiered model. It's a slight variation, but the hope was that by providing some competitive grant money for state and local governments to come forward and test more innovative strategies to get better outcomes, strategies that could link systems, could work across systems, we would start to drive that and create an appetite for innovation at the state and local level. It's not a lot of money. It's 100 and... I can't remember the figure. It's in the neighborhood of $120 million. 
So it's a bit of a challenge, but I will say this: it's been very heartening to see the evaluation officials from the Department of Labor, the Department of Education and the Department of Health and Human Services working very, very closely together to ask how do we build a knowledge-sharing capacity across our agencies? How do we think in consistent ways about what are good, strong studies and evidence standards? And the Pay for Success program that the Department of Labor is implementing now with its $20 million is designed to get better employment outcomes, but also to ensure that we're incentivizing communities to find strategies that might involve housing or education or something on the social services side for purposes of getting those outcomes. I think it's a really important issue, because on one hand, in a lot of policy areas we are decentralizing program authority, recognizing that it's hard to sit in Washington and come up with a solution that works for all the communities in the nation. It's hard to sit in a state capital and make a solution that will work for everybody. So increasingly, in a lot of areas, we decentralize program responsibilities, whether it's for child welfare or employment opportunities or other areas. The challenge is that we are then increasingly moving the decision-making process away from those centralized sources of information about what works. So we have to really build a strong network that brings that information to the folks on the ground in the thousands of places where these decisions are gonna be made, so that they really have this information as a basis. And the good news is that's a lot more doable now than it was 10 or 20 years ago, when you could send them a big book of the... if you read these 15,000 pages you will understand what you should be doing, which no one would do. So I think it is something where we need to work together to share this information. 
I think we need to build platforms that make this information very accessible. I think we need to market that solution so that people understand what's there. Basically it's a supply and demand issue. I mean, it's how do we get the supply of growing evidence on what works to the people who have to make those decisions, and that's gonna be the challenge. Because if not, then we know what happens at the local level: either we'll keep doing what we're doing, or it's a feeding frenzy of lobbyists or whatever, and we have to find better ways of managing that disconnect. I'll just quickly add that I think some of the clearinghouse models that are out there now are less driven by the stovepipes of where the agencies line up and are more focused on a set of outcomes. I'll plug the RAND example, which is the Promising Practices Network, which focuses on outcomes for children, youth and families. Basically, if my goal is to improve a given outcome, it's gonna give you a range of programs that have been demonstrated to have that effect. Some of them might be things you do in school; some might be things you do through other parts of the community or other systems. So again, it's thinking about these as integrated across the kinds of outcomes you wanna affect rather than our traditional stovepipes. And in a way, that's what the WSIPP model is doing too. It's now broad enough that you can see, if what I wanna change is a particular aspect of our criminal justice outcomes, it might be that an early intervention model is gonna have a bigger impact than something that's school-based. So being able to look broadly across these areas of intervention, I think, is important. Terrific, we have time for one quick question and a quick response. Yes ma'am, right behind you. I'm Andrea Kane from the National Campaign to Prevent Teen and Unplanned Pregnancy. 
I have one quick question and one quick comment. One is, I was really struck by the thing that you mentioned, Lynn, that the long-term evaluation did not find a positive impact on some of the health outcomes, and in particular it looked like there was less use of birth control among some of the participants after going through the program, which was interesting. So I just wonder if you have any insights about that. And then the quick comment is that we recently learned that the Youth Challenge site in Louisiana, actually, to connect sectors, has received federal funding for evidence-based teen pregnancy prevention programs and is now implementing that to sort of supplement and enhance what they're doing in Louisiana. They haven't published results yet, but anecdotally at least they are very pleased with what they're finding. So I thought it was an interesting case of one strong evidence-based program kind of partnering with another, maybe to strengthen the program. So I just wanted to offer that. To quickly say, I'm afraid I don't know enough about those other outcomes, like the use of birth control or pregnancy-related outcomes, in the Challenge evaluation. But my suspicion is that, again, given that that may not have been a core element common across all programs, it may have been effective in some sites on those outcomes and not in others, and with some modification, if that were a module that was emphasized more, perhaps in the way that you're talking about the Louisiana program supplementing what they do, then you could potentially have an impact there. I think one of the issues with a lot of these interventions is what are they trying to achieve, and then what does the core model look like? What are the services delivered? What is the curriculum, and so on and so forth? In some cases we have spillovers into other domains we weren't intentionally trying to affect; in other cases we may not. 
And ultimately the education and employment outcomes were a primary focus, and that's where the impacts were. Let me just close this session, then, by making a personal comment. I like to make all of these kinds of discussions actionable. What can you in the audience do based upon what you've heard today? I'd like to suggest that people focus on one single statistic, the graduation rate in your states, and really hone in on it and really press your policy people, your legislators, your governors, your local school boards and say the graduation rate in this state must go up. That's it. The graduation rate in this state must go up. If you did that, I think you would see more push from the research community to put real rigorous research into it, because people will feel like they must be accountable for achieving that outcome. We did that in Louisiana. We put a very serious focus on increasing the graduation rate using the nationally adopted approach, and we saw the graduation rate go up, now over the last six years, even after I've left, by about eight points. It's really good news. When you focus on these outcomes, then rigorous research really comes into play. I wanna thank the panelists for doing such an excellent job of explaining to us how it can come into play. Thank you very much. Paul, I wanna thank you and your panelists also. This is a terrific panel. By the way, this whole proceeding is gonna be in the archives of CSIS as well as the National Guard Youth Challenge Foundation, so you can see the entire video presentation again. So none of this will be lost. We're now gonna take a short break for lunch. Let me just give you the procedures briefly, if I may. The lunches are in boxes right behind that panel. If you could get those and whatever you're drinking, please come back to your chairs by one o'clock. 
We're gonna have a terrific senior policymaker discussion of kind of what we've heard, with the president of CSIS, Craig McKinley, and Hugh Price. And then after that, we're gonna have a final panel with some graduates of the Challenge program, as well as a program director from one of the state programs, just to talk about the attributes that made them successful in terms of how they dealt with their problems. And then there's the reception at the end of the day for us all to discuss what we've heard. So thanks very much. Quick break, and please come back to your table.