So I have the honor of introducing our panelists. All the way to my right is Dr. Todd C. Helmus, a senior behavioral scientist at the RAND Corporation. He specializes in terrorism, strategic communications, and social media. His work focuses on improving US efforts to counter militant recruitment and decrease popular support for terrorism and insurgency. He has examined the networks of ISIS supporters and opponents on Twitter and identified ways to enlist key influencers in support of US strategic communications. He has also identified approaches to assess CVE campaigns. He's worked closely with US special operations forces in Afghanistan, where he served as an advisor to US commanders and led studies on US efforts to train the Afghan special security forces. In 2008, he also served in Baghdad as an advisor to Multi-National Force-Iraq. He received his PhD in clinical psychology from Wayne State University.

Next is Cheryl Frank, who joined the Institute for Security Studies in 2009 as the director of the Pretoria office. She's currently the head of the Transnational Threats and International Crime program in Pretoria. Before joining the ISS, Cheryl was executive director at the child rights organization RAPCAN, director of the Criminal Justice Initiative at the Open Society Foundation for South Africa, research and program director at APCOF, and a researcher at the Institute of Criminology, University of Cape Town. She began her career as a social worker with the National Institute for Crime Prevention and the Reintegration of Offenders. Cheryl has a Bachelor of Social Science (Social Work) degree from the University of Natal and an MBA from the University of Cape Town, South Africa.

And sitting next to me is Dr. Matthew Levitt, who is the Fromer-Wexler Fellow and the director of the Washington Institute for Near East Policy's Jeanette and Eli Reinhard Program on Counterterrorism and Intelligence. Dr. Levitt has written extensively on terrorism, countering violent extremism, illicit finance, sanctions, the Middle East, and Arab-Israeli peace negotiations, with articles appearing in peer-reviewed journals, policy magazines, and the press, including the Wall Street Journal, the Washington Post, Foreign Affairs, Foreign Policy, and numerous other publications. So I have very esteemed colleagues joining us on the stage today, and this is probably just a snapshot of many of their accolades.

Throughout the day today, I hope we have helped sharpen why we think it was a pretty good idea back in September of 2015 for international stakeholders to launch the RESOLVE Network. The idea of connecting, capturing, and curating locally informed research on violent extremism, and how it can promote effective policy and practice, is a worthy endeavor. Almost three years after its inception, and with a commitment to clarify its mission and its practice, this year's RESOLVE Forum is designed to push the envelope a little bit further on addressing our knowledge gaps on violent extremism, as well as the gaps between research and how it influences policymaking. I'm hoping that our panel here can unpack some of the very real experiences of experts well-versed in the CVE ecosystem on research-to-policy conversations, and hopefully what we can do about improving our community of practice going forward. So we're going to start. I'm just going to have a little bit of an informal opening and ask each of the panelists to give opening remarks, and then we'll go to a couple of questions. So we'll start with Todd.
All right, well, thank you. I was intrigued when you sent the email describing some of the topics you wanted to address, noting that part of the question is: what are the challenges in seamlessly integrating research into practice? And I laugh a little bit, because I've never had good luck seamlessly integrating research into practice. It's so hard on multiple levels, in some ways by design, and in some ways just because of the difficulty of the topic we're looking at. Number one, there's in general a disconnect between policymakers and academics. From my experience, academics oftentimes struggle to understand what the policymakers need, what their actual research needs are. And even at RAND, where we oftentimes work very closely with the policymakers, at least the shop that's funding whatever work we're doing, even then it's really hard to get a good understanding of what the policymaker wants, what their decision point is, and how you are trying to inform that. CVE research in general is just a very messy process. Getting access to subjects is hard. The research that we do rarely gives you a 100%-of-the-variance answer on anything, so pretty much anybody can criticize most research on terrorism, because there are loopholes and flaws in everything. So in some senses, researchers are trying to provide part of an answer; rarely do you ever provide a full or comprehensive answer. And oftentimes there's not much on the back end. As researchers finish their research products up, oftentimes, at least from my experience, we've spent all our money doing the research. We have spent very little time, money, and effort interacting with the policymakers to push our products. We hope that by providing a 100-page report on the website we will somehow magically influence people, and I'm shocked that it doesn't. And the policy process is hard, right? There are a lot of stakeholders involved in all sorts of decisions, and rarely does the researcher get a 100% vote on anything. They probably shouldn't, right? There are a lot of stakeholders, including the general public, that have a say in these things. So anyway, I think it's really hard to have this seamless integration. I think there are ways to address all of those factors, but it's a bit of a slog. And I can't think of any home runs that I've had in my work. I'm waiting for the day, but I trust that over time, by answering questions, or at least continually answering part of the question, that starts to get a good chunk of it addressed.

And Cheryl, what are your thoughts on the same question?

Yeah, you know, if you listened carefully to Leigh-Anne's description of my bio, my entire career has been moving around all these spaces: being in practice as a social worker and then running an NGO, working in research institutes at universities, and now my kind of hybrid job of doing research and then capacity building and technical assistance in Africa. And my experiences are in Africa. And then also acting as an advocate for certain human rights issues, particularly in the child rights field. So, you know, I've been maneuvering around the space trying to find my way and find where it is that we can make the most significant change. Is it in practice? Is it in research? Is it in influencing policy? My entire career has been a search for these sorts of answers.
But the main thing that I struggle with currently is these three pieces: one is evidence, another is policy and what is in policy, and the other is actual practice. And the huge gaping holes between all those things: how they relate to each other, how they communicate with each other, and how one may make something stick on another. In Africa (I'm South African, and my career started in South Africa) we used to call this problem "implementitis". We had the most fantastic policies straight after apartheid. We have a brilliant constitution, and we had all these brilliant pieces of legislation supported by the world. And we just couldn't make it happen in many places, with very disastrous results for children, for families, for women, which is the field that I worked in at the time. So there was this implementitis problem in between policy and practice at the time. And then, as a researcher, as I developed, I noticed the massive gaping hole again between evidence and the other two things, policy and practice. A sort of triangle of things that was confusing, where it was really difficult to figure out how they linked to each other. So I remember, and I'll just end my opening comments with this, we were preparing as a group, as an alliance of organizations, to go to the South African Parliament to have South Africa's first juvenile justice legislation passed. We'd been working on this for years and we were taking this to Parliament. And like a good researcher, I suppose, I went to poke around in the data on how much parliamentarians actually listen to evidence, because we were preparing submissions to present to Parliament around these issues. And I was depressed for about three or four days after that, because there was a lot of evidence coming out of the UK, particularly in terms of their overseas development assistance analysis, about how little it was happening. And I see the same things over and over again. I looked again yesterday when preparing for Leigh-Anne's questions and noted exactly the same problem. So it's really a question, I think, of a communication issue between these three spheres of existence. And is it a practical communication issue? I wonder. And it's what you were talking about: is it the 80-page report on the website that is the communication problem? Is that part of the problem? Or is it a bigger dynamic that we have to address? And I suspect it is the latter. And over to you, Matt.

So first of all, I think that one of the measures of success you should use is how many times it's been downloaded, not how many times it's been read. I personally got to page 80, but not past that. If anybody wants my business card, I'm happy to give it; that's one of my measures of success: how often do I have to replenish my business cards? And you laugh, but that's a little bit how we tend to measure some of these programs. At the Washington Institute, I lead a CVE working group with some wonderful people involved, and we talk about these things. And I've also led now three different bipartisan study groups, three different reports that we published. They're all available on our website, and you can download them, for my metrics. I'd appreciate that very much.
We're a nonpartisan think tank, but for those we brought together Democrats and Republicans to put together the support, and then we kind of walked it into policymakers' offices. And I want to give you a little bit of feedback from that. The first point is that in this country, we don't have a political system that allows for failure. So I give our European counterparts, for example, tremendous kudos for trying and failing and trying again. Lots of people like to beat up on the Brits for the various iterations of their CONTEST strategy. Fair enough, but they have tried and continue trying, and don't always succeed. And I'd like to see a little bit more of that in this country. But on either side of the aisle, we don't have a political system that tolerates much of that. So I walked into various House and Senate committees and met with members and staff, with Democrats and Republicans from our task force, and we briefed them on a whole bunch of these different issues. And ultimately, they'd come down and they'd say, well, why should I fund any of these programs if I don't know that they're going to work? And we said, you won't know what will work until you fund something, and you build metrics and evaluation into it, and you can limit it. And then we'll have something more to talk about. They only saw it from the political perspective of: you're asking me, in a very, very sensitive area (we're talking about terrorism here) to take a risky stance. And if you can imagine how risky it is at the front end, when you're talking about off-ramping people who haven't necessarily committed any crimes yet but seem to be inclined to go down a wrong path, you can imagine how much more sensitive it is for an area where we are going to need to spend a lot more time and effort on the back end, as people finish serving their terms in prison, for example, and come out of prison. And as you heard this morning, there are no effective reentry programs to help people reenter society. Which brings me to my next point: very often, policymakers in this country come at this through a very ideological lens, from both sides of the aisle. That is to say, there are many people who feel uncomfortable putting in place, as several people have put it to me, "social welfare programs for terrorists". That's not how we deal with terrorists, right? Our purpose here is not to embrace terrorists, before or after. We have other ways to deal with them. That is extremely short-sighted. I said to one person, you know, we have rehabilitation programs for people who carry out all kinds of really heinous, horrible crimes. But the T-word becomes so emotional and so political and so ideological that somehow we put it in a completely different context. Every administration that comes into the White House has a period of policy review; that's always the case. The policy review for what I still call CVE policy, but the Trump administration now calls terrorism prevention, has taken until about now. And the pendulum is just now beginning to come back toward the larger middle after what was a very ideological discussion, mostly about radical Islamism, when that is not what this is actually about; that's only, if anything, a small piece of it. And so I think that there is an opportunity now to have some of the discussions that have come up over the course of today. And that may surprise some people, but I think that that's good.
The downside is that for many people this is an ideological issue. For example, some members of the task force that we put together really didn't like it when we wanted to talk about domestic terrorism, white supremacist violence. That conversation has changed post-Charlottesville. But at the time people said to me, and these are smart, great people, we've had those problems in the United States for years; that's not new. What's new is the radical Islamist terrorism that's coming from abroad. So we have this ideological piece too. I think we need to think of this, however, not only from the inside-the-Beltway, federal and legislative perspective; we need to think about this at a local and state level. Not only because I think we are seeing now that there is more headway in CVE programs domestically here in the United States, and I should say that's my area of focus, CVE domestically more than internationally, but because ultimately these are phenomena that are happening in communities. And the communities are those that are best placed to have a sense of what they need. People within the Beltway here talk about the need to give dollars to federal, state, and local law enforcement to deal with terrorism. But if you go out and you talk to local law enforcement across the country, in most of those jurisdictions, they want to deal with opioid problems. They want to deal with a high murder rate. It's not that they're soft on terrorism; that's just not something that's part of their regular lives. And if you come say you need to spend 15, 20, 25, 30% of your budget on counterterrorism, that's out of left field for them. So I spoke to some of the people who lead the CORE initiative here in Maryland, some of whom have been here. I don't know if they're still here; I think Shannon had to go pick up her kids at school. I talked with them about how some of these issues of metrics and evaluation fit into how they actually run programs in communities. And one of the things I heard is that there is M&E fatigue, PhD fatigue. Enough with you PhDs coming and telling us all the different boxes we have to check. When we are running programs in communities, we have to deal with the communities themselves, and we have to deal with the local and state government, mostly law enforcement, with which we need to interact. The local and state government doesn't wanna know all about the PhD-isms. What they wanna know is: how can I fit this into existing programs? 'Cause you're not giving me more money to run these things. Budgets are limited. How can I fit this into things I'm doing already? And when you go to the communities, they don't want you to come with a bunch of big terminologies and big ideas and tell them, we've studied it, here's what you're experiencing. What they want is to be asked. And so with the collective impact methodology, where you come in and you say, hey, we've done some studies and we have a sense of what's going on, but we'd like your opinion: here's what we think, is this what you're experiencing in your community, is there something different? There you get basic buy-in. And the last thing, which is an area of tremendous disconnect with the federal government, with the legislature, et cetera, is that across the board (I haven't spoken to every program across the country, but every program I have spoken to) it is unanimous that it is tremendously unhelpful to put this in terms of terrorism or any type of ideology.
To the extent we can put this in terms of public safety, to the extent we can put this in terms of violence prevention, which ultimately is what this is about, we will inherently get much more community buy-in. A lot of people here in the Beltway don't like that. And you will notice that under this administration, this effort in most, not all, but most federal agencies and departments is now referred to as terrorism prevention, which I think is undermining the ability to run exactly the same types of programs we're talking about in communities that don't want it to be only about the big T-word. Ultimately, what we're talking about is violence prevention and public safety.

So you all have given incredibly important insights. I'm gonna pick up on the last one for the question. One of the topics we've been trying to grapple with today is how we get more research, or expert or analytical findings, to actually influence policy. What are some of the incentive structures or disincentive structures that we should be advocating for? Maybe I'll hone in just on Matt's last point: if focusing on the word terrorism is not necessarily helpful, what are ways in which we, as the research community, as the community of experts, can be pushing forward on ways to do what is more helpful?

Start at a local and state level. The fixation on these terms tends to be one that is within the Beltway here. And by the way, there are parts of government, for example the Department of Justice, that have programs for evaluating programs, and they're doing great work. But for example, in Massachusetts, the state Department of Health and Human Services has partnered with academics at the state level to help drive some of their programs. A lot of state money, maybe some federal money, but this is being done at a more local level. So it doesn't all have to come out of Washington for it to be effective, and for it to then be available to be used by academics and others to have conversations with policymakers about how this can inform smart policymaking.

I'm gonna reflect just slightly on that. My title, and many titles of those doing programs in this room and around in this community, is Countering Violent Extremism, and I have a completely unscientific sample size that says everybody hates this term. And yet it is the organizing principle for this phenomenon. I think there are a variety of different reasons why this term is alienating to communities. It is not that helpful in actually describing the phenomenon. It's not the most popular term, but it is our organizing principle. And so to take what you were just saying, Matt, but maybe reflect on it a little bit further: if what we're trying to do is solve the challenge, hopefully, what can we do about the way in which we're conceptualizing it? And from the research perspective, are there ways to either study this, or to give analysis or empirically based study about why we're sometimes cutting off our nose to spite our face, even when the goals are good but the ways and means may be challenged? Any thoughts?

Wait, say that last part again?

The last part was cutting off our nose to spite our face. And so this was the idea that the goals of CVE are quite good, but perhaps the way in which we're describing it is...

Yeah, I think you can address that with research at many different levels, right?
So for me, oftentimes I'm doing research for very specific policy shops within the Department of Defense or Department of State, not trying to influence broad US policy on all CVE issues. Matt's had a chance to do this great bipartisan work for bipartisan committees; we haven't really had that chance, so it's a very different type of topic. Ultimately, I think it's about knowing your target audience: who are you doing this work for, and how are they gonna use it? Being really integrated with the implementer or the policymaker, so you can provide them the product they need. It's really easy to provide research that misses the mark on many different levels. I could do a study for a specific policy shop, but if my recommendations are very broad and generic, that's not gonna help the policy shop. I could do work for broad policy goals but be really too in the weeds, and sometimes the recommendations I'm making aren't even feasible on any real political level. So I think part of it is knowing the politics, what will be acceptable and what's not acceptable. You can shoot for the moon, but the moon's not always gonna get you where you wanna go. And then spending time and money to integrate those findings at the back end of it, rather than just posting your 100-page report up. Really finding ways of talking to people and communicating the results of what you're doing. Maybe that means using the word CVE, maybe it doesn't. But there are many different types of questions that researchers need to deal with when they think about: is someone gonna actually use what I'm doing?

I've just come out of really being embedded in thinking about practice when it comes to CVE, PVE, whatever we wanna call it, certainly in West Africa, where we've been looking at lots of PVE and CVE projects. Many don't call themselves that; it's simply a means for raising funds for what you think really needs to be done, which is peace building or conflict prevention. So my comments are very much aligned with that experience. I think in Africa, our bigger problem is that there aren't resources being thrown at all those other things, violence prevention and all of those other public safety issues. So there's a limited pot of money for what needs to be done. There's a very interesting observation that came out of some of the work we've been doing recently, somebody commenting (and I think this was a comment from Niger) that people don't ask us, and that aligns with your comments. People don't ask us what we think needs to be done. We're dealing with a whole range of intergroup conflict. We're dealing with religious conflict, Christians versus Muslims, over many years maybe. We're dealing with a whole range of ethnic difficulties between groups, nomadic groups and settled groups. We're dealing with a whole range of criminal violence, violence in homes. We're dealing with a mix of issues, and here you come along with some money, and it's focused on violent extremism, right?
So now it's that little bit that we have to find our way through. All that complexity of issues may relate to violent extremism, as we say it and seem to know what we think we mean when we say it. So it's this question of what is going on in those communities, and that's where I think the research matters. And I'm particularly talking about the practice of PVE now, and CVE, and changing things for people on the ground: the research that you've done matters because we get an understanding of what people need, what the issues are. We don't assume that there's going to be violence on those campuses; we find out about it, we develop programs. You know, this entire continuum of activities that we need to do to build an evidence base, that's the sort of thing we should be arguing for when we're talking to our donors. We have a lot more influence on some donors than others, and possibly we should be naming and shaming those donors that are not really supportive of this agenda of generating evidence that is useful for everybody. We're going to be raising this at our side event at UNGA next week, where we release this report, but one of the things we keep talking about is: is this a real thing we're talking about? Is this PVE something different? Is it what you're talking about? Is it peace building? If it is something different, then let's figure out what is different about it and then hone in on those issues. So I think we need to be batting away at that evidence-building continuum a lot more. And I see Emily smiling there, because it is something that we talk about a lot. I remember a conversation earlier with Emily from RUSI where we were saying we were supportive skeptics of the idea: as researchers, we do have to be skeptical at these early stages, when we really do not understand what this thing is about. But it doesn't develop unless you invest in the research and the evaluation. So yeah, I'll stop there.

And imagine how difficult it is to convince policymakers when we are skeptics ourselves, which we need to be. Just two quick comments on the CVE terminology issue. One, I think we did ourselves a tremendous disservice by failing early on to distinguish between CVE and PVE. They are different things; they include different things. Failing to do that led to a situation where everything was CVE. We compounded that by making CVE the sexy thing and giving money to CVE, and everybody wanted to be in CVE, and people described things they were already doing as CVE to get in on the CVE money, and suddenly CVE was everything from off-ramping to building a playground in a disadvantaged community, which is a misnomer of an example, a straw man, but you get the point. The second point is this, and several of you in the room have heard me say it many times before and you're gonna laugh; I say it so often that my former research assistant got me a mug, which sits on my desk, that says "haters gonna hate". The situation is such that there are people who are anti-CVE because they're anti-CVE. There are people who have been convinced that CVE is a cover for spying, and I get that. But there are also people who are against it because they're just not interested in this, because they're not trying to be a part of the solution.
I've been doing this long enough that I can spend a few minutes listing off for you all the different terminologies we've used for this in the past. Many people may not remember that CVE was the vanilla term that was intended to be the least controversial: we're not dealing with extremism, you can think whatever you want, but have you acted on it? So it's only violent extremism. Many people, myself included at the time, complained, look, if we're only dealing with violent extremism then we're a day late and a dollar short, but it was meant to be more comforting. And CVE has become a term that is, yes, hated by a whole lot of people, except the State Department, which still has to use the term for congressionally mandated financial reasons. I still use the term too, because I don't think anything that has come up as an alternative has been better, and I especially don't like the term terrorism prevention, again because it focuses wholly on terrorism. And frankly, if you're one of those people who was convinced that CVE was just a cover for police and counterterrorism, now you think that you were right, you've been vindicated. And good CVE, certainly PVE, is not at all a cover for intelligence or law enforcement.

For me that really begs the question, though, because from an analysis point of view, from a research point of view, how are we able to adequately research certain situations if the way in which we're supposed to be doing it is under something that communities and others may find so abrasive? And I ask that because one of our speakers earlier this afternoon was really talking about everyday peace indicators: how do we get this real indigenous understanding of what a safe community looks like, what a secure community looks like to you, community members? And if we as researchers are trying to showcase to policymakers some of those findings, what the positive trend lines look like from an analytical perspective, how do we do that under the moniker of a term that is really difficult?

So from my perspective, we've dealt with this a little bit. RAND a few years ago produced a report that, whether you like it or not, a lot of people hate. It dealt with sort of the moderate Muslim issue, and it seemed to articulate what a moderate Muslim was and was not, and so it was perceived as identifying good Muslim, bad Muslim. And so with that history, in comes Todd Helmus to go to the same people that are really upset at RAND for doing this project, and I had to articulate that I'm not that guy. I think part of it is just relationship building. Communities, as any community does, have their perceptions, and some of those perceptions are right and some are wrong, and I think it's just a matter of building a relationship with them to overcome those issues: trying to address their concerns head on, like, hey, listen, I know what you're talking about, and that's not me, we're looking at something very different; help me help you, and let me carry water for you. Because part of our job is to do that. Doing good research requires doing interviews and talking to people and understanding what their issues and concerns are, and I think if you can show you can listen, then the terminology drops away very quickly. Very few people are gonna continue to carry that water forever once they get to know you, once you get to address their concerns, especially if you're doing it in a real, legitimate type of relationship.
Yeah, I think that to the extent that you can build relationships, you can do a lot of good things. If you can demonstrate how you implement what some people refer to as CVE, if you can explain, for example at a local level, that you're implementing it through things like community policing and violence prevention, the types of things that parents care about for their children, and that I'm not coming to have a conversation with you because I think you're part of the terrorism problem, because this community, whatever this community is, is generating terrorists or could generate terrorists, then you can overcome some of that. It's difficult to overcome all of it, because you have to get people in the room in the first place. And it gets to some of the metrics that we tend to use. It's not that they're bad metrics; they're just not necessarily truly CVE metrics, maybe they're PVE metrics. Things like taking a poll, over a period of time, of people who participate in these community engagements. They get at things like: what level of comfort do you have speaking with people from local government? If you had a problem, would you approach someone, and would you know who to approach? We can improve those things through community engagements of different kinds. Whether that is a true CVE metric is a conversation. It's also true, though, that if you can improve those types of metrics, you can get to a place where you can then talk about other things that can be more sensitive and get to more CVE issues.

I'm gonna pivot us slightly and ask a little bit more of a specific question. Might you all share some examples from your experience of when you have seen research actually change somebody's mind, change a policy, change a practice? Do you have any anecdotes or examples of that kind of evidence-based shift?

That's not... an uncomfortable moment, wait a minute. No, I think there are small examples, and this is in the days before alternative facts and facts not being facts. It relates to the earlier conversation I was having about trying to convince parliamentarians in the South African Parliament to pass juvenile justice legislation that was pretty liberal, liberal in the sense of treating kids through diversion, pre-trial diversion, a lot of alternative sentencing, et cetera. We managed to use evidence to argue for something that was quite controversial; at any moment, anything to do with sex is controversial, and young sexual offenders, particularly the sexual offenses committed by young offenders, were a big issue. And we managed to argue them around purely based on international research and local research on this. But I think that there are several examples also of practice driving policy and not the other way around. And these are also two examples from South Africa, again my juvenile justice example, criminal justice examples. In order to enable diversion to be considered an acceptable part of a juvenile justice system (this means pre-trial diversion: kids who have committed offenses, admitted to committing those offenses, mostly nonviolent offenses, being allowed not to be prosecuted based on an agreement to do some programs or to apologize or whatever), well, pre-trial diversion was never part of the system.
In South Africa, luckily being part of a national organization, we used the idea that prosecutors had discretion over these things and convinced each director of public prosecutions in nine different provinces that this was okay to try, and let's try it in a few courts. By the time it actually came to legislating it, we had it operating in 20 different jurisdictions and really had data showing that only a minor number of those kids, 5% or something, reoffended. So practice drove policy. And similarly, South Africa implemented alternative sentences, community-based sentencing around community service, community service orders as an alternative to imprisonment. As a sentence, it was in practice long before it was actually in legislation. So sometimes, you know, these are just lucky circumstances, because we had little loopholes to work with. But I also think we need to think upside down a little bit, because the successful examples have come from practice driving policy, not from waiting. The problem in Africa is that it takes so long to make policy that you might as well try and put things into practice first, get it done, and maybe the policy will come along later.

Here in the United States, policymaking is quick and smooth. It's perfect. You're so lucky.

So in 2007, we came out with a report arguing for population-centric counterinsurgency in Iraq. And lo and behold, the following year, there's a population-centric counterinsurgency strategy. So I myself thought I must have had something to do with this, because, you know, the report got some press; the bright people must have seen it. Certainly General Petraeus read it. Finally, when I was doing my advisory work in Iraq, I had the chance to meet Petraeus, and it became quite obvious he'd never read my report. But to this day, my brother-in-law thinks that I was behind the surge. So I mean, even when you think you have an effect, you really don't know. The policy process can take a long time to follow through. I know there are at least a couple of instances when we seem to have had good luck, and these happen to be on the military side. If you get a chance to do military research, it's kind of nice, because the decision-making process, especially in a deployed environment, is pretty straightforward. The commander wants to do something, the commander generally can do something. So if he likes what you do, there are options. And if you get to know the system, you get to know who is reading and who can use the stuff you're doing. So we did opinion survey research on an operational template that the special operations folks were using, which I've heard from multiple sources helped to validate that operational approach, because it demonstrated that it wasn't upsetting the populations like people thought it was. So that's a rare chance: I got the feedback from the policymakers, and I saw the subsequent reports citing our work, like, oh, okay, so that made a difference. And I think the lesson for me is there's a lot of value in doing evaluation work. If you can do evaluation work on ongoing efforts, ongoing operations, ongoing CVE programs, there's such a need for that. There's such a hunger for results there that it's almost guaranteed that if you do it well, you're gonna help make a difference, either by showing that the program's worth doing or not worth doing.
The other piece that we did was recognizing that our audience was not necessarily the commander; it was the guys on the ground, the E-5s, the special operations teams working in very forward-deployed areas. And so we did a report looking at best practices, interviewed a number of folks: how can you best run these types of operations in the future? It was of no use to the commander. But we found that folks were passing it around for pre-deployment training over and over and over again, and years later, this piece that was never published, just a PDF document, had really made the rounds. So there again, it was about thinking of our target audience; in that case, knowing the target audience was gonna be guys who were really motivated to learn something before going to Afghanistan, and that they would be suitable audiences for this. And not that you can just go do research for special operations guys going downrange, but know who your audience is and how they're gonna use it. You can frame your report in ways that can be most useful to them, and it doesn't always have to be a decision maker or Congress; it could be others who could learn from it. And then the outcomes are much softer. There's not a major change that happens after a briefing, like, oh, okay, Todd, now we're gonna do all of that stuff. That never happens. It's much softer than that, and you gotta look for the cues. You can't just leave work at the end of the day and assume that everybody's read your product and is taking your advice.

It's hard, right? I'll give you three examples, two positive, one not. First, you had a judge in Minneapolis who decided, based on the research on levels of radicalization of a German researcher who started his work on hate groups and moved into terrorism, to start including alternative dispositions in his sentencing. This was controversial. It gets back to my earlier comment about how we don't have a political system that is willing to take a lot of risk. But the other thing that was interesting about this, and remains an issue today, is that this has not been done across the board. So this is still one judge. And if you talk to prosecutors in different jurisdictions across the country, people are left feeling like, well, someone in Minneapolis got an alternative disposition in a halfway house for a year and then lifetime supervision maybe, but someone in another jurisdiction is gonna get 15 to 20 years. So it's not necessarily consistent across the board, but you had real research that had a real impact on not just policy but on people's lives, and it led, and is continuing to lead, to a real policy discussion. Another is the debate as to what the biggest nature of the threat is here in the United States, and whether that includes what we'd broadly refer to as domestic terrorism, white supremacists, et cetera. The ADL and some other groups came out with a series of studies that I think really quite definitively demonstrated, by the number of incidents, attacks, thwarted attacks, the number of people killed or injured, by just about every metric, that there was a greater threat from what you would describe as domestic terrorists here in the United States. Not to say that we don't have to be concerned about international terrorism, we obviously do, but if you leave politics at the door, the reality on the ground was a little different.
An area where research hasn't really had quite as much of an impact, and that's because it's high, high policy, is the debate as to whether or not immigration and illegal immigration, separately or together, are a major counterterrorism problem: are the majority of incidents of terrorist attacks or plots in the United States by people who came here from abroad, legally or illegally? There are some in this administration who push that line very, very hard. I've written about it several times now, pushing against it, and there's a tremendous amount of very, very strong data, the data is unanimous, that this is not the case, that this is not an immigration problem. But immigration is a third-rail hot potato right now, and therefore the research hasn't had the impact. That's because of the specific and unique circumstances of this particular hot-potato issue. Many, many times. In fact, I had an article come out about it after the vehicular attack in Manhattan on Halloween. The article came out the morning I was speaking on one of the opening panels at an NCTC conference on what we were then referring to as CVE, and I got pulled aside by a whole bunch of people saying, well, okay, not necessarily with a thumbs up, but your thing is getting a lot of attention. Which was great; that's why I wrote it, not necessarily to make people happy, but to prompt that discussion. It got a lot of attention in the broader community, sure. But the broader community and the Twitter-verse are not the same as high policy, with some critical exceptions, and I don't know if the policymakers saw it.

But I do think these voices contribute to high policy, if not now, then maybe eventually. So even if there's not that direct policy result, to be able to change the conversation, to be able to influence the conversation, to be able to be a part of the conversation that's evolving, I think is a real special opportunity. And so I think those are wins, even if you're still waiting for the high policy to change, because things are going in that direction, if not now, then later. I think these are all interesting points for us all to consider for the future of research: research for research's sake, research for influencing policy, research for influencing practice, and all the different permutations in between. So, you have all mentioned in some way, shape, or form monitoring and evaluation, and I'd love to pull that thread a little bit more: how can monitoring and evaluation findings be influential? Are there meaningful differences between research for research's sake and research in the monitoring and evaluation vein? Can we pull a little bit more nuance out of those types of findings, and how might they be stratified meaningfully, or if not stratified, then used in an ecosystem where we're trying to take multiple pieces of information to make better decisions?

Look, when Jesse spoke this morning, he put a slide up there quoting General Nagata from NCTC. That line was from an event that I hosted. I was sitting on the dais with him as he said something to the effect of: we still don't know what drives these things. And I said to General Nagata after the fact that I disagree. We don't know everything, but we actually know a lot. Again, I've been doing this for long enough that I can tell you how many rooms this size we could fill with these studies, classified and unclassified both, that we've been doing over the years about all these issues.
What we need are not more studies. What we need are programs that are being evaluated to see if they are actually having the types of impacts we want. And we can put those programs in place, because we have a decent understanding of the very, very, very broad waterfront of issues that can lead to radicalization, from grievances to ideology and everything in between. There are now a handful of programs that have gone through some pretty good evaluations, right? And that's great, but we need more, so that when people like me go to members of Congress saying, hey, we need to put in place programs like these, and they say, well, how will I know they're gonna work, why should I fund them in the first place, you can say, well, because we've done some things, we've measured some things; here's what we can tell you, and here's what we can tell you works in certain circumstances. We need that evidence base. We need that ammunition to be able to get the ball rolling. The most difficult thing is getting it rolling at all. But I think we've done that now. Again, there's a good program at the Department of Justice that has been funding evaluation of programs in different parts of the country. It's not like we're starting at ground zero, but we need a lot more metrics and evaluations of actual programs. Not another study on the role of ideology or the role of grievance, and another pyramid; okay, we've got plenty of these. Now tell me if the different types of programs we've put in place to address different touch points on that waterfront, or on that pyramid, are having the intended effect.

I agree that the real space for us to focus on now is producing good M&E, and that means producing good research right at the front end of programs and being able to measure their effects. But I think that we're not appropriately structured and funded to do that yet. And that's part of the advocacy we need to do around the way M&E is funded: for example, the duration of funding and the timelines required to achieve some of these things. The violence prevention people will tell you the timelines are sometimes 15 years, and intergenerational, depending on what you're trying to do. So the timelines are not aligned with the actual funding available. There are a number of technical problems in terms of program design, how evaluation is funded, and whether you're producing monitoring and evaluation data at all. The second piece of this is of course communicating it, and communicating it to whom, for what reason. Let's just say we want to scale up this program, or we want to do it somewhere else, to take the principles and maybe try them somewhere else. I hate the idea of replication of programs, especially in the context I come from, but if you want to take some principles and try them elsewhere, the reason we're doing the M&E is to show that maybe the program works, or some principles work. But I think the biggest problem, and it's a bizarre, silly thing that we keep doing, is communicating badly: not communicating in a way that people can really understand the value of what we've done, or the lack of value of what we've done, what has worked and what hasn't worked, and packaging that communication in a way that people can really understand it. So the 100-page report is required, it certainly is required. It sits on our websites because we do need the technical work to be done.
However, the repackaging of that information matters. Having reviewed our entire communications approach as an organization over the last four years, we now actually tweet out research findings from reports and have those retweeted by policymakers. Yay. I mean, if that happens. So doing that as a means, and then also, reaching policymakers in Africa is actually about getting your stuff into the media and social media now. So putting those short little stories into the media, trying to get people to interview you, cultivating journalists and so on, because policymakers are not listening to us as researchers. They may be reading the newspaper, and it is the papers and radio and television. So yeah, let me leave it at that.

We're already short on time, so I'll just add: I think the tide has turned on this. I think there's an increasing understanding that evaluations are needed for these types of things. They've been sorely missing in the past. I think there's value in doing evaluations on many different levels. If you are running your own CVE program, there's no reason at all you shouldn't collect your own data to see whether or not you're producing the type of effect you want. There are many benefits to it. It's much more than just being able to go to your funder and say, ah, see, it's working, give me more money. The evaluations are really critical for programmers to improve their own programming. Are there certain parts of their audience that are responding well or not responding well? If they're not responding well, you can figure it out early and change your program so they do respond well. So it's not just to be able to have an up-or-down vote on whether it is good or bad; it's to actually improve your programming. That's really valuable. But of course, I think there's real value in also doing real, academically sound research on this. This is a new area for me; we're doing a lot of this type of work. In fact, we're doing several clinical trials on CVE programming, which is sort of a different level. There's a different price tag to that, of course, than handing out some questionnaires to your participants. It's not cheap. It's expensive to do. But I think if an implementer, if a government agency, is spending a lot of money on a program, then they need to spend a chunk of money on an evaluation, and it probably should be an independent evaluation.

So I'm gonna stay on that last point. I'm so in agreement that the M&E movement has been really, really useful on the idea that programs have to be evaluated, that it has to be built in from the outset, and that we need to be doing more of this. I wonder how we do that in ecosystems that are so atomized by stovepiped efforts, where you can only see impact from a plethora of different efforts all at once. And so to me, some of the work for the academic community and for more external research is to look at an entire system, while some of the M&E is going to be project-level, program-level. But it kind of gets to your point, Cheryl: how do we do this on a time horizon where you can actually see something? I was at a lunch this week or last week where somebody said, it's very difficult to know whether you're in a trend or a cycle when you're in it, because cycles look like trends until you're out of them. And it struck me as something that could be very applicable to a lot of our work.
How do we get that kind of meta-level of analysis? How do we maximize what can be an outside look at research and combine lots of program-level or project-level inputs together to see what's actually happening? Because there is an idea that you can have many successful programs but not actually be seeing a successful ecosystem change or impact change, because external environmental factors are changing at a more rapid pace than any of your interventions are addressing. So can we talk a little bit about how we can get to that kind of meta level from the research perspective?

I'll give my two cents on that. To me, posing the issue like that makes it too hard. It assumes that your outcome, the dependent measure of your evaluation, is less terrorism, or terrorism. Yeah, you can't do it. You're probably not gonna be able to do that evaluation. You're never gonna be able to randomize enough people to find out if some people actually committed terrorist attacks and some didn't. Even showing changes in attitudes: do it if you can, but it can be hard. So in our view, the key goal is interim outcomes. No program just blindly says, yes, our goal is to reduce terrorism, and that's our program. Nobody has that program. Programs always have near-term interventions. Is it to help socially stabilize at-risk youth, give them social networks? Is it to provide meaningful job opportunities to people coming out of prison, so they have alternatives rather than resorting to a life of crime or maybe even violent extremism? Generally, CVE programs try to do something that is an interim objective toward their big panacea outcome. And that's where the evaluations, I think, need to occur, and those can be done on a short-term horizon. So if you have a program that seeks to work with at-risk youth to give them social outlets, does it increase social outlets? You can test that in a very short period of time, and if it doesn't, then your program's not gonna work. It's probably not gonna have the long-term outcome you think it will have, because your long-term outcome is dependent on the short-term outcome. So I think the key is to shorten and limit the goals and objectives of your evaluation.

Can I also just say that it's a difficult question, but other fields have been doing this for a long time. Criminal violence prevention is a field that has existed for more than 40 years. They have 40 years of evidence, of learning, of techniques and strategies for trying to figure out contribution, attribution, all of that technical stuff, which is boring but does need to be engaged with if we wanna do M&E. In those 40 years, they can demonstrate some evidence-based practices that have worked in Japan as well as they've worked under similar circumstances in New York City and under similar circumstances somewhere in Sweden. So they've come really far in terms of their evidence base and this building of evidence-based practices that may work in your area for a particular thing. So I don't think it's impossible to do any of this, but we're at such an early stage in this field that it's really difficult to pull it apart. One of their biggest problems, and I have colleagues who work in this field at the moment, is this business of scaling things up, right?
So you may have a fantastic project that has shown that you can actually reduce terrorism or violent extremism somewhere in the world, and the issue is how you scale things up, unless you can get government to legislate and fund it fully, you know, in one go. And that process is something that that field is still struggling with, notwithstanding the massive amounts of evidence that may exist around it. So again, it's that complicated conversation between evidence and policy. And it's different all over the world. Every context has its own issues, and it's mostly the politicians that are the problem, not us. But really, it is a big puzzle, and we have to figure it out for ourselves in each individual case, I think, yeah.

I can only add that, to me, the biggest part of the problem is that everything that you both said makes all the sense in the world, except if you're a policymaker, right? The policymaker isn't interested in what's gonna happen in the near term; they wanna know, my goal actually is to defeat terrorism, how does this get me there? I can't tell you how often, still to this day, I get asked, well, when is terrorism gonna end? How many terrorists have you stopped? Look, you know, when I was the Deputy Assistant Secretary for Intelligence at Treasury, in a different area, on terror finance issues, we had similar problems finding methodologically sound information to demonstrate that the measures we were taking were actually stemming the flow of funds and improving the security of the international financial system. And so we did the methodologically unsound and politically wise thing of declassifying a small number of incidents and stringing them into a testimony. Not methodologically sound, but people would be able to go forward and say, look, here are some successes we've had. And so the problem is not that I think you're wrong; you're absolutely right. It's just that when you translate this into trying to affect policymakers' decision making, that's the barrier, because they want to know something beyond that. And then, you know, your point about criminal justice, absolutely, but terrorism has been put in its own box. I get told all the time, oh, but crime is something different. Gangs are something different. Suicide prevention, with all the things we could learn from it, is something different. I'm not saying that terrorism is the exact same thing as gangs or suicide prevention or crime, but there are things from each of those areas that we can learn from. There's a lot of study that's been done in each of these areas that is useful to us, and frequently policymakers don't want to make use of that, because for political reasons terrorism is seen as something different. And I think it's just a matter of an insufficient amount of resilience. We usually talk about societal resilience; I think there's a whole political and policy resilience that is lacking, and that is having a really negative impact on our ability to leverage the research that's been done already, in a Venn diagram kind of way: this is separate, but there's a piece of it that's relevant, and we are not able to use that because so many people insist that all those other things are something completely different.
Well, this is the moment where I get to pat the RESOLVE Network on the back, because during our breakout sessions that all of you were a part of, one of the areas on that sticky-notes framework was: what are other areas that we should be learning from as the CVE community of practice? And at the tables I got a chance to sit with, there were a lot of different areas that I think we can learn a lot from. If we are committed to elevating the empiricism, the rigor, and the knowledge base on this problem set, we have to be learning from other areas, not in a "this is the exact same thing" way, but because there may be some extrapolatable lessons, and also some non-extrapolatable lessons, things that are absolutely different, and we can be learning from those as well. Those are no less important. So with that, we are out of time. I'd love to thank my colleagues and co-panelists for this last discussion. So please join me in thanking them. Thank you. Thank you. Thank you all.