Next, I'd like to welcome to the stage Dr. Chris Messarro. He's a fellow at the Center for Middle East Policy at Brookings. He's an expert on religious and sectarian conflict, the causes and consequences of political violence, and the impact of technological innovation on foreign policy. He's currently working on evidence-based approaches to counter-terrorism and countering violent extremism. His most recent research, for example, draws on advances in non-parametric machine learning to assess the impact of discriminatory rhetoric and regulation on the growth of extremist networks. By coupling this novel data with new methodologies, his work offers original insights into how best to respond to terrorism and political violence. Please help me welcome to the stage Dr. Chris Messarro. Thank you for the invitation to speak here, and thank you to you all for being here as well. I'm going to speak today a bit about data science and violent extremism. I'm going to talk about some forthcoming research that I have that builds on some work I've done looking at historical grievances. To motivate the research, though, I want to start off by talking about what the problem is. And I should probably clarify that Nancy Lindborg is not the problem; she is talking about the problem. This is a picture of her last week at the event USIP held on their new interim report on the drivers of violent extremism. And one of the points that she made, which is the same point she made this morning, is that we've now spent a tremendous amount of money, and we've spent a lot in terms of personnel and lives. And yet, for all that we've done over the last 15, 16, 17 years, the problem is only getting worse. The number of attacks has increased fivefold. The growth of extremist networks themselves has been exponential. And I think all of that raises a hugely important question, which is: what are we getting wrong?
What is it about this issue that, despite all of the policy interventions we have tried, we continue to see the problem not just stay the same, but actively get worse? What is it about the way that we're approaching it that we're getting wrong? And I would offer two things. The first is that I think we've been offering, or operating under, some bad theories. And probably the worst or most pernicious is the idea that it's a simple story about just Islam, for example. This is, frankly, one of the racist Dutchmen that Mike referenced earlier, Geert Wilders in the Netherlands. His whole political platform is based on the notion that Islam is the problem. But I think we can go beyond that. For those of us who research this, the folks who say that it's just poverty, or that it's just a lack of democracy, or that it's just a lack of education, I would say that that's still probably too simple a story. Extremism is complex, and yet we use very simple theories. The second issue, I think, is that we have bad data. In particular, as we study this, we very rarely include data on the policy interventions that we are trying. And as a result, if our policy interventions or policy rhetoric are making things worse, and we're not including data on that, we won't be able to know. So between the bad theory and the bad data, I think that's a large part of where we seem to be going wrong. What I want to talk about today, and what I encourage you all to think about today, is that there's a way forward here. One part is to use machine learning to uncover the complexity of the world. Earlier speakers did a great job laying out how complex this issue is; we need to have methods to match that. And the second is that we need to have data that can actually verify and hold accountable some of the policies that we've been implementing.
So today I want to talk about some research that I've got coming out on veil bans and machine learning that tries to do both of those things: use original data and use some new methodologies. I'm going to start by talking a little bit about machine learning first. Because the complexity is a lot easier to understand in a depoliticized context, I'm going to talk using an example of plant growth to start out with. When we talk about complexity or a complex system, there are really two factors that make it complex. The first are what are called nonlinear effects. If you think about the way that a plant grows, you might think it's a simple story: you add some water and the plant grows a little larger. But that's not actually true. If there's no water, the plant doesn't grow. If there's too much water, the plant doesn't grow. You have to have the right amount of water; there's a specific range in which water helps the plant grow. The point I'm trying to make is that the effect of water on how a plant grows depends on how much water it already has. And it's the same thing for sunshine. No sunshine, you don't get a plant. Too much sunshine, you don't get a plant. It has to be in the right range. The effect of sunlight on plant growth depends on how much sunlight the plant has already had. That's a nonlinear effect. The second kind of effect are what are called interactive effects. You can have all the water in the world at the perfect amounts, but if you don't have sunlight, you're not going to have a plant grow. And the same thing is true in reverse: you can have the perfect amount of sunlight, but if you don't have any water, you're not going to have a plant grow. And the reason I bring this up is that this is a very complicated story. Even though intuitively we tend to think of it as very simple, it's actually quite complicated.
And if we used the methods that we use to study social processes to study plant growth, this is the key point I want to make: we would not be able to tell that water and sunlight matter for plant growth, which is crazy. The reason is that all of the traditional statistical models that we use assume by default that effects are linear and that there are no interactions. Yet if we think about how the world works, I think we can all intuitively agree that most of the social processes we care about, and that influence the world, are quite complicated and have a lot of nonlinear effects and a lot of interactive effects. And I think extremism is the same way. I would say that extremism is probably even more complicated than plant growth in many ways. Therefore, we need methods to match that. And the value of machine learning is that you can feed it data and it will sort through the data and identify where the nonlinear effects are and where it thinks the main interactions are. So that's why I think we should use machine learning to get better traction over problems like extremism. To return to the project, that's why I wanted to use machine learning. The other issue is veil bans. And so why would we talk about veil bans? I wish I could say that I had this idea at the beginning; I'd feel a lot smarter and this presentation might be a little tidier. But the reality is that it was an iterative process. A few years ago, when the caliphate first started and some new machine learning algorithms came online, I began to gather as much cross-national data as I could on different potential theories about why some countries might send foreign fighters to Syria and others don't. When I ran the original analysis, the variable that came back as being most important was actually whether or not a country spoke French. And I looked in and tried to peek under the hood and see what was driving that finding.
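The plant-growth point can be made concrete with a small sketch. This is not the speaker's code; it's a synthetic illustration in which growth has a bell-shaped (nonlinear) response to water and sunlight and depends on both together (an interaction). A standard linear model, which assumes linear, non-interactive effects, finds essentially nothing, while a random forest, a common machine learning method, recovers the relationship from the same data.

```python
# Synthetic illustration: nonlinear + interactive effects defeat a linear model
# but are recovered by a random forest. All numbers here are made up.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
water = rng.uniform(0, 10, n)
sun = rng.uniform(0, 10, n)
# Growth peaks at moderate water AND moderate sun; near zero if either is extreme.
growth = np.exp(-((water - 5) ** 2) / 4) * np.exp(-((sun - 5) ** 2) / 4)
X = np.column_stack([water, sun])

linear = LinearRegression().fit(X, growth)
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, growth)

print(f"linear R^2: {linear.score(X, growth):.2f}")  # near 0: misses the shape
print(f"forest R^2: {forest.score(X, growth):.2f}")  # near 1: recovers it
```

The linear model's near-zero fit is exactly the "we would not be able to tell that water and sunlight matter" problem: by symmetry, the average linear effect of each input is zero even though both inputs completely determine growth.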
And the countries that seemed to be driving the finding were France and Belgium and Tunisia. All had had major contentious debates over banning the Islamic veil in the year before the Syrian civil war started. So what I did is I went through and coded every country around the world to see the extent to which that country had either had a national controversy or debate over the veil, or had even imposed and enforced a national ban on a partial or full veil. The main theory for why I was doing this ties into what Mike was talking about earlier, which is dehumanization. As these veil bans are debated and discussed, they are very often defended on the terms that you cannot be both Muslim and Western at the same time. You cannot be both secular and Muslim at the same time. You cannot have both a national identity and a Muslim identity at the same time. And it's a message that really reinforces the sense of being dehumanized and the jihadist recruiter's need to stress identity and force potential recruits to make a choice between Islam and their other identities. So that's the theory for why it would matter. Once I'd gathered the data, there were about 40 countries that had either debated or enacted a veil ban in the years prior to the onset of the Syrian civil war. And when I looked through, the bivariate relationships are pretty strong. About two-thirds of those 40 countries had a foreign fighter network go to Syria. For the countries that did not have a debate or a controversy over the veil, fewer than 10% sent a Syrian foreign fighter network. So there seemed to be something there just in the basic bivariate data. But the question was, are other things mattering too? Are the veil bans responding to prior jihadist violence, for example? To check that, I went back and reran the same analysis I had run before, controlling for things like prior jihadist violence. And it turns out there are really only two things.
You probably can't see it here, but the far-left column is Muslim population, and the one right next to it is the veil ban. Of all the data that I passed to the machine learning algorithm, it flagged two variables as being important, and those two variables basically account for the entire predictive power of the model: the size of the Muslim population and the extent to which a country debated or imposed a restriction on the veil. Those were the two factors that the algorithm picked up on as being most determinative of whether a country sent a foreign fighter network to Syria. The same thing holds true for interactions. I mentioned earlier that machine learning can pick up latent interactions; you don't have to tell it to look for them, it'll just figure out what the most important interactions are. The most important interaction it found was that countries that had large Muslim populations and also a debate over the veil ban were the countries that were most likely of all countries to send a foreign fighter network to Syria. The other thing I had mentioned is that in addition to being able to pick up interactive effects, machine learning can also pick up nonlinear effects. Here's the effect of the veil ban data on the likelihood of a country sending a foreign fighter network to Syria. The most important thing, the thing that I want to draw your attention to, is the gap between whether there was a local controversy or a national controversy over the ban. That's where you see the biggest jump in the likelihood of a country sending a foreign fighter network to Syria. By contrast, once you've got a national controversy, if you then introduce or enforce the ban, it raises the probability a little bit, but not nearly as much as if you'd gone from having no debate to having a national debate.
And this is consistent, in my view, with, again, what Mike was talking about earlier and the effect that threat perception and dehumanization can have on in-group formation. What matters here is that as national politicians like Geert Wilders and others advocate for these bans, they are, again, saying that you cannot be Dutch and Muslim at the same time. And that's exactly the same message that the jihadist recruiters are using to try and draw new recruits into their networks. So the debates over the ban seem to be compounding the recruiting pitch of these groups. One last slide I'll show you from the research, just to show you, again, the value of machine learning. This is the effect of Muslim population, or the relationship between Muslim population and the likelihood that a country sent a network to Syria. What's important to note here is that there's a very narrow range where the size seems to matter. If you have a few thousand Muslims, as common sense would tell you, there's probably not going to be much of a network. And then as you get very, very large, into the hundreds of millions, it doesn't really matter if you have more Muslims at that point either. There's a range in which it matters. The other thing you can tell is, as you get out to the far right, each of those lines is a country, and it's showing the likelihood that that country would send a foreign fighter network to Syria if it had a certain number of Muslims. And you can see that some have a much higher likelihood than others, even if they had a lot of Muslims. When you have separation like that, that's showing you visually that the model is picking up on an interaction.
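Those per-country lines are the idea behind individual conditional expectation (ICE) curves: for each observation, hold everything else fixed, sweep one variable over a grid, and plot the model's prediction. If the curves fan out into separate bands, the model has found an interaction. A minimal sketch on synthetic data (again, stand-in variables, not the study's), computing two such curves by hand:

```python
# ICE-curve sketch on synthetic data: sweep one variable over a grid while
# holding the other fixed. Separation between the curves signals an interaction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 1500
pop = rng.uniform(0, 1, n)             # stand-in for population share
ban = rng.integers(0, 2, n)            # stand-in for a ban debate (0/1)
y = ((pop > 0.4) & (ban == 1)).astype(int)
X = np.column_stack([pop, ban])
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

grid = np.linspace(0, 1, 21)
def ice_curve(ban_value):
    # Hold the ban variable fixed, sweep population, read off P(network).
    pts = np.column_stack([grid, np.full_like(grid, ban_value)])
    return model.predict_proba(pts)[:, 1]

with_ban, without_ban = ice_curve(1), ice_curve(0)
print(with_ban[-1], without_ban[-1])  # curves separate at large populations
```

At the right-hand end of the grid the two curves sit far apart, mirroring the talk's point that countries with very large Muslim populations differ sharply in predicted likelihood depending on whether there was a ban debate.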
And in this case, it's saying countries with hundreds of millions of Muslims and no veil ban are going to have a much lower likelihood of sending a foreign fighter network than countries with that many Muslims and a veil ban. So it's another way of visualizing, or intuitively seeing, the complexity of the process itself in your data. The question, of course, as with any research project, is: do the results make sense? Especially if you're using a machine learning algorithm and you're really trusting it to tell you what it finds in the data, you have to be very careful about checking to make sure that it passes some kind of common-sense or intuitive check. And in this case, if this is true, if there is some kind of relationship, we would expect to find three different qualitative markers. The first is we would expect to find members of Al Qaeda and ISIS themselves talking about the veil bans and citing them as a cause of recruitment. And in fact, that's very true. There are actually so many examples of this that I didn't even put up a slide of it. In fact, Jesse mentioned earlier the magazine that he had started; the first issue of that magazine had an article about the French veil controversy. The second thing we would look for is recruiters at the local level saying that these veil bans had an impact on recruitment when they were building their networks. This is a picture of Fuad Bilqachem. He founded a group called Sharia for Belgium, which sent the largest number of foreign fighters to Syria from Belgium. It was an instrumental group in getting the facilitation networks going from Belgium to Syria. He was arrested in, I believe, 2012 or 2013, and when they asked him to account for the success of his group, he said very publicly that there was a local ban in Antwerp in 2009, and then there was the big controversy over banning the full veil in Belgium in 2010.
He called the effect of those bans a bomb that helped contribute to the explosive growth of his network. He may have been tongue in cheek as he was saying that, but I don't think there's any denying that he viewed it as essential to what allowed his network to grow. The third thing that we would expect is that women themselves who would be considering wearing a veil would also be identifying the veil bans as something that might contribute to their own journey into extremism. And again, we have evidence of this as well. There's a woman named Um Mansour from Belgium. She tried to go to Syria; she was intercepted at the border, I believe. She was asked to account for her own experience. Prior to the Belgian veil ban, she did not even wear the veil; I don't know that she was particularly devout. When she talked to the New York Times a few years ago and was trying to explain her own process of radicalization, she said: it felt like my identity had been taken away from me. That ban caused more harm than good. If it hadn't been passed, I wouldn't have kept myself away from the world, and maybe I wouldn't have been radicalized. Obviously, it's not that simple a story, but it's clearly a very formative point in her own journey into extremism. So we have examples of fighters, recruiters, and women themselves citing these bans as being a cause of their own extremist recruitment. That said, this is the slide where I say that there are some limitations to this. This is not an experiment; this is not a foolproof kind of causal study. There are also issues with using national data to try to understand an individual process. The one caveat I do want to flag for you is that in this project I could control for prior jihadist violence, but I could not control for nonviolent jihadist networks that may have existed in these countries, because we don't have data on them.
So it's unclear whether the countries that had national controversies over the veil were helping to form these networks, or whether those networks already existed and the controversies just tipped them over into violence. And then the final caution I would give is that I started this by saying that I'm very much averse, as Leanne is, I think, to saying that this is a simple causal story in any way. I do think that this matters, but it is clearly not the only thing that matters. And I would be very much remiss if anybody left here this morning thinking this is a new simple causal story. It is a part of the story, but it's not the whole thing. That said, what do I want you to walk away with? I want you to walk away with the sense that extremism is complex, and that therefore we very much need methodologies that can reflect and begin to describe that complexity. And the second is that we really do need to begin taking seriously, in my view, the notion that anti-Muslim rhetoric and policies may be contributing to extremism. With that, I will leave you, and thank you very much.