Let's see. OK, so my name is Brian Walsh. I currently work for the Global Facility for Disaster Reduction and Recovery, which is housed at the World Bank. Even inside the World Bank we've got extra money, because it's a trust fund, and we try to mainstream disaster risk management best practices both inside the Bank and in client governments around the world, wherever we work. Until recently, I was working at IIASA, the International Institute for Applied Systems Analysis in Vienna, where I was working with integrated assessment models and with system dynamics models. And before that, I was doing my PhD in particle physics. So I've had the opportunity to work with a lot of different types of models in a lot of different contexts. And on that basis, I'm going to take a step back and not present any specific model, but really talk about modeling in context, hopefully pick up some of the threads that I heard in the talks this morning with Jonathan Mora, and maybe play the economist's role in a room full of natural scientists.

So I want to start with a thought experiment. Imagine you're a World Bank analyst and two projects come across your desk. Both of them are seawalls. Both of them cost $100 million. But Project A will prevent, on average, $20 million of losses every year, and Project B will prevent, on average, $5 million of losses per year. And you can fund only one. The challenge is to figure out which one we should fund. I just want to throw that out there, and we'll come back to it in a minute.

It's obvious, both in general and in the context of that last talk, that our capacity to model, predict, and indeed manipulate natural and social processes has never been greater. Cheap computing power and new data streams are endlessly available, so more than ever, expertise is a scarce resource. But it's arguably only a matter of time until we put ourselves out of business, until we've developed machines that will do so. And if our friend at NOAA is too clever, he'll put himself out of a job more quickly than the Trump administration wants to. So the question we need to ask, and I think it's really useful in this context of linking up social and physical models, is: what is the value added by modelers? What value do we add now, and what value will we still add when our stochastic agent-based models are self-generated, when the machines can train themselves to do all of this?

To think about that, as I said, I've worked with a lot of different models in a lot of different contexts, so I want to think about what a model is. I'm an economist, not a philosopher, so this won't be too deep. A model, I'll just throw out there, is a fixed framework of observations, corollaries, best guesses, narratives, and indeed biases. Even granting good science (and obviously confirmation bias is always guarded against), bias can be found in the boundaries and the boundary conditions of almost every model. Specifically: what kinds of processes and interactions are included versus excluded from your domain, and in what detail?
To put it another way, the scope and the depth of models are themselves functions of the questions we ask, the way we phrase them, and our expectations about what the answer should look like. And in that sense, I think machines are going to do a lot better than we do at eliminating that bias within the next 10 years or so. But then, step back and ask: what are models good for? Because we spend a lot of time on them, we pore over them, and they break our hearts. But what is their use? Even if your model represents processes and makes predictions with accuracy, maybe they're falsifiable, maybe they're never falsified, maybe your model is great, its utility is ultimately going to be determined by your audience. I never internalized that more strongly than during my time at the Bank. To put that yet another way: information about an ecosystem or any given process is really only for curiosity's sake, especially in the fields represented here, if we collectively don't value the subject. If the market, or whatever other determiner of value there is, doesn't decide that your ecosystem is worth saving, then your interest is not really relevant. So the claim here is that as we build and communicate our models, we have to be aware of the values that we're communicating, the values that our models are built upon. And we have to know when to stop the facts from getting in the way of a true story; we all encode our values a little bit, whether we mean to or not.

To make this a little more concrete: in my job now, I do disaster risk management in developing countries, and the project we're working on right now is quantifying resilience to natural disasters throughout the Philippines. Resilience is a buzzword, so we define it in this case as the ratio of asset losses to well-being losses after a shock. As many of you are familiar with, in a traditional risk assessment you look at the hazard, the frequency of hurricanes or earthquakes or whatever you can measure; you look at exposure, the location of assets; and then vulnerability. And this of course goes back to the quote Jonathan had up, that floods are an act of God but losses are an act of man. So in our analysis, our group at the Bank is trying not to stop there at a traditional assessment, but to look at who's affected and to quantify their capacity to cope with and, hopefully, recover from a shock.

Now, I'm not going to go deeply into this model; it's relatively simple. We incorporate data on hazards, asset types, and vulnerability. Again, that's the traditional analysis, the first three bubbles on the left. But then you push further into poverty incidence, financial inclusion, social safety nets, income distribution, insurance, and remittances: all the things that actually determine whether a person is able to, for example, dip into savings to cover a loss in the immediate aftermath of a disaster. In that way, we try to translate asset losses into well-being losses. And with that, we can generate pretty maps. On the left, you see asset losses; that's the traditional analysis. You can see that the northeast edge of the Philippines gets the brunt of the Pacific hurricane season, and it's also the edge that lies right on the Ring of Fire, so they get it all.
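Since the talk doesn't show the model's internals, here is a minimal sketch of the asset-to-well-being translation just described. The concave (CRRA) utility form and all the numbers below are illustrative assumptions, not the Bank's actual implementation; only the definition of resilience as the ratio of asset losses to well-being losses comes from the talk.

```python
# A minimal sketch of translating asset losses into well-being losses.
# The CRRA utility form and all numbers are illustrative assumptions.

def utility(c, eta=1.5):
    """Concave (CRRA) utility of consumption c; eta is an assumed
    risk-aversion parameter."""
    return c ** (1 - eta) / (1 - eta)

def wellbeing_loss(consumption, loss):
    """Drop in utility when `loss` is subtracted from annual consumption."""
    return utility(consumption) - utility(consumption - loss)

# Two households each lose 10% of their consumption...
poor = wellbeing_loss(2_000, 200)
rich = wellbeing_loss(50_000, 5_000)

# ...but the well-being loss per dollar lost is far larger for the poor
# household, because each dollar matters more at low consumption.
print(poor / 200)    # ~1.2e-05 utils per dollar
print(rich / 5_000)  # ~9.7e-08 utils per dollar

# "Resilience" in the talk's sense is asset losses divided by
# well-being losses: the poor household comes out less resilient.
```

The design point is simply that a concave utility function is what makes a dollar of losses hurt the poor more than the rich; any concave form would tell the same qualitative story.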
On the right, you can see socioeconomic capacity, or resilience. That is our model's estimate of how well people are able to cope with a loss when they face one. You can see that Manila is in this dark green region at the bottom of the northern island, and as you go further south, you get more rural and more impoverished. So you get a different lens than a normal analysis, where you would want to build higher dams and dykes and more earthquake-resilient infrastructure. Here you see that soft instruments like social protection can actually make a difference.

To start to put numbers on this: starting from the left, this is the population of the Philippines broken down into income quintiles. As you'd expect, the bottom quintile of the population has the fewest assets. But when a disaster comes through, they lose a larger fraction of their assets, and that hits their consumption harder as a fraction of their total consumption. And when we look at welfare or well-being, what you see is that a much smaller loss in absolute terms for poor people actually generates a much greater hit to their well-being. So what we're saying is that even though, or precisely because, they have the least to lose, the global poor are more affected by and take longer to recover from shocks.

Again, we can put numbers to this; we can show up in Manila and argue that they should use our model. And it can be used in several ways. You can look at the benefits of national disaster risk management policies; that might be the traditional analysis, where you look at the placement of assets, how to reinforce them, and, when a disaster comes through, how to build back better. You can also target resources and assess the benefits of social and financial inclusion and early warning systems, for example, and that can be at the provincial level. So it's not just a problem for somebody above you; you can engage stakeholders at every level of government. And you can use the tool to assess both the immediate and the long-term impacts of a specific project on resilience, for example social safety nets or whatever it is you want.

And so this brings us back to the two projects. Again, Projects A and B cost the same, but Project A prevents $20 million of losses a year and Project B prevents $5 million a year. A traditional cost-benefit analysis will stop right there and say you should always fund Project A. But in our analysis, you can imagine pulling back the curtain and finding that the $20 million of losses Project A protects against are in a central business district, while Project B is actually protecting a slum from flooding. So the $20 million might be a handful of buildings in the central business district, but the $5 million is the aggregate loss of hundreds of thousands, up to millions, of people, each losing maybe $5. And as we've seen, that can have a major impact on their well-being.
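To make that arithmetic concrete, here is a back-of-the-envelope version of the A/B comparison. Only the $20 million and $5 million avoided-loss figures come from the talk; the consumption levels, the reference income, and the risk-aversion parameter are hypothetical numbers chosen for illustration.

```python
# Back-of-the-envelope well-being-weighted comparison of Projects A and B.
# Only the $20M / $5M avoided-loss figures come from the talk; the
# consumption levels and risk-aversion parameter are assumptions.

ETA = 1.5  # assumed risk-aversion parameter

def marginal_utility(c):
    """Marginal utility of one dollar at consumption level c (CRRA)."""
    return c ** (-ETA)

# Normalize weights to the marginal utility at an assumed average income.
reference = marginal_utility(10_000)

# Project A: $20M/yr of avoided losses in a central business district,
# borne by owners consuming (say) ~$100k per year.
benefit_a = 20e6 * marginal_utility(100_000) / reference

# Project B: $5M/yr of avoided losses spread across slum households
# consuming (say) ~$1k per year.
benefit_b = 5e6 * marginal_utility(1_000) / reference

print(f"A: {benefit_a:,.0f}  B: {benefit_b:,.0f}")
# Under these assumptions B's well-being-weighted benefit dominates A's,
# reversing the ranking a plain dollar-for-dollar comparison gives.
```

Under these assumed numbers, B's weighted benefit exceeds A's by roughly a factor of 250, which is the point of the thought experiment: the dollar ranking and the well-being ranking can flip.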
And the argument here is not that B should be chosen over A in all cases, but rather that B should sometimes be given a chance to succeed, to be the project that's chosen. To come back to the original framework, that is a values judgment, and it is what's driving our model. And that's the conversation we have when we show up in Manila. So the explicit goal of this model, as I claimed, is to quantify resilience to natural disasters in the Philippines. But given that the mandate of the Bank in general is to reduce poverty, we have a more or less implicit agenda, which is to show up and make the case, both inside the Bank and in client governments, that the assets of the poor, the urban slums, subsistence farming, and related infrastructure, are at least as worthy of protection from hazards as the central business districts and other major infrastructure. And that is a contentious claim, both inside the Bank and out. It helps a bit that we've got a fancy model to make that case, but it ultimately comes down to value judgments. And we can acknowledge that our value judgment that the poor need to be protected is based on a partial picture; there's a lot more going on. As I said, it's not a given when you show up in Manila, and there are a lot of reasons for that.

So, in order of deepening cynicism: first, the model is going to seem like a trick if your interlocutor isn't sympathetic to the premise that you have to worry about poor people in disaster planning, and in that case, more complexity is a clear negative. When I went to Manila, I actually had an undersecretary of development reject our argument by saying that the middle-class people in the wealthy suburbs of Manila are more attached to their lifestyle than the rural farmers are to theirs, and therefore the downsides for the middle class are much greater, and the assets need to be directed to them. That leads to the second point: our model doesn't consider the constituencies or varying prerogatives of various bureaucrats and the political promises their bosses have made. And then, of course, the moral hazards of international development are pretty well known. From the government's perspective, the poor are, at the moment, a liability of the international community. Even if we agree that the goal is to get them to be self-sufficient, the government might say: now is not the time; we need to worry about our value-producing assets, the things that really drive our economy, and that may not be the poor. So the government can reasonably reject our values, and therefore our premises, and go on with disaster risk management however they want.

And lest you think that this is just what happens in international development, I would argue, speaking of a lot of integrated assessment models, that GLOBIOM is also a case where the uses are driven by the values that underlie them. If you're not familiar with GLOBIOM, it's an integrated assessment model of competition for land. It has representations of agriculture, livestock, bioenergy, and forestry, plus trade, and it operates on a 10-year time scale. It's a syncretic model; I would say it's deliberately evolved, and it's still evolving. It's the product of several models stitched together, so people here are probably familiar with EPIC, G4M, RUMINANT, and others.
And it has a very strong constituency. GLOBIOM is a principal contributor of scenarios and analytics to a lot of European Commission projects; it's a major contributor to the REDD program of cash transfers to prevent deforestation, to GHG-Europe, and to the AgMIP program. And despite its success, they're only slowly, gradually moving toward a stochastic model, and I think that's understandable, despite the fact that a stochastic model would be better than a deterministic one for everything it covers. And GLOBIOM, without disparaging my colleagues at IIASA, has been very successful despite a couple of major flaws.

GLOBIOM, like a lot of integrated assessment models, maintains closely held secrets. For these land-use models, the big one is rebalancing Chinese production and consumption of agricultural goods: the data are sort of nonsense when they come out of FAOSTAT, so the team spent a lot of time figuring out how to fix that, and they don't want anybody else to know how they did it. That's true for every group that's tried, as far as I know. So they protect their market share by discouraging competition in that way. It's not an open-source model, and I think it's reasonable, at least for now, to maintain that position. It's a black box, too; again, for the same reasons as the trade secrets, there's not much in the way of error analysis. But still, it's very easy to publish with this model, relatively speaking, relative to trying to stand up another one, even if that one is a total glass box. And it's easy to publish outside its intended use: we ran an analysis looking at trade-offs and co-benefits within the SDGs, which was far beyond anything the model had been used for, and Science Advances didn't really blink. So there's a real bias toward this.

If you think about why integrated assessment models are so seductive, I think the answer lies in the fact that they still trade in scenarios, and those make for easy narratives. Each of those scenarios is more or less agnostic about its probability of obtaining, so if you run it for long enough, everybody gets a scenario they're happy with. And they can be particularly effective at driving policy when there's a consensus on values. For example, it's a good job if you can get it, working for the Norwegian government, because they've combined a very clear sense of values with a massive sovereign wealth fund. So you get the REDD program, which makes cash transfers to governments in the Amazon, the Congo basin, and Indonesia to try to prevent deforestation, and it's very easy to leverage money and get out there and spend it. But the problem with these models, and with GLOBIOM in particular, comes up like this: it achieved great success in Brazil under the REDD program, in part because the government was committed to cooperating on the problem of deforestation, and they succeeded in doing so. It was an epic failure in Indonesia, because the model didn't incorporate the fact that you're dealing with a weak central government and decentralized interests in each of the provinces. And the claim here is that you can't just throw up your hands: that is actually a failure of the model to incorporate something like that. And in that context, a stochastic model probably is going to be an improvement.
If you've got a dedicated clientele, they might keep going along on this journey with you. But otherwise, a stochastic model is actually going to dilute your narrative; it's going to make the values that underlie your scenarios less clear. You can end up with a confusing or a substance-free modeling world, and I'll let you decide which is worse.

So, to start to wrap up: I want to argue that the values and the priorities of any individual model may be more or less explicit, but they are absolutely always there. And so even after we surrender complete policy control to the computers, we're going to need modelers to advocate for the values that we're now espousing more or less explicitly, and those include the welfare of the global poor, ecosystems that are fragile and increasingly disappearing, and human well-being broadly construed. But until we do surrender that policy control, the emotional content of our models is going to motivate action much more effectively than facts, figures, and fancy diagrams. And to put a really fine point on it: within the fields represented here, given the stakes of failing to adapt over the next, not 50 years, but 10 to 20, there's a moral obligation to consider, and to advocate ever more effectively for, the values that underpin our work, whether that's disenfranchised or poor communities, or ecosystems that are disappearing while no one realizes how essential they are, except maybe a very small group of people. And in doing so, I would argue, we will be better able to structure and package our models in a way that maximizes their real contribution to the SDGs, the Convention on Biological Diversity, Paris, or whatever the challenge may be. So thank you, that's all.

I imagine that generated a fair number of questions. So we have time for a few. Yes, Lejo.

Thanks, Brian. That was a really fascinating talk, and I agree with you. What I'd like to ask is: do we as modelers perhaps need frameworks, in the praxis of modeling, that allow us to better explicate what those values are prior to running our models and designing our numerical experiments? Would that be a vehicle both to accomplish what you're suggesting, that is, to be more explicit about what those values are and how they're informing our models, and also to help us better interpret the results and communicate them to particular audiences? And do you have any ideas what that framework for explicating those values might be?

Yeah, I think the preliminary step is actually acknowledging that that framework exists, that those values exist. It's really tempting to take your model outside its dataset, to add little modules or little gadgets: someone comes to you with a problem and you say, oh yeah, we can spend a little time, we can expand into that, and you don't really consider the ways in which the model was constructed, and how its functioning reflects those original ends, when you jump into a new sphere. It's easy to be glib about that.
And at the same time, speaking for myself and not for the room, it's easy to be a bully when you show up with a model and it produces results that make people happy or not. In a lot of policy rooms, especially when I was at IIASA, but also now, most people don't have a model. So if you do, you get to drive the conversation, right? And I think we just need to be more honest about what went into the model, what it's good for, and what it's not good for. And the values project is embedded in that, I think.

So, I know Michael and Peter and Hugo, I've worked closely with them, and I've done a lot of this kind of modeling using the IMPACT model instead of GLOBIOM; I was also in charge of a model intercomparison that looked at results from the various models that do the same sort of thing GLOBIOM does. One of the reasons we model is because we don't know what the outcome is going to be, and if you're not surprised by the outcome, then probably you're not doing something right. And I think that's an important point that you've perhaps passed over: you should be coming to policy makers with results that are perhaps surprising to them, and you should be able to back those up with some kind of science that tells you why you came to that position. There is always going to be some value judgment in the process by which you got there, but I don't think anybody who does this kind of modeling builds the values in per se. I would also say that one of the reasons GLOBIOM is so successful is not the model but the presenters: Michael, the father, or at least the godfather, of GLOBIOM, is extremely good at presenting concepts to people who are really not interested in the details of the models. So you have to ask what his biases are, I guess. And the last thing I would say is that funders are responsible for this. A lot of people write checks to IIASA to use GLOBIOM, or to IFPRI to use IMPACT, without taking the next step, which is to say: you have to tell us how you get your results, what data you use, and let us look at your source code. Having said that, GLOBIOM and IMPACT, at least, are moving in the direction of open-sourcing their code, and there's a project with GLOBIOM that might get funded that would move that process further along. And some of the larger integrated assessment models, as people in the room probably know, the GCAM folks for instance, have been forced by their funders to do this. So I think you were a little more pessimistic about the situation of the field than I am, and perhaps that's warranted, but it's useful to make these points over and over again, I have to say.

I would just say, I absolutely agree with the point: anybody who's met Michael Obersteiner knows that a great deal of that program's success was driven by force of personality. But I wouldn't say I'm pessimistic; rather that something will be lost, some key ingredient of its success, when GLOBIOM becomes a fully stochastic model. In large part because you can't assume the model is going to keep succeeding forever just because Michael Obersteiner is there.
And in fact, it presents easy-to-digest narratives which, by and large, and I won't weigh in on whether they're right or not, are successful because of the depth of the work. It's the best that we have collectively, but I don't think that will be the case for long. And the fact that it's still closed-source in some ways protects what they've got, and there's good reason for that, not just a selfish or financial reason. But conversely, I would say, something will be lost when it goes to a stochastic model: there's a risk of letting the facts get in the way of the truth.

Yeah, great talk. I'm just wondering: as a modeler, you're also in control of what type of output metrics you provide to the policy makers. In your example, both projects cost the same hundred million, and you say one prevents $20 million and the other $5 million, and then you have to explain the rest. What if your output metrics said right away: this scenario affects 20 lives, this scenario affects 5,000 lives? Then you've turned it around, and it's up to them to ask, well, what are the incomes of those 5,000 people? And then it's a direct answer that maybe B might be the better choice. So do you have that flexibility to choose your output metrics?

The first answer is no, because I'm only invited into the room if I give the answer that the loan officer wants, right? Part of the reason that the GFDRR has its own pot of money is so that I don't have to go and bill some other team at the Bank if I'm working on somebody else's project. So we show up and we say: we're free, and we can maybe help you be a little more rigorous on resilience or disaster management generally, so maybe you should consider this. But if we're going to get in the way of them making a loan, no, you're out. The other thing is, people are used to disaster losses being expressed in tens or hundreds of millions of dollars, or as a fraction of GDP, and that's what they're looking for. It's difficult to lead with a number they're not familiar with, because then you're already out of context for a lot of people, and they might say, oh, this is not what I was looking for. So yeah, there's a lot of that going on; it's a big part of message control. And in this case, to be clear, we can try to quantify how many people will fall into or come out of extreme poverty, but the model is desperately trying to avoid putting a value on lost life. That's not an argument we're trying to get into. Thank you.

So our next speaker is Robert Nichols, a good friend of mine, talking about a favorite topic of mine. Don't worry.