All right, our third speaker is Rob Lempert. Rob is from the RAND Corporation, and Rob will give the following presentation: Good Decisions Without Good Predictions, Decision Making Under Deep Uncertainty. Great, let me find the share screen, I can do that, okay, great. Okay, everybody see the slides? Yes, perfect. Great, good. Thanks so much for the opportunity to talk today. I'm going to talk about a somewhat different topic: ways to think about using models in new and interesting ways when there's a great deal of uncertainty. So just to start, as this group well knows, quantitative information can be vital to making good policy choices in a whole range of areas we've been discussing, climate and natural hazards among them. It may seem obvious that quantitative analysis can best inform policy by making predictions of the future. But there are cases where predictions, which are obviously key to the scientific method, can complicate the use of quantitative information for decision support. That is the case when uncertainties are deep and there are substantial disagreements about values and evidence. Fortunately, what I'm going to talk about today is another way to think about the use of models and data in these contexts. So traditionally, policy analysis begins with a consensus understanding of the future. That can be a point forecast, but more properly it's a joint probability distribution over an agreed set of future states of the world. Once we have that consensus understanding of the future, we can rank different decision options by expected utility or some similar metric, and then perhaps do some sensitivity analysis to decide how sensitive our recommendations are to the uncertainties. We call this predict-then-act because it begins with a consensus prediction of the future.
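To make the predict-then-act recipe concrete, here is a minimal sketch; the options, future states, probabilities, and payoffs are all invented for illustration, not taken from the talk.

```python
# Minimal "predict then act" sketch: given consensus probabilities over
# future states and a payoff table, rank options by expected utility.
# States, probabilities, and utilities are all invented for illustration.

futures = {"dry": 0.3, "average": 0.5, "wet": 0.2}  # consensus forecast

# utility[option][future state] = payoff of that option in that state
utility = {
    "option_a": {"dry": 10, "average": 50, "wet": 80},
    "option_b": {"dry": 40, "average": 45, "wet": 50},
}

def expected_utility(option):
    return sum(p * utility[option][state] for state, p in futures.items())

ranked = sorted(utility, key=expected_utility, reverse=True)
for option in ranked:
    print(option, expected_utility(option))
```

Note that the ranking hinges entirely on the agreed probabilities: shift weight from "average" to "dry" and the safer option_b pulls further ahead. That sensitivity to the consensus forecast is exactly what the subsequent sensitivity analysis probes, and what breaks down when no consensus forecast exists.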
And predict-then-act is incredibly useful in a wide variety of cases. I always say you would never fly on an airplane, or live downstream from a dam, if the people who designed and built it didn't work well in that mode. But there are cases where the uncertainties are such that predict-then-act breaks down. Under conditions of deep uncertainty, which I'll define in a second, there are huge pressures to underestimate the uncertainties, and competing analyses can lead to gridlock. When policy recommendations are predicated on forecasts, people attack the forecast, which may be more vulnerable than the policy recommendation itself. And there's this idea that we often know a lot about a problem which is not predictive but can help us make good decisions. So deep uncertainty is the situation where the parties to a decision do not know, or do not agree on, the likelihood of alternative futures or how actions are related to consequences. There's an emerging field of decision making under deep uncertainty, DMDU, which can help address such conditions. Just to summarize: traditional quantitative methods are designed for systems with a single relevant decision maker, well-understood behavior, and agreement on objectives among the audience for the analysis. Today we face many challenges, climate among them, where there's a diversity of priorities, goals, and values among participants, deep uncertainty regarding the consequences of our actions, and often a decentralized, polycentric decision-making audience. And I'll give you a plug for a book we did last year which reviews a number of these methods. To give you an idea of this way of thinking, I'm going to talk about a particular one, robust decision making, which is where much of my work is. I'll give you a review of robust decision making, an example application, and then say a few words about why I think these ideas can be really important for this community. So what's robust decision making?
It's an iterative, multi-scenario, multi-objective decision analytic framework that helps the parties to a decision identify strategies which are robust, that is, which work well over a wide range of uncertainty; characterize the potential vulnerabilities of those strategies; and display trade-offs. There's a lot of complicated stuff there, but really it's a very simple concept: rather than use computer models and data as predictive tools, we're going to use them as exploratory tools. We're going to run many, many cases to stress test proposed policies, and use those stress tests to help identify robust responses. In essence, what we're doing with robust decision making is turning around the order of traditional analysis. We begin with a proposed strategy or strategies, and we then use the analytics to answer stress-test questions. Rather than rank policy alternatives, we ask questions such as: what are the future conditions under which a particular policy would meet its goals, and miss its goals, and what are the key uncertainties which differentiate those two classes of futures? We then use that information to make more robust strategies. So RDM synthesizes four key concepts. One is that it's a form of decision analysis, which gives a systematic way of structuring problems focusing on trade-offs; but traditional decision analysis is often very prescriptive and based on predictive information, which is great when we have high confidence in it but less so when we don't. It brings in scenario ideas, which carry this notion of multiple worldviews, of plausibility rather than probability, and are often deliberative, that is, the information is co-produced between information users and producers.
A key idea is stress testing, or assumption-based planning: we begin with plans and then use our analytics to identify the assumptions on which they depend, or, to say it another way, we use the analytics to understand when plans are going to work and when they're going to break. And there's the idea of exploratory modeling, which is that we're going to use models not as predictive tools but to map assumptions onto consequences. Let me say a little more about that, because I think it's a particularly important point for this community. The idea comes originally from a colleague of mine, Steve Bankes at RAND, who did this work about 20 years ago now and identified two different uses of models. One, which we're often familiar with, is consolidative models, which gather all the relevant information into a single package; you can validate these models and use them as a surrogate for the real world. The other way to use models is to map assumptions onto consequences and use them in iterative, inductive problem solving, and that's what I'm going to be discussing here. It's made useful by the ability to run models many, many times. Essentially, this is an argument about the allocation of computing resources: we could build a single, much more detailed model, or we can take less detailed models and scan over a very wide range of futures. Okay, so let me give you an example application to give you a sense of how this works. I'm going to talk about some work we did for the city of Los Angeles, helping it plan to meet its water quality goals. The city had produced an implementation plan to meet federal water standards on the Los Angeles River, but the initial go-through had not considered climate change. So the question is: what does climate change do to the plan? Should the plan change in the presence of climate change?
Not to go into all the details, but this is the plan: they ran their optimization models and allocated their investment over three types of responses, regional projects, green streets, and low-impact development. This pie chart represents the optimal plan given baseline assumptions. The question is: should climate change change this plan, and if so, how? These are the steps you go through in this sort of analysis. There's framing the decision, and we're going to be running a workshop breakout session later today on that topic. Once you have the problem framing, we can stress test our policies over a wide range of futures, and then use those stress tests to identify new or revised strategies that meet our goals over this wide range of futures. So let's start with the first step. We ask a question: will our expensive new water quality investments still meet water quality standards in a changing climate? If not, what can we do about it? It turns out to be useful to organize the stakeholder discussions into four categories. What are we trying to achieve? Those are the metrics. What actions might we take to pursue our goals? We call those policy levers. Uncertainties: what uncertain factors outside the decision makers' control affect our ability to pursue our goals? And then the models: the relationships among all those. We call this XLRM as a simple heuristic. In this case, this is what we came up with: the city's proposed plan; whether or not it meets the water quality requirements and whether it's cost effective; two uncertainties, climate change and land use; and the standard hydrology and optimization models that the city was already using for its regulatory assurance analysis. Okay, so now let's do a stress test, and here's the core of what I want to talk to you about today. We take this XLRM and our simulation model and we run a bunch of cases. Each case is the city's proposed plan in one particular future.
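That stress-test step, running the proposed plan once per future, can be sketched as a simple sweep over the two uncertainties. Everything here is hypothetical: a toy runoff formula stands in for the city's hydrology and optimization models, and the 47 climate projections are just random draws rather than real downscaled projections.

```python
# Stress-test sketch: evaluate the proposed plan in every combination
# of climate future and land-use future, recording whether the plan
# meets its goal in each case. The performance model, the 47 synthetic
# climate projections, and the capacity threshold are all hypothetical.
import itertools
import random

random.seed(0)
# 47 hypothetical climate projections: % change in 24-hour rainfall
climate = [random.uniform(-10, 50) for _ in range(47)]
# 6 hypothetical land-use futures: % impervious area in the city
land_use = [40, 45, 50, 55, 60, 65]

def plan_meets_goal(rain_change, imperv, capacity=80.0):
    """Toy performance model: does runoff stay under plan capacity?"""
    runoff = (1 + rain_change / 100) * imperv
    return runoff <= capacity

cases = [(r, i, plan_meets_goal(r, i))
         for r, i in itertools.product(climate, land_use)]
print(len(cases), "futures;", sum(ok for *_, ok in cases), "meet the goal")
```

The 47 × 6 = 282 labeled cases form exactly the kind of database of futures described next, ready for classification.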
Then we collect those all together in a large database of futures. Here each blue dot is a future in which we meet the regulatory requirements, and each red dot is one where we don't. In this particular case there are 282 futures: 47 different climate projections crossed with six land-use futures. So we have this database, and the question is: what do we do with it? How do we make this information actionable? A key thing we do is a process called scenario discovery, which is essentially to run classification algorithms over this database and ask: what are the most important uncertainties which distinguish the futures where the plan meets its goals from those where it misses its goals? That diagonal line across the database, in the space defined by the two parameters on the axes, the change in 24-hour rainfall due to climate change and the percent impervious area in the city due to land use, is the best possible straight line dividing the futures where the plan meets its goals from those where it misses them. For reference, the green X is the baseline case that the city used in its regulatory assurance analysis, which, not surprisingly, lies in the region where the plan meets its goals; but there are a large number of futures outside of that. Okay, so now we can interpret those results as scenarios, as we might in a standard scenario analysis. We essentially have a plan-misses-goals scenario and a plan-meets-goals scenario, and these scenarios depend on land use and extreme rainfall. They then become useful in discussing with the decision makers whether or not their current plan is resilient to climate change and, if not, what they need to do about it.
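A toy version of scenario discovery can be written in a few lines. Real RDM studies typically use algorithms such as PRIM or CART for this step; the sketch below just searches for the single threshold on one uncertain input that best separates meets-goals from misses-goals futures, over a synthetic database with an assumed ground-truth rule.

```python
# Scenario-discovery sketch: given a database of futures labeled by
# whether the plan meets its goals, find the simplest description
# (here, one threshold on one input) that best separates the classes.
# The 282 futures and the ground-truth rule are synthetic.
import random

random.seed(1)
futures = []  # (rainfall change %, impervious area %, plan meets goals?)
for _ in range(282):
    rain = random.uniform(-10, 60)
    imperv = random.uniform(30, 80)
    futures.append((rain, imperv, rain + imperv < 90))  # toy ground truth

def best_threshold(index):
    """Best single cut on one input, scored by classification accuracy."""
    best_acc, best_cut = -1.0, 0.0
    for cut in sorted(f[index] for f in futures):
        acc = sum((f[index] < cut) == f[2] for f in futures) / len(futures)
        if acc > best_acc:
            best_acc, best_cut = acc, cut
    return best_acc, best_cut

for name, idx in [("rainfall change", 0), ("impervious area", 1)]:
    acc, cut = best_threshold(idx)
    print(f"{name}: best cut at {cut:.1f} separates {acc:.0%} of futures")
```

The output is the kind of statement scenario discovery aims for: a small number of interpretable conditions, stated in terms of the decision-relevant uncertainties, that decision makers can argue about directly.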
So the first question we can ask is to compare these scenarios to the available science: ought decision makers worry about climate change with this plan? The answer turns out to be yes. How we got there is that we looked at a couple of different bounding scenarios, and not to go into the details, but if you look at either the full range of IPCC projections or more focused studies, which try to estimate probabilistically the range of extreme rainfall, it turns out that with current land use the plan is very vulnerable to climate change, while if the city achieves its stormwater management land-use targets, that mostly, but not entirely, buys down the climate uncertainty. So the answer is yes, they do need to worry about climate change. What should they do? Let's identify some new or revised strategies. It turns out that in looking for robust plans it's often useful to look at adaptive plans, that is, ones that begin with near-term actions, monitor particular signposts, and then adopt contingent actions if signposts are observed. One of the advantages of the scenarios that emerge from scenario discovery is that they suggest particular signposts to monitor. In this case, an adaptive plan might begin with the current plan and then monitor the factors that would put you in the plan-misses-goals region, which turn out to be increasing storm intensity and a failure by the city to achieve its mandated land-use goals. If we observe either of those things, we would go to an augmented plan; if not, we can continue with the current plan. So that would be an adaptive, robust response to this climate stressor. Should the city adopt that? Well, let's compare three different approaches. The city could just stick with its current plan.
It could begin with the current plan but prepare to adjust, or it could begin with the augmented plan and jump back down to the current plan if conditions are such that it's staying in that green area. So which should it choose? Well, this becomes a multi-objective decision problem, so let's look at the two scenarios separately. We have two goals, remember: water quality and cost. In the plan-meets-goals scenario, this is how the different plans score. All of them meet the water quality goal, essentially by definition. The current plan costs the least, the augmented plan costs the most, and begin-with-the-current-plan-but-prepare-to-adjust is slightly higher than the current plan, but not too much. Now look at the other scenario, the plan-misses-goals scenario, with the same two criteria, water quality and cost. In this case the current plan fails to meet our criteria. The augmented plan is best, but begin-with-the-current-plan-but-prepare-to-adjust again meets the water quality goal and is only slightly higher in cost. So this middle one, begin with the current plan but prepare to adjust, is a low-regrets, or robust, strategy. We can go into some questions on this if you like; there's a variety of definitions for a robust strategy, which I list here, and all of them give you the same answer in this case. So let me conclude with a couple of comments about how some of these ideas might be useful for this community. RDM emphasizes, first, the use of models as tools for decision-relevant exploration, not just prediction. It emphasizes iterative learning processes, and it emphasizes this idea of model pluralism. CSDMS might usefully support each of these novel uses of simulation models. On the idea of exploratory modeling, which we discussed previously, there are really two different ways to think of models: one is consolidative, the other is exploratory. And there are a variety of things you can do with exploratory models.
Here's a list from a paper from a couple of years ago: you can do hypothesis generation, reasoning from special cases, and then looking at properties of the entire ensemble, of which the idea of robust strategies is one case. These are different ways of using models which might inform the models in your databases and how one interacts with them. RDM supports iterative learning processes. A key one is this idea of deliberation with analysis, an iterative process where you gather the stakeholders and they define objectives, options, and other factors; this is what we'll sketch out in our breakout session later today. That gives instructions to the model producers, who then generate decision-relevant information based on the problem framing; then you deliberate again based on that information, which may change people's objectives, options, and other factors, and you iterate. We've embedded these ideas in large stakeholder exercises which in some cases have been very impactful and successful, but the key idea here is that rapid and flexible approaches for updating and improving models could be inserted into this deliberation-with-analysis process and could greatly enhance it. And finally, and I'll go through this part quickly, what I find really interesting, and an area where I've been doing a lot of current work, is this idea of model pluralism, the use of several fundamentally different models to provide a more complete understanding of reality, which represents essentially a structural uncertainty in the problem; and the idea of multiple worldviews, which includes model pluralism but also notes that different stakeholder groups will often have correlated sets of values, beliefs in the sense of different models, and policy preferences that shape how they see the world. So how do you inform decisions and interact with people in such situations?
I'm not going to go into this in any detail, but this is an exercise we went through looking at how you use multiple worldviews and model pluralism in a simple case having to do with a potential conflict between economic development and preserving a lake, in a context where there are different communities with very different views on the value of the different societal objectives. Fundamentally, the idea is you can go through these RDM analyses with fundamentally different models and then pull the information together in what are called utopia-dystopia matrices, where you look at the world from different points of view. In this case we have three different worldviews, and we ask in this matrix: what is the best strategy in one worldview, and how does the world look from another worldview if you adopt the strategy of the first? We use that as a way to create common understanding across these groups and then begin to develop compromise strategies that work across groups. So let me conclude with the notion that RDM and these decision-making-under-deep-uncertainty methods help people make better decisions, not better predictions. The basic principles are: consider multiple futures, not a single future; choose futures to stress test plans; and seek plans which are robust over a wide range of futures, which often means making plans flexible and adaptive in order to make them more robust. So rather than predict-then-act, we look over multiple futures. And a couple of aphorisms to end with: premature aggregation, either in probabilities or in values, can mess up your decision support; and we're seeking certainty in decisions, not in predictions. This enables you to use models in new and very interesting and often very helpful ways. So, questions? Thank you very much, Rob, for your presentation.
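The three-strategy comparison from the Los Angeles example can be sketched as a small regret calculation. The cost numbers and outcomes below are invented stand-ins, not the city's figures, and minimax regret is just one of the several robustness definitions the talk mentions; here it singles out the adaptive middle strategy.

```python
# Robustness-as-regret sketch: compare three strategies across the two
# discovered scenarios. A strategy's regret in a scenario is its cost
# minus the cheapest acceptable cost in that scenario; strategies that
# fail the water quality standard get infinite regret. All numbers are
# illustrative.

# outcomes[strategy][scenario] = (meets water quality?, cost)
outcomes = {
    "current plan":      {"meets goals": (True, 100), "misses goals": (False, 100)},
    "prepare to adjust": {"meets goals": (True, 110), "misses goals": (True, 160)},
    "augmented plan":    {"meets goals": (True, 200), "misses goals": (True, 200)},
}

def max_regret(strategy):
    regrets = []
    for scen in ("meets goals", "misses goals"):
        ok, cost = outcomes[strategy][scen]
        if not ok:
            return float("inf")  # failing the standard is unacceptable
        cheapest = min(outcomes[s][scen][1] for s in outcomes
                       if outcomes[s][scen][0])
        regrets.append(cost - cheapest)
    return max(regrets)

robust = min(outcomes, key=max_regret)
print("most robust:", robust, "with max regret", max_regret(robust))
```

With these numbers the cheap current plan is eliminated because it fails in one scenario, the augmented plan carries large regret where it wasn't needed, and the adaptive prepare-to-adjust strategy keeps the worst-case regret small, mirroring the trade-off described in the talk.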
This robust decision making is very much in line with using models not only in a research setting but also in a more applied way, by corporations and in a more operational world. We're running a bit out of time, so if somebody has one question... I see Maura has a question. Jet, can you unmute Maura? Go ahead, Maura. Thank you, Rob, thank you so much for this presentation. Of course I love it; I've been a fan of you and your work for a long time, and it's just wonderful to see it presented so well and in such a compelling way. I have a question for you, really two connected questions in a way, about the role of our culture in terms of wanting prediction and looking for prediction. As scientists, that's what we aim for as a way to validate, right? But also from the policy-making world: at least in my work with decision makers, there's always that pushback against uncertainty, because that's not what they want. They want certain answers. And sometimes it's hard to communicate that distinction you were stating so nicely at the end, the difference between certainty in decisions rather than in predictions. So how do you handle that dissonance between what is expected, maybe by our scientific culture in general but also by the policy-making culture, which looks at prediction as the standard for validation or validity of the work that we do? Yeah, that's a great question, and not one designed for a short answer. Sorry, you started it like that. No, but I will try. So first off, that's a very valid observation, and there are two potential strategies.
One is the quick aphorism that I mentioned on the previous page: you try to place the locus of certainty in a plan rather than in the predictions, in a plan which works well over a wide range of futures. That is actually not an alien way to think. Many decision makers, many people in their everyday lives, look for plans which are robust, which are low-regrets, and so forth. So that is a way people naturally think, though it is often alien to a culture where people are engaging with experts and quantitative information, because they assume they're looking for predictions. Though there are organizations, or people, or personalities, that do not make that switch and really do require predictions. So sometimes what one ends up doing, say in an organizational setting, is working with the technical staff or the planning staff, who do think in this way, to come up with a robust strategy, and then putting it in more predictive language for, say, the elected leadership of the organization, who, independent of their particular views, find themselves in a public discourse where they can't talk much about uncertainty. So you do the stress testing and hedging essentially in the technical discussions, and then package it in a way that is more suited for public consumption. But I think pushing towards the first answer, getting people to think about certainty in plans as opposed to in predictions, would do society and the discourse well.