So we're going to have, I guess, three sessions. Session one will be on climate-informed decision-making under uncertainty: user needs, challenges, and opportunities. Two of our speakers will be remote, and one of them, Jennifer Jacobs, is right here in this very room. Session two, after the break, will be on uncertainty in climate modeling: state of the art, low-hanging fruit, and future directions. And then the grand finale will be a panel discussion from four to five, and I think that's going to be a great discussion, actually. So there we are. How to participate, okay. You're going to do my slides, right? Yes, we're going to project your slides. So for the public who is watching, thank you for joining. If anyone has any questions, you may ask them on Slido: you can join by going to slido.com and entering this meeting code here, 3833206. We will be moderating those questions, and we have a representative here who will ask them for you. Thank you. Okay, so I'm going to provide the introduction to this, and an overview, for about 15 minutes, maybe 13 minutes. Could I have my first slide? There we are. This is on uncertainty in climate modeling for decision-making, and it's by myself, but also by what I would like to call an uncertainty collective: Barnes, Foufoula-Georgiou, Leung, McGovern, and myself again. So there we are. First of all, starting with some of the basics: what is uncertainty? Of course, there are many different definitions. I'm going to use one that is fairly prosaic for the climate community: uncertainty is a state of lack of knowledge, or incomplete knowledge, regarding the past, present, or future. It usually has two components: random, or aleatory, uncertainty, and epistemic uncertainty resulting from our incomplete knowledge due to the complexity of the world. There are other ideas associated with this, such as deep uncertainty, which, as the name implies, is characterized by lack of agreement: parties to a decision cannot agree upon the external context of the system, how the system works and its boundaries, and/or the outcomes of interest from the system. And I think deep uncertainty is quite rife in the issue of climate change. Then there's the great concept of wicked problems. I would say that climate change is a typical wicked problem: it defies rational optimal solutions, it is characterized by deep uncertainties and by many interdependencies and causes that interact, and proposed solutions can lead to new unintended consequences and are difficult to test. That's definitely the world we are in with climate change. Next, please. So, why start with decisions rather than with the climate? Many of us have worked in this area of not just climate change but connecting it to the impacts and to the people who will be making decisions. If you begin with the users' needs, you create a collaborative problem definition including all parties involved, and you can support interactions and learning. And the rationale, really, is "to identify what knowledge is really needed by decision makers and what is feasible for science to deliver"; that's a direct quote from the National Research Council's 2007 report, Informing Decisions in a Changing Climate. So, what decisions? Primarily adaptation and mitigation. At what spatial scales? All spatial scales, from the global to the local.
At what temporal scales? We're specifically going to be dealing with multi-decadal scales in our discussions today, but the information needed for adaptation in particular is on multiple scales, down to the sub-daily, for example hourly. Next. So here's what I would call the typical cascade of uncertainty. This is the standard, traditional one, where you start with emissions, you run the emissions through climate and regional modeling, you use that to come up with impacts using impacts models, and then you finally get down to decisions in the lower left-hand corner, my left. Next. But a while ago it was realized that decision-making should really be the central focus, and so here we have decision-making at the center. It's not that these other parts are not important; it's just that starting with the decisions actually gets you further, faster, in point of fact. Next. Another major context, and this has been mentioned in earlier discussions today, is the whole concept of risk assessment. A lot of my slides are from the IPCC, and I think it's pretty well understood that adopting a risk assessment approach is one of the best ways to get at how to build a resilient society that can handle various climate changes. This is what's called the propeller diagram, from Working Group II of AR5. One of the things I really liked about this diagram is the fact that it balances the socio-economic processes with the climate. So climate is once again no longer the top dog, if you will; it's kind of like the meowing cat somewhere in the middle. Sorry, I just couldn't resist that. And so risk in this context is seen as composed of hazards, vulnerability, and exposure to the hazards. Next. So, risk management: iterative risk management is a useful framework for decision making in complex situations characterized by large potential consequences, persistent uncertainties, long time frames, and multiple climatic and non-climatic influences changing over time. That is again from, in this case, the 2014 Working Group II report. Next. What climate models are we considering? You knew I would get to the climate models eventually. We're really talking about Earth system models, and of course we heard a lot about those this morning from our various guests. And though it's important for the use of information from ESMs in decision making, we're not really going to discuss in detail the various downscaling techniques used to bring ESM results to decision-relevant scales, though I imagine some of our speakers will refer to that. Next. All right, here's a sample of the relationship between climate modeling and its applications and uses for decision making. This is from a program at NCAR, and here they're putting Earth system prediction in the center. Notice that we're kind of skipping from this Earth system prediction past the data processing, visualization and analytics, and convergent research, and just going straight from the Earth system out to applications and stakeholders. And obviously, while we're not going to be discussing this in any detail, it came up this morning. So there's a lot going on, and I'm not suggesting that that diagram is in any way complete. Next slide. Related but different relevant concepts: certainly we know that climate models, and science in general, aim at increasing scientific understanding, and we know pretty much how to do that.
And we generate a lot of understanding. The idea of reducing uncertainties was bruited about earlier, when we were putting together this agenda, and I personally am always worried about the concept of reducing uncertainty: reducing the uncertainty of what, specifically, and what will we really gain from it? Another concept is increasing the credibility of future simulations through detailed analysis of climate model results, so that we can say: yes, we believe this model result, but not so much that one. Please continue. All right, here's a diagram that many of you have probably seen: improvements in GCMs and deepening understanding, going back to the climate models of the mid-1970s and running through AR5. There's a continuous increase in the complexity of the models; they are constantly incorporating important processes. The carbon cycle, for example, was a very big one, and atmospheric chemistry and land ice went in around AR4 and AR5. Certainly that is generating more understanding of the system. But is it generating better usable data? I think sometimes that remains to be seen. Next. Here are further developments since AR5; I've just made a list. Of course, model grids and resolutions continue to improve, as does the representation of physical and chemical processes; there's the representation of biogeochemistry, including the carbon cycle, and the representation of the terrestrial nitrogen cycle coupled to the land carbon cycle, which does result in a reduction of uncertainty for carbon budgets; and then there's further work on model tuning and adjustment. Next slide. All right, here's a sample of the model resolution issue, comparing CMIP5 to CMIP6. All the scales are identical on the two slides, and you can simply see that in CMIP6 there are many more models, there's also HighResMIP, everything has moved to your right, and there are many more models at high resolution. The HighResMIP models go down to somewhere between 25 and 50 kilometers horizontal resolution, with oceanic resolution a little higher than that. So there have been very dramatic increases from CMIP5 to CMIP6. Next slide. Okay, then, uncertainties relevant to global climate models. I'm just going to give you a sample here of something that was said in AR6: "Overall, we assess that increases in computing power and the broader availability of larger and more varied ensembles of model simulations have contributed to better estimations of uncertainty in projections of future change." It's better estimations of the uncertainty, not necessarily a reduction in the uncertainty, and I think that's a very important distinction that is sometimes not well appreciated. Next slide. Here's one of the parameters that has been looked at throughout the history of the IPCC, the equilibrium climate sensitivity, and there are two things I want to point out. Through each of the different IPCC reports there have been changes; not all of them have, for example, included a central tendency. AR5 gave the likely range, with P greater than 66%, shown here going from one to six. And then in AR6 they came up with a very likely range, a likely range, and a best estimate. So these things have bounced around, and every time the IPCC report comes out it's like: oh my God, what happened? The numbers have changed. Are we going to go to hell in a handbasket faster than we thought? But there's something else going on here, and that is the evolution of the methods for determining this.
And so, for example, in AR6 the models themselves, those 50 or so climate models, were not actually used explicitly to determine the climate sensitivity; rather, they combined evidence from process understanding, the instrumental record, paleoclimates, and emergent constraints, and that's how they came up with this range: very likely from two to five percent, sorry, two to five degrees C. So I think that's interesting. Of course methods evolve too, but it's interesting because this is the iconic parameter of climate change. Two fingers means peace? Two minutes, okay. I'll go on; I think I'm near the end here. Yes, okay: the importance of extremes and compound extremes for decision making. I point out, as Mary did earlier, the PCAST report on extreme weather risk in a changing climate. It contains a number of recommendations for a changing climate, and you should look at that report. It's really great to see this, and hopefully it will mean that more focused attention to extremes in different agencies will follow. I'm going to point out a couple of things. Of course, we know the single-variable extremes: high temperatures, increased heavy precipitation. But we also have two other things on it. One is the low-likelihood, high-impact (LLHI) events, such as a shutdown of the Atlantic thermohaline circulation. There's very little confidence in the likelihood of any of those things, but they would have incredible impacts. And so the point is: sure, there's a lack of confidence, which reflects a lack of knowledge, but we know that the impacts would be horrendous. It might be interesting for us to think about how we can reflect on those LLHI events in new and creative ways. The other emphasis these days is that high impacts also result from compound extremes: two or more events occurring simultaneously or in rapid succession. A good example, which I'm sure one of our speakers will refer to, is flood occurrence in coastal regions, which is a function of storm surge, extreme rainfall, river flow, and sea level rise, all of those things happening at the same time. There's also very high confidence that hot and dry conditions will be paired and become more probable in nearly all land regions in the future. Next. And finally, one of my concerns is: what is the danger of false certainty for decision-making? I don't really know whether decision makers think about that. I'm sure they think about it; I don't know whether they try to study it. And then finally, yes, the New Yorker cartoon from 2007. I show this, well, because I'm from New York, but also because at one point I calculated how high sea level would have to rise for only that many floors of the Empire State Building to still be visible. I have almost a whole seminar on this one slide; I won't do that now. Cutting to the chase, this scenario is literally incredible: there's no way you could get this much of the Empire State Building covered, even if all of Antarctica and the Arctic melted. To say nothing of the fact that the water stops at the edge of Pennsylvania, which is very unusual, but there we are. So I've been meaning to find the time to write the New Yorker and suggest that they are spreading false information. Haven't quite done that yet. Anyway, thank you very much for your time. I think I'm done. Thanks. And I don't think I have to face any questions, do I? Because this is just the introduction, right? Okay. Hi, I'm Efi Foufoula-Georgiou; I will moderate session one. Join audio? No. Don't join. Don't join. Yeah.
So Linda presented a very nice overview. She took us from the definition of uncertainty, to the improvement in resolution from CMIP5 to CMIP6, to risk assessment and management involving hazard, exposure, and vulnerability. She pointed out the important question: do we really reduce uncertainty, or do we just get better estimates of uncertainty? And finally, the dangers of counting too much on the uncertainty quantification from the models. I would like the sessions to take a different kind of starting point. This morning, we had a discussion here by leaders at NSF, NOAA, NASA, and DOE, and we ended up with a very lovely discussion on the climate-ready nation, which was defined as providing the best data that meet the user needs. So I would like to say that, of the sessions we have this afternoon, session one will focus on what the user needs are. There are many sectors that will use climate data and climate information: projections and predictions, and even raw observational data. And these user needs come from many sectors. It could be the economy, human well-being; it could be grid failure; it could be agriculture and food security, even national security. So what time horizons do they need? What time and space scales? What confidence bounds can they live with? At the same time, the questions are: how do we quantify uncertainty, and how do we communicate uncertainty, so that we can be on a better path to adapt these products for decision-making? So that will be session one: what are the user needs? Session two will be: what is the state of the art of the models? What can they provide now? What uncertainty do they have? What sources of uncertainty are there? Which ones can be reduced, which ones are irreducible, and so forth? And then session three will be the panel that brings the two together, in both directions: from user needs to the climate models, and from the climate models to the users. So with that, I would like to start session one and introduce our first speaker, Robert Lempert. Robert is the director of the Frederick S. Pardee Center for Longer Range Global Policy and the Future Human Condition. He is also a principal researcher at the RAND Corporation. His research focuses on risk management and decision-making under conditions of deep uncertainty. He has many accolades that I will not spend the time to present here, but he has been involved in the IPCC Sixth Assessment Report and was also chair of the review panel of California's Fourth Climate Change Assessment. He got his bachelor's in physics and political science from Stanford University and his PhD in applied physics and science policy from Harvard. So Robert, the floor is yours. Thank you. Let me get my slides up. Great. Can everybody see the slides? Okay. Yes, let's go. Okay, good. So I'm going to start with a little bit of the same definitional stuff that Linda did, then dive into some examples, and really organize my comments on climate information around this concept of deep uncertainty, and then climate stress tests as a way of thinking about user needs. Climate change can usefully be understood as a risk management challenge; Linda touched on this. Here are the definitions of risk and risk management that were used in AR6, and I just want to emphasize a couple of things. One is the notion that responding to climate change is an iterative learning process, where we scope the problem, do the analysis, implement, learn, and repeat.
Linda mentioned the idea of wicked problems, and this idea of multiple framings and reframings of the problem is one of the key ways of addressing wicked problems. And this idea on the right, which is very much emphasized in AR6, is that while we've got our formal, axiomatic, mathematical definition of risk as probability times consequence, it's often useful to see risk as a much broader concept, the effect of uncertainty on objectives, which allows for a whole range of both quantitative and qualitative judgments. The other key framework is thinking of climate information as part of a process of decision support. This again is from the NRC report Informing Decisions in a Changing Climate, which Linda touched on. Decision support represents organized efforts to produce, disseminate, and facilitate the use of data and information in order to reach better decisions. And among the key elements is to really pay lots of attention to decision processes: how people are making decisions, and when. What types of information come into the process, and who participates in the decision making, are going to be at least as important as the decision products themselves. There's this idea of co-producing the knowledge among information users and producers, and then a variety of other factors: institutional stability, which gets into the notion of boundary organizations, which help connect the scientists and the users, and this idea of design for learning. I raise these because they are all really central points to keep in mind as we think about what information is important, because it lies in this larger context. Okay, this is a simpler version of the diagram that Linda showed, but we often think about climate information as sitting in this chain, where we start with emissions and other drivers, look at the climate effects, look at their effect on natural and human systems, and that allows us to think about impacts and risks. And then we can think, in the adaptation space at least, about intervening in those systems to reduce risk, to reduce the impacts. Very often we think about climate information as informing vulnerability assessments and then actions in this predict-then-act framework, where we try to characterize future conditions, use that to rank near-term decisions, and then maybe do some sensitivity analysis. And the notion that what we want to do is reduce uncertainty fits in this predict-then-act framework: it could allow us to make better, higher-confidence rankings of alternative decisions. This framework, which underlies a great deal of our policy and decision analytics, can be very powerful for a wide range of problems, but under these conditions of deep uncertainty it can break down. It can lead us to underestimate uncertainties, which can lead to brittle plans. It can skew the way we think about problems towards the parts that we can predict better, with higher confidence; a clear example of this is focusing on means rather than extremes, even though the extremes may be a lot more important for decision-making. And then, in some sense, there's a real misallocation of effort, where what we're really looking for is creative ways to enhance resilience and reduce risk, which in part can be improved by better predictions, by reducing uncertainty, but is often advanced by clever new types of solutions.
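To put a number on that means-versus-extremes point, here is a minimal sketch, with purely illustrative distributions and a hypothetical design threshold, of how two climates with essentially the same mean can carry very different tail risks:

```python
import numpy as np

rng = np.random.default_rng(2)
# Two hypothetical daily-rainfall climates with (nearly) the same mean
# but different tail behavior; parameters are illustrative only.
clim_a = rng.gamma(shape=2.0, scale=5.0, size=100_000)   # mean ~ 10
clim_b = rng.gamma(shape=0.5, scale=20.0, size=100_000)  # mean ~ 10, heavier tail

threshold = 60.0  # hypothetical design capacity, e.g. mm/day
for name, x in (("climate A", clim_a), ("climate B", clim_b)):
    print(f"{name}: mean = {x.mean():5.1f}, "
          f"P(daily rain > {threshold:.0f}) = {(x > threshold).mean():.5f}")
```

Same mean, orders-of-magnitude different exceedance probability, which is exactly why ranking decisions on means alone can produce brittle plans.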
Here's the definition of deep uncertainty, which is consistent with what Linda showed: basically, the parties to a decision do not know, or do not agree on, the likelihood of alternative futures, or on how actions relate to consequences, the system model across the full system. So what do we do? A powerful way to address these situations is to conduct the analysis backwards, in the sense that we start with the decision: what are we trying to achieve, and what are ways to achieve it? We then use our analytics, our information, to stress test what we plan, looking essentially at under what conditions the proposed actions meet our goals, how they differentiate, and where they miss the goals, and then use that information to identify new or revised plans that are more robust, in the sense of performing well over the full range of uncertainties. The fundamental idea here is that we're using our science and analytics to focus on where we can provide high-confidence information, for instance where a particular set of actions might fail, as opposed to trying to understand exactly what future conditions might be. These sets of methods go under the broad heading of decision-making under deep uncertainty. This is a recent book which covers a whole wide range of methods, and I'm going to touch on some of those with examples here. So let me now dive in, not giving a broad survey of user needs, but giving you a couple of examples which I think may be helpful for our discussion. This is some work we did a couple of years ago: a climate stress test on water quality plans in the city of LA. In brief, the city is trying to meet federal water standards on the Los Angeles River. They went through the standard regulatory analysis, running hydrological models across a model of the cityscape, et cetera. But the plan was made assuming a stationary climate. They came up with an optimum distribution of resources between regional projects, spreading basins, that sort of thing; green streets, which is increasing the permeability of public infrastructure; and low-impact development, which is essentially changing building codes to affect the permeability and water capture of private infrastructure. And, as I said, they assumed a stationary climate. So in this analysis, we went in and asked the question: what would be the effect of uncertainties about land use changes and climate change on these plans? Just to give you a brief overview: we took a broad ensemble of different climate projections (this was done at the time when CMIP5 was the set to use) and used that data to run a wide range of scenarios, a couple of hundred here, stress testing the plan over a whole range of different futures which combine different land use and different climate change. You get a database that looks something like this, where a blue dot means the plan meets water quality goals and a red dot means it misses goals.
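As a rough sketch of what building and mining such a futures database can look like in code, anticipating the classification step described next (the driver names and the compliance rule below are hypothetical stand-ins for the real hydrological simulation):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_futures = 200
# Hypothetical uncertain drivers, expressed as multipliers on today's baseline
rain = rng.uniform(0.8, 2.0, n_futures)    # 24-hour rainfall intensity
imperv = rng.uniform(0.9, 1.3, n_futures)  # impervious area of the city surface

def plan_meets_goals(rain, imperv):
    # Toy stand-in for the detailed hydrology/cityscape simulation
    return 0.6 * rain + 0.4 * imperv < 1.25

meets = plan_meets_goals(rain, imperv)  # True = blue dot, False = red dot
X = np.column_stack([rain, imperv])

# A linear classifier recovers the best straight-line boundary
# between the blue and red regions of the futures database.
clf = LogisticRegression().fit(X, meets)
(w_rain, w_imperv), b = clf.coef_[0], clf.intercept_[0]
print(f"approx. compliance boundary: "
      f"{w_rain:.2f}*rain + {w_imperv:.2f}*imperv + {b:.2f} = 0")
```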
You can then run classification algorithms against the database and ask: what are the key drivers, or key combinations of drivers, which best distinguish between the conditions that give you a blue dot and a red dot? The answer you get looks something like this: not surprisingly, it's the 24-hour rainfall intensity and the impervious area of the city surface. The line across the two does a pretty good job, actually the best job you can do with a straight line in this multi-dimensional space, of distinguishing the red and the blue dots. On the right you get the cases where the plan misses the federal water standard, on the left where it meets it, and the green X up there is the baseline conditions used in the plan. So what you get from this is a sense of what combinations of land use and climate changes, in this case the intensity of the 24-hour rainfall, the plan can take, and what combinations push the plan out of compliance. Given deep uncertainties in both these drivers, this is much higher-confidence information. You can now take this information and turn it into an adaptive plan, one that's designed to monitor and respond and be robust to these uncertainties. This plan consists of current actions, signposts to monitor, and contingent actions if signposts are observed. So you get something like this: you take the current plan, looking 20 years into the future; you monitor the land use, through building permits, remote sensing, that sort of thing; and you monitor the climate science to see what it says about the extreme events. If you see the signposts, you can augment the plan to work better in the red region, and if not, you can continue the current plan. The idea is that this describes a robust adaptive strategy with sufficient specificity to enable public accountability. And as you see here, the particular needs, both regionally (this is a particular watershed in the LA Basin) and in the particular climate information, are very contextual to the particular need. There's also a strong interplay: how well you need to know the climate is not a standalone question, but is tightly coupled to some of the socioeconomic uncertainties which also drive the performance of the plan. Okay, the AR6 IPCC report tried to pick up some of these ideas in a fairly substantial way. This is how AR6 Working Group I tried to characterize sea level rise projections. On the left, you see a probabilistic range of sea level rise projections going out to 2150, and then there are these high-end storyline estimates which don't have probabilities on them but are described by what we know about the sorts of processes that would lead to the conditions that would give you this essentially out-of-sample sea level rise. So it's basically probabilistic estimates for the part we understand, plus science-driven storylines of processes that would take you out of sample. On the right, the same information is presented by when we might experience certain levels of sea level rise. There are panels for two different socioeconomic emissions scenarios, and it shows the earliest and latest dates, under a variety of assumptions, at which you might get a certain amount of sea level rise. The idea is that "when does it occur" is often more decision-relevant, in a variety of contexts, than "how much do you get by a certain date." Two minutes? Okay, great.
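To illustrate that "when, not how much" framing in code, here is a minimal sketch with a purely hypothetical sea level rise ensemble, computing the earliest and latest dates a one-meter threshold is crossed:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2020, 2151)
n_members = 100
# Hypothetical ensemble: linear rate plus acceleration, in meters
rate = rng.uniform(0.004, 0.015, (n_members, 1))   # m/yr
accel = rng.uniform(0.0, 1.5e-4, (n_members, 1))   # m/yr^2
t = years - 2020
slr = rate * t + 0.5 * accel * t**2

threshold = 1.0  # meters
# First year each trajectory reaches the threshold (inf if never)
crossing = np.where(slr >= threshold, years, np.inf).min(axis=1)
crossed = np.isfinite(crossing)
print(f"earliest crossing of {threshold} m: {int(crossing[crossed].min())}")
print(f"latest crossing (among members that cross): {int(crossing[crossed].max())}")
print(f"members not crossing by 2150: {int((~crossed).sum())}")
```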
So this sort of information is meant to inform adaptive plans. This is the adaptive plan for the Thames River Estuary, and the idea is that for different levels of sea level rise, one meter, two meters, three meters, there is a whole variety of actions you can take upon seeing those various levels. Both the probabilistic estimates and the storylines provided in AR6 are meant to inform this sort of analysis. This is how AR6 tried to use these adaptive pathways ideas and link in the climate information; we can come back to that if there's a bit of time later. I just want to add one other thing: some recent work using expert elicitation to try to fill in some of the gaps in climate model ensembles as part of one of these stress test plans. The San Francisco Public Utilities Commission was performing climate stress tests of their system as part of their long-range, long-term vulnerability assessment, and they used the CMIP5 ensemble, which was the one available when they were doing this. But we also did an expert elicitation where, based on the stress tests, the agency identified a variety of factors that would be potentially relevant to their decisions but, for whatever set of reasons, were not covered, or not covered sufficiently well, in the CMIP5 ensemble. We recruited a number of climate scientists and did an iterative Delphi process based on very specific questions the agency had put together. Just to give you an example of the sorts of things they were asking: they had three regions, very small climate geographic regions, that were particularly important to their system, the San Francisco Peninsula, the East Bay, and parts of the Sierras, essentially where Hetch Hetchy is. They were interested in both Delta T and Delta P averages, but also in seven other climate parameters having to do with drought duration and depth, and the frequency of atmospheric rivers. These were then characterized for them in terms of how big the changes might be, using bins that were connected to particular vulnerabilities of the system, along with levels of confidence. So again, this emphasizes both the decision specificity of the information that is often needed and the idea that the climate science community often has lots of information that is decision-relevant but may not be in a formal climate model ensemble. And then, just touching on something Linda also mentioned, there's this idea, emphasized in AR6, of complex and cascading risks, and the risk propeller, which now has an additional blade to it: risks that are created by human responses to risks. So we've increased the components of risk. And then there's the idea that risks compound and cascade, so that the climate risk we experience is not any one of the individual ones but the combination of a variety of different risk factors. This again is something that's going to be very important to decision makers. So, some quick observations: user needs are strongly context-dependent and can be shaped by the design of the response; I emphasized adaptive pathways here. Climate is one of many relevant uncertainties, which is important for how we think about uncertainty and its presentation.
High-confidence information is often not the most decision-relevant, and this idea of a climate stress test, to identify particular combinations of factors that would cause vulnerabilities in current or planned systems, can be very useful in thinking about the most decision-relevant combinations of uncertainties. Thank you. We can take questions. I can probably start with one question. This stress-testing approach, as compared to predict-then-act, is very interesting and, I really believe, very relevant. But at the same time, the example you presented was of course simplified: you had percent impervious area and 24-hour total precipitation. I have to know a lot about the system, and identify the parameters that are most relevant, before I even proceed with the stress test. So that can be a complex process, though at the same time I agree with you that it is much better and more system-relevant than predict-then-act. Can you add any more insight? Well, I agree with everything you said, and the example I showed actually involved running a fairly detailed system simulation against a whole ensemble of climate projections and then doing a statistical analysis to see what to put on those axes. So yes, it did require a complex system model. One of the places I think the field is going is that, as we gather example cases, we're starting to learn what's important; in particular, starting to learn how to develop a taxonomy of what sorts of situations and what sorts of things are likely to be important. One can do that by looking across lots of these use cases, and there's also some interesting work going on where people are trying to come up with different classes of abstract cases and trying to understand them. So this is actually, I think, an area where research could be really useful: how do we develop taxonomies of cases so that not everybody has to do the full, really complicated analysis to get a sense of what a stress test might look like for their system? Thank you. More questions? I don't see, ah, yes. There's a whole bunch of new work that's gone on in numerical weather prediction that uses adjoint sensitivity to determine what in a forecast model, such as an initial-condition perturbation, leads to a forecast error, or to some other perturbation from what you expect. Can you do adjoint sensitivities on your prediction models? And is that kind of what you're describing, without using those words? I'm not super familiar with the methods you suggested, so I'm not sure, but what we're doing there, in the work I showed, and there's a lot of work in this style, is using various sorts of stochastic weather generators to stress test the system, find the vulnerabilities, and then interrogate that back against what's understood from the models or, in the case I showed, from elicitation of experts. Does that help answer the question? I think so; I'm not an expert in this. Michael Morgan is; it's too bad he's not still here. He'd have better follow-up questions than I have, but yeah, that sounds right. Stress testing might be just another way to describe some of the same principles. It's kind of interesting. Okay. Which to me, again, means that this approach emphasizes understanding a system, whatever that system is, the best we can. I mean, you cannot stress a system and get results on vulnerability without understanding the system. Yeah.
So forward modeling of the system, whatever it is, LA water quality or whatever, accurate forward modeling, is a very fundamental component of that approach, which is interesting. Yeah, yeah. Linda? Yes. Thanks, Efi. Rob, thanks very much for your presentation. So providing individual examples is really great in terms of demonstrating how the method works. I guess my concern is: how do we actually produce a resilient society? And is there any way to do this beyond taking many, many, many cases? In other words, can this be generalizable, or is the only way to have a plan for the LA Basin, a plan for Santa Barbara, a plan for the Imperial Valley, and so forth? I worry about how this method could be made more generalizable. Do you have any thoughts on that? Yeah. I mean, certainly, just to rephrase your concern a little bit: all of those places you mentioned, and many more, are all going to have plans; that's what they do. The question is, can they develop their plans, without doing a lot more work than they're currently doing, but make them a lot more climate-resilient? When you try to write, say, guidance for these sorts of things, you often end up with screening processes, which try to abstract from the cases that have been done and turn them into a series of questions, so you can say: if you can answer these questions, then here's what you should do; this is what a full-blown climate stress test would tell you, but you don't need to do it, because you're in a situation where we kind of know what the answer is going to be. And then you maybe have three or four categories of these things: ones where your system is sufficiently like others, or sufficiently simple, that you can pretty much know what the answer is going to be and go on from there; intermediate ones where you may need to do some sorts of analyses but can pretty quickly get to an answer; and then more elaborate ones where it becomes much more difficult. So again, trying to understand how to map different situations into these broad classes is, I think, one of the things that's needed to make this sort of work easily disseminable across all the agencies, both small and large, who have to act in order to give us a climate-ready nation. I was told that we have to move on, unfortunately. So Bob, do you mind putting your question in the chat box? Sorry about that, but I follow instructions. So thank you very much. We have our next speaker, Klaus Keller. Klaus is the Hodgson Distinguished Professor of Engineering at Dartmouth College. Before joining Dartmouth, he was at Penn State University, where he was the director of the Center for Climate Risk Management. His research addresses two interrelated questions. First, how can we mechanistically understand past, and potentially predict future, changes in the Earth system? And second, how can we use this information to design sustainable, scientifically sound, technologically feasible, and economically efficient risk management strategies? Klaus received his master's in environmental engineering from MIT and his PhD in civil and environmental engineering from Princeton University. So Klaus, it's all yours. Thank you. Thanks for the invitation and a great discussion so far. I will try to make three simple points. Number one, climate change drives risks under deep, dynamic uncertainties, and the key here is dynamic uncertainties; Rob discussed this very much. Second, improving the characterization and communication of climate risks can improve decisions.
And three, the most important one: starting with decision-makers' needs can inform the design of mission-oriented basic climate research. I'd like to always start with decision problems, because I think it's helpful to focus our attention. Here's a picture of the Industrial Canal floodwall in New Orleans; I think Rob was actually on the same excursion. People build a levee, and the question is: how high would you like to build the levee? If you look at what the hazards are in terms of water levels, it becomes pretty clear that, let's say, the 100-year return period has not just uncertainty, meaning a probability density function, but deep uncertainty, meaning there are subjective choices that can lead to multiple PDFs. And it's not quite clear which one you would like to take, or rather, picking just one leads to underestimation. If you start with a hypothetical levee, then the risk of overtopping is of course a tail-area event, and it is deeply uncertain. So the uncertainties are deep, and neglecting uncertainty, for example adopting the best-guess estimate, underestimates risk: under the best-guess scenario you are safe, but with the uncertainty you are not safe, because you have flooding. So that's a very simple point. A slightly more complex point is that most models used to assess climate hazards, the general circulation models, only sample a small range of the parameter space; they cut off tails. If you look at the climate sensitivity, the range of the previous CMIP ensemble, not the current one, does not cover the full range of other estimates that use simpler models and can sample the tails. So if bad things happen in the upper tail, and there is good physical reason that they happen predominantly in the upper tail of climate sensitivity, then relying only on the existing generation of high-resolution models has potential issues, because you're blind to the upper-tail events. And the third point here is that this isn't just an academic, esoteric, glass-bead-game discussion; this can really be brought to bear. Here's an analysis of the return period of flooding for New Orleans for an existing, approximated levee design. What you have here is the return period in years; the certification is for better than 100 years. But the point is that the water height depends, as Rob very nicely said, on sea level and on storm activity, but also on parameters of ice sheets and on how much we drive emissions, which are human choices. And so if you make different choices, here are 18 scenarios where we probably cannot really say which one is better. All of them, at the expected value, at the median, do better than 100 years. But critically, for structures like hospitals, you want one in 500, and that doesn't always work, because some scenarios work and some do not. So neglecting deep uncertainty can change decisions. Toward the second point, slowing down slightly: improving the characterization and communication of climate risks can improve decisions. Let's take another example problem. This is a picture taken in a rural community in Pennsylvania; the person in the front is the first author of the paper we'll discuss soon. People elevate houses, and the question is: how high would you like to elevate your house? So it turns out that for elevating a house, there's a recommendation.
FEMA gives you a recommendation: a base elevation plus a certain freeboard. And then you have to pass a cost-benefit test. From a decision-theoretical point of view, what you would often like to do is consider, for example, reliability (let's say, you never flood is 100%) against investment costs. The ideal point, of course, is that you pay nothing and you're safe, which is the star here. But that's not possible in this region, because it floods a lot and houses do get flooded. As you increase investment, you get higher reliability, but with a decreasing return on investment. So you basically have a Pareto front here, which separates the dominated space, where you can improve, from the infeasible space, which you cannot reach. If you neglect uncertainty, then the best decision here is to elevate by a certain amount, and you get it. However, this approximation, the decision rule FEMA gives you, neglects uncertainties. If you account for uncertainties, your Pareto front actually deteriorates, and you actually change the conclusion: from the decision on the dotted line, which neglects uncertainty, to the one that accounts for it, there is a change, and the optimal decision changes as well. So, a very simple point: accounting for uncertainties, in this case about the geophysical hazard, but also about engineering and economics, like the discount rate, changes the decision. The motivating questions for the session also focus on how we design and communicate. And I think it's important for us, if we have a climate-ready nation as a design objective, to actually take stock of what we have, of what real people can see for climate risk information. What we have done in a recent study is look at certain flood risk information sources: FEMA studies, Risk Rating 2.0, Flood Factor, as well as academic studies. On the rows here, we have components: do they integrate climate change; do they characterize exposure; simplicity; transparency; open access; and legal acceptance. It's color-coded: dark blue is best, and light blue is bad, or, more politely, has room for growth. What you can see is that there is no single product that is dark blue across the board. So a user in search of information has to navigate somewhat harsh trade-offs in terms of which product to use; there is no product that, in our assessment, passes all the fundamental quality criteria we would like for a sound, transparent, accessible, informative flood risk information source. Maybe more importantly, we also have to engage with psychologists and with people who are experts in human-computer interaction and information design, because the way we communicate information impacts the way people perceive hazards and risks and how they make decisions. In the same paper I mentioned before, we compare, in a stylized way, how people actually get flood information these days. On the left side, there's the standard flood zone, like the FEMA flood insurance rate map, where you have 1-in-100 and 1-in-500 zones and you're either in or out of certain zones. Other studies recognize that this framing is a problem, because if you're in the 1% zone, you could be at 1%, or 2, or 3, or 4; you could be higher. Basically, the boundary gives you only the lower bound of the current classification.
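To see why that lower-bound framing matters over a decision-relevant horizon like a 30-year mortgage, here is a quick back-of-the-envelope calculation (the annual probabilities are illustrative):

```python
# Chance of at least one flood in n years, given an annual exceedance
# probability p and assuming independence between years: 1 - (1 - p)**n.
n_years = 30
for p in (0.01, 0.02, 0.04):  # the "1-in-100 zone" is only a lower bound
    print(f"annual p = {p:.2f}: {n_years}-year chance = {1 - (1 - p) ** n_years:.0%}")
```

So a house "in the 1% zone" whose true annual probability is 4% faces roughly a 71% chance of flooding at least once over the mortgage, rather than the 26% the nominal zone label suggests.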
And so for house number one, yes, you are in the zone, but because you are not right at the edge of the line, you actually have a much higher flood risk. Having a different way to plot things is more complex, but it carries more information content, and it might change decisions. The third panel: certain entities provide flood factors and give you a score for your house. You can look at those places, and you could, for example, be in an area like this house here in panel C, which is green, so you're fine. The problem is that if you need to go to the hospital, the percentage information doesn't tell you that you could not, for example, get through on the road to the hospital. So the point I'm trying to make is that not only is there a problem in how we design information and how we navigate the trade-off between being inclusive of many factors and being understandable, but the way we communicate also has an impact. And we shouldn't just leave it to the climate and science nerds, no offense, myself included, to think about how we communicate this. Wrapping up and coming to the third point, and this is the most important one, also in terms of the vision of this session this afternoon: starting with decision-making needs can help inform the design of mission-oriented basic climate research. Let's come back to the decision problem of how high to build a levee, and let's step back to how this often is done. This very much mimics Rob's approach of doing it inversely, but now inverse in the science design as opposed to the decision design. In the process from the IPCC and many other assessments, we start with Working Group I: what are the stakes? Then we ask: what are the risks and the odds? That's the work of Working Group II. And then we ask: what are we going to do about it? That's decision analysis. And they're sequential: we start with one, then two, then three. And once you do the decision analysis and want to go back, it's: sorry, come back in five to six years, because we're not really doing anything else for this assessment. The point here is that you have to make choices about where you focus, and this is where Linda's top dog and cat analogy is helpful. When you look at dogs, they sometimes bark up a tree to go for a cat, but they're not very smart about it: they're barking up the wrong tree. And the issue is: are we as natural scientists barking up the right tree? Because there are many, many trees we can bark at and look at. So one approach is to actually do this in an inverse sense, where you start with the decision analysis and then ask the question: which parameters and uncertainty sources from the Earth science components are actually driving the variance in the decision-makers' objectives? This is illustrated in this figure, where we look at the question of which factors drive the variance in the flooding probability over the next 30 years for building a levee. This is a very specific decision. What we have is a somewhat complex model with a joint probability density function, and we do what is called a global sensitivity analysis, using the Sobol' method for global variance decomposition. And we have different factors: glaciers and ice caps, a temperature model, CO2 emissions, storm surge, the Antarctic ice sheet, and the Greenland ice sheet.
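Before the result, here is a minimal sketch of what such a Sobol' variance decomposition can look like in code. It assumes the SALib package; the parameter names, bounds, and the overtopping-risk model below are toy stand-ins, not the actual coupled flood model:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy decision-relevant output: overtopping risk over the next 30 years
problem = {
    "num_vars": 4,
    "names": ["levee_height", "storm_surge", "antarctic_rate", "greenland_rate"],
    "bounds": [[3.0, 6.0], [0.5, 2.5], [0.0, 3.0], [0.0, 1.0]],
}

def overtopping_risk(x):
    h, surge, ais, gis = x
    water = 2.0 + 1.5 * surge + 0.3 * ais + 0.05 * gis  # hypothetical water level
    return 1.0 / (1.0 + np.exp(4.0 * (h - water)))      # smooth exceedance proxy

X = saltelli.sample(problem, 1024)              # Saltelli sampling scheme
Y = np.apply_along_axis(overtopping_risk, 1, X)
Si = sobol.analyze(problem, Y)                  # first-order and total indices
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:15s}  S1 = {s1:5.2f}  ST = {st:5.2f}")
```

In this toy setup, the storm surge term dominates the variance and the Greenland rate barely registers, mirroring the qualitative pattern described in the talk, though the real analysis of course uses the full model and joint distributions.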
And we partition how much the uncertainty in each parameter, including the interaction terms, drives the variance in the objective, the metric of the risk of overtopping over the next 30 years. What we see is that the big balls here are the important ones; if a factor isn't showing, it's not significant, and where there are connections, we're into the interaction terms. And yes, how high we actually build is important, but the storm surge uncertainty is much more important, and the Antarctic ice sheet is important. The Greenland ice sheet, for example, is not very important here. This does not mean that there is no inherent virtue in studying the Greenland ice sheet. It only means that for this specific decision problem and this decision setup, a short-term decision where you pour the concrete every 30 to 70 years, the Greenland ice sheet is not fast enough, or its uncertainty does not grow fast enough, compared to the other uncertainties. The question then is: what does this mean for the design of mission-oriented basic research? Well, if it's just curiosity-driven, every parameter is fine. If it has a mission, to inform this decision for climate risk management, then maybe we want to focus on those factors that matter for the decision, that actually move the needle. Stepping back and coming to the conclusion: I think it is time, and many people recognize this, to shift gears. Often, when students are educated, we have this classic model: prior knowledge, lines of evidence, a system model; we do some Bayesian data-model fusion, with outcomes and predictions. That's the part we see, but the problem is that this misses interactions. The reason is that the outcomes are compared to values; the values are used to define metrics; the metrics then inform the choice of strategies for decision making; and the strategies then interact with the system model. And of course, design for learning means that the outcomes can feed back in as evidence for an update. Now, this means that how we optimize the system model, and where we elicit priors, has to account for the interactions we have with decision making. To inform the values and the metrics, the choice of where to focus is not value-free; it is something where, in my view, it is helpful to be explicit about value choices and deliberate about which choices we make. And with that, I thank you for your time; if you have any questions, send me an email, and I look forward to your discussions. Thank you. Thank you, Klaus. One question, by Neil. Yeah, thank you. First, I have to say, if my terrier says there's a squirrel up a tree, it's up that tree. So I don't know about this barking up the wrong tree thing. You have a smarter dog. She's pretty sure of herself in that regard. I mean, a lot of this is music to my ears, in the Granger Morgan vein, especially the normative values and everything else. But what I struggle with is what I might call the front-wheel-drive version of decision making under uncertainty, where what we need to know guides the car. We also have in society this enormous preponderance of confirmation bias, and the appearance that a lot of our decision makers find the experts that will confirm what they want to do anyway. Yeah. At a distance, these two things seem awfully similar.
And I understand that they're not, but what's the communication space around differentiating getting the necessary information by framing the decision ahead, versus just going out and looking for what confirms what you want to do anyway? This is a great question, and, sorry to make the obvious point, you're right. So let me try to make two points. Number one, just to reiterate, I am not saying that fundamental science should be abandoned, of course not. We need to look over the horizon, and we do have the serendipity of discovery, which is how we find out new things. However, what I am saying is that if your objective in a mission agency is to inform decisions, then there are choices which, given your objectives and your values, and this is important, we'll come back to this, have different rankings of how important they are for the decision. And the second point comes back to the values: whose values are we talking about? I can send you papers; there are many papers, by Rob Lempert and other authors of similar studies, and it often starts with engaging with the stakeholders and decision makers at the location. To be really blunt, it is, in my view, not really promising to ask a climate scientist at an Ivy League university what would be really relevant to inform decision making on evacuation in New Orleans. Just to be really clear here: the local experts and the academic experts have different mental models, and we need to align them and understand this, and they also have different values. So eliciting the relevant mental models, engaging with decision makers, and having a traceable account is something that is really important. You know, Bob Carpenter is on this call, and Linda and Rob, and they do this way better than many of us. But I think it's really important to be explicit and clear and traceable and transparent about whose values those are. What I did not show, for example, is that this spider diagram of the analysis is for one decision; for other decisions and other objectives it looks very different. For example, if you are concerned about biodiversity preservation over 400 years, then of course Greenland is important; it makes perfect sense. So what I'm trying to say, maybe not very well, is that which parameter is important depends on the values in the decision, and the values are sometimes the values of the decision makers and sometimes the values of the physical scientists who do the analysis. I would argue that to be salient and relevant, it is potentially more useful to focus on the values of the people who are impacted, as opposed to the values of the analysts in an ivory tower. I don't want to belabor this too much, but I hope it's more clear now. Thank you. Stephen, we go to Stephen, who has some questions from the chat, from the public. Yes, so in parallel we have an opportunity for the public to ask questions, and I just pulled up some questions that are related to what was just raised. One was, coming from a decision-maker standpoint: a specific decision decides the modeling and what parameters you're considering, so how do we know that we're considering the right decisions and the right decision makers when asking that question? And a second one is more about time scales, noting that warming or climate change needs to be considered on geologic time scales: how does that fit with the decision frames and the timing
the speakers discussed? I mainly want to raise those up, if you have some brief responses, before we move on to the next speaker. I will try to be brief; those are excellent questions. So, which decision to choose? Well, that is to some extent co-designed by the decision makers, stakeholders, and researchers. There are decisions which are really important for mission agencies, like where to put nuclear power stations, where to put hospitals, or how to build a levee: decisions where we as a society invest a lot of money. And it's not my choice; I'm not an elected official. I vote; I'm a citizen of this nation. But the point is that decisions, in my view, are to some extent the choice of the people who make the decisions. However, there is very good research on where decision analysis can actually be useful; Ben Hobbs and others have done really excellent work on that classification, and I can put links to the work by Ben Hobbs and others into the chat afterwards. And the second point, on time scales: yes, you're right, there are time scales from the operational, a day to a week, to the more tactical, to the really strategic and long-term, multi-decadal. What I've shown here are multi-decadal time scales: home elevation, levee building, putting pipes in the ground. This is where there is relatively little flexibility; if you can adjust, it is less hard, but whenever you put things in the ground on time scales relevant to climate change, the climate change time scale potentially interacts with the time scale of your decision, and so finding those problems is helpful. And I just want to point to the question and the suggestion of Rob: yes, we cannot do an analysis worth one year of postdoc time for each house we elevate; that is not going to work. What is potentially helpful is to ask: can we find archetypal, typical, empirical rules that work well in most cases, and can we influence the engineering design standards? Because engineers do designs all the time; they follow rules, they follow handbooks. How to update those is, in my view, one of the larger challenges, but also an opportunity for us to mainstream the findings of people like Rob and Linda and others. Thank you. Steven, thank you. And Klaus, very quickly: in the simple example that you had, with the circle and the variables of importance, I assume that in a real system it has to be a dynamic thing, because amplifications compound events. So, this was 15 minutes, and I tried to abide by the time limit; you can only do so little. There are time dependencies there; people have made movies. Yes, it gets complicated really fast. I can also share a few papers where it's time-dependent; I can put them into the chat afterwards. Thank you, I will look them up. So thank you very much. We have our third speaker now, Jennifer Jacobs. She's a professor in civil and environmental engineering at the University of New Hampshire. Her research focuses on the characterization of hydrologic processes and distributed hydrologic modeling in all of its components: evapotranspiration, soil water dynamics, snowmelt, streamflow, and water and energy, both through modeling and experimentation. She has been a community leader, especially in CUAHSI, the Consortium of Universities for the Advancement of Hydrologic Science, and she's leading an NSF project on a research coordination network for climate-resilient infrastructure.
She got a bachelor's degree in electrical engineering from Brown University and her PhD in civil engineering from Cornell. So, Jennifer. Great, well, thank you very much for inviting me today. It's been wonderful to sit around the table and to hear all the exciting work that's going on, and I really appreciate the work that Rob and Klaus and Linda introduced; now I'm going to take it into a little bit more granularity. As Efi introduced, I'm a surface water hydrologist, and my role over the past 10 years or so has been spanning the boundary between the climate change scientists and the transportation engineers; the water people can fill that role rather nicely. So I'm going to put on my transportation hat and lead you a little bit into that sector, and into how that sector is thinking about climate change and about uncertainty. So, let's see if we can get this to move; there we go. First of all, I'm going to introduce civil engineering to you: you've got to know a little bit about the sector in order to understand what's going on. Then I'm going to talk about uncertainty in that sector, in particular in transportation; then I'm going to go through a series of brief examples of where we've thought about uncertainty, and wrap up with trends, gaps, and opportunities. Okay, so here's how we'll play it. This is a picture of a bridge that was being built in my hometown in Georgia, and it's a great example of how every discipline has a bunch of sub-disciplines. Within civil and environmental engineering we have water resources, environmental, pavement, structures, geotech, transportation, sometimes coastal engineering, and sometimes others. So if we look at that bridge as a system: the bridge part that's being floated in, that you see, is known as the superstructure, and that's what our structural engineers are going to take care of. The pavement engineers are the ones that are going to put the black stuff or the white stuff on top of the roadway and connect that bridge into the adjacent roadways. The transportation engineers are the ones that make certain that that bridge works within the entire system. And the water resources engineers are the ones that tell you how tall to make that bridge and how wide an opening to make, in order to pass the design flood, whether it's current or future. The environmental engineers typically don't have a role in transport, except possibly to inform the environmental regulations on where we can and cannot permit growth. All right, so that sets our stage. Now, moving within civil engineering: we're working on the dry side, environmental and water resources are the wet side, and we'll talk about transportation. So, transportation. Well, I overheard somebody's conversation saying "I drove to the metro station to take the metro here"; you all know what transportation is, you use it on a daily basis. It is an absolutely crucial component of our daily lives, both individually and in the delivery of goods and services, and when it breaks down, you hear about it. This is a large community: I think most of us go to the American Geophysical Union meeting, and there is a TRB meeting that meets in January in Washington, DC, and that meeting is as large or larger than the AGU meeting; that one is for the transportation engineering community. So it's a very large community that's taking
care of a lot of assets, and I've listed some of those here: over 4 million miles of roads, 600,000 bridges, 136,000 miles of rail, 20,000 airports, 900 ports, and many, many other assets combined; there's a lot in play with the transportation community. The transportation community also has a number of attributes that make it a really important part of interacting with the changing climate, and with the climate change community. The first piece is that everything we're building today is designed to last: if we put something in the ground today that is not climate ready, it's going to be in the ground for anywhere from 10 to 100 years, not being climate ready. So we've got lifetimes that are very similar to climate change lifetimes within transportation engineering. The other thing that we know is that infrastructure is already being impacted by the changing climate: when we see extreme events, changing extreme events, oftentimes what you're seeing is bridges that are blown out and roadways that are damaged, because that damage to the infrastructure is what impacts people. We have 12,000 miles of coastal roads that are currently experiencing high-tide flooding, and this is only going to get worse; we have a lot of vulnerable assets. All right, so, uncertainty. The transportation community is no stranger to uncertainty; we've been thinking about this from the get-go. However, we don't typically use the term uncertainty, and as a bridge engineer explained to me, you don't really want to tell the public that you're not certain about whether or not that bridge is good; you get the idea. So more often we frame it as risk, or as reliability, the inverse of risk. Transportation assets are designed, built, and maintained in a manner where most decisions involve some degree of uncertainty, and yet you still need to build that bridge. The Brooklyn Bridge was built long before we knew how to do design rainfall and flood forecasting and modeling, and yet it's still standing. So the engineering community will take the information it has into account when it does its designs; the engineering mindset is that good designs do not fail, and design standards are something we rely on. Klaus mentioned this very briefly: a bridge engineer, when he or she is designing that bridge, is going to use one of the codes that's over here, and they are not going to deviate from those codes unless there is an extremely, extremely good reason. How are these codes developed? They're developed by the American Society of Civil Engineers or similar organizations, typically by volunteer groups that meet quarterly or annually, and so design standards change on the order of 10 years or longer. So if we were to definitively say we should be changing the bridge design because the climate change community now knows how to do three-second wind gusts, that change would likely not take place for another decade; we've got a long lag time here. The design standards provide lots of different specifications that account for uncertainties, and a lot of them have absolutely nothing to do with climate. The place where climate comes into play is going to be in the loading. A loading on a bridge could be the heavy vehicle, how much does it weigh; but the other part of the loading is the environmental loads, so wind speed, rainfall, precipitation, heat all come into the environmental loads, and that's the place where the climate scientists can inform transportation engineering. All right, so where do we sit now? I was saying to
someone over lunch, it's a little bleak right now in the transportation community. There are a lot of opportunities, but we are not including climate change in most of our engineering practice at this point in time; very little. Partly, there's limited reliable guidance for how to adapt infrastructure that could guide agency practice. And there's current practice and institutions: not the design guidelines but the institutions themselves, how does a state department of transportation work? Those institutions aren't designed to take in the information that's coming out of this community and to use it in a coherent way. Like any other community, there are different silos: there's planning, there's operations, there's design and maintenance, and the environmental people, who were the early adopters of climate change considerations, are an entirely different division than the engineers doing the design work. On the existing tools: Rob has wonderful work, it's amazing, and we would take it to heart and build it into our systems if we possibly could, but that's not how it's being done in practice right now. We're still doing top-down types of approaches, looking at what the future climate entails and employing that to determine a relative order of magnitude of what to expect in the future. So let's move on to a couple of examples, so we can give a little granularity to what I'm talking about. Flooding: if you ask any state department of transportation, maybe aside from Arizona and California, what the number one problem is, they would say flooding, and there's both riverine, inland flooding as well as coastal flooding. So let's take our first example, inland flooding. Currently, we've looked to NOAA, and NOAA has updated our design precipitation using Atlas 14. That was updated after about 30 years; it took a large lift to get it updated, and the agencies are incredibly happy to have updated estimates of precipitation for their design standards. There is no national guidance on future design precipitation, and yet I would bet that every state department of transportation knows about the NCA4 graphs; the Climate Science Special Report shows how precipitation is changing. They know that design precipitation is going up, and engineers want to make the right decision; so we might be bound by these guidelines, but if we know the future is changing, we're trying to juggle that act of how to increase that precipitation. Okay, so don't look too closely at this; I'm going to give you one example from a state not to be named, and I want to point out a couple of things. The first thing I'm going to point out is the second column, and this is new, based upon the change in climate. Before, we looked at all of our different assets that we might design that could potentially flood, and we just asked: are we going to design for a 10-year event, a 25-year event, a 100-year, a 500-year? Now what we're doing is looking at the criticality of assets and saying, maybe that culvert isn't so important; if it floods once in a while, we're okay with that. But that signature bridge, we can't afford to lose. And so most agencies are developing tiers, maybe three tiers, maybe four tiers; in this case, tier one would be "we can afford to have it flood," and tier three would be "don't let that happen, don't let that go." That's actually a really important change in these disciplines: being able to account for the reliability that we need from different assets.
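To put numbers on that return-period framing: a T-year design event has an annual exceedance probability of 1/T, so the chance that an asset sees at least one exceedance over an n-year service life is 1 - (1 - 1/T)^n. Here is a minimal sketch of that arithmetic; the tiers, return periods, and lifetimes are hypothetical illustrations, not values from the talk.

```python
# Probability that a T-year design event is exceeded at least once
# during an n-year service life: P = 1 - (1 - 1/T)**n.
# Tiers, return periods, and lifetimes below are hypothetical.

def lifetime_exceedance_probability(return_period_yr: float, life_yr: int) -> float:
    """Chance of one or more exceedances of the design event over the asset's life."""
    annual_p = 1.0 / return_period_yr
    return 1.0 - (1.0 - annual_p) ** life_yr

# Hypothetical criticality tiers: (design return period in years, service life in years).
tiers = {
    "tier 1 (low criticality, e.g. minor culvert)": (10.0, 50),
    "tier 2 (moderate, e.g. secondary bridge)": (100.0, 75),
    "tier 3 (critical, e.g. signature bridge)": (500.0, 100),
}

for name, (T, n) in tiers.items():
    p = lifetime_exceedance_probability(T, n)
    print(f"{name}: {p:.0%} chance of at least one design exceedance in {n} years")
```

Under these assumed numbers, the low-criticality culvert is almost certain to see its design event at least once, while the signature bridge keeps the lifetime chance below about one in five. And this arithmetic assumes a stationary climate; if design precipitation is going up, these probabilities only grow, which is exactly the juggling act described above.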
The other thing, and this is the Wild West analogy: every state is doing something different, if anything at all, in trying to figure out how to do design precipitation, and I think this is really an opportunity for this community. The first one, NOAA Atlas 14, is a present-day standard. NOAA-plus doesn't use any climate model output; it uses the upper 90-percent confidence interval of the estimates from NOAA. I know you're cringing, okay, all right, but I'm trying to give you a sense as to where we are, which is very different from what we were talking about this morning. The fourth one, NCHRP, the National Cooperative Highway Research Program, project 15-61: that was a research project that developed approaches to use global climate model output, depending upon how vulnerable the asset is or is not, and it used a number of different approaches to estimate future rainfall at a number of different time periods. That's got some legs underneath it. And so the state here adopted a combination of NOAA-plus, for political reasons as well as for good reasons, and NCHRP 15-61. That stood for about three years, and they have now revised their design standards to use something called the Cornell projections, which are a weather model simulation: so we're not using any climate model output of precipitation, but we're simulating the weather using upper-atmosphere conditions in order to estimate design precipitation. Okay, maybe that might work, but this tells you we have a range of different approaches being taken. Next one, let's go coastal. Coastal has been at it for the longest time period. With coastal, if we start on the far left, we go to previous work, and this is where sea level rise scenarios come in. The coastal community in the transportation world uses those sea level rise scenarios, conducts GIS analysis, maps out where the vulnerable sections of roadways, bridges, and other assets are, prioritizes those based upon criticality (are they close to a hospital, are they on an evacuation route, are they in a vulnerable neighborhood), does specific site assessments for those particular assets, and then moves into network analysis, where they might look at that entire system and then make adaptation recommendations. Where the climate community comes in is in this previous work, and again, this is a different state. Here what they've done is indicate the tolerance for flood risk: different assets will have different tolerances for flood risk, and so they will have different design guidelines for sea level rise. Okay, take-home on this one: climate model output is synthesized into a single number that is based upon measures of criticality. Example three, we'll make this short and sweet. We know the CMIP scenarios, we know the CMIP projects; we have an increasing number of global climate models, and if we plot them all out, precipitation over time, what we see is a whole range of different possible futures. Which is right? We know that none of them is correct, that we should be using a number of different models as well as different emission scenarios, and that we also should be sensitive to the equilibrium climate sensitivity. So what guidance is our transportation engineer being given regarding how to pick models? Because I can tell you, they're not going to use 50 models. So the same NCHRP 15-61 guide took all of the CMIP3 and CMIP5 models and organized them into groups: group 1 was tried and true; group 2 was modifications to existing models that may or may not play out; and group 3 was the new kids on the block, where people are trying something radically new. And the recommendation was: use group 1 models, and make certain that you get a range of equilibrium climate sensitivities, so get low, medium, and high; if you're going to pick only three, pick one from each.
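As a sketch of what that selection rule might look like in practice, assuming a hypothetical table of group 1 models with published equilibrium climate sensitivity (ECS) values; the model names and numbers below are placeholders, not the NCHRP tables.

```python
# Pick a small, ECS-spanning subset of "group 1" models: one each from the
# low, medium, and high thirds of the equilibrium climate sensitivity range.
# Model names and ECS values here are placeholders for illustration.

group1_ecs = {
    "model_A": 2.3,  # degrees C per CO2 doubling (hypothetical)
    "model_B": 2.8,
    "model_C": 3.1,
    "model_D": 3.6,
    "model_E": 4.2,
    "model_F": 4.8,
}

ranked = sorted(group1_ecs.items(), key=lambda kv: kv[1])
third = len(ranked) // 3
low, mid, high = ranked[:third], ranked[third:2 * third], ranked[2 * third:]

# One representative per tercile, here simply the median member of each.
selection = [tier[len(tier) // 2] for tier in (low, mid, high)]
print("Selected models spanning the ECS range:", selection)
```

The point of the rule is simply to guarantee that a three-model subset brackets the sensitivity range instead of clustering at one end.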
The take-home is that we haven't done this for CMIP6; guidance is needed on how to select models, and going to the Bureau of Reclamation site and picking the top one is probably not the best approach. But my guess is, if you looked at a lot of studies, the top couple of models that happen to show up on those tools are the ones that have gotten more play than the ones that happen to be at the bottom. All right, a quick one at the end: model physics are not always perfect, and stakeholders sometimes have different questions than temperature and precipitation. Winter: winters cover half of the globe, they have large transitions in the water, energy, and carbon cycles, and they matter a lot to transportation. Within that landscape we have this lovely network of roads; that's what's being shown up there for the New England area. Seventy percent of US roads are in areas that get ice or snow or are frozen part of the time, and whether we get ice or whether we get snow makes a huge difference. We've also got large budgets: we spend about two billion dollars a year on winter activities for our US highway agencies, or about 20 percent of their overall budget. So after floods, winter maintenance, winter challenges come into play. All right, so how do climate models do in winter, if anybody's taken a look at them? Not particularly well. We get reasonably good agreement among climate models, with some cases where we've got a fair amount of variability among the models. The right-hand side shows us what our problem is: our models tend to under-predict snow, and it melts too early as compared to what's measured, and because of that, we tend to have models with soils that are too cold compared to what we observe. So there's an opportunity in this area, which just happens to be an area that I do research in, but my guess is there are other areas where, if you were to ask the stakeholders what they wanted, the thresholds that they needed, those would extend beyond precipitation and temperature. Okay, so bottom line: there are some areas where more reliable physics could improve the models. Okay, so to wrap up: there are trends, there are gaps, there are opportunities, and I'm going to point out just a couple. The first one is that new funds are coming in: the PROTECT program just got released, the first call, with due dates in August, and what that means is we finally have some money at the state and federal level to deal with our transportation and climate change issues, which is huge for us. But the challenge is that the expertise and the capacity are limited within those agencies. One of my state partners at our recent meeting said that they have a 40 percent vacancy rate at their agency; 40 percent of their positions aren't filled. There's also wide disparity in resources and efforts, and as the NSF speaker this morning said, you can have these programs, but if they don't get to the right people, the communities that really need them, the underserved communities, we're going to have challenges.
We also recognize that the pace of change within transportation is not keeping up with the pace of climate change. People are familiar with ecosystems, wetlands, salt marsh migration, species being able to migrate; we've got a transportation community that can't migrate, or is not migrating fast enough. I've already talked a little bit about some of the gaps, and we've hit some of the challenges: not just the magnitude, but also how long does it last, what's the scale, how does our system break down. And then the final piece: I would encourage you to take a look at what the PROTECT program is offering, because there are some real wonderful opportunities for synergy, for developing tools with partners, for working with partners to increase the resilience of those systems, and for strengthening vulnerable evacuation routes. So I'll stop there, and I will thank the National Science Foundation, which has been wonderful in supporting this work and developing these partnerships over the past decade, as well as acknowledge my collaborators throughout the ICNet community. Thank you, Jennifer. With this, we could talk the whole afternoon about everything you presented here, but in the interest of time we'll take one question; I see the hand of John, and then we'll take a short break. John. Thanks for that great presentation. I'm not a transportation engineer, but I've been told that the metric is how many cars can be moved rather than how many people can be moved, and I'm wondering, under these conditions of extreme flooding and freezing and the uncertainty of climate extremes, how much are transportation engineers thinking of a step change? To say, you know, we've got a lot of roads out there, maybe they should be elevated rail. So I'm just wondering: are the transportation engineers, with climate extremes, thinking that we need to not redo the same system, and to think differently and get away from asphalt and roads? That's a great question, and I wish I had a really good answer for it. I think we're in a world of incremental change at this point, and we haven't reached the point of transformational change. There are certainly people that are very much thinking about this and proposing changes, but I think we're going to be in the world of incremental change, and worrying about the existing assets, for the near future. Thank you, and I think we'll stop here. We're five minutes late, so we'll take a five-minute break and be back at 2:50; I changed my watch from California, so I'll see you in five minutes.

We're restarting with our next session, and I'd like to invite Libby to moderate. Hi everybody, thank you. So I'm Elizabeth Barnes, or Libby; I am a professor of atmospheric science at Colorado State University, and I will be moderating session two. We've just heard from three speakers about decision making, all of whom at some point during their slides mentioned climate models, or climate projections, or climate predictions. So in this session we have three speakers who will be discussing specifically uncertainties in climate modeling itself, and the guiding questions for the session, as shown on the screen here, are: What are the sources of uncertainty in climate modeling? What sources are most important to quantify or reduce to improve predictions and projections, or the uncertainties in predictions and projections? What are the low-hanging fruits for reducing uncertainties? Are there particular types that
may be more or less challenging to reduce? And finally, how may decision-making needs inform the development and use of climate models? I'll give very brief introductions to each of our speakers. Our first speaker in this session is Tapio Schneider. He is a professor of environmental science and engineering at Caltech and a senior research scientist at the Jet Propulsion Laboratory. His research focuses on many things, including climate dynamics of Earth and other planets, turbulence in both the atmosphere and ocean, cloud dynamics, and, as we will likely hear about today, climate modeling. So with that, Tapio, go ahead, and I think, share your screen. Thanks Libby, thanks for having me. Let's see, I hope you can see the screen. Looks great, thank you. Yes, I do want to talk about uncertainties in climate modeling, which is what I was asked to talk about. Okay, here we go. As you know, Earth has warmed about 1.2 degrees since the industrial revolution, and the climate is continuing to warm; we will most likely have to live in a world at least about 2 degrees warmer in the coming decades. Almost all climate impacts that you look at scale with the global mean temperature change, so global mean temperature change is actually a useful metric for asking questions about climate change. Here's just one example, from a paper by Sonia Seneviratne: it shows the percentage change in heavy rainfall in the South Asian monsoon region, and you see that the percentage change scales with the global mean warming. One consequence is that mitigation remains critical: it remains critical to reduce the global mean warming as much as we can. But a corollary is that adaptation will be unavoidable. In fact, we already see climate impacts right now; for example, the probability of the devastating rainfall associated with Hurricane Harvey occurring has been estimated to have been roughly tripled by climate change in the past few decades. So climate change is already affecting risks, in this case the risk of extreme rainfall events, as one example. And while mitigation remains essential, adaptation is unavoidable, and adaptation also means we need to know what to adapt to. And here's the rub. Here's just one example: global mean temperature change as a function of time in the future for two scenarios from the IPCC report, a lower emission scenario and a higher emission scenario. If you focus your attention on the two-degree line, let's take that as an arbitrary threshold and ask the question: when will we cross it? Now, there are some climate models that say we have already crossed that threshold; well, that's obviously wrong. And there are other models that say, even in the high emission scenario, it will be the 2060s or 2070s before we cross it. That huge span in predictions means that these predictions are not fit for purpose for adaptation planning, for the engineering purposes that we have heard about before. So climate models are good for realizing that we need to mitigate what we can of global warming; they're not good enough for a lot of adaptation decisions. For some, maybe, but for a lot of engineering decisions, like the bridges we heard about, they're not good enough. There are various sources of uncertainty, as I think is quite well known. Here are some graphs from a paper by Flavio Lehner et al. on the large ensembles. What is shown on the left is the global decadal mean temperature change relative to some baseline in a few emission scenarios, and the thinner lines for each emission scenario
are just model simulations run from different initial conditions, for different modeling centers and the like. The right graph breaks down the uncertainty in the predictions into three factors: one is internal variability, that's uncertainty in the initial condition; one is scenario uncertainty, that's uncertainty in what we will do, in emission scenarios; and the third is model uncertainty, so these are model errors. And you see that, for this specific question of the decadal mean temperature, internal variability is a large factor in the immediate future; it decreases in relative importance compared to model error, which is the dominant source of uncertainty for the next few decades; and as you go further out, towards centennial time scales, the scenario uncertainty becomes dominant. Now, sometimes these graphs are interpreted as saying, well, there is a large contribution from internal variability, and obviously that's not reducible by any improvement in climate modeling. One perspective on what internal variability does is that it adds a source of uncertainty: it is a nuisance if you want to say how much global mean temperatures will change. Here is an example from models that differ only by very small amounts in the atmospheric initial condition and produce quite divergent statistics in the global mean temperature. That's internal variability; from this perspective, it's a source of uncertainty. But perhaps a more useful perspective, for what we are talking about here, is that it's a contributor to risk. If you ask a different question, what's the probability of exceeding a given threshold of temperature or precipitation or whatever it may be, then of course internal variability contributes to that probability of exceeding the threshold: if it's large, you're more likely to exceed the threshold by natural causes alone. But it's not an uncertainty in that sense; it's a contributor to risk, and perhaps that's the more valuable way of looking at it for the focus of this discussion today. So internal variability is something you have to live with, but we will have to quantify it: it contributes to risk, and risk needs to be quantified. Then there's model error, which by far dominates. What can we do about that? There is some progress to be made by increasing the resolution of climate models. Here is an example from Tom Delworth at GFDL: the top plot shows the probability density function of precipitation frequency, with observations in blue, and then various ensembles of model simulations, relatively coarse ones from CESM in black dots, and then two high-resolution simulations, SPEAR at high resolution. You see that the high-resolution model captures the frequency distribution of precipitation events more reliably. However, resolution is not going to solve all problems here. If you look at high-resolution simulations of mean precipitation in the bottom panel, this is from ECMWF, comparing roughly 10-kilometer and 1-kilometer resolution simulations, you see that even at 1-kilometer resolution, the blue line, there are large biases in the distribution of precipitation relative to observations in black. So higher resolution will give you a better representation of the frequency of, say, extreme precipitation events, but you might still get it completely wrong where the rain falls. So resolution is not a solution in itself. What do we do? There are various priorities, I think, that flow from just looking at the sources of uncertainty and what you can do about them. Number one is: we should produce large ensembles at the
highest resolution we can, and we need this to quantify internal variability and, as I'll argue in just a minute, also for calibration and uncertainty quantification of the small-scale processes that are behind the model errors. We need to reduce and quantify model errors, and they can be reduced and they can be quantified; I will say a bit more about how I think this can be done in a minute. Suffice it to say here that we need to accelerate progress in modeling small-scale processes, and one path to accelerating progress is using data much more extensively than we have. And in the end, you want to have models that diverse groups of users can use to explore uncertainties broadly, to sample broadly from all sources of uncertainty: that includes initial-condition uncertainties, it includes sampling from model errors, which need to be quantified (once you have quantified them, you can sample from them), it includes structural model errors, and you need to explore scenario uncertainties in the same way. It means that you need climate models that can run relatively efficiently. The PCAST report was already mentioned; it was produced by a working group on extreme weather risk and climate change, on which I served for the past year, and there are a number of recommendations coming out of it. For our discussion here, I think two sets of recommendations are crucial. The working group found that we are ready to produce large ensembles at resolutions in the 10-kilometer range or so; various US climate modeling centers can do this now. And the recommendation of the report is: let's produce large ensembles, over 100 ensemble members, at the finest resolution we can, to at least quantify risks from internal variability. That alone is not sufficient; the data so produced need to become useful and accessible. So one extensive set of recommendations deals with how to make the data more accessible: building APIs and user portals that allow access to the data and broader experimentation with it, to look at things like the compound risks that were already mentioned. There's a second set of recommendations that deals with the downstream issue that was, I think, the topic of the first session today: hazard models. Once you have ensembles of climate simulations, you can use them to anchor offline models for specific hazards, windstorms, storm surges, hurricanes and the like, and those can be built on top of the ensemble of climate simulations. What this will give you is quantification of risks due to internal variability; a few decades out, if that's the focus, model errors dominate. If you have a few modeling centers doing this, you get some loose quantification of model error, but not a very rigorous quantification, because you'll end up having only a few models. That's what came out of that working group. There's a host of other recommendations, on making data more accessible for hazard models, for example, and on a climate change adaptation plan, that I won't talk about now. There's one big piece that's not addressed in the PCAST report, and I think it would be a good topic for BASC to take on; it's not really addressed globally either, and it's the following. Suppose the PCAST recommendations are implemented: you'll get around 300 ensemble members, perhaps three institutions with 100 members each, something of that order of magnitude.
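As a minimal sketch of the kind of risk quantification such a large initial-condition ensemble enables, assuming a hypothetical set of decadal-mean warming anomalies across members (the numbers are synthetic, purely for illustration):

```python
# Estimate threshold-exceedance risk from a large initial-condition ensemble:
# internal variability spreads the members, and the fraction of members above
# a threshold is a simple estimate of the exceedance probability.
# The ensemble here is synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_members = 100
# Hypothetical 2040s warming anomalies (deg C): forced signal plus internal variability.
forced_signal = 1.8
internal_variability = rng.normal(0.0, 0.25, n_members)
ensemble = forced_signal + internal_variability

threshold = 2.0  # deg C, e.g. the arbitrary line discussed above
p_exceed = np.mean(ensemble > threshold)
print(f"Estimated probability of exceeding {threshold} C: {p_exceed:.0%}")
```

With only a handful of members, that exceedance fraction would be far too noisy to be useful, which is one reason the 100-plus-member recommendation matters.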
From that, you would want to keep not everything, but on the order of 100 petabytes of data, and 100 petabytes is not a data set you can download. Downloading the data for processing is not feasible; you need to analyze the data where they are, and the challenge is that we currently have no viable model for storing the data and making them accessible and useful. In the commercial cloud, storing those data would cost about a million dollars per month; it's really expensive, and we have no clear model in the government for how to provide the data in such a way that they can be analyzed where they are, because that's what needs to happen. I think that's an important topic that has no solution yet, and it has not seen the discussion I think it deserves. Ultimately, you'd like to make all data accessible and useful through one API, and all data here includes observations as well: we get 50 terabytes daily from NASA alone. Ideally they would be co-located with the simulation output, so you could do things like downscaling and bias correction of models; it would be nice to have a model for doing that. I want to talk a few minutes about model errors. I think they can be reduced. They dominate on timescales of 20 to 40 years or so, which I think is the key timescale for adaptation decisions. Making data accessible fills a second gap in this value chain that extends from data to adaptation tools, in all the diagrams that I've seen before; I think data are what's missing in the concentric diagrams and all the versions we have seen. Data are crucial: we have a lot of data, and we don't really use them. So building APIs and portals deals with the second gap in this value chain, which right now is not built out, not integrated. But I think reducing model errors needs to come through filling the first gap in the value chain, between models and data. The big problems in climate modeling are small-scale processes, for example clouds. For clouds, the dynamical scales are on the order of 10 to 100 meters or so, and the microphysical scales are at nanometers, but the model resolution is, well, maybe going toward 10-kilometer resolution; it's a far cry from what we need to actually resolve these uncertain processes that lead to large-scale biases, like the biases in precipitation distributions that I showed you before. I think data are an important part of the solution here, of how to make progress. Right now, the models we use for small-scale processes are not extensively data-informed, but they can be. Two minutes? How many minutes did you say? Two minutes. Okay. So when you use data in a climate modeling context, you have a few special challenges. We need to predict a climate none of us has seen, so you need to generalize out of sample. You want people to trust the models; we've heard that earlier today, and it means models need to be interpretable. They can't just be black boxes that produce something we don't know the origin of and can't easily validate. And uncertainty quantification is essential, because risk assessment is essential. How do you do that? Deep learning alone, I think, is not the way forward, but it helps. The success of deep learning rests on over-parameterization, on models with many parameters; these are very expressive models, but that makes generalizability, interpretability, and UQ challenging. Science, as we have done it for the last 400 years, rests on parametric sparsity, reductionist science; Newton's law would be the paradigmatic example.
It's a one-parameter law that describes how apples fall from trees and how planets orbit stars, but clearly it reaches its limits in complex systems like the Earth system. So I think the way forward is to combine the best of both worlds. In the CliMA project, we are doing that: we are building an Earth system model that's process-based, but where the parameterizations are, to a large extent, new and built from the outset so that they can learn effectively from data, while remaining physics-based, chemistry-based, biology-based. We are harnessing diverse data, simulated and observed, for calibration and uncertainty quantification, focusing on climate statistics like the seasonal cycle in this process, and we use computing power where we can, for example to generate high-resolution simulations. It's not the time to go through the technical details, but suffice it to say that in this process you can also learn about structural errors in the models, which likewise need to be quantified, not just parametric errors. Let me just leave a few key points here. I think internal variability contributes to risks; we quantify it by having large ensembles. Large ensembles, it turns out, are also what's needed for the calibration of models with data. We have tools now, AI tools, to do this fast enough that it's becoming a doable proposition to learn automatically from data for a whole climate model, for a whole earth system model. In the end, we need models that allow us to explore possible climate outcomes as broadly as possible, by sampling from internal variability and initial conditions, from model error, both parametric and structural, and from scenario uncertainties. Once you have ensembles like that, which come with attached probabilities (something the tools I didn't have time to introduce let you do), you can use them to anchor downstream hazard models and propagate uncertainties through the chain, all the way to data products that can actually inform engineering decisions, for example. Thank you. Thank you, Tapio. Okay, questions. Effie, I see a real hand, go for it. Yeah, thank you, Tapio; that was exceptional, I really liked it. One issue that I want to bring up: you mentioned that machine learning and AI are not the solution, yet they can guide us. One example I would like to bring is that, recently, we tried a deep neural network, but instead of the standard cost functions, root mean square error, etc., we taught it to learn the things that we cared about: for example, the space-time wavelet spectrum, so we know what variability it misses at what scale, and we tell it not to miss it. So, space-time structure: do you think, or are you thinking, along these directions? Because, again, you said it right: observations, we have so many, and we don't use them as much to learn structural error. Yeah, yeah, I think there's a lot we can do with the design of loss functions, designing them so that you focus on what matters; the wavelet spectra you mentioned are one possibility. You can design loss functions, say with exponential weighting, that emphasize extremes, if that is what you're interested in. And maybe the one thing I want to say about loss functions: in the business of climate prediction, we want to predict statistics of the system, exceedances over thresholds of precipitation, mean temperatures, the seasonal cycle of temperature, Arctic sea ice, and so on. And to me, the corollary is that the loss functions should contain those statistics. That changes the learning problem quite a bit, because statistics accumulate in time, and that makes the loss function evaluation expensive: you need to run a model for quite some time to evaluate it. And it changes the learning problem because it becomes more of an inverse problem than, say, a supervised learning problem, and it changes the algorithms you would need to use. But we have tools to do that, and to do it well: essentially, build loss functions that focus on what matters, climate statistics, and then small-scale processes like entrainment in clouds you can learn from those large-scale statistics.
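A minimal sketch of what such a statistics-based loss might look like, assuming hypothetical arrays of simulated and observed daily precipitation; the statistics, thresholds, and weights are placeholders, not the actual CliMA objective.

```python
# Statistics-based loss: compare climate statistics (long-term means, the
# seasonal cycle, threshold-exceedance frequencies) of a long model run
# against observations, rather than matching fields point by point in time.
# Arrays, thresholds, and weights below are hypothetical, for illustration.
import numpy as np

def climate_statistics_loss(sim: np.ndarray, obs: np.ndarray,
                            threshold_mm: float = 20.0,
                            w_mean: float = 1.0, w_cycle: float = 1.0,
                            w_extreme: float = 5.0) -> float:
    """Mismatch in long-run statistics of daily precipitation [time, location].

    The heavier weight on exceedance frequency plays the role of the
    weighting that emphasizes extremes, mentioned above.
    """
    # Long-term mean at each location.
    mean_err = np.mean((sim.mean(axis=0) - obs.mean(axis=0)) ** 2)
    # Crude seasonal cycle: average over years for each day of a 365-day year.
    cycle = lambda x: x[: (len(x) // 365) * 365].reshape(-1, 365, x.shape[1]).mean(axis=0)
    cycle_err = np.mean((cycle(sim) - cycle(obs)) ** 2)
    # Frequency of heavy-rain days, an exceedance statistic.
    freq_err = np.mean((np.mean(sim > threshold_mm, axis=0)
                        - np.mean(obs > threshold_mm, axis=0)) ** 2)
    return w_mean * mean_err + w_cycle * cycle_err + w_extreme * freq_err
```

Evaluating such a loss requires a long model run before the statistics converge, which is why, as noted above, calibrating against it is closer to an inverse problem than to standard supervised learning.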
Thank you. Other questions? If not, I have one. Tapio, just going off of, and I know we'll talk more about this during the panel, but in terms of the guiding questions: since you're actively developing models here, and you've talked through your process, do you have any examples of how you are, or could be, bringing in the decision-making needs? As we heard earlier today, more of flipping the script, if you will, and going in the other direction. And I'm curious about more than just "people need to know if it's going to rain, so let's make sure we do precipitation well"; I'm wondering if there are ways you could envision people bringing that into the process, or that you're doing already. Yeah, we are doing some of that, with people in the building technology sector, for example, who want to build buildings that are comfortable 20, 30 years from now, and with people designing data centers and the thermal requirements for them. I would say, and I think we may do more of it later, but I see my primary job right now, or perhaps the job of the climate modeling community as a whole, as building better climate models that actually quantify uncertainties, and making samples from these models easily accessible through some API that you can then plug into downstream hazard models; and those you really want to co-develop with the ultimate stakeholders. And this will be a whole ecosystem of them, because there are very different requirements for different stakeholders. But what they all need at the back end is climate predictions, or projections, with quantified uncertainties. Great, thank you. Steven? I have a question from the public input. You mentioned large ensembles at the highest feasible resolution; can you talk about the trade-off between ensemble size and resolution, and is there a maximum useful ensemble size? That's an important and good question with no easy answer. I mean, if you talk with people in the reinsurance industry who want to assess hurricane risks, they want ensembles of 20,000 to 30,000 members, and that's not what we are going to produce with a climate model. So there's an obvious trade-off. Compute cost scales cubically in resolution; that's one important thing to realize. So going from 10 kilometers to 1 kilometer is a factor of a thousand in compute: you can do one ensemble member at 1 kilometer, or a thousand at 10 kilometers. The evidence so far, to me, says to do the thousand at 10 kilometers, if that's the computational resource you have available, because the improvement from 10 down to 1 kilometer is not that large yet. But say 100 versus 10 kilometers: would you now want a million members at 100 kilometers versus a thousand at 10 kilometers?
Well, there is sizable improvement going from 100 to 10 kilometers, and at 10 kilometers you actually resolve tropical cyclone frequencies reasonably well, and that clearly has value. I think what will end up happening in this space is that we'll produce moderate-size ensembles, size 100, and use the wonderful generative models that AI is providing us to produce much larger ensembles, perhaps focused on specific impacts, perhaps focused on specific variables. I think it's a great space for generative AI to fill in what we can't do with brute-force computing. Awesome. Thank you, Tapio. Okay, we're going to move on to our next speaker; thank you very much.

So our next speaker, and I'll go ahead and get your things up, share your screen; our next speaker is Isla Simpson. She is a Scientist III in the Climate and Global Dynamics Division at NCAR. Her research works to understand dynamical mechanisms involved in the variability and change of the large-scale atmospheric circulation and its impacts on regional climate and hydroclimate, using a hierarchy of modeling approaches. So with that, Isla, take it away. Okay, thanks. Do my slides look good? Yes, everything looks good, and we can hear you. Thank you. Okay, all right, thanks for the opportunity to participate in this. So there might be some overlap here between my views and Tapio's. First of all, I think our global projections are really the starting point for anything in terms of decision making. We may not be able to tell everything we need to know on the small scale from these global models, but we need to know how the global circulation and moisture transports and temperatures are going to change, to then input to other techniques which might be able to provide the necessary information on the smaller scale. So here I'm really focusing on our global earth system models. And as Tapio already went through, we have three primary sources of uncertainty. We have internal variability, which is just the natural variability in the system; it's irreducible, it's going to be there in the one version of the real world that we view, but we can quantify how important it is, and we can quantify the range of outcomes that could arise as a result of it. Then we have the scenario uncertainty: how are our forcings, like greenhouse gases and aerosols, going to evolve? And then there's the model response uncertainty: how does a model respond to an external forcing that it's given? And I just want to note that as we move towards fully interactive carbon cycle models that are emissions-driven, as opposed to concentration-driven, these uncertainties will become a bit more connected, because the models will start to determine how much carbon is taken up by the land and the ocean. So I just wanted to note that; but here I'm not really going to discuss the scenario uncertainty, and I'll really focus on the internal variability and the model response uncertainty. I further want to divide the model response uncertainty into two parts: one is inter-model spread that we're aware of, that we can try to understand and reduce; and the other is potential issues that might be lurking that might be making all of our models wrong.
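A minimal sketch of how that three-way partitioning is often estimated, in the spirit of the breakdown Tapio showed, assuming a hypothetical array of projections indexed by model, scenario, and ensemble member; the data are synthetic and purely illustrative.

```python
# Partition projection spread into internal variability, model uncertainty,
# and scenario uncertainty from an array shaped [model, scenario, member].
# Synthetic data, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n_models, n_scenarios, n_members = 10, 3, 20
# Hypothetical end-of-period warming: scenario signal + model offset + noise.
scenario_signal = np.array([1.5, 2.5, 4.0])[None, :, None]
model_offset = rng.normal(0.0, 0.5, (n_models, 1, 1))
internal = rng.normal(0.0, 0.3, (n_models, n_scenarios, n_members))
proj = scenario_signal + model_offset + internal

# Internal variability: member spread within each model/scenario, averaged.
var_internal = proj.var(axis=2).mean()
# Model uncertainty: spread across models of the ensemble means, averaged over scenarios.
var_model = proj.mean(axis=2).var(axis=0).mean()
# Scenario uncertainty: spread across scenarios of the multi-model means.
var_scenario = proj.mean(axis=2).mean(axis=0).var()

total = var_internal + var_model + var_scenario
for name, v in [("internal", var_internal), ("model", var_model),
                ("scenario", var_scenario)]:
    print(f"{name:8s}: {v / total:.0%} of total variance")
```

The relative sizes of the three terms depend on the lead time, variable, and region, which is exactly the point of the examples that follow.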
So the first guiding question we have is: which sources of uncertainty are most important? And I think that's definitely going to depend on the question you're asking, what variable you're looking at, and what location you're looking at. Here I just want to go through one example and demonstrate a case where I think all of these sources of uncertainty are important, and that's North American hydroclimate projections. I'm going to start with internal variability. Here I'm showing you precipitation projections in the 100-member CESM2 large ensemble for 2030 to 2050 in the wintertime, and this is the average over those 100 members: you get some wetting in the West and wetting in the East, and a bit less of a change in the South. So this is our forced response in the model. But we can ask, using this 100-member large ensemble, what is the range of futures that we could actually experience? If we just focus on a region like the Southwest, you can pick out the ensemble member that has the wettest projection and the ensemble member that has the driest projection, and these ensemble members differ only as a result of different realizations of internal variability. What this tells us is that internal variability really can be huge, and, as Tapio advocated, we need large ensembles with a realistic representation of internal variability to be able to characterize this, and to know the range of futures that we might need to adapt to. Next I want to show you an example of where we have inter-model spread that we're aware of, that we could try to understand and reduce. We'll stick with precipitation projections over the U.S., but I'm moving to the summer now, and I'm going to show you results from 10 large ensembles that have at least 20 members each. We'll focus in on the Four Corners states, just looking at average precipitation projections over the Four Corners. I'll show you results here as a function of global warming level, like Tapio also showed, to put the models on more of an equal footing in terms of how much the planet has warmed. All you see at the beginning, this gray shaded range, is to orient you: this is the range of uncertainty in 20-year averages that we could have from internal variability alone, so you can get some perspective on how big the model differences in their forced projections are. And here I'm showing you, in the colors, the ensemble mean for each of these 10 models. This is the forced response; this is how the models are responding to greenhouse gases. And you can see that they are all over the place, and the range of uncertainty is large: the models at the opposite ends of the scale are exhibiting forced responses that are totally outside anything we would expect as a result of internal variability alone. And then, of course, on top of this model response uncertainty we have internal variability; here I'm showing the individual members for the two most extreme models, and that expands your uncertainty range even more. So for our climate model projections in this Four Corners region, we have a really big uncertainty in summertime precipitation projections, and a large part of that uncertainty is coming from the fact that the models are all doing different things in their response to external forcings. This is the kind of uncertainty that we can try to understand and reduce. So we need to understand why the models are doing different things,
and then, through that understanding, figure out which ones we think we trust the most. There are approaches like emergent constraints, where you try to relate a model's projected change to some aspect of the present-day climate that you can observe, and then come up with a more informed view of which models you think are more correct. Now I want to talk about potential issues that might be making all models wrong, and I think this is something that we really need to be more and more concerned about, because we're really starting to experience climate change in the real world: we're starting to see the real world evolve, and we're starting to realize that there are some problems in our models. There are a few problems that I'm concerned about; I'm only going to go through one of them here, but I'll mention a couple more at the end. I'm going to talk about the representation of sea surface temperatures in the tropical Pacific. First of all, a reason why we care about what the tropical Pacific is doing: this is analysis from another paper by Flavio Lehner, looking at trends in precipitation over the US Southwest. In observations, we've had a drying trend over the last few decades. If you look at a couple of model ensemble-mean projections, you don't really see much of a signal; but if you give that same model the observed time evolution of sea surface temperatures, you get the drying. So the evolution of the sea surface temperatures has played an important role in giving rise to this drying over the Southwest. And this is what our trends in sea surface temperatures over the last four decades have looked like; it's a slightly different period, but that doesn't really matter. What you see is that we've had a relative cooling in the eastern tropical Pacific and also in the Southern Ocean, and this has played an important role, I think, in the drying trend in the Southwest. It's also at odds with what our models suggest is the forced response to greenhouse gases: if you take the average over all models, they suggest we should actually have had a relative warming in the eastern tropical Pacific. Of course, our real world is just one realization, and internal variability could be playing a role here; but I think we're starting to realize that there's a chance that might not be the whole story, because the observed trends really lie at the edge of our model distribution. Here are a couple of recent papers, both looking at a metric of the difference in sea surface temperatures between the west and the east. This one compares observed trends, the lines here, with the large ensembles, and this one compares all of the CMIP6 members, in blue, with various observed trends. Both of these studies show that the observed signal does lie within the model distribution, but very much at the edge of it. So we have a dilemma: is what we've seen in the real world just a very unlikely occurrence of natural variability, or do we have something wrong in our models, whether that be the forced climate change signal or the internal variability? I think that's still somewhat of an open question, but there's more and more evidence that we may have something wrong here. And one thing I want to emphasize, just like Tapio did, is that we really need large ensembles to be able to see this: we need many members to know where the real world is sitting, in terms of its probability of occurrence, if we take our models as being the truth.
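A minimal sketch of that kind of check, assuming a hypothetical observed trend and a hypothetical ensemble of modeled trends (synthetic numbers, purely illustrative):

```python
# Where does the observed trend sit in the model distribution?
# Express the discrepancy as a percentile rank and as a standardized
# distance in model standard deviations. Synthetic data, illustration only.
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical west-minus-east Pacific SST trend (K/decade) in 200 members.
model_trends = rng.normal(loc=-0.05, scale=0.08, size=200)
observed_trend = 0.15  # hypothetical observed value

rank = np.mean(model_trends < observed_trend)  # percentile rank in the ensemble
z = (observed_trend - model_trends.mean()) / model_trends.std()

print(f"Observed trend sits at the {rank:.1%} percentile of the ensemble")
print(f"That is {z:+.1f} model standard deviations from the ensemble mean")
```

An observation sitting a couple of model standard deviations from the ensemble mean is exactly the edge-of-the-distribution situation described here, and the percentile estimate is only trustworthy when the ensemble is large.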
So there have been a couple of arguments made for what might be going wrong in the models. One focuses on the tropics: models might be getting the ocean dynamical thermostat mechanism wrong. The other is linked to this cooling in the Southern Ocean: maybe models are not getting the cooling in the Southern Ocean, and that signal is being transmitted into the low latitudes. And indeed, if you compare the Southern Ocean trends in models, which all tend to show a warming, the observed slight cooling in the Southern Ocean lies at the very edge of the model distribution. So we still have a lot of open questions here, but I want to show you some new results from some high-resolution simulations with CESM that I think indicate that we very much do have a problem, or something incorrect, in our low-resolution models. Our standard-resolution models are typically about one degree, and the simulations I'll show you here use a quarter degree in the atmosphere and a tenth of a degree in the ocean. This is the difference between observations and models that we've seen over the historical record, expressed as a standard deviation of the model distribution. And here are some results from these different model resolutions; these are initialized predictions, so they're not the same thing as free-running climate models: they're initialized from observation-based states and run for five years. What you're seeing here is the anomaly correlation coefficient between the model predictions and observations. You see that at the low resolution you've got negative skill in the eastern tropical Pacific and in the Southern Ocean, and at the higher resolution that moves to being positive skill. If we take the difference in skill between the high- and low-resolution models, you can see that this pattern very closely resembles the pattern of discrepancy that we have between models and observations in the historical trend. I don't want to make any firm conclusions here; there's a lot more to figure out. But there's a chance that, as we move towards higher and higher resolution, we might start to see our forced climate change signal really deviating from the lower-resolution models. And then we have the question: what would it take, or what would it look like, to do all of our future projections with these moderately higher resolution models? So, hopefully I've convinced you that each of these sources of uncertainty has the potential to matter. And the other guiding question is: what are the low-hanging fruits? I'm going to somewhat avoid answering that question, because I don't think we should be going for the low-hanging fruits; I think we should be going after the biggest, most impactful problems that we think we have, and I think this example of tropical Pacific trends is one of them. There are other examples, though, that I'd be happy to talk about in the discussion, like the signal-to-noise paradox in North Atlantic predictions; I think we also have issues in representing land-atmosphere coupling and humidity trends, and I'm sure there are many more. Our uncertainties due to internal variability will never be reduced, but we need to continue to quantify them with models that represent internal variability accurately.
The uncertainties due to model response differences are challenging to reduce, but that is something we should be able to do: we should be able to understand why models do different things, and then figure out which ones we think are more correct. So how do we go about doing this? Yep, okay. There's always this trade-off between resolution, complexity, and ensemble size if you have limited computational resources, and much like Tapio, I think we need to maintain the ability to have large enough ensembles. We need to be able to characterize the systems we're looking at, and to determine whether we have discrepancies from observations once you account for internal variability. But we need to move up this complexity-and-resolution axis enough to get the answers right, figure out what it takes to improve our projections, and include those improvements in our models, while still retaining the ability to have large enough ensembles. And of course, we need to keep our eye on the real world, because we're at the point now where we can start to see discrepancies between observed and predicted signals in the observational record. So, just one final slide, about the final question: how might decision-making needs inform the development and use of climate models? I think the decision-making needs will inform how global models are translated into actionable information: for example, what spatial and temporal scales are needed, what projection timelines are needed, and what is the most useful information? Do people want to know the worst-case scenario that they need to adapt to, or the most likely outcome? And then I think the decision-making needs should also inform the targets for improvements in model development: what are the uncertainties that we currently cannot handle when trying to make decisions? If the Four Corners decision makers are totally fine with that range of uncertainty in precipitation, then maybe that's not the focus; but if it really matters a lot to narrow down that range of precipitation projections, then that's what we need to do. So, I'm probably out of time; I will, I guess, just leave my conclusions up there, and I'm happy to take any questions. All right, thank you, Isla, and I see a hand: Linda, go for it. Thanks, that was really a very useful presentation, and I just want to compliment you on actually focusing on what we were trying to accomplish here; that's really important. Anyway, on your issue about the resolution of the models: there are all those simulations from HighResMIP from CMIP6, some of which were at about 25 kilometers, and I'm not really familiar with the results of HighResMIP, so I wonder if any of those also give different responses for the tropical Pacific and Southern Ocean trends that you were describing.
Yeah, it's a good question. I was wondering that as I was putting these slides together, and I don't know. Rob Wills may have looked at the HighResMIP simulations in his paper, and I don't recall the answer. One thing to say is that I don't know that high resolution is everything, because with CESM, even at low resolution, there was some work done with Southern Ocean pacemaker experiments, where you force the Southern Ocean to have that cooling trend. In CESM1 that did not produce a signal in the eastern tropical Pacific, while in CESM2 it seems to; there's a new paper coming out by Sarah Kang about that, and it seems to be related to differences in the representation of the cloud feedback. So I think it's conceivable that other higher-resolution models might not get it if other aspects of their simulations are different, like the cloud feedbacks. But I don't know offhand whether anyone has looked at the HighResMIP simulations in this context, and these are very new results with CESM, so I think there's still a lot to be done to figure out, you know, is it getting it right for the right reasons, and what the mechanisms are.
Great, thanks. All right, John, go for it.
Thanks, I had kind of a similar question. In the end, Isla, if it is resolution, doesn't the resolution dependence still have to be explained?
Yeah, absolutely. And I think what we really need to do is figure out: is it ocean or atmosphere resolution, or both? If what you need to get this right is a one-tenth-degree ocean, and you can happily have a one-degree atmosphere on top of that, then I think that would be good guidance for the next generation of models, right? It would not be nearly as expensive as having a quarter-degree atmosphere. But there's a chance that we're kind of wasting our time with one-degree ocean models. It very much still needs to be understood, and maybe there's a way to parameterize the relevant processes in the lower-resolution models too. Yeah, it needs to be understood, and we need to figure out what is the minimum thing we need to do in models to get this right, and whether we're confident that they are then more right.
Other questions? Okay, if not, then it's my turn. So, Isla, I was interested in your comment about how you split uncertainties, and one of the splits was differences between models, I think, is how you phrased it, and that we can really understand that. I guess my question is: where do you see the balance between trying to understand the differences across the models we have, versus trying to move forward, if you will, and just build better ones? Does that make sense? Presumably there's a balance there, because we don't want to spend all our time on the models we have, but we don't necessarily want to just say, well, that one was last year's, and move on.
Right, yeah. I think the way we should be doing it is building off of the ones we have, and incrementally seeing how improvements change things, and understanding that. I guess I'm a little nervous about this kind of move towards digital twins, and just whole new models at a totally different resolution, without seeing what the path to getting there is. Okay, you could have a four-kilometer model that maybe does something better, but would you also get that with a 25-kilometer model? I feel like we need to incrementally be building on the models that we have.
I don't know what the right balance is, but when we see something in the real world that we know we're getting totally wrong, I think we need to get that right, or else we don't trust any of the models.
Great, okay, thank you, Isla, and I think we're just on time to move on to our last speaker of the session. All right, so go ahead, and Balaji, you can start pulling things up. As an introduction: Balaji is a distinguished fellow at Schmidt Futures and was previously head of the Modeling Systems Division at NOAA's Geophysical Fluid Dynamics Laboratory. He's an expert in climate modeling and currently works to build new programs in climate and computational sciences. So with that, go for it.
Thank you, thank you, Libby. So, can you see my screen? We can see and hear you okay. Great, thank you. You will probably hear some of the same concerns and preoccupations that have been raised by Tapio and Isla. I hope I'll say sufficiently different things so we can make the conversation interesting, but there will be some overlaps and, I think, some points of disagreement. I will state some opinions which hopefully make the session interesting; the opinions, let me just start by saying, do not necessarily belong to either of the affiliations I've listed here, one being Schmidt Futures, where I currently work, and one IPSL in France. You will even see some of the same figures sometimes showing up here. So we talked a lot about how people downstream are going to use our models. We have a scalability challenge, in the sense that the number of people actually producing climate models, working on climate models, is quite small compared to the number of people, shown in this figure here from the EPA, who want to use our results. So there is a challenge with scalability of actually making our models run bigger, faster, whatever it may be, but there's also this scientific scalability challenge. And the second thing is that we need to communicate uncertainty very well across this chain. This is another old figure, from a paper by Moss et al. in 2010, which showed the whole sequence of modeling that takes place when you try to go from pure science towards actionable information. So this is a whole model chain, and there is a social challenge, a scientific challenge, a semantic challenge (we don't often speak the same language), and of course a software challenge: how are we going to share data, and how do we actually couple models, if we're going to couple them? We'll get to the problem of calibration and so forth, which has already been mentioned, but once you get into this space where models are talking and feeding back on each other, we have a lot of problems with how this is going to work, so we have to think very carefully. And one of the main things I want to say here, in the context of this meeting, is that we need to communicate what we know, what we have calibrated, what we don't know, what the uncertainty is; all these things need to be communicated very carefully across this entire model chain. Now, this figure has been shown before, and we've discussed these scenarios, these three modes of uncertainty. I was talking to a friend of mine recently, after a presentation where I showed the same figure, and she said that if Hawkins and Sutton had a nickel for each time this figure was shown somewhere, they would be sitting on a fortune right now. But it's an important figure. So we talk about these three kinds of uncertainty: the internal variability, or chaotic, uncertainty; the scenario uncertainty, which is shown in green here; and the structural, or epistemic, uncertainty, which is our imperfect understanding of how the world works.
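That three-way split can be made concrete in a few lines. This is not the Hawkins and Sutton code, just a crude illustration of the idea, assuming a hypothetical projection array indexed by scenario, model, member, and time:

```python
import numpy as np

def partition_uncertainty(proj):
    """Crude three-way partition in the spirit of Hawkins & Sutton (2009).

    proj: projections with shape (n_scenario, n_model, n_member, n_time).
    Returns three variance time series:
      internal  - spread across members, averaged over scenarios/models
      model     - spread across models of the member means
      scenario  - spread across scenarios of the multi-model means
    """
    internal = proj.var(axis=2).mean(axis=(0, 1))
    member_mean = proj.mean(axis=2)                 # (scenario, model, time)
    model = member_mean.var(axis=1).mean(axis=0)
    scenario = member_mean.mean(axis=1).var(axis=0)
    return internal, model, scenario

# Toy data: 3 scenarios x 5 models x 10 members x 100 years.
rng = np.random.default_rng(0)
t = np.arange(100)
trend = 0.02 * t * np.linspace(0.5, 1.5, 3)[:, None, None, None]  # scenarios diverge
bias = rng.normal(0.0, 0.3, (1, 5, 1, 1))                         # model differences
noise = rng.normal(0.0, 0.5, (3, 5, 10, 100))                     # internal variability
internal, model, scenario = partition_uncertainty(trend + bias + noise)
```

Even with this toy input, the fractional contributions reproduce the qualitative shape of the figure: internal variability dominates at short lead times, and scenario spread takes over towards the 100-year lead time.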
One thing to note here: a word that has been used many, many times today is "prediction." Linda showed a slide at the top of the session, and she was not endorsing the slide, so I'm not holding her to task for it, but it had a concentric diagram, and right there in the middle, in the bullseye, was a big circle that said "predictions." The implication is that if we had better predictions of the Earth system, somehow we would get to the right answers. I'm going to argue here that predictions are actually overrated compared to the importance of counterfactuals. We need to know the many things that might happen if we did something, or did not do something, none of which can be verified in the real world. That is important to keep in mind: decision-making is all about counterfactuals, what happens if we do something and what happens if we don't. A second point I want to make with this same figure is that what's shown here is a lead time of 100 years. We've already talked in this session about the trade-offs between resolution, complexity, and the size of an ensemble, the number of instances; I want to also emphasize that simulation duration matters for many things we want to know, particularly if you want to know what's happening to the carbon, and to the heat stored in the ocean. If we go on a decarbonizing trajectory, how are ocean heat uptake and ocean carbon uptake going to react? All of these require fairly long simulations. So a fourth constraint placed on models, on how you use computing for modeling, is simulation duration. A baseline requirement I would state, just based on this figure, is that if you have something you're calling a climate model, you need to be able to run 100 simulations of 100 simulated years each. The 100 days I put there is a kind of arbitrary thing, which you can think of as perhaps a scientist's attention span: if you can't start running a model and get a result in three or four months, then you've probably moved on to something else. But just keep the first two hundreds in mind: you need to run hundreds of simulations, and they need to be of a certain duration as well. And one of the challenges we are facing (if people have heard me talk, I show this slide fairly often) is that computers are getting bigger, but they're not getting faster. This is, on a logarithmic scale, showing how CMOS technology, basically circuits etched in silicon, has evolved over the last several decades. We know this as Moore's law, that the number of transistors doubles every 18 months or so. But the important curve here is the green one, the frequency shown in megahertz, which is the actual speed of an arithmetic operation. It topped out at around a gigahertz, 10 to the 3 megahertz in this graph. That means you can do an operation in about a nanosecond, and that has not gotten faster in about a decade.
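As a rough illustration of why the frequency plateau bites, assume (a simplification, not a figure from the talk) that the arithmetic cost of one simulated year grows with the cube of grid refinement, two horizontal dimensions plus a shorter CFL time step:

```python
# Back-of-envelope: why fixed clock speed caps simulation speed.
def relative_cost(refinement):
    """Work per simulated year relative to a 1x baseline grid,
    assuming cost ~ refinement**3 (2 horizontal dims + time step)."""
    return refinement ** 3

for r in [1, 2, 4, 10]:
    print(f"{r}x finer grid -> ~{relative_cost(r):,}x the arithmetic")
# More chips can absorb this extra work in parallel (weak scaling),
# but each operation still takes ~1 ns, so a single simulated year
# does not get faster just by waiting for new hardware.
```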
So what this means is that you can do weak scaling: you can do bigger problems in the same amount of time, which could mean a higher-resolution model or a more complex model. You cannot do strong scaling: if you wait another five years, the problem that you're solving today, at that resolution, with that many variables, will not run any faster; it'll run at the same speed. And this has been shown. This figure, from one of Tapio's papers, shows the same kind of leveling off that you see in the green curve: resolutions of the models haven't kept pace with what you might assume if you were following Moore's law, which is the blue curve showing where they might go. This is shown for different classes of model complexity, just atmospheric GCMs, or atmosphere-ocean GCMs, or Earth system models that include a biosphere, and in all of these the resolution stays well below where you might expect. At GFDL, for example, where I worked, from the Manabe and Bryan paper, the first coupled model from 50 years ago, 1969, to the model we submitted to CMIP6, Isaac Held et al. in 2019, over 50 years the model resolution increased by about 10x. It did not increase by anything like the orders of magnitude you might expect if you were simply following the computing curve. What can these new computers do? They can do a certain class of problem very well: dense linear algebra. Take, for example, a simple correlation problem: you have a certain set of inputs, shown in red, and outputs, shown in blue, and you're computing the relationship between the inputs and the outputs. If you do a simple regression between those two sets of variables, the number of arithmetic operations can be seen graphically as the number of black lines; if you use a one-layer neural network, or a deep neural network, the density of arithmetic keeps going up, but the number of inputs and outputs stays the same. So you can do much denser problems, and these new chips do that extremely well. This is one reason why a lot of work is going into making every problem look like a deep learning problem. And one thing people have done with that is perfect emulation, even of chaotic systems. This is a famous paper from about five years ago; "model-free prediction of chaotic systems," the title says it very clearly. You take a classic chaotic system, the Kuramoto-Sivashinsky system, you learn it using a technique called a recurrent neural network, and the third panel here shows the difference between the two: you see that a chaotic system has been emulated more or less perfectly. The problem is that it's just simulating the system for given values of the three parameters in the model; if you change them, it doesn't know what to do at all, because it's just a perfect emulation. Climate is a non-stationary problem. Several papers have been written showing that if you train on present-day climate and try to predict future climate, you make errors, but if you can train on future climates as well, then you do a very good job. How do you do that? You don't train on real-world observations, because you don't have observations of the future; you train on models that have been run into the future, and you try to emulate those.
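The point that a perfect emulator knows nothing about parameter settings it was never trained on can be shown with a system far simpler than Kuramoto-Sivashinsky. This toy is my illustration, not from the talk: it fits a one-step emulator of the logistic map at one parameter value and then applies it at another (a stand-in for "a different climate"):

```python
import numpy as np

def logistic_series(r, n=2000, x0=0.5):
    """Iterate the chaotic logistic map x[i+1] = r * x[i] * (1 - x[i])."""
    xs = np.empty(n)
    xs[0] = x0
    for i in range(n - 1):
        xs[i + 1] = r * xs[i] * (1.0 - xs[i])
    return xs

# "Train" a one-step emulator at a single parameter value...
train = logistic_series(r=3.7)
coeffs = np.polyfit(train[:-1], train[1:], deg=2)  # quadratic: the exact family

# ...it reproduces the r = 3.7 dynamics essentially perfectly,
err_same = np.abs(np.polyval(coeffs, train[:-1]) - train[1:]).max()

# ...but under a changed parameter it is systematically wrong:
test = logistic_series(r=3.9)
err_diff = np.abs(np.polyval(coeffs, test[:-1]) - test[1:]).max()
print(err_same, err_diff)  # ~1e-15 vs ~0.05 error per step
```

The emulator is "perfect" in-sample yet has no concept of the parameter it was fit under, which is exactly the non-stationarity trap for climate.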
One way to look at that is through a phrase I use: "Charney's ladder." Jule Charney, one of the founders of our field, built the very first model of all, in 1950, on the ENIAC. It was a two-layer or three-layer model, depending on how you're counting, highly simple compared to today, and he predicted that as we got bigger and bigger computers, we could increase complexity; he called it climbing the ladder. So what we can do is run very-high-resolution simulations. You can't run them for very long, but you can run some extremely-high-resolution simulations, which would need to be what we call LES, or large eddy simulation, models, and you can learn from them. That gives you some confidence if you learn from them across various kinds of regimes, some of which might be representative of climates warmer than today; if you learn all their behaviors, you might be able to do something. But simply learning what's going on in the observations is not good enough. So how do we do this learning? There are several steps to it. Say there's a certain number of parameters that you're using to parameterize high-resolution processes in your model. You have to do three things: find good values of those parameters, or at least eliminate bad values of those parameters, and quantify the uncertainty. Running the forward model, your GCM, is very expensive, so you want to do that as few times as possible: explore all of the parameter space while using very few forward model runs. There are two kinds of uncertainty we want to explore here, as we've talked about: parametric uncertainty and structural uncertainty. You want to at least be able to diagnose that your model is wrong. There is a two-stage process in which we do this: the model is composed of many coupled processes; you try to tune, or calibrate, each one of those, and then you apply some global constraints to the coupled system. This is a tricky process, as we demonstrated in a recent paper I wrote with a postdoc of mine in France. It's very important to know what cost function you're using; Tapio gave an example that you must use certain kinds of statistics, and not simple values, in your cost function. If you have many metrics that you want to minimize, all of them, then you have to decide how you normalize them against each other, or weight them against each other. There's a way to get around this weighting problem, but in theory you can pose it as a question about the purpose of the model: what is it for? Different people want the model to do different things, and they might give different weights to a certain metric; they might consider something important versus not important. A good example, which you can demonstrate by looking at the models in CMIP: some centers, for example the ones in India, take the Indian monsoon very seriously and give it a lot of weight in deciding whether they have a good model or not; other centers may not. If you're using observations, you must know whether the observations sample the space sufficiently; similarly, if you're using Charney's ladder, if you're using models higher on the ladder for calibration, are they sampling all possible states, and what are the associated uncertainties? We need to know that.
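The weighting problem can be written down in a few lines. This is a hypothetical illustration, not the actual HighTune cost function; every metric name and number below is made up:

```python
def combined_cost(errors, scales, weights):
    """Weighted, normalized multi-metric calibration cost.

    errors:  raw model-minus-reference errors per metric
    scales:  per-metric normalization (e.g. observational error or
             interannual std), so different units become comparable
    weights: per-metric importance; this is where a center that cares
             about the Indian monsoon would up-weight that metric
    """
    total = sum(weights[k] * (errors[k] / scales[k]) ** 2 for k in errors)
    return total / sum(weights.values())

errors = {"global_mean_T": 0.4, "monsoon_rain": 0.8}   # hypothetical
scales = {"global_mean_T": 0.2, "monsoon_rain": 1.0}
print(combined_cost(errors, scales, {"global_mean_T": 1, "monsoon_rain": 1}))
print(combined_cost(errors, scales, {"global_mean_T": 1, "monsoon_rain": 5}))
```

Two defensible weight choices rank the same model differently, which is the point: the weights encode the purpose of the model, and there is no purpose-free calibration.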
And there are feedbacks on multiple timescales that can be compensating errors; with some of these new techniques that are coming up for calibration, you can at least diagnose some of these things. It's a useful way to think about this problem, and I think in the machine learning era you can do this more formally than we were doing in the past. For example, a project I was involved in in France is called HighTune, which stands for tuning towards high-resolution models. The formalism is not that important and I'm not going to walk through the details, but basically you compute what the second equation shows, something called the implausibility function. The lambdas are the parameters you can tune in the model, and the implausibility says, for a given metric that you're using, how far you are from your observations, or whatever state you're tuning towards. It's a Euclidean distance that you normalize by a certain error quantity, and you try to keep the implausibility less than some threshold T, usually measured in standard deviations; three standard deviations is a kind of statistical rule of thumb for what makes something likely or not likely. The space that's left is called NROY: "not ruled out yet." Notice it's not doing an optimization; it is doing elimination. It is eliminating areas of parameter space that do not correspond to the data you're working towards. And instead of applying a weight function, you can use different metrics in different waves: for one metric you compute your implausibility and you're left with a certain parameter uncertainty space, and then within that you search again using a different metric, and you do that sequentially until you get to an NROY space that is small enough that you can actually run the forward model using it, with only a few parameters to search. One thing to notice is that the NROY space might actually turn out to be a null space, which means that for your given formulation of the equations, there is no setting of parameters that corresponds to the data you're working towards. That means you have structural error in your model, and you need to rethink your parameterizations.
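In symbols, the implausibility for one metric is roughly I(lambda) = |z - f(lambda)| / sqrt(sigma_obs^2 + sigma_struct^2 + sigma_emul^2), and NROY is the set of lambda with I(lambda) < T. Here is a minimal sketch of one wave, with a stand-in emulator and invented error terms, assuming a two-parameter model:

```python
import numpy as np

rng = np.random.default_rng(1)

def emulator(lam):
    """Stand-in for an emulated model metric f(lambda); purely illustrative."""
    return 2.0 * lam[..., 0] + lam[..., 1] ** 2

z = 3.0             # "observed" value of the metric
sigma_obs = 0.2     # observational error
sigma_struct = 0.3  # structural (model discrepancy) error
sigma_emul = 0.1    # emulator error
T = 3.0             # implausibility threshold, in standard deviations

# One wave of history matching: sample parameter space, rule out
# implausible settings; what survives is the NROY space.
lam = rng.uniform(0.0, 2.0, size=(100_000, 2))
denom = np.sqrt(sigma_obs**2 + sigma_struct**2 + sigma_emul**2)
impl = np.abs(z - emulator(lam)) / denom
nroy = lam[impl < T]
print(f"{len(nroy) / len(lam):.1%} of parameter space not ruled out yet")
# If nroy were empty, no parameter setting matches the data:
# evidence of structural error in the model formulation.
```

Note that nothing is optimized: each wave only discards parameter settings, which is what makes an empty NROY space such a useful diagnosis.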
A similar approach to this one is used by CliMA, which I won't talk about very much. I'm sorry, two minutes left? Okay, I'll try to hurry then. There's an important point I want to make: many people are now using machine learning to work towards reanalysis datasets, or what I'm calling constrained models. We can talk about this later if you like, but basically what this paper is showing is quite an important result: the Hadley cell circulation appears, in a lot of climate models from the CMIP5 era, to be declining with time, while a lot of reanalysis models, which are the ones shown in green, show the opposite signal for a couple of decades. And what the paper concludes is that this actually arises from the fact that the reanalysis models do not necessarily conserve certain quantities correctly, and this might be an error that is present in the reanalyses but not present in the models, which actually try to conserve those quantities. So it may be that the so-called models are right and the so-called observations are wrong. The other use of emulators I want to talk about is for sampling the counterfactuals. You want to know, for example, for a decarbonization strategy, whether you focus only on CO2, or on methane, CO2, and N2O, and so forth: which targets are important. All of these are done by looking at single-forcing runs, which are again counterfactuals; they cannot be done in reality, and there are too many forcings to consider. So this is never done with models; it has all been done with emulators, which have been built as fits to models. This was alluded to in an earlier presentation as well. For example, here you're looking at the emulators used in AR6 to reduce the CMIP6 spread. Because we have the so-called hot model problem, where a lot of models show very high climate sensitivity, and the IPCC wanted to show a bound of likely uncertainties, they used various techniques, based on regressing against the latter half of the 20th century, to constrain this range of uncertainties. Digital twins were also mentioned earlier. This is just Google Ngrams; I measured the number of times digital twins have been referred to. There was a brief spurt back around 2003, when they were applied to engineered systems; recently there's been an explosion in the use of the word. But when you're saying you're building a twin of something that's chaotic and not even well understood, I think you're really committing overreach, as I've argued in a recent paper. This is another version of the Hawkins and Sutton figure, made by a former intern of mine, Mackenzie Blenosa; it was also referred to in the previous two talks. Instead of a global measure, you're looking at the different contributions to uncertainty at local scales, in Seattle, Montreal, and Lagos, and depending on where you are, it turns out that model uncertainty may be important, or internal variability might dominate. So there are limits to predictability at local scales that we need to keep in mind. Let me go to my final slide. I want to argue that decision-making for climate, as opposed to weather, requires traceable model hierarchies: you need to be able to run both high-resolution and low-resolution models, because one alone will not give you the answers you want. You need counterfactuals as much as, or even more than, predictions. Climate is not weather: there are going to be model-free methods taking over in weather, and I think that's entirely appropriate, but the same models may or may not work for climate, so we need to take that into account. Some people are arguing that you can do everything directly from data and you don't need models anymore; there was an article in the Guardian, for example, titled "Are we witnessing the dawn of post-theory science?" We can argue about that. Exercise caution when using reanalysis for training, per the paper I showed. Computers are getting bigger, not faster; I've said that before. There is always a loss function. You all know the phrase "all models are wrong, but some are useful"; I also want to point out that all models are calibrated. What the machine learning era brings, I think, is that you can do calibration very quickly, and you can do fast sampling of uncertainty using emulators. We need to be very transparent about uncertainty and tuning; I think that's a reporting requirement in order for our models to be useful downstream, by other models. So I'll stop here, thank you.
Thank you, Balaji. Any questions? Everybody's taking notes. Maybe I'll start.
So I wasn't familiar with this Charney's ladder idea, but given what you've just said about starting at sort of the top of the ladder and walking down it, at least if I interpreted this correctly, that sort of implies we have a top of the ladder to start from. You mentioned LES as one possible way of starting there, for example training on that and moving down the ladder. Does that imply that this approach only works for certain subsets of problems, where maybe we already have the capability of high resolution and just can't do enough of them, or run them for long enough? Is that true? And if so, what are those subsets of problems where you think this is most effective?
Yeah, so that's a great question. I think Tapio pointed this out as well: anything that requires, for example, condensed water, where you depend on microphysics, is happening on micron scales or smaller, so you're never going to get to direct simulations of those. You can simulate turbulence; you can even go beyond LES and do what are called direct numerical simulations, where you don't even use a turbulence closure. So you can treat that whole hierarchy of models, yes, and I do argue (the title of the paper is in fact "Climbing Down Charney's Ladder") that we must go down, and the upper limit might just come from the fact that computing has hit a limit and we need to stop somewhere, because there are some computing constraints. But I believe we can learn a lot from idealized models. We do not need to run at very high resolution over the globe using present-day conditions, using the present-day distribution of continents and all that. My argument is that you don't need that, because you actually want to learn universal physics that will generalize well to the future. I think you can do that very well using idealized models at very high resolution; you can learn the physics from that, is my argument.
Thank you. Okay, John, go for it.
Yeah, I'm not sure I understand what you mean by weather forecasting being done in a model-free method. Could you help me understand that?
Okay, so what some of these papers that have been referenced, Pangu-Weather, GraphCast, and others, are doing is they take an array of reanalysis data and they learn to emulate it more or less exactly. They do an extremely good job of emulating it, and they can use that to forecast forward. There is no sense in which you understand anything; it's a neural network whose weights are not interpretable. But, and they put an asterisk there, they're not exactly model-free, right? I just told you they were trained on ERA5, and ERA5 is produced by a model: the model was responsible for making sure that the variables they're training on are physically consistent with each other and respect certain covariances. So there is a model behind it. But even granting that this is model-free in the weather problem, I would argue that you basically want to know if it's raining or not tomorrow, whether to take an umbrella; you don't care how that answer is produced. There is no need for counterfactuals, no need for understanding, if you interpret it strictly. But for climate, you have to think about this very carefully, because climate decision-making is not simply "do I take an umbrella or not"; it's a much more complex set of decisions, and it's done differently. So treating climate as though it's a kind of extended weather problem can lead us down the garden path.
So that's kind of where I'm arguing there may be a divergence, where ML makes huge inroads into weather, but those same models, when applied to climate, may not do as well. Does that make sense?
It does, but I don't agree. I think that you have to understand how weather systems work in order to understand how the climate system works; I think there's no escaping that for climate.
I don't disagree at all. Okay, completely agree.
Okay, Galen, go for it.
Yeah, thanks. I just wanted to carry on from what I think Libby's question was getting at a little bit, which is that we have a lot of processes in the system that we don't know, right? The land carbon sink, the ocean carbon sink, other processes. If we focus on just idealized models, how then do we move those things forward in terms of understanding, say, carbon-climate feedbacks? Because we don't even know what resolution we need for those processes either, right? We don't have an ERA5 for those that we can just train on. So how do we integrate the pieces, I guess, is the question.
Yeah, so I'm quite convinced, as you probably are, Galen, that these are the critical problems which we need to be focusing attention on: what is the land sink, what is the ocean sink, and how are they going to change over the next few decades depending on various scenarios? We don't have clear answers. We don't have first-principles ways of defining the land carbon sink; it will be empirical models of one kind or another. But I do think the ladder idea still applies. You have to figure out how processes that are happening at one resolution aggregate at a different, lower resolution. I don't think that's an ill-posed question; I think that's a well-posed question that has some answer. I don't know what it is, but there is some answer to it, and there are ways of testing we can develop to see whether these kinds of models, which you can think of, again, as emulations of a very-high-resolution system model, work. They're not idealized in any sense; I agree with you there. It's not like an LES, which is simply computing turbulence over some sort of boundary condition; it's much more complex than that. But I completely agree with you that these are the big problems. If you ask me where we should be focusing attention, I would say it's exactly what you said: what is the fate of the land and ocean carbon sinks? And I don't think high-resolution models can tell you that, because you'll never be able to run them for long enough to answer anything useful.
All right, with that, thank you, Balaji. And with that we're done with session two, and we get to move on to session three, which is our panel, and Ruby will take over from here.
Great, thank you very much. So Amy and I will be moderating this panel discussion. First of all, I want to welcome back all of our speakers. We have great speakers, and we have heard a lot of very interesting points, so will all the speakers turn on your video so we can see you? Yep. So I think how we are going to moderate this panel discussion is like this: we already set up three different questions, which we listed over here, and I can elaborate on these questions a little bit. We are going to ask these questions to our panelists to hear what their thoughts are.
Since we heard from the first three speakers from the decision-making perspective, and then from another three speakers from the climate modeling perspective, each side has heard the other side, right? So then I think we can have a dialogue about these three questions, and subsequently we probably might still have some time, and we will open it up for our BASC members to ask other questions. We would also check Slido to see if there are any other questions for the panelists, and Amy will moderate that part. So that's how we're going to do it; at least let's give it a try. Okay, so let's go to the three questions that we listed over here. I don't think we need to keep looking at these; maybe it's better to see the faces of the speakers. Okay. All right, so as we saw, the first question is basically trying to see what the gaps in uncertainty are between what's needed in climate-informed decision-making and what climate modeling can provide. As I said, because the speakers have heard from each side already, I would like to dive into this a little bit more, with three sub-questions related to this broader question. We heard a lot about needing to know the uncertainty, but we haven't heard too much about what level of uncertainty is acceptable for decision-making. And is it enough to just be quantifying and characterizing uncertainty, or do we actually need to reduce uncertainty? And then the third sub-question is how accurate that characterization of uncertainty needs to be, because potentially we can have a false sense of certainty, thinking, oh, we have already narrowed down the uncertainty. I think Isla gave us a really good example about the tropical eastern Pacific sea surface temperature warming. In the CMIP5 models, about half of the models say the warming will be in the eastern tropical Pacific, and the other half say the warming will be in the central tropical Pacific; then all of a sudden, in the CMIP6 models, almost every single model says the warming will be over the eastern tropical Pacific. But based on what Isla suggests, maybe that's actually wrong; maybe all the models are wrong, and we are getting a false sense of certainty. So we would like to hear from our speakers: is there a particular level of uncertainty that would be acceptable? Is it enough simply to characterize and quantify uncertainty, or do we need to reduce it? And what's acceptable in terms of how accurate that characterization is? So I would like to open it up to see who would like to respond to this first question, as well as the three sub-questions I just asked. The panelists can raise a hand if they want to answer; I'll give them some time to think about it. Okay, Jennifer.
Yeah, I'm happy to dive in and just get the conversation started. If we can look at changes on the order of, is it 15 percent, is it 20 percent, something in that order of magnitude, we're perfectly fine with that information. The sea surface temperature example, where all the models give the wrong answer, is really problematic, because that undermines confidence in the climate community.
Thank you very much, Jennifer, and I see several other hands up as well.
I take your point, Jennifer, but I think it's worth mentioning; it's a point that was raised by Isla as well.
I showed a graph that, as you go to smaller and smaller scales, from global to regional to local, the role of internal variability just keeps getting bigger and bigger. So there is a limit, and we cannot hide that limit. And when there is a limit, I don't think that should undermine confidence in models; we are acknowledging it. We are saying that there are limits, I mean, not limits to understanding, but limits to predictability, even though you may understand the system perfectly. So that's where we are, and what I would argue needs to be done is to report this very carefully, not to pretend that it doesn't exist.
All right, and then we saw also a hand: okay, Klaus.
Ruby usually asks really good questions. Let me try to make two points. One, what is acceptable depends on the decision, the decision-maker, and how you communicate it. It's a trite observation, but it's clear. But secondly, maybe we are not the right panel to answer this, because, you know, if I want to have a car gearbox fixed, I don't talk to a carpenter who knows woodwork. How to communicate information is typically studied by psychologists and decision analysts. We have someone here who does decision analysis, but this is a problem where, as smart as physicists or climate scientists are, sometimes you need training in a discipline to avoid putting your foot in your mouth. Having said this, let me just make one more observation: how to communicate uncertainty is context-dependent, and there is also such a thing as too little uncertainty, because if people see there's overconfidence, it doesn't pass the laugh test. People don't engage, and then people just pick a number and say, oh, they're all crazy to start with, why don't we just use one. And I think that also has bad outcomes. Thank you.
All right, thank you very much, Klaus. Rob?
I'm going to push back a little bit, at least on the way you phrased the question, in two ways. Clearly the scientific community should try to reduce uncertainty, because that's what scientists do and that's what science is based on, so that ought to be going on. From my perspective, there were a couple of interesting things missing from this conversation. One is the question of when we know things. There was, for instance, a lot of discussion about internal variability and how big it is, but also questions like: how far away from a particular realized state do I need to be before I can differentiate that state from some other state? There's really interesting work going on in the California water world, where people are trying to understand what sorts of patterns you see in the western Pacific a couple of months out that determine how you run your reservoirs now. So yes, there's a lot of internal variability, but there are signals at various time scales, in that case months, but maybe years, that can tell us things. And if I don't know what the end state is, but I know when I'm going to know the end state, I can design a strategy around that. There was no discussion of that at all, so that's a whole other dimension you might look at. And then, while the climate science community is reducing uncertainty, there's a whole question of how you take the information that's currently available and make it usable by decision-makers. So it's part of the communication problem. I showed the example of an expert elicitation,
where you just go ask the people. And so, I mean, all the discussion was on climate models, which is in part what we're discussing today, but how do you get the information in the models and the modeling community into the hands of decision-makers in a way that is useful now, so that they can act on it? Again, the expert elicitation is one way, but there are other ways. Tapio talked a little bit about APIs and databases, which is important, but how do you transfer what you know now so people can act on it, and then how do you think about when we might know more, and how that process unfolds, so people can think about adaptive strategies that know where the bifurcations are?
Great, thank you very much. Are there any others who might want to address this question? I was particularly thinking about this problem of a false sense of certainty, perhaps. Okay, so I see Isla and then Tapio.
Yeah, I guess I just wanted to come back to the comment someone made about that specific example undermining the credibility of climate science. I certainly hope that's not what it does. I think we need to accept that all models are wrong, and we don't have everything represented correctly, and we're in climate change right now; it's only now that we can start to really see these things and become aware of these problems. And I guess what we need to do is communicate that to the people making decisions and say: here's what we don't know, but there's a chance this could be going this way; how would this change how you would deal with climate change? If in reality there's, you know, a 90 percent chance that the Southwest is going to become even drier than it is now, whereas models say on average zero change, how differently would you make your decisions? And then it seems like there needs to be some two-way interaction that would allow the climate modelers to know: okay, this is what we really need to know and nail down, because people would make very different decisions if this were the case. So, yeah, more communication along those lines, I guess.
Great, thank you very much. Tapio?
I want to build on what Isla and Rob said, both of whom I agree with. With Rob, I want to separate the problem of having climate model output with quantified uncertainties from how you actually use it in decision-making, the decision-making we've heard a lot about. The Pacific example that Isla mentioned I think is actually a good one: if you build a model that is designed to learn from data, and is using mismatches between what's simulated and what's observed to quantify errors or improve the model, well, that's actually great. Then you have an example that allows you to improve either your calibration or your uncertainty quantification, ideally both. So it's not undermining anything; it's just how science is supposed to work, and we need to formalize using discrepancies and such data much better than we have. That's the first point. Regarding communication, one important point often comes up in climate discussions: Balaji said that internal variability is irreducible. Well, yes, it is, but I think that's asking the wrong question. Jennifer, without trying to put words in your mouth, but let me try to put words in your mouth: I think what you care about is knowing the probability that a flood exceeds a certain threshold, say the level of the first floor on a street, or whatever it might be, and you want to know that probability already now.
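Framing internal variability as part of a risk calculation, as suggested here, is simple to sketch: given pooled ensemble output for a present and a future window, the exceedance probability of a design threshold is a counting exercise. The numbers below are invented stand-ins, not results from any model:

```python
import numpy as np

def exceedance_probability(samples, threshold):
    """Fraction of pooled ensemble samples exceeding a design threshold.
    samples: e.g. annual-maximum values pooled across members and years
    within one time window."""
    return (np.asarray(samples) > threshold).mean()

rng = np.random.default_rng(2)
# Toy stand-ins for annual-maximum flood stages (members x years, pooled):
present = rng.gumbel(loc=2.0, scale=0.5, size=5000)
future = rng.gumbel(loc=2.3, scale=0.6, size=5000)  # shifted mean AND variance
first_floor = 3.5  # hypothetical design threshold, e.g. first-floor height

print(exceedance_probability(present, first_floor))
print(exceedance_probability(future, first_floor))
```

In this framing, internal variability is not a nuisance term to subtract out; it is part of the distribution that sets the probability being planned against, for the present day as much as for the future.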
And already now we don't have good information on that; it's mostly based on historical data, which already now are not adequate for guiding that decision. And then we also want to know this in the future. So it becomes a question of probabilities of, say, exceeding thresholds, and already now that's in part driven by internal variability, which controls that probability in the end. So it's not a nuisance, it's not an uncertainty in that sense; it's something that controls a probability that we need to know already for the present day. And engineers wouldn't treat this as an uncertainty factor, as I think Jennifer pointed out, but as a risk that you plan for. And so it is in the future: that probability might shift by the mean shifting, or by the variance changing, and the like, and you can take all of that into account. But I think it's important to reframe this discussion a bit in that direction, because that's the direction that matters for planning. And then the fundamental point underneath all of that is that you need to quantify all uncertainties. We are not currently doing this. The work that Isla showed, and that Clara Deser and others at NCAR started, with more rigorously quantifying initial-condition uncertainty, is one important step, but that's the only uncertainty that we're currently quantifying in any rigorous way. We're not quantifying any form of model uncertainty rigorously, neither parametric nor structural, and I think that's the crucial piece that we need to take on.
Thank you very much, Tapio. Anyone else want another round of answering these questions after hearing some of these answers? Okay, Balaji.
Yeah, can I just make a quick response to what Tapio said? IPSL has recently published a paper where they are trying to quantify parametric uncertainty, to the extent that they're saying there are a number of parameter choices which are all equally consistent with data, and we don't have a preference between them. How useful that is, I don't know, because unless you publish that entire ensemble, other people cannot use it. But I think honesty is the first step: you have to tell people that this is the limit of what we know, is the point I'm trying to make.
Great, thank you. All right, Jennifer.
Right, and I really appreciate the conversation around this, and listening to what I was saying. I would agree with most of what the speakers are saying, and a lot of this is communication. The message that's being received by some of the stakeholder communities is a number, without a whole lot of additional information about the uncertainty and how to use that information. So I think the sectors that are trying to use the information, if they're not used to quantifying uncertainty and really dealing with risk-based frameworks, need to be moving in those general directions. At the same time, the communication needs to be stronger between the climate science community and the sectors, to make certain that the information being communicated is also being taken into account when we look at the models, because it isn't historical data; it's a model.
Great, thank you very much, Jennifer. Any others? Okay, so let's move to the second question.
So our second question is: what are the key factors driving the gaps that we have heard about? From the decision-making perspective, we need uncertainty information, but oftentimes uncertainty information may not be available; and then we also heard from the climate modeling side that decision-makers need certain information, but in climate modeling it's really difficult for us to provide some of it. So this question is really open-ended, but I would like each speaker to pick the two top factors that you think are driving the gaps, because there can be many, right? I just want you to think about: if you were to pick the top two factors driving the gaps between the uncertainty information we need to help with decision-making and the uncertainty information that can be provided by climate modeling, what would they be? I'll give you some time to think about it, but anyone who's ready, please raise your hand, or feel free to change my question a bit if it doesn't fit. Yeah, please.
It might be a spillover from the last discussion, and forgive my ignorance, but can changes in internal variability be related to the slow climate creep? Is that another source of uncertainty: how much will the variability change as the climate changes? I don't know if anybody works on that; that's where I'm ignorant. Is that another issue?
Yeah, I think we can definitely hear from some of our speakers on that. It's a big question: we have mainly been focusing our effort on trying to understand and quantify the internal variability uncertainty, let alone understanding how internal variability might be changing because of climate change. But let's hear. Okay, I see two hands up for now. Isla?
Yeah, I've forgotten what I was going to say now, because I was thinking about changes in internal variability. Definitely, that's a topic that people are looking at: people are looking at changes in variance, for example, and the large ensembles allow us to do that. I think what I was going to say about the gaps is that a lot of it seems to be a matter of communication somehow. I feel like it's hard for people outside of the climate modeling community to really grasp the nature of internal variability. An example that we saw in our lab the other day: Google Earth Engine put up a climate projection from one member of CCSM4 that showed cooling over the US, and, you know, people would just take that as being the climate change signal, and it's not at all. Somehow this is not being absorbed by everyone who needs to absorb it, so I think we need to do something better to communicate it. And then also, in terms of what models are fit for: there are definitely things that our global models don't get right, and users may not necessarily know that a model is just not the right thing to look at for their particular problem. So I think a lot could be done with just more communication, but obviously everyone's a bit stretched, and there are limits to what everyone can do.
Great, thank you very much, Isla. Klaus?
Those are great points. Let me raise another one, which I think is maybe pretty obvious and trite: it's education. To do this well, one needs to understand the Earth system and climate system, decision analysis, and statistics, to suggest a few examples; there are many more.
Typically people are not trained to work across those three fields, and many more, and even today we saw evidence where people asked questions that were at the edge of people's comfort range. So the question is: how do we actually tackle this challenge, that we need people who have the expertise to go in deep, but who are also able to connect and communicate well? And this is, to some extent, not just on the producer side of information but also on the user side. Again, I don't want to put words in Jennifer's mouth; I'm an engineer myself, and I can speak for myself. When we educate engineers, or people who go out into the world with an undergraduate degree, the amount of knowledge they have about uncertainty and climate adaptation, well, there's room for growth, let's put it this way. And if you want an informed populace that can actually use the information we have, we need to make sure that we have given them at least a chance at the appropriate training, and I think right now there is room for growth. Thank you.
All right, thank you very much, Klaus. Tapio?
On gaps, I think there are two critical ones. Number one is the connection from data to models. As I mentioned, I think the key area for improving models and for reducing and quantifying uncertainties is exploiting data, as Balaji also emphasized. The second key gap is from models to information that people actually can and want to use. We tend to focus on the communication aspect, and I think that's hugely important; Rob mentioned boundary organizations, and I think those are hugely important. But we haven't even tried to see how far you can go with improved technology. I think the key thing we're missing is an easy way to traverse climate simulation output and the plethora of Earth observations we have right now. If you want to look at Earth observations, it's typically one grad student per dataset: the data handling is complicated, and there are all sorts of different formats. What machine learning tools in particular give you is adding value to data by exploiting correlations that may not be obvious, and we cannot do this right now. We cannot harness the potential of AI in an effective way for climate adaptation, for providing information to users, because climate simulations are in one place (who knows where they'll go once we have higher-resolution simulations) and data are in all sorts of other places, and it becomes really hard to jointly exploit simulations and data. I think the first step there should be the platformization of climate data, the observational data and the simulation output, and on top of that you can build a whole range of tools for easily communicating flood risks, fire risks, hurricane risks, whatever it might be, to users: give users, consumers, and businesses tools in their hands that are a joy to play with and explore. And then we can ask the communication question again: what else is missing, so that people actually use it in decision-making? Everyone is used to using weather apps; there are no climate apps that allow you to explore climate outcomes in any meaningful way, and I think we should build them.
That's a very interesting idea; thank you very much. Okay, Jennifer.
So I really like that idea, and one of the things, when I was listening to the talks this morning, was that I heard about national datasets for pretty much every agency, and yet for our stakeholder communities we don't have those sorts of toolboxes
that we can play with, that are available to us, and that have information in a lot of different forms. NOAA has tried to pull some of these together, but they're really very ad hoc. So being able to have that sandpit where we can engage, play with the data, and understand where the areas of high uncertainty are versus where we have a fair amount of confidence would be absolutely lovely, as well as knowing the areas where we can't ever give a stakeholder information about what's going to happen, at every three seconds or at the finer scales. So I think that would really move the field forward, if we had the ability to look nationally at some of these challenges.
Great, thank you very much, Jennifer. Any other responses to this question about the gaps and the factors driving them? Okay, Rob.
Yeah, I mean, this echoes a lot of what's been said, but from a different vantage. Let me just build on what Jennifer talked about. If you've got engineers, they've got many, many decades of professional practice built on stationary climate data, and we're trying to figure out how to take information about a different system and match it to those processes. So I think one of the gaps is this co-production, co-design between the climate science community and all these individual communities of practice, to figure out what the minimum changes are that they need to make in what they do, in order to use the information that climate scientists can produce. And it does go beyond communication, which has this sort of weather feel to it, right? People have always been trying to decide whether to bring an umbrella or not, and we're going to provide better information so they can do that. Here, people are following carefully constructed practices, practices you need for legal reasons, safety reasons, all sorts of reasons, and they're figuring out how to redo them with this new type of information, which is new in part because it comes from models, and new because it has a different character of uncertainty to it, and so forth. And when you actually do this sort of thing, you can get very big changes in the way people use information, in the design of systems, and so forth. Just an example that we went through recently: we were doing warning systems with a small town. We went in with one conception, which we all had, on all sides, of what it would look like. But once you got into what the data actually looked like, what you could get out of the sensors, what people were comfortable with in terms of signal to noise, and who was making decisions, the whole thing got redesigned. It went from a siren notion, where we were going to warn the whole community, to an individualized, app-based system, where people would get information customized to them and then decide what to do. So when you match the information available with the decision space, you get a whole different thing. I think there's a whole area of work there between the climate community and these different decision communities.
All right, thank you very much, Rob. And we have Balaji and then Jennifer again.
Yeah, so this is more of a question than an answer to anything, I think, but there's a presumption I'm hearing in this entire discussion. We are using this word "information," but what we seem to be seeking is numbers. Does information have to be quantitative to be useful? This is a question, because I have a feeling that in that case we might be disappointed, but we might be able to give qualitative answers to certain questions that are useful. There are some questions for which I don't even know the sign of the answer; I don't know, for example, whether planting a lot of trees is even the right sign of an answer in terms of a mitigation signal. So this is a question for anybody who wants to take it up: does everything have to be quantitative to be useful?
Yeah, I think this is a great question, Balaji, because oftentimes as scientists we like to explain things. We may not necessarily give a precise number, but we like to be able to explain, say, why we should be expecting more moisture in the air, or why extreme precipitation might be increasing, without necessarily being precise in terms of the number. Would that kind of information and explanation be useful? I don't know whether anyone would like to... Jennifer?
I'll answer the question. From my perspective, sometimes just the direction is really helpful: it's going to get higher, it's going to get lower; at least that gives us a fighting chance. Or "we have no idea"; that's information too. I did want to sign off, so I'm going to let people put words in my mouth after I leave; I have to go catch a flight. But I want to thank the BASC coordinators for a really lovely session, and for inviting me to come and participate. So thank you so much.
Thank you very much. All right, Efi, and then... yeah, Efi, please.
And Jennifer, before you leave: there is a study that both Ruby and I are part of, a National Academies study on modernizing the probable maximum precipitation, that you're very aware of, and of course, under climate change, there's non-stationarity. For those that don't know, probable maximum precipitation (PMP) was defined as the theoretical upper limit of precipitation that is ever possible to fall over an area, over a duration, at a given time of the year, and this is how we design high-risk dams or nuclear power plants. Of course, you have to change the process by which we define this PMP now. What I wanted to say is that here in the US there will be lots of studies to come up with revised PMP, and national standards for producing them; but I was impressed to see that in Switzerland they have already done some analysis, and Switzerland has basically been classified into regions of 1.2, 1.4, meaning the factor by which you amplify the PMP to design your dam or to upgrade your older facility. It's not endless research: for the engineers, there is a product; you are in the region of 1.4 times PMP. So there are the two extremes: as an engineer, as you said, you just need to build something, versus all the uncertainty that we, the scientists and the engineers, try to nail down. It's interesting to see the two different paths. I just wanted to bring up that example.
That's great, thank you very much, Efi, and thank you, Jennifer. All right, I see Bob has his hand up.
Sure, thanks. I want to ask the question I sort of got out in the chat a little bit during the first half. As we think of users, I feel like even when we're talking about this from a decision-centered approach, we're still
All right, I see Bob has his hand up.

Sure, thanks. I want to ask the question I sort of got at in the chat a little during the first half. As we think of users, I feel like even when we're talking about this from a decision-centered approach, we're still thinking about a decision that is ultimately owned by an agency, or by somebody who is responsible for carrying it out. But in fact, many of the decisions we're talking about are things that play out over decades, with a whole bunch of stakeholders involved, and it's not just that we want all their values represented: we have to think about, if we're coming up with a plan or some use of the data, what's the duration, and what is our ability to actually implement a plan given all the factors exogenous to what we're doing that could affect the results? How do we make sure what we're doing is actually useful, given that there's a lot of political economy, for instance, between modeling flood hazard and looking at idealized adaptation pathways on the one hand, and actually implementing adaptation pathways on the other? How do we think about that, and make sure that when we consider the realities of decision-making we're not operating in a hyper-idealized decision-maker world, and don't end up going down a path that is costly and involves a lot of very high-fidelity modeling, when the noise of political economy is going to interrupt any connection between that high-fidelity modeling and decision making once you get a year or two down the road?

All right, so would anyone like to respond or comment on Bob's question? Okay. With that, I think it's really important for us to get to the third question, though we can always come back to some of these previous questions and comments as well. The last question that we set up: we heard a lot about gaps, so it's important for us to talk about opportunities. What are the opportunities to close the gaps? For example, we heard a lot from Rob and also Klaus about maybe flipping the way we think about things, not necessarily going from climate modeling to decisions, but also from decisions to climate modeling, and about learning and iteration, not just one way or the other. Learning has been emphasized a lot; how do we facilitate that? We also heard about some of the newer tools: AI, machine learning, high-resolution modeling. Are these tools providing opportunities for us to perhaps close the gaps? And any other comments you might have related to opportunities for closing the gaps. So again, I'll give you a bit of time; anyone who's ready, let me know.

I very much tend to go to process answers for questions like that, because I think the specific answers are very context-dependent. But the opportunity is that it's starting to get pretty hard to find any organization across the country that does not think it needs to think about changing climate and its impacts on them, and there's actually money towards that. So there's lots of opportunity for interaction between climate scientists and people making decisions, and, as we've had some discussion both verbally and in the chat, there are lots of opportunities to start doing test cases of what information is most useful, both quantitative and qualitative, and what decision makers need in particular contexts. And that's both true for specific examples, and it also helps with generalization. So much
of what people do is heuristic-based, because it's worked a whole bunch of times in the past, so that's just what you do. So: develop new heuristics, so that most of these decisions can be pretty quick. The example Effie gave, where Switzerland went through and gives you a factor, is like the Sunset gardening map, right? There are twelve regions, and you look up the region and you know what to plant. They did that for the probable maximum precipitation. You'd like as many decisions as possible to be really easy: you just do this, and there's a different heuristic for that. And then you need a way to kick it out to the cases where you really need to think hard. Jennifer showed the tier one, tier two, tier three framework, and presumably if you're doing a tier three you spend a lot more on, say, downscaled climate data than you do on a tier one. Again, it's that sort of screening tool, but how do you do the screening? How do you do a quick climate screening for most decisions, and determine which ones need a lot more work? Just doing cases and developing those heuristics is, I think, where we need to go, and there are lots of opportunities for engaging with that.

Okay, thank you very much, Rob.
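As a concrete illustration of the screening idea Rob describes, quick heuristics for most decisions with a rule that kicks the hard cases up to deeper analysis, here is a hypothetical sketch. The tiers echo the tier-one/two/three framing mentioned above, but the criteria and thresholds are invented for illustration and correspond to no actual agency guidance.

```python
# Hypothetical sketch of a tiered climate-screening rule: cheap heuristics for
# most decisions, escalation for the few that need deep analysis.
# Thresholds and criteria are invented for illustration only.

def screening_tier(design_life_years: float,
                   consequence_of_failure: str,  # "low" | "medium" | "high"
                   climate_sensitive: bool) -> int:
    """Return 1 (use the standard heuristic), 2 (apply a published regional
    adjustment factor), or 3 (commission a detailed downscaled climate study)."""
    if not climate_sensitive or design_life_years < 10:
        return 1  # climate change negligible over the decision horizon
    if consequence_of_failure == "high":
        return 3  # dams, nuclear plants: full study regardless of cost
    if design_life_years >= 30 or consequence_of_failure == "medium":
        return 2  # long-lived or consequential: apply published factors
    return 1

print(screening_tier(50, "high", True))  # -> 3
print(screening_tier(20, "low", True))   # -> 1
```

The design point is the one Rob makes: the screening itself has to be quick and cheap, so that expensive climate analysis is spent only where the decision warrants it.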
Klaus?

There seems to be a pattern here where I follow up on Rob. Rob, this is a great point as usual, and you make the point that there is more demand. But I do also think that we're in a situation where the supply is limiting, and becoming more and more so. So let me just ask a question: of everybody that reaches out to you and says, Rob, can you please help us, how many people do you have to turn down who would actually benefit from more information? And do we really educate the appropriate number of people, with the appropriate skills, to engage in this kind of work that is needed and that people ask for?

Am I supposed to answer that, or will you? You know, I actually don't have a good answer to that, and that may be another question: looking at the skills gap and the skills pipeline might be a really interesting set of questions. We get lots of inquiries. AGU's got its Thriving Earth Exchange, where they send people out to work on things, but it probably engages about as many people as they have capacity to engage. So yes, actually looking at the skills gap and the skills pipeline, what can you do, what level of training do you need to do this work: I think that's a great question.

All right, thank you very much. Tapio?

I want to in part answer Klaus's question, the workforce question. Someone, I forget whether it was Klaus or Rob, said that the number of people producing climate models is far smaller than the number of people wanting to use them. There's a huge opportunity here. There's a market now for climate information; people estimate it at 40 billion dollars a year or so. As Rob said, climate information affects just about every longer-term decision, and we know that climate models have problems, and we know that we are not adequately communicating the information. I think that's a tremendous opportunity: we are working in an area where we deal with data and computing, in an area that matters to people. It's what early-career scientists, the best scientists of the young generation, want to do, scientists and engineers alike, and we need to offer them on-ramps to work in this area. We need to grow the workforce, and I think that's essential. A challenge is that a lot of climate modeling, and a lot of what we're talking about in making the information useful, is happening at large centers, which are not the easiest on-ramp for students to get involved in this area. So one corollary, to me, is that universities need to get much more involved. Climate modeling is not something that has happened at universities in the last few decades; it started there, but it migrated out. The more applied work that we are talking about here is likewise very sparsely represented at universities, at best. Take hazard modeling: it's mostly concentrated in the private sector, and as a result it's pretty closed off. A lot more of that could happen in the open, at universities, where, I guarantee you from what I see of our students, they gravitate towards this kind of work: interesting science and engineering problems that matter. So we need to offer them on-ramps, and there need to be funding avenues to fund this research at universities, or wherever the next generation can find a way in. I think we need to capitalize on this opportunity more than we currently do. That's number one.

The other opportunity, to me, is in exploiting data more broadly across the board, and maybe this is in part an answer to Rob's important question. As it is, we're talking as if there is some grand decision maker who makes the decisions, and that's maybe not the best mental model for thinking about it. We'll have to revise these mental models, but I would say we also need to revise the tools. Everyone uses weather forecasts for decision making every day; a lot of businesses do, obviously, and a lot of consumers do, and there isn't a question about "the weather-forecast decision maker" anymore, because the tools are universally accessible and useful. I think we need to get closer to a state where we have climate tools that are universally accessible and useful. Bob in the chat said there are a bunch of tools, you know, the White House has a portal for the CMAP archive, and that's all good, but it's not what I'm talking about. I want tools that are a joy to use, so easy to use that, say, for an engineer, for whom climate is only one of many factors in an engineering decision, they just work. You need to meet people where they are with very easy-to-use tools, and sending them to a White House portal to sift through things, or to any of the other portals we have, is not going to do it. And likewise, I think, the other big opportunity I see...

Thank you very much, Tapio, very well said.
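One concrete version of the easy-to-use tools Tapio is calling for already exists for raw model output: the Pangeo community publishes a cloud-hosted CMIP6 catalog that can be queried in a few lines. This sketch assumes the intake-esm package and its cloud-storage dependencies (e.g., gcsfs) are installed; the catalog URL is the publicly documented Pangeo one, and the particular model and scenario are arbitrary examples.

```python
# Sketch of few-lines-of-code access to CMIP6 output via the publicly hosted
# Pangeo catalog and intake-esm. Model/experiment choices are arbitrary examples.
import intake

cat = intake.open_esm_datastore(
    "https://storage.googleapis.com/cmip6/pangeo-cmip6.json"
)

# Query monthly near-surface air temperature for one model and one scenario.
subset = cat.search(
    source_id="GFDL-ESM4",
    experiment_id="ssp585",
    variable_id="tas",
    table_id="Amon",
)

# Lazily open the matching Zarr stores as xarray datasets.
datasets = subset.to_dataset_dict()
for key, ds in datasets.items():
    print(key, dict(ds.sizes))
```

Of course, this is still a tool for people who already write Python against model archives; Tapio's point is that the bar needs to drop much further, to something an engineer can use the way anyone uses a weather forecast.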
Okay, so we have Mary. Ruby, you also have Linda, who can't get her hand up but wants to ask a question. Okay, who is first? I think Linda... Okay, so let's go with Mary first, since she's going to make a comment.

Well, yes, I want to make a comment and then pose a question, so let me at least make my comment now. Tapio, I stopped myself from cheering as you were talking, because I think you articulated well what a lot of us have been seeing here. I wanted to let the panel know that the board is very focused on climate services, and we actually have a closed session tomorrow to talk about some potential steps forward there. But I think we do have to entrain all of the sectors, and it's sectors in every sense of the word: people that are working in engineering firms today need to be versed in this, and in health the same thing will need to happen. I think there are national efforts needed to bridge between the public sector, the private sector, the philanthropic sector, and the government, and we're not at all moving in that regard; that's been really a focus of the board. I'll put my other question out there now, though you might not want to take it before we're done with the session: your talks were just great, and I would love to hear from each of the panelists whether there's an area where you think BASC or the broader Academies could be helpful. We'd love to hear that from you.

Yeah, definitely, that is an important question we should end with. Okay, so, Linda.

Yeah, thanks, Ruby. This is, I think, a question related to everything we've been discussing, and I'm sorry that Jennifer left: one of the panelists described a situation where they've worked with stakeholders and the reaction was, oh well, there's just too much uncertainty, we cannot make a decision. It's really not clear to me to what degree the uncertainty that exists now is actually a problem for decision making.

That goes back to the very first question that we discussed. So, does anyone want to respond to that? Rob, definitely.

I mean, people need to make decisions, and obviously not doing anything is also a decision. And if there's too much uncertainty, I think the tendency is more to just ignore that whole factor, and often to pretend it's zero.

What do you mean, ignore that whole factor? No climate change?

Yeah. If climate is too uncertain, you just ignore it, pretend it doesn't exist, and make your decision without it. In other words, you make your decision, but the question is what evidence you admit into the decision-making process.

But are you saying you have actually experienced that? I know you have worked with many stakeholders; were there situations where the stakeholders just threw up their hands and said, too much uncertainty, we're going to ignore climate?

Well, I think that was the case for a very long time, and the FDC is currently arguing about that very thing. So yes, there are certain examples.

Well, I would say there are also counterexamples, like the State of California. California is one place where we know the projections for precipitation change have always been ambiguous, either positive or negative, but still the state continues to ask for information. Yeah, anyway. All right, so let's move to Bob and then Balaji.
Yeah, so I think this is actually a question you may want to take in the panel, and we can delay it if you want, but basically I've been trying to listen for themes of, okay, is there something where the Academies weighing in could make a difference? I think one thing that's emerged, particularly in the last ten minutes, is this focus on training and workforce development for this space; that seems like one such area. I also heard Tapio saying we don't have adequate public data infrastructure in this country, which is another potential area. So, apologies, Ruby, if I am jumping ahead on your panel, but what I wanted to ask is basically: where do you think the Academies, either on a short-term basis with workshops, or, recognizing that the Academies are often a little slow, with a two-year study process, where is there a need for external voices to come in and comment to the agencies, or otherwise to convene people?

Right. So let's give a heads-up to all the panelists, because our last question for you is what you might suggest that BASC or the National Academies do about this, since we have heard a lot about the gaps in uncertainty and decision making. But before that, let's go to Balaji, and then we'll have every panelist give us suggestions for where the National Academies or BASC might work in this space. Balaji?

Yeah, okay. I had two remarks to make; maybe I'll make just one. The remark I wanted to make is that this discussion seems to have been, I don't know whether by intention or just by accident, almost entirely focused on climate adaptation; I've heard almost no talk about climate mitigation. And I believe your third question, Ruby, was about opportunities. I do believe there are a lot of opportunities in the mitigation space to improve, in particular, carbon-cycle modeling and other aspects of the climate system which are perhaps getting insufficient attention. So that's an opportunity area you could look at, and maybe that's a comment for BASC as well: yes, we should be thinking about adaptation, but let's not forget mitigation. And there are a lot of deep scientific problems there, like Galen mentioned in one of her questions: what is the size of the land sink and the ocean sink, and what is going to happen to them if we don't do anything, or, more importantly, even if we do, if the planet gets onto a net-zero pathway, what happens to the land and ocean carbon sinks? I think that question is still unanswered. So that's an opportunity that I see. I'll just stop there; the second question was a little technical, and I think it's probably too late for that.

Okay, thank you. But please do let us know that comment by email.
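For readers who want the sink question Balaji raises written down: schematically, the global carbon budget balances emissions against the land and ocean sinks, roughly as below (a standard identity, stated here informally).

```latex
% Schematic global carbon budget: the atmospheric CO2 growth rate equals
% emissions (fossil plus land use) minus the land and ocean sinks.
\[
  \frac{\mathrm{d}C_{\mathrm{atm}}}{\mathrm{d}t}
    = E_{\mathrm{fossil}} + E_{\mathrm{land\;use}}
    - S_{\mathrm{land}} - S_{\mathrm{ocean}}
\]
```

Under a net-zero pathway the emission terms go to zero, but the sink terms do not vanish instantly; how quickly the land and ocean sinks weaken, or even reverse, is precisely the open question of whether atmospheric CO2 then declines, holds steady, or rebounds.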
All right. So we come to the last question before we wrap up with any other remaining questions: we would love to hear from the panelists what your suggestions might be for what the National Academies and BASC could take on in this space. Okay, I'm assuming Bob's... all right, Eila?

Yeah, I think: bringing together the climate modelers and the decision makers. Back to Linda's question, I guess I don't really ever interact with stakeholders, but I would very much like to know, as an analyzer of climate models, where the biggest problems are for decision making, where the uncertainties are that people can't handle, because that should be our focus. So, bringing together the communities and finding ways to improve the communication between them.

All right, thank you very much. Tapio?

I think the number one priority, to me, would be to provide some roadmap for organizing climate model output and data jointly. I think this will hit us very soon if we're not prepared for it. Ruby, you and DOE are producing high-resolution simulations, and many more will be coming; we have all this data. This is, in a way, like building the interstate highway system: it will create a whole industry of things you can do with data that right now we can't do. The resources to do it are there: computing resources in the government, within DOE and the like; technological resources are a bit more challenging. But provide a roadmap of what needs to be done, so that it at least gets on people's agenda that we should do something about it; otherwise it will come and hit us unprepared. And of course the National Academies can help by making recommendations to the agencies for funding research, and by making it clear that the areas we are talking about, both on the modeling side and on the adaptation side closer to the stakeholders, are super interesting science and engineering problems with direct applications. I love what I do because it's fundamental science, but it translates immediately into something that can be useful, and there are few areas where you have that direct translation. And in our field there's this dichotomy between climate science and climate modeling; they're viewed as different things, and they shouldn't be. Suki Manabe was not a climate modeler; he was an excellent climate scientist who built fantastic climate models. We need to get back to more of that mindset, and BASC and the National Academies can help here by recommending programs that work directly on climate modeling. There are some, the climate process teams, but even downstream, hazard modeling would be another area where there's great opportunity for publicly funded research that will make a difference and that is, from a science and engineering point of view, really interesting.

Thank you very much, Tapio. Rob?

Three things the Academies might be able to do. One is the workforce question that we've been discussing. Another is evaluation of what's working and what's not in terms of communication and information provision; there are a lot of use cases out there, and when you're actually doing these things it's very hard to get funding to do any evaluation, so that would be useful. And then I'll tee up again the signposts: what might we know, and when, as you think about the different pathways you might go along, which I think is very useful input to decision making but often not highlighted in these discussions.

Thank you very much, Rob. Klaus, I think I saw your hand.

Yes, one quick comment. Tapio, you're right, and there's one thing which connects what you said to what Rob was showing: the fourth blade in the propeller diagram, the responses, and how we go from hazards to risks, which is governed by vulnerability and exposure, is really crucial. And that also goes to the workforce problem: we need climate science, but Earth system science in the broad sense, where
humans are a key driver, impacted by the system but also impacting it. I think that can be rather crucial.

Yep. Anyone else? We've pretty much run out of time, but I hate to just stop; does anyone else have questions that you were not able to bring up, or do we have any questions from Slido?

There were a number from the public. There are a number of questions around bringing it back to user needs: how do we reconcile these questions about the models, at whatever scales we're looking at, with the kinds of decisions and data that Jennifer was talking about, and what are the opportunities there? And then a couple of them came back down to how we think about the connection between the models and those specific numbers, like what Effie was talking about, something that engineers can use directly, and how, as the models iterate, we keep connecting back to the specific needs of decision makers who are looking for specific numbers. Any further comments on that?

Yes, please. Actually, Amy, please.

I've been the most quiet moderator, because you're doing everything, so I don't have anything to add; you've been calling on everybody except for Linda. There you go. So I have a question, just out of ignorance: I understand the astronomy field a lot better than climate change right now. Do you do the equivalent of the astro decadal? Because I think climate change has reached that point now. There are some wonderful things that the astronomy community has done. They've seen this ball rolling, that intense data is going to come in as soon as they launch the LSST at
Vera Rubin, multiple things, and they've self-organized and created systems for when we get a million alerts a night: how are we going to broker those, how are we going to quality-control those, how are we going to send those out. They've self-organized. But also, in astronomy, every ten years they do the decadal survey, with probably close to a thousand white papers, and when it comes out it ranks, for the feds, the most important things in each category, like really large-scale instrumentation, medium-sized instrumentation, and other things. It gives a priority, and we respond to those priorities. I feel like climate change has grown up enough now that that could be useful, and that could be something the Academies and BASC could do too. That, and comms training. I will tell you, we have a postdoc program in astronomy, and every one of those postdocs gets comms training every year; this year it's the Alan Alda school, and we have a comms expert that comes in. And I've got to tell you, I don't give the same talk that I gave twenty years ago. No offense, but y'all give really science-dense talks that I can't reach my next-door neighbor with. So a combination of comms training, and, Mary, you could probably speak to that because I know you went through Spitfire: those two things might be recommendations.

Okay, can I respond on the decadal question? So the Academies do do a decadal survey of the space programs; it's jointly funded by NASA and NOAA, it's a similar approach to the astronomy decadals, and there are a couple of others that our Space Studies Board does. But we haven't really done it holistically for the whole field, maybe since America's Climate Choices, which is probably the most recent version of that, and we've done assessments on modeling strategy about ten years ago. So it's a good question. Individual agencies have asked us to do assessments, of NSF Earth system science for example, but it's been harder to wrap our hands around how you might do that for the whole field. That's the short answer. Oh, and an ocean decadal; Galen, thank you.

And you have the IPCC, right? So there's that recurring cycle that kind of gets in the way, which astronomy doesn't necessarily have. Right.

Sorry, just a quick comment: the Academies did have a program called Science and Engineering Ambassadors, about a decade ago, as a pilot. All of the Academies' presidents happened to be from within about forty miles of Pittsburgh, and so we had a program there; I was one of the ambassadors. It was a nice program, but it was a lot of work, so it didn't wind up scaling, and it doesn't exist anymore.

All right, I think we've really passed the time. I wonder if there are any last comments or a last question. If not, I really want to thank all the speakers on behalf of the planning committee. Really wonderful talks, lots of great points, and lots of suggestions for us to think about. Thank you very much for your time and your preparation for your presentations. So now I pass the time back... to me. Well, let's give the panelists a round of applause. Thank you so much; you've given us so much to think about. All I'll do is remind people that there's a dinner this evening, and you should have the details in your materials, so I hope to see many of you there. And then, per your agenda, we'll start tomorrow here, same room, right? Yes, 8:30 a.m.
is breakfast, so nine for the meeting. Nine for the meeting, right. I know there are West Coast people; they're still shopping. Okay, so have a good evening, everybody. Goodbye. Thank you.