Let's get started with our last session. We have a panel discussion on the topic of the conference. It'll be led by Andy Wickert.

All right, thanks, Brad, for the introduction. So as we're wrapping up, the conference organizers were thinking that one good way to end this would be to have a bit of an organized conversation to reflect on a number of the things that we've learned and a number of the shared experiences that we've had. And for that, they asked me to facilitate our panel of four folks, three here, and then you can see Merritt on the screen. Before we get started, I'd just like everyone to introduce themselves. And since Merritt can't see the rest of us, at least she'll see us with a little bit of a lag. So could you get started, and then we'll go back towards me?

Sure. First thing, can everyone hear me okay? Yes, that's a thumbs up from Andy. Well, thanks so much for the invitation. I'm sorry I can't be there to join you. I've had a rough day, but I'm happy to talk about this topic, which is extreme events and extreme weather, and I've experienced that today; it's why my day has been so rough. My name is Merritt Turetsky. I am the director of INSTAAR, the Institute of Arctic and Alpine Research. I am also a permafrost researcher and wildfire researcher, two major disturbance events that are really triggered by extreme weather. So I'm happy to talk about that later today. Thanks so much for having me here.

Hello, okay, good. Hi everyone, I am Lauren Lowman. I'm an assistant professor at Wake Forest University in the Department of Engineering. I had the opportunity to talk to you all about my research yesterday, but just a quick refresher: I enjoy studying how vegetation responds to extreme weather and climate events.

Hi, I'm Sarah Schanz. I'm an assistant professor at Colorado College, so just 90 miles south of here. I'll be doing that drive later today.
And I am a geomorphologist working on landscape evolution and bedrock rivers, and I work primarily with undergraduate students; we have no graduate students. So I can definitely speak to that pace of research. And let me know if I'm not doing the microphone thing right.

Hello, I'm Chris Vernon. I'm a senior data scientist at Pacific Northwest National Laboratory. I work on projects that are both global and continental in scale that really deal with fundamental science, more or less. We evaluate and research complex interactions between co-evolving human and natural systems. I'm also on the facilitation team for the multi-sector dynamics community of practice and that field of research.

And I'm Andy Wickert, most importantly today, the host. I feel a little bit of existential dread in classifying myself as any specific kind of geoscientist. I work on rivers and glaciers, study sea level, build instrumentation, and do things that used to inspire me and now mostly inspire my students, and through them inspire me.

So with that, thank you everyone for being here. I'm going to kick this off with a few prepared questions that start with the theme of the meeting, and then we're going to move on to some questions that all of you as the audience have. Just before we get started, one thing is that Merritt has experienced the extreme winds of today in ways that have impacted her house, and so we only have her for maybe another 10 or 15 minutes. So just so you know, she might slip away, and that's absolutely fine. My first question is for everyone, and since we're not all here in person and can't read body language, I think what I might do is just move in the same direction, from Merritt towards me. And if you want to jump back and forth off of each other, just make loud body motions. So to start this off, we're talking about extreme events.
And one thing is that these extremes, by definition, are things that we don't measure very often; we have really poor counting statistics of them in our data sets. And that seems like an opportunity for models: we can simulate systems and start to create really robust statistics of these extremes. But this creates its own complexities: are our models actually correctly representing the system? If we start to model things that we can't actually measure, where are we just extrapolating into the unknown? So are there ways that we can think about what goes into the assimilation of data into models, or other ways to creatively view data-model statistics? And what do you think the future of data, models, and extreme events might be? That's the broad scope. Merritt, would you kick it off?

It's perhaps one of the most challenging questions, but it is really fascinating to think about from the perspective of Arctic change and what we're seeing now. And I like, Andy, that you brought up the word opportunity, because I think when we're thinking about extreme events, it is both a challenge and an opportunity. The first point I wanted to make, just to bring us all into my way of thinking, is that extreme events are the way most people are going to see, hear, and really feel in their own livelihoods the effects of climate change. And so whatever we can do to not only represent these extremes in our science, but also represent these extremes in our science communication, is going to go a long way toward bringing the public along with us in this journey and challenge that Andy kicked us off with. So I think that needs to be part of our discussion today: as we are pushing the envelope scientifically and methodologically ourselves, how do we communicate out that boundary and that expansion of our knowledge to the public?
We have a real disconnect between science and society on climate change, but particularly around extreme events. And some of that comes right out of dialogue here at CU Boulder, thinking about some of the facets of hurricane impacts, for example. So how we communicate this out to the public and stimulate real, authentic discussion and debate, not only amongst ourselves but also with different sectors of the public, is really going to shape policy in the next few decades.

So I've wandered around and haven't really directly addressed your question, Andy, but let me use an example from my own research. I study a particular type of permafrost thaw, which is thermokarst, or abrupt permafrost thaw. This is an extreme form of permafrost degradation. It occurs in a very narrow set of conditions in the Arctic, and it leads to a very particular manifestation on the landscape. It is not currently represented in any earth system model, nor is it represented in any large-scale model. And so the approach that our group has taken is a toolbox approach; this is something that Irina Overeem has published on in EOS, thinking about permafrost toolboxes. There is going to be no one-size-fits-all solution to pushing that knowledge envelope on extreme events. We're going to need a variety of approaches, from data analytics to numerical models to conceptual models, perhaps feeding into our earth system models. And so I think if we take that diverse toolbox approach and think about how to integrate ways of knowing across those different tools within our toolbox, we have some hope at really advancing our state of knowledge. So I might just stop there and turn things over to our other wonderful panelists, but thanks for that great question.

Is it on again? Okay, great. Yeah, Merritt, I really appreciated your response, and I want to build off of it. I agree with what Merritt said: I think we have the toolbox.
I think we have a lot of the tools needed to tackle this issue of extreme events that aren't necessarily captured in our data sets. But I think what we need to change a bit is our mindset: just being more okay with uncertainty and non-stationarity, and reevaluating assumptions that are inherent in the modeling techniques or the data sets that we're using. An example from my work: I spent some time building a predictive phenology model, trying to predict how plants are going to grow and senesce under different meteorological and soil conditions, and hopefully to use that in future climate models to see how plants are going to change our water and carbon cycling under future climate projections. But a big challenge is that we don't really know how plants are going to behave and adapt. And one thing that we saw is that the data you use to train your predictive model completely changes your plants' water-use behavior. If you train your model on, say, data from a really dry period, then your plants are going to be much more conservative water users in any future projection that you make. Conversely, if you train your model on very wet conditions, then your plants are going to be very aggressive water users in any prediction that you make. And so part of it is that we don't know exactly what the future will look like, but if we embrace this uncertainty and our understanding of non-stationarity, we can at least see what different trajectories could look like and get an idea of a good range. So I think it's really using the tools that we already have, but challenging our own underlying assumptions and making sure that we really understand how to interpret the results under these very uncertain conditions.
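To make that training-period effect concrete, here is a minimal numerical sketch. The saturating "true" response curve, the noise level, and all the numbers are entirely hypothetical, not the actual model discussed above: a linear surrogate for plant water use is fit to samples from a dry period versus a wet period, and the two surrogates then disagree under the same future conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_flux(sm):
    """Toy saturating plant water-use response (entirely hypothetical)."""
    return sm / (0.2 + sm)

# Soil-moisture samples from two hypothetical training climates
sm_dry = rng.uniform(0.05, 0.25, 500)
sm_wet = rng.uniform(0.50, 0.90, 500)

def fit_slope(sm):
    """Least-squares slope of a no-intercept linear surrogate, flux ~ a * sm."""
    flux = true_flux(sm) + rng.normal(0, 0.01, sm.size)
    return float(sm @ flux / (sm @ sm))

a_dry, a_wet = fit_slope(sm_dry), fit_slope(sm_wet)

# Project both surrogates under the same "future" soil moisture:
# the projections disagree purely because of the training period.
sm_future = 0.6
print(a_dry * sm_future, a_wet * sm_future)
```

The point is not the particular curve; any nonlinear response fit with a simple surrogate will inherit the climate of its training window.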
Yeah, I'm going to ping off uncertainty, mostly because I model landscapes and processes over millions of years, so there's a ton of uncertainty: our equations are not very exact. So how are we going to apply this to extreme events? I think what we really get at there is uncertainty. We have a lot of uncertainty in our equations, but if we can continuously, or reliably, keep producing similar forecasts, it's probably going to happen. I think that's a hurdle for those of us who think on longer timescales: we know we have a lot of uncertainty, and we're a little uncomfortable with that, but at some point we're going to have to get comfortable with the fact that there is uncertainty. We're going to have to move forward to thinking about how this applies to extreme events, and think about how to incorporate that uncertainty into our results, our implications, and our science communication. As Merritt brought up, science communication is a big part of it. And when I'm talking about rivers evolving over millions of years, I have that disconnect: I don't know how to make this important to somebody who's really concerned about the flood and the debris flow that just happened now. So that's also something to work on: how can we connect those long-term models with the data and the things that are going on in the here and now?

One example: for human-induced floods that I've worked on, we have a lot of uncertainty in sediment transport. But if we run those really uncertain sediment transport models, we see that, oh yeah, these large human-induced floods did strip off all the sediment on these creeks, on these bedrock rivers, in 10 years. So yeah, that was pretty important.
And maybe it wasn't exactly 10 years in reality, given the uncertainty we have in our sediment transport models, but it still gives us that timescale of response: this was something that happened immediately, and that you would need to take immediate action on. You don't have a thousand-year leeway.

Yeah, great responses; I'll try to build on that as well. Excuse me. In terms of having the tools to do what we need to get the job done, to understand what we can, and to evaluate deep uncertainty, the things we don't necessarily know how to describe or know how to look for, I think the realm of exploratory modeling can provide a lot of insight, as can how we develop our tools to expand the scope of exploratory modeling. Under that umbrella, we can take our models to a therapist, if you will, to find out all the details about them, by doing global parameter-space sensitivity analysis, for instance, where we begin to look at the bounds of what our models can do. We can then use those results to understand first- and second-order, or higher-order, interactions: contributions to the variance in our outcomes that we can start to understand, describe, and rank. That leads us towards being able to pre-calibrate our models, so that when we do uncertainty characterization or quantification, we have a starting point that is robust from the get-go. And then we can use a lot of modern tools, like you're describing, that help us to evaluate not just a deterministic run, where we're saying this is the truth, but a realm of runs that represent alternative futures. I think that gives a lot of confidence in building those distributions, where we can look at the tails as well.
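As a sketch of the kind of global, variance-based sensitivity analysis described above, here is a toy first-order (Sobol-style) estimate. The three-parameter model is made up, and the crude binned estimator is only for illustration; real applications use dedicated tools and more careful estimators.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x1, x2, x3):
    """Toy stand-in for a process model: x2 dominates the output variance."""
    return x1 + 5.0 * x2**2 + 0.1 * x3

n = 200_000
x = rng.uniform(0, 1, (3, n))
y = model(*x)

def first_order(xi, y, bins=50):
    """Crude binned estimate of S_i = Var(E[Y | X_i]) / Var(Y)."""
    edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, xi, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return float(cond_means.var() / y.var())

# Rank the inputs by their first-order contribution to output variance
S = [first_order(x[i], y) for i in range(3)]
print(S)
```

Ranking the indices like this is the first step toward pre-calibration: parameters with negligible first-order (and total-order) indices can often be fixed before formal uncertainty quantification.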
Well, thank you everyone for that. I think the way that arc of answers went really helps to build toward the next question I wanted to ask everyone, so I'll cycle all the way back. It relates to a topic that we've also talked about this week, which is bottlenecks. That's something that we often consider, especially in computational science, because computation exists as part of a long chain that starts with vague ideas and campfire discussions of the way the world works, eventually evolves through data collection and data analysis, with all the assumptions built in there and all the propagation of uncertainty, and then continues all the way through questions of data dissemination and communication of model results, with those same uncertainties, as well as questions of the initial biases that we have, and of how the membership in our communities determines the questions that we ask and the places that we ask them. So what I'm wondering, in this broader scope that goes all the way from society to the depths of numerical solutions to differential equations, is: where do you think the CSDMS community is most poised to make changes, and/or what do you think are the most important changes that we have to make to open science and improve its availability, to advance knowledge and share it? So, back to Merritt?

Thanks again for this question. I'm actually going to share my screen really quickly, and hopefully you can see some pretty pictures of the Arctic. Let me blow that up. Can you all see that slide? Is that a yes? Okay, I think that's a thumbs up; I will proceed until someone tells me to stop. I'd love to answer Andy's question through the lens of some of these images. I talked a little bit about the back-end gaps, in terms of communicating out to the public, because ultimately we do need the public and policymakers to turn around and support our science.
But I want to switch and talk about some of the up-front bottlenecks, if you will. In the Arctic, it's perhaps a little more straightforward to answer this question, because we are incredibly data poor. We are having challenges right now jumping from our observations and measurements, which can be extremely data rich in very few locations, to marrying those with more regional- to global-scale data sets, for example those obtained by remote sensing. There are a lot of assumptions and steps that go into that joining of different data streams, and that, in my field at least, represents a major bottleneck. So here's an example. If we're thinking about extreme events in the Arctic, it's these really fine-scale landscape and geomorphological alterations that are driving a lot of extreme changes. These happen in probably less than about 10% of the Arctic in terms of landscape area, in a very narrow portion of landscape attributes, and overwhelmingly our measurements miss these kinds of features. And the data that do capture thermokarst, or permafrost thaw, in these ground-ice-rich locations are definitely biased towards some portions of the landscape and not others: trajectories that tend to lead to wetting. You hear a lot about methane bubbling up from thaw lakes, and certainly that is important, but it is not the only kind of abrupt thaw that happens on the landscape. Our measurements are overwhelmingly biased towards those methane-producing areas, and we have ignored the thaw areas that lead to oxidation of methane, just as one process-level example of how we as researchers make decisions about our science that can lead to biases. And this is one known bias that we've been able to identify through our synthesis efforts: we are placing our permafrost thaw measurements in some parts of the landscape and not others.
So that is, I think, a really good example of how we need to step back and think about data coverage: thinking about where our data-rich streams are coming from, making sure they are truly representative of the processes that we want to scale, and asking how we do that scaling as we jump up to airborne and satellite-based observations.

I'm going to skip two slides here; I just want to talk about humans as another big bottleneck. Again, in the Arctic, some of the alternative futures that we're trying to model include thinking about ways that society is going to respond in the coming decades to warming and thawing. I want to use food systems as an example. We see the northern boreal and subarctic biome as the next agricultural frontier; in fact, this is what our envelope modeling shows. Farmers are already responding. We're seeing an influx of farmers into places like the Northwest Territories of Canada; they're sensing an economic opportunity to drain and produce food on northern soils that are warming and thawing. So how do we incorporate such an abrupt societal shift into our climate models? Because clearly this is going to impact not only carbon pools, but also albedo, nitrogen pools, et cetera. So thinking about humans, not only human land use today, but major societal decisions about how humans will respond to warming in the future, is something that really, really challenges me, at least. So thank you. I'll stop sharing there.

I similarly feel a pain point in data availability, especially for validation and for model input. I think something that would be helpful would be talking more with the people who actually go into the field and collect the data, and making sure that the parameters that we're using in our models are actually getting measured.
I think that's a big issue: there's this gap between what people in the field are measuring versus what parameters and variables we're actually using in our models. And the second question was about what CSDMS can do to help with some of these pain points. For the other two issues I want to bring up, I think that CSDMS is already doing it, which is really great. The first is that in the geosciences, a lot of us are self-taught coders, and we're not necessarily good at standardizing the models that we write or the code that we develop. Offering these training sessions and helping individuals and students, especially early in their careers, learn how to document code and be trained on FAIR research practices is really, really helpful and is going to help move our coding abilities forward in the future. The other is coming up with these repositories to share code, so that we can do model intercomparison research but also learn from each other. In the past, it's been very hard to get access to other people's code if you wanted to use it, and learning how to be trained on it was very difficult. Having this community and platform to do that work, I think, is tremendous. So I hope that CSDMS continues these activities in the future.

So, one thing: this was actually a theme of some of our discussions three years ago. There's some stuff that happened in between, but I remember we got together and thought about source to sink: what are we doing, from modeling hillslope erosion down to deltas? And it made me realize we do not have the same variables we're passing back and forth at all.
And that becomes really important when we start to think about modeling things like cascading hazards, and we go from systems where maybe we have a rock erodibility or a fracture density, to another system where we just have a D50, to another system where it's the full grain-size distribution and a lot of other parameters, because I don't know my part of the source-to-sink story. But I think that CSDMS has already done a really good job of connecting groups. And so this is another way we can leverage that more, in the framework of these extreme events and of cascading hazards: how do we not just conceptually connect hazards, but also connect them in our models, and make sure that we're able to speak to each other, not just using the same kind of naming conventions but actually passing the same variables that are necessary to accurately look at hazards further down-system? That's one thing.

The other thing, which might be more of a dream big, is: how do we do community-engaged research on these issues in a really effective, inclusive way? These extreme events are not hitting all communities the same; they are disproportionately impacting low-income and already disadvantaged communities that historically we may have had less interaction with. So how do we make meaningful connections and value those communities, and not just signal that we're studying them, or self-perpetuate our own study sites and our own biases in our science? And that may not be true at all, but think of a non-scientist and what they see us working on: they may not see their communities being represented, which means a little bit more of a disengagement from the conversation. So that is definitely thinking big for CSDMS. That might be more in the role of programs like URGE, but it's worth thinking about how we can start to do that in the future. Great.
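One lightweight way to catch that kind of variable mismatch before coupling models is to describe each model's inputs and outputs with shared standard names and check coverage up front. The sketch below uses made-up model variable names and illustrative CSDMS-Standard-Names-style strings, not an official vocabulary:

```python
# Hypothetical variable registries for two coupled models, mapping each
# model's internal names to shared, standard-style names (illustrative only).
hillslope_outputs = {
    "frac_density": "bedrock__fracture_density",
    "sed_flux": "sediment__mass_flow_rate",
}
channel_inputs = {
    "qs_in": "sediment__mass_flow_rate",
    "d50": "sediment__median_diameter",
}

def unmatched_inputs(upstream, downstream):
    """Standard names the downstream model needs but upstream never provides."""
    provided = set(upstream.values())
    return sorted(name for name in downstream.values() if name not in provided)

missing = unmatched_inputs(hillslope_outputs, channel_inputs)
print(missing)  # the grain-size information the channel model needs is never produced upstream
```

The same check scales to a whole source-to-sink chain: run it pairwise down the cascade and the gaps in what's passed between process domains fall out automatically.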
Yeah, I think in the context of extreme events and bottlenecks, more than just the tools or data, I think consideration needs to be paid to the interconnectivity of the world that we live in. When we're evaluating finite, local systems, there need to be more tools and research available that show the teleconnections: that when I pull the string here, or when this action happens here or way over there, somehow they're connected, through the transfer of goods, through services, or whatever the driver may be that creates that interconnection. And those interconnections can co-evolve as well, along with the research. As we're looking at climatic impacts, for instance, there are a lot of things that tie our research together. When we're looking, for instance, at water scarcity: water-scarce regions may not always be a bad thing in terms of damage to the economy of a local region; some economies are more versatile and flexible and can handle the bounce-back a little easier. But when we start evaluating the interconnections between those systems, and you see, well, there's a spread that doesn't really make sense at the long-term scale, you can start isolating your research in those areas.

CSDMS has also done a great job with another aspect, which is making computational resources accessible. If you have an internet connection and a computer, you can have a dream and want to conduct science, right, and you can go do that, because they've given you the resources to be able to do it. I think that's a big step forward that hasn't traditionally been present in the science.
Well, thanks to everyone again, because I think that Chris's comments about these connections, and how if we tug here it might be felt in another place, are really core to the last major question I want to ask, now just to the three of you, which is related to how we deal with a certain tug in our own communities. Over the past couple of years especially, communities around our world have had this really strong stress test: How networked are we? How resilient are we? Which ropes that connect us have broken, and which ones have stayed the same or gotten stronger? And it seems that there is actually a paired mission: creating and enhancing resilience in our own communities, meaning all of us and our colleagues and students and mentors, and also doing the really good-quality science that needs to be done, everything from advancing curiosity about Earth's past and how Earth evolved, to making actionable predictions about the future. So my last question for the three of you, before we open it up to the full audience, is: how do we manage this? Can we find alignment between these? Are these things that we have to do in parallel? Can we do them together, and how? Going back to starting at the far end.

Good question. I guess I can speak from my own experience in terms of how I think about bringing the community into research. For me, it's always been a natural evolution of my interests, and of how that reaches out into the community, brings them along, gets them interested, and also gets their voice into it. And I'd say it started with just talking. It always just starts with conversations, meeting with people in your community.
So when I first moved to Winston-Salem, North Carolina, for this job, one of the very first things I did was just talk to different people in the community and find out, well, what are the key issues that are important to them, and what's bothering them? One of the biggest issues was around flooding in the city. And what I learned from talking to local historians and longtime residents is that our city was originally settled back in the mid-1700s because it was home to natural springs and fresh water. People wanted to live there for that reason, and they built up this large community, which eventually got industrialized, and the water got in the way. So they moved streams and creeks, they concreted over them, and there is now a city on top of all of these underground waterways. Whenever it rains, we have really bad flooding. And so first I thought, well, we have so many great resources in the city. The initial settlers were these early Moravians, and they left these really great, detailed historic maps, going all the way back to the 1750s, of the original waterways. So one of the first things that we did was track and map how these waterways changed over time, and it was something that the community was really interested in seeing. We ended up sharing it and getting their feedback, and this sort of snowballed into a class that I ended up teaching last semester. It was a very fun interdisciplinary class that I co-taught with someone who has a background in religion, exploring the hydrologic and flooding issues in our city from the technical flood-modeling side, but also with ethical frameworks, taking into consideration how to evaluate the impacts and results of this modeling work from the cultural, historical, and economic perspectives in our city. And I learned a lot through it.
One of the most interesting things that actually came out of it is this: my polling place is an elementary school in the city, and this elementary school is literally built on a stream. Its parking lot is on one side of the stream and the school is on the other side, so to go from the parking lot to the school you actually have to walk over a series of three bridges. This is obviously a problem: every time it rains, the school basically floods and the kids can't go to school that day. And it has raised a big question: does the city take tax revenue and renovate the school in some way to make it more flood resilient, or do they move it to a completely different site? Obviously that has an impact on the neighborhood the school is in, on the kids who currently go there, and on whether or not it's going to remain in the same district. Our class came in and learned about that issue. And something we learned, in terms of why the school was even built there, is that initially the county school district only bought the cheapest land, which was land in floodplains, and they built all of the elementary schools there. So the most vulnerable populations, who don't get a voice, are also the ones most impacted, and who are going to continuously be impacted in our community under climate change.
And so this has really informed a direction of more local research that I do with my undergraduate students: getting them involved, getting them talking to the community, and then helping address solutions that take into consideration the actual needs and wants of the community. Not just what is best because the quantitative model says it, but what is actually going to be a solution that people will want to live with; not the most financially viable, say, or the best in terms of what our model says, but really what's going to help the people in this community under these new needs, given the increased flood risk because of climate change.

That's incredible. Yeah, I think I'd follow up on that with something very similar. I'm at an undergraduate-only institution, and wow, having classes where students are doing community-engaged research has been amazing, both for motivating me to get out in the community, learn what their problems are, and listen to them, and for the students to really engage with those ongoing natural hazards, as well as some other, honestly, low-level but constant problems we have along the Front Range, like water issues, which are made worse during these extreme events. And then they can also start to learn how to model and tie into this community, too. Sam, you may have already left, but we already have one undergraduate who did modeling during his undergrad, was able to think about problems that were affecting the community locally, and then moved on to be a master's student, or a PhD student, depending on how well Charlie convinces him. And so I love that connection.
So, as somebody mentioned before, we're all a little bit ad hoc as modelers; I came in for other reasons. And I think it's really nice to have these students coming through classes really intentionally thinking about why we are modeling, how this ties back to my community, or this adopted community of mine, and then being able to go further than I can ever go in terms of modeling prowess. So I think that's a great way to increase our engagement with our local communities and with our scientific community as well. And I have some ambitions now: you know, we can cross-list these classes so they're not just science classes, and really bring in a holistic view.

Yeah, and building on that again, from a fundamental-science perspective, where we're not necessarily stakeholder driven in communities and things like that, we've started, while trying to advance our science objectives, to build applications. One in particular is built on Hector, a simple climate model, and it allows people to open up a web app and manipulate parameters for this simple climate model in real time; the model runs in the back end. The sense of it is that you can use it in the classroom to connect the student, or the individual, whomever that may be, to see: when I change these processes, these are the impacts they may have for my own community or my own region, and this is how that propagates over time. So, going back to what you both said, really connecting individuals to extreme events, or extremes in general, and to the change that communities are going through right now and will over the next 50 to 100 years, and putting them in a place where they can experience and manipulate that from a modeling perspective, adds a lot of depth, I think, to the education.
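The "move a parameter, watch the impact" idea can be sketched with a zero-dimensional energy-balance model. This toy is not Hector; the forcing, feedback, and heat-capacity values are only illustrative stand-ins for the kind of sliders a web app might expose.

```python
import numpy as np

def ebm_warming(forcing_wm2, feedback, heat_capacity, years=100):
    """Zero-dimensional energy-balance toy (not Hector): dT/dt = (F - lam*T)/C."""
    dt = 3.15e7  # seconds per year
    T, out = 0.0, []
    for _ in range(years):
        T += dt * (forcing_wm2 - feedback * T) / heat_capacity
        out.append(T)
    return np.array(out)

# Sweep the climate-feedback parameter, as a user might with a slider;
# equilibrium warming scales like F / lam, so stronger feedback damping
# (larger lam) means less warming for the same forcing.
for lam in (0.8, 1.2, 2.0):  # W m^-2 K^-1
    T = ebm_warming(forcing_wm2=3.7, feedback=lam, heat_capacity=4e8)
    print(lam, round(T[-1], 2))
```

Even at this fidelity, a student can see the qualitative behavior (response timescale, equilibrium sensitivity) that a full model elaborates regionally.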
Thank you to everyone again. With that, I just want to say I really appreciate, first off, the combination of answers: advancing science, finding ways to tie the science into data, into the entire stream that goes through this community, as well as to the communities that we engage with, both inside and outside of our respective institutions. With that, I'm just looking around the audience, and I know a few people have had to go; speaking of extreme events, some people are leaving a bit early, perhaps because of the extreme winds coming, so unfortunately those voices won't be heard. But for those of you remaining, I think we might have time for a couple of questions, so if someone wants to raise their hand, I will be the runner and send the mic out. What do each of you see as future directions for your own research, and more broadly for the CSDMS community? Yeah, what do you each see as future research directions for your own groups, and also for the broader CSDMS community? All right. My research goals are kind of small at the moment, because I'm never sure how much time I'll have for research these days, but I think I brought up that we don't know some of the variables that go along with these source-to-sink connections, and that's really what I've been trying to focus my energy, and my students' energy, towards: can we start to better understand these connections, from hillslope sediment production and the kinds of extreme events that happen there, to what's going on in channels and downstream? I do small things, but we have a lot of students, and they can measure a lot of rocks, so we can start to connect that and see more of a system approach to sediment, rather than discrete process regimes of hillslope first, then streams. So that's my own little thing that I'll be doing for a while.
I guess for my future research, something I'm really interested in is how to translate some of the work I've already done on vegetation responses and make it accessible and applicable across different spatial and temporal scales. In the work that I presented, you could see how I thought about processes at different scales, and it's not necessarily that every individual scale needs its own representation of core Earth processes. I'm really interested in this idea of how you can translate what the really important pieces of information are at smaller scales, and what that means at larger scales, and vice versa. That's a long-term goal, and maybe very wishful thinking on my part that I could tackle it, but it's this issue of how you scale the core information from small scales to large scales, and vice versa. So, taking really large data sets, say: what's the key information, and how can that possibly translate to smaller scales? And what can CSDMS do? I think just being more cognizant of these scaling issues in the models that we build and share. I'm thinking about this because I was in your workshop yesterday, and you have that hexagonal grid, and it made me think of MPAS. I don't know if anyone's familiar with the MPAS model, but it has this very cool mesh that can change in spatial resolution dynamically. So, thinking about how we could make that a resource that's applicable to scientists in different fields, and accessible for their problems, not just for the atmospheric circulation problems that MPAS is currently working on. Yeah, I think for the research that we do, it's really starting to drive again towards more exploratory modeling approaches: not just saying, from a fundamental-science perspective, this is the result, but rather, these are the results, and this is the distribution around them.
I think that begins to build stronger confidence, and we pair that with transparency: we are committed, in both of the main projects I work on, to fully open science, open-source software, and open data, and to the development process therein. So maintaining those side by side, in parallel, is really important. From a CSDMS perspective, I think the challenge with moving into the exploratory modeling realm would more or less be computational: moving the capabilities of things like pymt into more of a cluster space, where they can be more scalable. But I love what's going on now, and I think the directions there are set up to be very successful. Thanks for all your answers. Are there any other questions? Yeah, thanks. I'd like to ask a question that gets back to where you started, talking about uncertainty, and also how this connects with engaging communities and making plans. I'm thinking about last year's fantastic CSDMS keynote talk by Rob Lempert about decision making under deep uncertainty, and the workshop that he did with Moira Zellner, about how, rather than trying to fight uncertainty, sometimes you're better off embracing it: presenting the whole panoply of possible futures to help people make decisions, and then looking at how those play out under different policy responses. And we've seen some really phenomenal work presented here this week about reduced-complexity models that can be run with very low computational demands, so that you can run a whole lot of scenarios quickly without big computational resources.
So I'd like to ask you all: in your work and thinking about these systems, how do you think about the relationship between, on the one hand, the move towards making better, more sophisticated scientific models, with more processes, and process-based models to understand the fundamentals better, and, on the other, connecting those to reduced-complexity models that can be run quickly with much lower computational demands? We do use a lot of emulators with the suite of models that we use, which are global in scale, traditionally emulating climate fields or realizations of different climates. These have been done both bottom-up and top-down, and we're working with MIT currently as well; they have a new approach to doing that too. So for these massive realization runs, seeded by CMIP6 for instance, we could potentially make those resources available: not just the reduced-form models, so that you can reproduce them yourself, but the actual data products and suites of data off of those, readily accessible and easily adjustable into current workflows. That would boost things a little bit as well. From a computationally scalable perspective, when we launch these runs it's easy to fall back on the same old tools from a cluster perspective: MPI, distributing these processes out for each field over different nodes. So I think really getting in tune with computer scientists, to know how to optimize clusters and the operations therein very specifically, to reduce both our carbon footprint and our computational cost, is very important. And then finding a way to make that equitable, so anyone can do that type of research, because that's a bottleneck right now too: institutions can support that kind of expense, but individuals who want to conduct science themselves need that opportunity as well.
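The "many cheap runs" workflow behind this exchange can be sketched in a few lines: sample uncertain parameters, run an inexpensive reduced-complexity model once per sample, and report a distribution of outcomes rather than a single number. The model below is a deliberate placeholder, not any model discussed at the meeting, and the parameter range and ensemble size are illustrative assumptions only.

```python
# Sketch of an exploratory-modeling ensemble: thousands of runs of a
# cheap reduced-complexity model under sampled parameter uncertainty,
# summarized as a distribution instead of one "best" answer.
import random
import statistics

def reduced_model(sensitivity, forcing_wm2):
    """Placeholder reduced-complexity model: warming ~ sensitivity * forcing."""
    return sensitivity * forcing_wm2

random.seed(42)  # reproducible ensemble
outcomes = [
    reduced_model(random.uniform(0.5, 1.5), 4.0)  # sample uncertain sensitivity
    for _ in range(10_000)
]

ranked = sorted(outcomes)
print(f"median outcome: {statistics.median(outcomes):.2f} K")
print(f"5th-95th percentile: {ranked[500]:.2f} - {ranked[9500]:.2f} K")
```

Because each run is cheap, the whole ensemble executes in well under a second on a laptop, which is exactly the property that makes "presenting the whole panoply of possible futures" computationally feasible.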
I think it goes back to how you started your question, around uncertainty. Whether you want to go with a very simple model or a more complex model goes hand in hand with how much uncertainty you're willing to tolerate, and how much certainty you need when you communicate whatever outcome you're modeling to policy makers. So I'll give you an example. I teach a computational modeling class, and I always try to give the students real-world examples, so that when they're calculating their uncertainty, they're thinking about what it means in the context of the problem they're solving. One example is the problem from the movie Hidden Figures, the same problem where she uses Euler's method to find where John Glenn's spacecraft is going to land in the Atlantic Ocean. And the students will go: it's an iterative method, right? The more steps you take, the more your error goes down. And every single time I do this example, a student asks, "I did it once, am I done?" And I say, well, how much confidence do you have that you're going to find John Glenn in a reasonable amount of time? How much do you value his life? And it really helps students understand: for this problem, maybe I cannot have a large amount of uncertainty; I really need to be certain about where he's going to land. But that's not every problem. So I think it's just educating and communicating about which problems actually need more complex models, more complexity, where hopefully we're reducing our error in whatever outcome we're predicting or trying to recreate, versus which problems don't need the exact answer, where we just need to know the direction, right? Positive or negative change.
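The classroom point, that Euler's method is iterative and the error shrinks as you take more steps, can be demonstrated on a toy problem. The spacecraft reentry calculation itself is far more involved, so this sketch uses the simple test equation dy/dt = -y instead, purely to show the error-versus-step-count trade-off the students discover.

```python
# Euler's method on dy/dt = -y, y(0) = 1, whose exact solution is
# y(t) = exp(-t). Taking more (smaller) steps drives the error down,
# which is the "I did it once, am I done?" lesson in miniature.
import math

def euler(f, y0, t0, t1, n_steps):
    """Integrate dy/dt = f(t, y) from t0 to t1 with n_steps Euler steps."""
    h = (t1 - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        y += h * f(t, y)  # one explicit Euler step
        t += h
    return y

exact = math.exp(-1.0)  # true value of y(1)
for n in (1, 10, 100, 1000):
    approx = euler(lambda t, y: -y, 1.0, 0.0, 1.0, n)
    print(f"{n:5d} steps: error = {abs(approx - exact):.6f}")
```

How small the error must be before you stop is exactly the judgment call described above: it depends on the stakes of the problem, not on the algorithm.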
And we can use a much simpler model than something more complex to get to that answer. I would just build off that to say that, in some instances, this might require a lot more collaboration with people who study human psychology, or human migration. We can't honestly make the choice ourselves of what error is appropriate for a given subject, especially extreme events, right? We may want to ask somebody else: at what level do individuals start making decisions, and what certainty or uncertainty do we then need to have in our models? So with that, I'm going to hop back up to the front and thank all of our panelists one more time as we move into the final phase, which I'm guessing Brad might be introducing.