Okay, great. Thanks so much for inviting me to give this webinar. I'm really excited to discuss work that I'm doing along with colleagues at the USGS and other external partners on post-fire debris flows, and in particular on the inundation piece of the post-fire debris flow hazard. I'll start with some background on what debris flows are, why they happen after fires, and the bigger-picture context of hazard assessments for post-fire debris flows. Then I'll describe in some detail a study that I did along with a number of others, testing different models of debris flow runout in the context of the 2018 Montecito, California event.

To start, I want to get us all on the same page about what a debris flow is. To do that, I'm going to include, in theory, a video from one of our monitoring stations at the Chalk Cliffs site in central Colorado. What I hope you can see here is that debris flows are a fast-moving type of landslide. They can reach very high speeds. They originate in steep headwater catchments. They have high sediment concentrations, and they're quite mobile because of high pore pressures, which means that these geophysical flows keep moving onto lower-slope ground. Debris flows can also jam and avulse, they can carry large boulders, and they're more common after fire. So next let's think about why they're more common after fire.

Greg, I may have frozen. Yeah, your slide is not advancing. My slide is not advancing, okay. You might come out of presentation mode and go back into it again. Ah, okay, now things are moving again. I seem to be advancing, so I will let you watch this great video. I'll just note the scale: if I were standing in this channel here, I would be shorter than the distance from the channel bed to this crossbar. I may have frozen again, so in just a moment hopefully I'll regain the ability to advance my slide. Okay, great. That's the only video.

So next I want to talk about the connection between fires and debris flows. On the left-hand side, you can see the uppermost reaches of a catchment in south-central New Mexico where debris flows initiated after a fire in 2020. We know that fire changes soil properties, which leads to increased surface-water runoff. Vegetation is removed by fire, which reduces interception of water and changes surface roughness. Most commonly, we find distributed debris flow initiation through processes like raindrop-driven detachment and transport, sheetwash, and rilling. If you look very closely here, you can see the erosion of the surface soil where the debris flows in these catchments originated. Once started, debris flows can be highly erosive: they entrain sediment and grow in size. On the right, you can see a picture from the same site in New Mexico, maybe a quarter to half a kilometer downstream, where the debris flows that passed through have eroded into the channel.

To illustrate the big-picture hazard of post-fire debris flows, I want us to consider the Thomas fire. We're looking at a perspective map of Montecito, California, in this area here. During the 2017 Thomas fire, the burn severity of which is shown here, a major rain event occurred, resulting in a debris flow. We'll talk a bit more about this event in a little bit.
But after a fire and before a storm, what are the types of things we might want to be able to forecast? For example: when will a debris flow occur, which we might call debris flow likelihood? At present this is addressed using empirical rainfall intensity-duration thresholds based on many years of observations of what rainfall results in debris flow occurrence. You might also want to know how big the debris flows are going to be, what their volume will be. Right now there are a number of tools, but in Southern California we most commonly use a statistical model based on basin characteristics like topography and soils, and on rainfall characteristics. And then, given that a debris flow is going to occur and that it is of a particular size, we may want to know something about where it will go. This is the debris flow inundation aspect of the hazard assessment.

The USGS has worked for a number of years, in collaboration with external partners, on the question of what types of rainfall events will result in debris flow occurrence, and also on the question of how large those flows will be. But we might want to ask: how do we move toward a hazard assessment that says something about where material will go? As you can see from this picture in the bottom right, which is of Interstate 70 in central Colorado through Glenwood Canyon near Glenwood Springs, the Grizzly Creek fire burned both sides of the canyon. Then last summer we had a number of rainstorms that resulted in debris flows, which, as you can see, blocked the interstate and damaged the road; the response to that is still ongoing.

As a result of several decades of work by current and former USGS scientists, including but not limited to Sue Cannon, Jason Kean, Dennis Staley, Joe Gartner, and Francis Rengers, we're able to make maps like the one I'm showing here, which is a statement of the propensity of a basin to yield a debris flow given a design storm. We generate these hazard assessments when asked by partners to do so. But you may note that this hazard assessment only includes information about where debris flows may originate. It does not include information like what is shown here, which is the debris flow runout that occurred after severe rain in January 2018. It also does not include information about building damage, such as the buildings shown here. And unfortunately, in this event, 23 people died.

Something I find quite interesting about post-fire debris flows is that they're really a hazard at all sizes. I've shown you this picture of Glenwood Canyon on the left-hand side. We've talked a little about the Montecito, California event and will talk about it more. And in the upper right, I'm showing an aerial photograph, taken from a helicopter, of the Dodson-Warrendale fan in the Columbia River Gorge. Just over a year ago, this is the location where a passing motorist was killed by a debris flow along this runout path. So we want to be concerned about these events both when they're quite small and when they're quite large, and in particular the small events can be challenging for linear infrastructure like roads. One of the challenges, however, is that it's quite hard to forecast volume. Given the amount of rain we might anticipate,
what types of debris flow volumes might we encounter? The work that underpins the present empirical model we use for forecasting debris flow volume is based on a lot of hard-fought data. To make a statement about forecasting volume, you need to measure debris flow volumes, and there's a lot of imprecision in this type of data; much of it comes from, for example, counting the number of dump trucks used to remove material from debris basins. I think it's quite impressive that we can forecast volume as well as we can, and reducing uncertainty in volume forecasting is a big challenge. To give you a sense of how big a range of uncertainty we have in volume, in this upper right-hand panel I'm showing, for three subdomains of the Montecito event (the Montecito Creek, San Ysidro Creek, and Romero Creek domains), the black line as the mean value we would expect from this volume model as a function of rainfall intensity. The stars and error bars are our best estimate of what the event itself was. And you might notice that the y-axis is on a log scale, reflecting just how wide a range of volumes we might expect.

Another reason that forecasting debris flow runout is challenging is that debris flows jam and avulse. To illustrate that, here you can see an avulsion path on the Dodson-Warrendale fan, and here is an example of coarse material blocking the channel. Debris flows can also build levees which steer the flow, and all of these things taken together can make it quite challenging to forecast where the material goes. Because of this, debris flows and their inundation hazard are not well reflected by existing products like, for example, FEMA flood maps.

With that general introduction to why we might want to forecast debris flow runout, the majority of the talk will focus on how well we can simulate the Montecito event and on what we can learn about forecasting runout by really digging into a multi-model comparison in the context of that event. Then I'll briefly end with some in-progress work on what kind of information professional decision makers want about inundation.

To start, I'm going to go through in a little more detail what happened in the January 9th, 2018 Montecito event. As I said earlier, Montecito is in California, located just east of Santa Barbara, between the Pacific Ocean to the south and the Santa Ynez Mountains to the north. You can see Highway 101 running along its southern side, and there's an alluvial fan to the south of the Santa Ynez Mountains which is quite urbanized. The Thomas fire started on December 4th, 2017, and at the time it was contained it was the largest fire in California history, about a quarter of the size of Rhode Island. On the right, you can see the soil burn severity map for the portion of the Thomas fire to the north of Montecito, which primarily burned at moderate severity, shown in yellow. But you can see from the photos on the left that this is still quite a severe burn state. On January 2nd, the USGS released the post-fire debris flow hazard assessment shown on the left.
And as we've discussed, these debris flow hazard assessments tell us about susceptibility to debris flows, about rainfall initiation thresholds, and about how large debris flows might be, but they don't tell us where the debris flows will go. So our timeline now takes us from December 4th, when the Thomas fire started, to January 2nd. On January 5th, in anticipation of a forecast storm, the Weather Service issued an outlook for debris flows. On January 8th, Santa Barbara County issued its largest-ever evacuation orders. Then, in the early morning hours of January 9th, the Weather Service issued a warning, followed shortly by a 50-year rain burst over Montecito, with storm totals nearing 100 millimeters in some places. This mobilized 680,000 cubic meters of sediment from the hillslopes and channels in the burned Santa Ynez Mountains, and the areas that were inundated by debris are shown in gray, cutting through this urbanized alluvial fan. Very large boulders were mobilized; you can see Francis Rengers for scale in the picture on the left. The event resulted in 23 fatalities, over 167 injuries, and 408 damaged homes. The locations of houses damaged between 1 and 50 percent are shown in orange, along with those that had over 50 percent damage or were completely destroyed.

So how does this event compare, in terms of total mobilized volume and total inundated area, with other debris flows? To put it in context, we can compare it with the existing compilations of debris flows from Griswold and Iverson, shown in purple, and from Bernard and others, shown in brown, which relate the mobilized volume on the x-axis to the inundated planimetric area on the y-axis. This event, again broken up into three main domains, is shown by the yellow, green, and pink diamonds. You can see that, for its volume, this event was highly mobile, inundating a very large area.

As I've mentioned, the USGS right now provides information through our emergency assessments about where and under what rainfall conditions debris flows are expected to occur, but we don't provide information about expected inundation. The study that I'll now describe in more detail tests a number of candidate models to understand their suitability for use in inundation hazard assessment. I'm going to discuss a numerical modeling study in which I set up and ran simulations of this inundation event with three candidate debris flow models that vary in their representation of debris flow physics. Model performance is assessed on the overlap between simulated and observed debris flow extent, as well as on peak flow depths. Each model takes between three and five inputs, which I find useful to separate into two categories: the total volume of moving material, and the mobility or flowability properties of the material. Today we're going to focus primarily on the role of volume in successfully forecasting the event, but we'll also think a little about how well we need to know the material properties. Then we'll compare the best results from the different models and see how well they do. I also included an aspect of the study using expert-judgment, user-manual-based parameter values, to see how that setup compares with the best fits.
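As a rough aside on the volume-to-area comparison mentioned a moment ago: compilations like Griswold and Iverson's are built around a simple power-law scaling between mobilized volume and inundated planimetric area, A = c V^(2/3). Below is a minimal sketch of that relationship; the coefficient value is an assumption for illustration (a value on the order of 20 is often cited for non-volcanic debris flows), not a number used in this study.

```python
def planimetric_area(volume_m3: float, c: float = 20.0) -> float:
    """Empirical planimetric inundation area (m^2) from mobilized volume (m^3).

    Power-law form A = c * V**(2/3). The coefficient c is an assumed,
    illustrative value, not a calibrated parameter from this study.
    """
    return c * volume_m3 ** (2.0 / 3.0)

# Example: roughly the 680,000 m^3 mobilized in the Montecito event
print(f"{planimetric_area(680_000):,.0f} m^2")
```

For reference, as noted above, the Montecito event inundated a much larger area for its volume than most events in these compilations, which is part of what made it so damaging.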
So I want to note that many times when studies are done in a hindcast context, we know the volume, we set the volume, and we explore how we need to modify the material properties to get a good model fit. But because the volume of mobilized material is one of the largest sources of pre-event uncertainty, by exploring the sensitivity of the models to this input in a well-constrained experiment, we set ourselves up to understand how uncertainty in volume propagates into uncertainty in inundation in a hypothetical pre-event context.

Now I'll talk a little more about the three models we used: RAMMS, FLO-2D, and D-Claw. All of the models solve depth-averaged conservation equations supplemented by constitutive relationships; an example simulation is shown in the animation. There are really two main distinctions between these models. First, D-Claw is a bit more complex than the other two because, instead of considering the movement of a single phase of material, it considers solid material embedded in a fluid phase and the interactions between those two phases, and the effective rheology of the material emerges from that interaction. In contrast, RAMMS and FLO-2D are single-phase models, and they represent two alternative ideas for the shear rate versus shear stress, or constitutive, relationship of the fluid, each of which has its own name: the Voellmy and quadratic rheologies, respectively.

I apply each model to three domains: the Montecito Creek domain in orange, the San Ysidro Creek domain in green, and the Romero Creek domain in purple. I treated these areas separately for a few reasons, partly computational, and partly because observations of average sediment concentration based on the sediment deposits and of maximum flow depth suggest these three domains could have behaved quite differently during the event. In the majority of the talk today, I'm going to present results focused on the Montecito Creek domain.

At each site and for each model, I run many simulations varying the input parameters. To do this, I first define the parameter space, sample it with a Latin hypercube design (in this case, at least one hundred times the number of input parameters), and assess each simulation's performance. And as I've said before, we consider a very large volume range surrounding our estimates of flow volume for the event, because we want to understand how the models behave across a wide range of volumes.

So how do I assess simulation performance? I do this in two ways. The first is an extent misfit metric, a modified version of the Omega_T metric. I modified the metric provided in an earlier paper so that it takes its best value, zero, when the simulated and observed extents perfectly overlap, and its worst value, one, when they are completely disjoint; it is a statement of the difference between the simulated and observed extent of material. I also look at peak flow depth, splitting the depth misfit into two parts: delta_U, the normalized sum of underestimates, and delta_O, the normalized sum of overestimates. These are normalized by the sum of all depth measurements within the given runout domain in order to make the values comparable between domains.
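To make those definitions concrete, here is a minimal sketch of how the two misfit pieces could be computed on gridded rasters of simulated and observed flow. The exact form of the published modified Omega_T metric and the field depth-sampling scheme differ in detail; the function names, array layout, and the intersection-over-union stand-in for the extent metric are assumptions for illustration only.

```python
import numpy as np

def extent_misfit(sim_wet: np.ndarray, obs_wet: np.ndarray) -> float:
    """Extent misfit in [0, 1]: 0 = perfect overlap, 1 = completely disjoint.

    sim_wet, obs_wet: boolean rasters of simulated / observed inundation.
    An illustrative intersection-over-union style score, standing in for
    the modified Omega_T used in the study.
    """
    true_pos = np.logical_and(sim_wet, obs_wet).sum()
    union = np.logical_or(sim_wet, obs_wet).sum()
    return 1.0 - true_pos / union if union else 0.0

def depth_misfit(sim_depth: np.ndarray, obs_depth: np.ndarray):
    """Return (delta_U, delta_O): normalized sums of under- and overestimates.

    Both are normalized by the sum of observed depths so that values are
    comparable between domains; the study additionally scales each piece
    onto [0, 1], which this sketch does not attempt.
    """
    total_obs = obs_depth.sum()
    if total_obs == 0:
        return 0.0, 0.0
    diff = sim_depth - obs_depth
    delta_u = np.clip(-diff, 0.0, None).sum() / total_obs  # underestimation
    delta_o = np.clip(diff, 0.0, None).sum() / total_obs   # overestimation
    return delta_u, delta_o
```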
And I split these apart because I wanted to distinguish between over- and underestimation as we explore different volumes. At some point in the study, though, I needed to choose a best fit, and so I needed to combine the extent and depth misfit values. To some extent this is arbitrary, but I think it is reasonable: I combined the three elements (extent, overestimate, and underestimate) such that 50 percent of the weight goes to matching extent and 50 percent to matching depth. And you may have noticed that I ensured these all scale between zero for the best case and one for the worst.

Recall that for each model in each domain, we sampled the parameter space many times. You might ask what the simulations that minimize this misfit metric look like, so we'll now look at those results for the Montecito Creek domain. Here are the results for RAMMS. In dark green is the true positive area, where debris flow was both observed and simulated. Light green is the false positive area, where debris flow was simulated but not observed. And light brown is the area where debris flow was observed but not simulated. You may notice that the simulation matches the observations quite well. Next let's see what it looks like for FLO-2D and D-Claw. The biggest takeaway I'd like you to have is that the results are quite similar, and in addition, many of the places where the simulations struggle to match the observations are the same places. For example, we have overestimation at the red and yellow circles, and underestimation at this purple circle.

Next I'm going to show results that explore how the simulations vary as volume changes. This slide gives you a sense of the type of variation I'm simulating as I change volume: on the left, quite low volumes that substantially under-inundate the simulation domain, and on the right, quite high volumes where the model again does poorly, but in a very different way.

This slide explores how simulation performance is related to volume. Each column is a different model: RAMMS, FLO-2D, and D-Claw from left to right. Each row is a different metric: on top is the combined metric, in the middle is the extent metric, and at the bottom is a combined version of the peak flow depth misfit, which just averages the two depth elements. On the x-axis is the log of the flow volume, and on the y-axis is the metric value, with zero being good and higher values being worse. The vertical black line represents our best estimate of how large the actual flow was, with a somewhat arbitrary error range. Each dot represents a single simulation with different flow mobility and volume parameters. As we expect, we see poor performance at both high and low flow volumes for all metrics and all models. But when we look at the best possible performance, we see that all models do comparably well. We also see that the models perform best when the volume used for the simulations matches the estimated volume of the event; that is, the bottom of the U sits at more or less the same place as the vertical line. I think it's also notable that the models perform their best when they have comparably good values for extent and depth at the same time.
And this is something we would hope to see our models do, matching extent at the same time that they match depth, but it's not necessarily a given, so it's nice to confirm it.

Next I want to look at how the different metrics trade off. For each of the three models in the columns and each of the three domains in the rows, we plot the two depth metrics on the x- and y-axes, and we use color to show the extent metric. If no material comes into the domain, we plot at the blue dot. If material is far too thick everywhere, we plot at the orange dot. If we perfectly match extent and depth, we get a yellow dot at the origin. We might expect the relationship to look something like these two blue lines here, but exactly how closely the simulations follow them, and how scattered they are, tells us a lot about how the models are influenced by both volume and material properties. The results look something like this. Again, what I think is most notable here is, first, that we don't end up at the origin: we never perfectly match the extent or the depth. Second, there's very little scatter around this line, except in the RAMMS model. I think what this really means is that as you change volume, you transition quite smoothly from not inundating enough area deeply enough to inundating too much area too deeply. This gives us a premonition that the material properties and mobility may not matter so much here, which is something we'll dig into in a few slides. I've summarized these points, so now we'll move on to the influence of the non-volume parameters.

On this slide, each panel shows a different non-volume input parameter. At the top you can see it says "D: m0 - m_crit"; that's the D-Claw parameter of initial solid volume fraction minus critical-state solid volume fraction. We have all of the non-volume parameters, with D-Claw parameters marked with a D, FLO-2D with an F, and RAMMS with an R. For the purposes of today, we're not going to pay too much attention to exactly which parameters these are. We have the log of volume on the x-axis and the value of our combined metric on the y-axis, again with the estimated size of this event as the dashed line and the somewhat arbitrary error range as the gray box. If we plot all of our simulations, we see something similar to what we saw before, except now the dots have different colors: I've colored the dots by input quintile. I sampled these values from uniform distributions, so the lowest quintile is yellow, and so forth. The final thing I do is take the conditional mean within each of these quintile bins as a function of volume, so these lines show that conditional mean. If all the lines plot on top of one another, that indicates that the particular parameter has very little influence on the results after we've controlled for volume.
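A minimal sketch of that quintile analysis is below, using pandas. The column names, the example parameter name, and the choice of volume binning are assumptions for illustration; the idea is simply that if the conditional-mean curves for the five quintiles of a parameter collapse onto one another, that parameter has little influence once volume is controlled for.

```python
import pandas as pd

def conditional_means(df: pd.DataFrame, param: str,
                      metric: str = "combined_misfit",
                      vol_col: str = "log10_volume",
                      n_vol_bins: int = 15) -> pd.DataFrame:
    """Mean misfit vs. volume, conditioned on quintile of one input parameter.

    df: one row per simulation, with columns for the sampled parameter,
    the log volume, and the combined misfit (illustrative column names).
    Returns one curve per parameter quintile; overlapping curves suggest
    the parameter matters little after controlling for volume.
    """
    out = df.copy()
    out["param_quintile"] = pd.qcut(out[param], 5, labels=False)
    out["vol_bin"] = pd.cut(out[vol_col], n_vol_bins)
    return (out.groupby(["param_quintile", "vol_bin"], observed=True)[metric]
               .mean()
               .unstack("param_quintile"))

# Hypothetical usage with a table of simulation results:
# curves = conditional_means(results_df, param="voellmy_xi")
# curves.plot()  # one line per quintile of the chosen parameter
```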
And the main thing I would take away from this slide is that, broadly speaking, most of these parameters have very little influence on model performance, indicating that, at least for this event, the most important thing is to get the volume right. This may not be particularly surprising, but I think it really emphasizes the importance of being able to forecast volume, and it motivates exploring the extent to which this result is transferable to other events.

Next, let's ask how performance degrades when we exchange parameter values. Here I'm showing results for the RAMMS model, looking just at the Montecito Creek domain. In the left-hand column, I've used the best-fit parameter values for the Montecito Creek domain itself. For the other three columns, I've used the best-fit values for the San Ysidro Creek domain, the best-fit values for the Romero Creek domain, and then my expert-opinion parameter values. What I'd like you to notice is that there really is not a big difference between these simulations, again emphasizing the importance of being able to forecast volume. We can do the same thing for the FLO-2D model, and it looks quite similar, and finally we can do this for the D-Claw model.

So I'll conclude this first, major part of the talk by summarizing what we've learned from really digging into this particular event. We found that all the models simulate the event quite well, and they struggle in similar parts of the domain. We find that the models perform well at matching inundation patterns and peak flow depths at the same time. Volume is by far the most important input, though, as we saw early on when I compared this event with other debris flows, this may be because of just how mobile this event was. And we find that performance degrades very little when we exchange parameter values from domain to domain.

What I'd like to do in the last couple of minutes is talk a little about ongoing work to generate a user needs assessment of professional decision makers in Southern California. The way I conceptualize why you might want to do something like this is that right now, we have this left-hand blue circle of what is scientifically possible. But as a scientist, I have a number of choices I can make in what studies I do and how I pose them, and if I gain an understanding of how potential users of this type of information may want to use it, that allows me to make choices about what I study and how I study it so that I can maximize the usability of the science. I really think about this as an ongoing, iterative process and conversation: understanding what information decision makers need, how different decision makers need different inputs, and using that to guide choices in setting up studies.

In this work, which was funded by the USGS Risk Community of Practice, along with two colleagues, Veronica Romero and Katie Clifford, we had conversations with county and state emergency managers, floodplain managers, BAER and WERT team leaders, and Weather Service professionals, a wide range of potential professional decision makers who might want to use this type of information. We focused on Southern California because of the greatest need and experience there. We had, I think, 14 or 15 participants.
We found that this group of participants was highly motivated, and almost everyone we approached agreed to participate, which we're very thankful for. We did a series of hour-long, semi-structured interview conversations and then analyzed them through a thematic qualitative coding scheme to understand current practices, stated needs, and existing trade-offs we might face in research.

I'll describe some lessons we learned from this process. The first is about the process of setting up such a study: it was quite important to respect different forms of expertise. Constructing the interview instrument required that all three of us bring our capabilities to the table. It took quite a bit more work than I expected to come up with a good interview instrument, but this really paid off in our eventual analysis, because our conversation guide led us to the information we were interested in understanding. Doing this also required that I articulate places in my research where I actually could make changes based on user input, which was quite different from what I'm used to doing in research. But this was quite important, because spending time thinking about exactly what information could influence the direction of the research is, in a way, the most important way to demonstrate respect for participants: asking questions whose answers we will use.

So what were some of the things we learned? We asked participants what kinds of data or features a potential tool might contain and how they envisioned using it. Some of the answers you can see here; they spanned from deciding where evacuation zones might be delimited, to educating decision makers and the general public, to determining where emergency response personnel and other resources could be allocated. We asked participants to choose between inundation information over a very large area with less detail or over a small area with more detail, and broadly speaking, most participants opted for inundation information over larger areas. When asked what characteristics of debris flows pose the greatest threat, the most common answer was a focus on where debris flows intersect with people and with roadways relevant for egress and emergency access, in part, I think, because many of the thresholds for closing roads are relatively small compared with the flow depths a debris flow can generate.

To close, I'll pose some future directions and questions. One thing I think is quite motivating is how transferable the results I showed at Montecito are to other locations. How do these models compare with one another, and with other existing models, when evaluated against other outputs that may be better indicators of the dynamics of the event, like flow-front and interior distributed velocities? A big-picture challenge is to reduce uncertainty in volume forecasts, and, similarly, to make linkages between field observations and model parameterization. And finally, to be meaningful in a pre-event context, where we don't know what the event will actually be, we have to design meaningful scenarios for inundation hazard assessments. So with that, I think I've left about six minutes for questions, and I'm more than happy to discuss.