Yeah, thank you for having me. My name is Alex Gorr. I'm a PhD candidate at the University of Arizona, and before we get started I'd like to thank my collaborators, Luke McGuire, Ann Youberg, and Francis Rengers, for their help on this project. Today I'd like to talk to you about a model that we developed called ProDF. ProDF is a reduced-complexity debris-flow inundation model, and I'd like to focus on its applicability in rapid hazard assessment scenarios, particularly following wildfire.

Before I talk about the model, I first want to refresh our memory regarding debris flows; I know we had some talks about them yesterday. As a refresher, debris flows are fast-moving mixtures of water, boulders, trees, and other debris. They are characterized by a high sediment concentration, typically greater than 40%, and a boulder-dominated flow front. They are found in mountainous areas across the world, but they are particularly common on steep slopes that have recently burned. This is because wildfire can alter soil properties and decrease vegetation cover, making these slopes especially susceptible to post-wildfire debris flows. Post-wildfire debris flows are becoming more and more prevalent across the western United States as both wildfire frequency and severity continue to increase. Furthermore, the western United States is one of the fastest-growing regions in the country, meaning more people are at risk from the hazards associated with post-wildfire debris flows than ever before. This really underscores the need for a well-established hazard assessment framework that can be used to estimate the impacts of these events.

Currently, we have established methods that can be used to predict how likely a debris flow is to occur following wildfire, and how large that event might be. This information can be combined to create hazard maps like the one we see here, which assign risk factors to watersheds depending on how likely each is to produce a debris flow and how large it might be. However, these hazard assessment frameworks rarely, if ever, consider the downstream impacts of debris flows, because until now there has not been a well-established debris-flow inundation model that has been tested extensively on post-wildfire datasets. Furthermore, existing inundation models, like some that I have listed here, aren't necessarily applicable in rapid hazard assessment scenarios. For example, many are process- or physics-based and require detailed information about flow properties that may not be easily constrained, especially during a rapid hazard assessment. They are also computationally expensive, which makes them less practical in scenarios where hundreds or even thousands of burned watersheds are likely to produce debris flows. We had these issues at the forefront of our minds when we set out to develop a reduced-complexity debris-flow inundation model. We wanted a model that has as few inputs as possible, is computationally efficient while maintaining the accuracy offered by more complex process-based models, and can be easily implemented into the existing hazard assessment framework. With those goals in mind, we came up with ProDF, the progressive debris-flow routing inundation model.
We developed ProDF primarily against the inundation scenario from the 2018 Montecito debris flows in Southern California. This event occurred in January 2018, shortly after the Thomas Fire. Heavy rainfall in the area produced multiple debris flows that ranged in size from about 10,000 to nearly 300,000 cubic meters. In the days following the event, a team from the USGS gathered detailed information on the inundation extent of these flows, shown here by the hatched polygon on this figure. These flows, particularly the five we developed the model on, which I have outlined here, on Montecito, Oak, San Ysidro, Buena Vista, and Romero creeks, were particularly devastating for the community of Montecito. They killed 22 people, injured 108 more, and destroyed over 400 homes, once again underscoring the need for a method that can delineate the downstream hazard zones associated with post-wildfire debris flows.

I mentioned earlier that we wanted to develop a model with as few inputs as possible. ProDF requires four. The first is input topography; what's shown here is a hillshade of the pre-event DEM from the Montecito study area. It also requires starting points for the debris flows. These are similar to initiation points, except that they're user-defined and can be placed anywhere upstream of where inundation is expected to occur; they don't necessarily have to be where the debris flow is expected to initiate. It also requires values for debris-flow volume. For this scenario we used observed sediment volumes measured in the days following the event, and the volumes shown here are those for each of the five watersheds. Finally, the model requires two flow mobility parameters: the flow resistance coefficient and the yield strength. These are parameters that, at this point, need to be calibrated for each site.

The debris-flow volume and the flow mobility parameters are particularly important for the model outputs, so I'm going to focus a little more on those throughout the talk. The debris-flow volume, represented by M here, can be used to calculate the peak discharge of the debris flow using an empirical relationship first introduced by Rickenmann in 1999. Then, using relationships that relate discharge to flow velocity and topographic slope, we can solve for flow depth as a function of discharge using the second equation here; note the presence of χ, the flow resistance coefficient, one of the two flow mobility parameters, in the calculation of flow depth. The third equation is the equation for shear stress: density times gravitational acceleration times topographic slope times flow depth. This serves as the stopping criterion that tells the model when to stop the flow. As I mentioned, a yield strength is prescribed by the user prior to a simulation, and the flow is only routed where the shear stress defined by equation three exceeds that prescribed yield strength.
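For reference, the three relations just described can be written out as follows. Equation 1 is Rickenmann's (1999) empirical peak-discharge relation; the Manning-type closure in equation 2, with flow width w, is an assumed form on my part, since only its general shape is described here, and the exact expression is on the slide and in the paper:

```latex
\begin{align*}
  % Eq. 1: empirical peak discharge from debris-flow volume M
  % (Rickenmann, 1999)
  Q_p &= 0.1\, M^{5/6} \\[4pt]
  % Eq. 2: flow depth from discharge; assumed Manning-type closure
  % v = \chi h^{2/3} S^{1/2} combined with Q_p = v\,h\,w, where \chi is
  % the flow-resistance coefficient, S the slope, and w the flow width
  h &= \left( \frac{Q_p}{\chi\, w\, S^{1/2}} \right)^{3/5} \\[4pt]
  % Eq. 3: basal shear stress; the flow is routed only while \tau
  % exceeds the user-prescribed yield strength \tau_y
  \tau &= \rho\, g\, S\, h, \qquad \tau > \tau_y
\end{align*}
```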
The flow volume is routed using the progressive flow-routing algorithm, which is a variation of the multiple flow direction (MFD) algorithm. The traditional MFD algorithm routes flow based only on topographic slope and does not consider flow depth, which makes it most suitable for low-flow conditions. When we have floods, or in our case debris flows, there may be scenarios where the depth of the flow exceeds the depth of the channel, but the traditional MFD algorithm artificially constrains the flow to the low-flow path. So we employ an iterative variation of the MFD algorithm to better capture scenarios where the flow overtops the channel banks or avulses out of the main channel.

This is an example of that here, on San Ysidro Creek, one of the five watersheds we looked at from the Montecito debris flows. At the start of each simulation, the user prescribes a set number of iterations; we found that 100 typically works best. On the first iteration, the model routes the debris-flow volume using the MFD algorithm, which identifies the low-flow paths. A fraction of that flow depth, one over the number of iterations, in this case one one-hundredth, is then added to the original topography to create an updated routing surface, and the second iteration is routed over that updated surface. As the simulation progresses, the main channel is progressively filled, so eventually the flow is routed over a surface with effectively no channel, because it has been filled, and you start to get some lateral movement, shown here by the warm colors in the later iterations. This phenomenon is shown here in cross-sectional view: the early iterations, in the cooler colors, are mainly confined to the main channel, but as the iterations progress, the flow progressively fills the main channel, then overtops it and begins to move laterally.
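To make the iterative scheme concrete, here is a minimal sketch of the progressive routing loop as just described, in Python. The single-pass MFD routine `route_mfd` is a hypothetical callable, not ProDF's published interface, and the yield-strength stopping criterion is omitted for brevity:

```python
import numpy as np

def progressive_route(dem, volume, start, route_mfd, n_iters=100):
    """Minimal sketch of the iterative progressive-routing loop.

    route_mfd: hypothetical callable (surface, volume, start) -> flow-depth
        grid from a single multiple-flow-direction pass; it stands in for
        whatever MFD implementation is available.
    """
    surface = dem.astype(float).copy()    # routing surface, updated each pass
    peak_depth = np.zeros_like(surface)

    for _ in range(n_iters):
        # Route the full volume over the *current* surface with standard MFD.
        depth = route_mfd(surface, volume, start)
        # Add 1/n_iters of the resulting depth to the routing surface, so the
        # channel progressively fills and later passes can overtop the banks.
        surface += depth / n_iters
        # Track the deepest flow seen in each cell.
        peak_depth = np.maximum(peak_depth, depth)

    inundated = peak_depth > 0.0          # cells that experienced any flow
    return inundated, peak_depth
```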
Once the model is given these inputs and runs through the iterative progressive routing algorithm, we get a model output that looks like this. Not only does ProDF provide information on the extent of inundation, it also gives us information on peak flow depths, which can be particularly important for identifying which downstream areas are most susceptible to severe impacts from post-wildfire debris flows. It's important to note that ProDF does not provide information on where sediment will be deposited; rather, it identifies which areas will experience some amount of flow during the event.

We quantified model performance using several metrics. For the extent of inundation, we used a similarity metric introduced by Heiser and others in 2017. Ω here is a value I will call the similarity index, defined as Ω = α − β − γ, where α, β, and γ represent model overlap, underestimation, and overestimation relative to the mapped extent of inundation, respectively. This metric is bounded between negative one, the value when the modeled extent of inundation does not match the mapped extent at all, and positive one, which represents a perfect fit between the two. We used the extent of inundation and the corresponding similarity index to calibrate the model. This is a calibration for the entire five-watershed dataset at Montecito; for these simulations we kept volume constant and varied only the two flow mobility parameters I mentioned earlier, the flow resistance coefficient and the yield strength.

With the best-fit flow mobility parameters we get a similarity index of about 0.4 on that scale of negative one to one; in other words, the model reproduced about 70% of the observed extent of inundation. Furthermore, the study area is about 40 square kilometers in size, and ProDF ran the simulation on a standard laptop computer in about 45 seconds, once again highlighting the computational efficiency needed for rapid hazard assessment. So this is promising for that purpose.

We also calibrated ProDF to one of the individual watersheds, San Ysidro Creek, in order to compare its output to that of several more established inundation models. Bessette-Kirton and others in 2019 took a process-based inundation model and an empirical inundation model and calibrated them to the observed extent of inundation on San Ysidro Creek. First they looked at FLO-2D, an established process-based model, and the similarity index for its best-fit parameters is about 0.28; note that this simulation ran on the scale of minutes to hours. When they looked at an established empirical inundation model, Laharz, the runtime dropped significantly, down to the scale of seconds, but that came at the expense of the similarity index: both for the non-volcanic debris-flow parameters and for our parameters, the similarity index was well below zero. Looking at ProDF, using the same input topography and the same starting point, the best-fit similarity index is about 0.25, and the runtime is on the scale of seconds instead of minutes or hours. Once again, that is very important for rapid hazard assessment.

We also looked at how the model performed against peak flow depths. ProDF outputs peak flow depth, and the USGS team that mapped the extent of inundation also made several hundred observations of the peak flow depths of the flows that occurred. 317 of those points overlapped with the extent of inundation from the best-fit calibrated parameters for the Montecito dataset, so we compared modeled and observed peak flow depths at those 317 points. At over 80% of these points, the modeled peak flow depth was within one meter of the observed peak flow depth; at more than half, it was within 50 centimeters; and at nearly one third of the points, it was within 20 centimeters of the observed depth.
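Because the similarity index drives both the calibration and the comparisons above, here is a minimal sketch of computing it from boolean observed/modeled inundation grids. Normalizing each term by the union of the two footprints is an assumption on my part, chosen so that Ω stays within [−1, 1] as described; Heiser et al. (2017) give the authoritative definition:

```python
import numpy as np

def similarity_index(observed, modeled):
    """Sketch of a Heiser et al. (2017)-style similarity index,
    Omega = alpha - beta - gamma, from boolean inundation rasters.

    The union normalization is assumed here so that Omega is bounded
    in [-1, 1]; consult the original paper for the exact definition.
    """
    observed = np.asarray(observed, dtype=bool)
    modeled = np.asarray(modeled, dtype=bool)

    union = np.count_nonzero(observed | modeled)
    if union == 0:
        return 1.0  # both footprints empty: trivially identical

    alpha = np.count_nonzero(observed & modeled) / union    # overlap
    beta = np.count_nonzero(observed & ~modeled) / union    # underestimation
    gamma = np.count_nonzero(~observed & modeled) / union   # overestimation

    return alpha - beta - gamma   # 1 = perfect fit, -1 = no overlap
```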
I mentioned earlier that the volume and the flow mobility parameters are particularly important in determining model output. Here are two plots in which the dots represent 3,000 individual simulations that we conducted on San Ysidro Creek. These are scenarios where we varied the input volume as well as the two flow mobility parameters, the flow resistance coefficient and the yield strength. The color bar is inundated area; we chose this metric because it may be what you're interested in if you're using the model in a predictive sense. The warmer colors are simulations that inundated more area, and the cooler colors are simulations that inundated less area. We see that the model is sensitive to the volume as well as to each of the flow mobility parameters (a sketch of such a sweep appears below, just before the questions). For example, if you hold volume constant and move up the y-axis, varying only the yield strength or the flow resistance coefficient, you can get very different answers regarding how much area is inundated. The same is true if you hold the flow resistance coefficient or the yield strength constant and alter only the volume: once again, you can get dramatically different answers. This really underscores the necessity of having a well-constrained volume in place, so that you can focus mostly on calibrating the two flow mobility parameters.

This is also an example of where ProDF can be easily implemented into existing hazard assessment frameworks: you can use the existing post-wildfire debris-flow volume model as the input for volume, and then focus on the flow mobility parameters. Eventually we would like to have a system where the flow mobility parameters are constrained ahead of time, and so we are working on building a more comprehensive inundation dataset. Currently we have eight sites at which we have calibrated ProDF, and as we continue to build this dataset, we hope to identify which factors are most important for constraining the flow mobility parameters, so that the model can be used in a predictive sense. We have seven burned areas, shown here, across Arizona and California, and I also want to point out that we have an unburned site in southern New Mexico, at the White Sands Missile Range. I've mostly talked about post-wildfire debris flows today and how ProDF can fit into a hazard assessment framework, but it's important to note that there's nothing fire-specific about the model. It can be employed for any debris flow for which a volume can be estimated.

In conclusion, we wanted to create a model with relatively few inputs, and I believe ProDF accomplishes that: it requires only input topography, a starting point, a flow volume, and the two flow mobility parameters. It is computationally efficient, running on a laptop in a matter of seconds, while maintaining accuracy comparable to more complex process-based models. It is easy to implement into existing hazard assessment frameworks by using the output from one of the volume models as the primary input to the inundation model. And finally, if we want to get this up and running in a predictive sense, we need to find a way to better constrain the flow mobility parameters and determine which factors are most important for constraining them. Before I wrap up, I want to point out that we just had a paper published last week that goes in depth into the methodology and the Montecito case study, so if you're interested in learning more about the model, you can look there. With that, I will take any questions. Thank you.
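As a concrete illustration of the sensitivity analysis discussed above, here is a minimal sketch of the kind of parameter sweep behind those plots, varying volume and the two flow mobility parameters and recording inundated area. The `run_prodf` callable is a hypothetical stand-in for a single ProDF simulation, not the model's actual API:

```python
import itertools
import numpy as np

def sweep_inundated_area(dem, start, cell_area, volumes, chis, tau_ys,
                         run_prodf):
    """Sketch of a sensitivity sweep over volume (m^3), flow-resistance
    coefficient chi, and yield strength tau_y.

    run_prodf: hypothetical callable (dem, start, volume, chi, tau_y) ->
        boolean inundation grid for one simulation.
    """
    results = []
    for volume, chi, tau_y in itertools.product(volumes, chis, tau_ys):
        inundated = run_prodf(dem, start, volume, chi, tau_y)
        area = np.count_nonzero(inundated) * cell_area   # inundated area, m^2
        results.append((volume, chi, tau_y, area))
    return results
```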
So you said having a well-constrained volume is really important. How easy is it to actually constrain the volume? It seems like something that would be hard to do.

So, there is a pretty well-established volume model that uses information regarding watershed morphology and rainfall characteristics. However, it was developed and tested almost exclusively in the Transverse Ranges of Southern California, which are very different from, say, Arizona, and we find that the uncertainty associated with that model can be pretty substantial. I think that really highlights the need both to better constrain the uncertainty of that model and to develop volume models that are more widely applicable outside of Southern California.

A very interesting model. Your pictures of the damage are mostly about the sedimentation. Is that the next step?

Do you mean identifying which areas experienced deposition? Yeah, so some previous studies, particularly those focused on Montecito, have suggested that it's more important to have information regarding the peak flow depths that certain areas may experience than the extent of deposition itself. I don't believe this model, just the way it's structured, would be able to provide information about where specific deposits will be. But the feedback we've had is that peak flow depths may be more important in determining damage to infrastructure than the deposition itself.

Thanks so much, that was really interesting. Does the fact that 100 iterations seems to be the magic number tell you anything?

So, like I said earlier, the multiple flow direction algorithm only accounts for topographic slope; it doesn't account for flow depth. We employed this iterative variation so we can progressively fill the channels that the flow overflows. The reason we chose 100 is that we found that if you chose a lower number, at least in this scenario, you'd be breaking the flow into increments that are still too deep. If you break it into, let's say, four iterations, you're still going to artificially constrain some of the flow where the flow depth is greater than the depth of the channel. So it's important to have enough iterations that you can progressively fill the channel and then overtop it. But we found that increasing it beyond 100 really doesn't improve model performance; it reaches a point where it just increases the runtime of the model without offering any added benefit. So at Montecito, and at the other sites we've looked at, 100 seems to work fairly well for maintaining computational efficiency while still capturing the lateral movement.

I have a question on the volume. I imagine that the volume grows as the debris flow moves from the source to the depositional areas. You need the volume for the entire debris flow, right, the whole thing? So are you feeding that volume in, in one-over-100 or one-over-N increments, to the model?

Right now it's just a fixed volume. The model doesn't consider that deposition may occur or that more volume might be entrained as the flow is moving; it's just a fixed volume that's routed downstream. Thank you.