pressures. In fact, we've done quite a few experiments at the flume where we have two debris flows, one that hits a bed and entrains material, and it becomes much, much more mobile. So yes, I think it plays a big role, and that's one reason we'd like to have better physical models for that entrainment process. Okay, we should move on, but let's thank David one more time. Thanks. So our next speaker is Paul Bates from the University of Bristol, and he is going to be talking to us about modeling flood risk in the continental US.

Thank you very much, Greg. I've got two really tough acts to follow, so this is going to be a challenge, but I'll do my best. I'm going to talk about some recent work we've been doing in Bristol, modeling flood risk across the whole continental US. For a number of years at Bristol we have been developing a computational model called LISFLOOD-FP, which simulates flood inundation with the 2D shallow water equations. So I'm going to be talking about work I've done with a whole bunch of different colleagues: Neil Quinn, Chris Sampson, Andy Smith, Ollie Wing and Jeff Neal. This work was reported firstly in a paper in Water Resources Research in September last year, and we've had a paper out in Environmental Research Letters in February this year as well. The background to this is that flooding in the US is, as you know, a significant economic and well-being hazard. If you have a look at the National Weather Service data, thank you very much, cheers. If you have a look at the National Weather Service data, they provide loss estimates and casualty estimates for inland flooding for the period 1903 to 2014. If you take the average over that whole century-long period, it's about 5 billion US dollars a year in flood losses on average, and there are around about 100 fatalities every year as well. There's no significant trend in fatalities over that same period. The National Flood Insurance Program costs about 190 million US dollars a year and it's got a significant debt. And recent events like Harvey and Irma have really raised the profile of flood risk for government, insurers, the public and scientists as well. So if you take the Weather Service data and you fit a generalized Pareto distribution through it, you get an estimate of what the 1% annual probability US flood loss is: it's about 34 billion US dollars. The high point on this, 2005, is just the inland flooding component of the Katrina event. The 1% or 0.5% annual probability is the level insurers typically use to work out whether they've got enough capital to pay out in the event of a really bad year.
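That 1-in-100 year loss figure comes from exactly this kind of peaks-over-threshold calculation. A minimal sketch in Python is below; the threshold choice, the toy loss series and the specific return-level formula parameterization are illustrative assumptions, not the National Weather Service record or the analysis actually used in the talk.

```python
import numpy as np
from scipy.stats import genpareto

def gpd_return_level(annual_losses, threshold, return_period=100.0):
    """Estimate the loss with annual exceedance probability 1/return_period
    from a peaks-over-threshold fit to an annual loss series."""
    losses = np.asarray(annual_losses, dtype=float)
    exceedances = losses[losses > threshold] - threshold
    rate = len(exceedances) / len(losses)          # threshold exceedances per year
    shape, _, scale = genpareto.fit(exceedances, floc=0.0)
    # standard GPD return-level formula (assumes the fitted shape is not exactly zero)
    return threshold + (scale / shape) * ((return_period * rate) ** shape - 1.0)

# toy usage with a made-up 112-year loss series in billions of US dollars
rng = np.random.default_rng(0)
losses = rng.lognormal(mean=1.2, sigma=0.8, size=112)
print(gpd_return_level(losses, threshold=np.percentile(losses, 75)))
```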
So if you want to learn something more about flooding, it's obvious we're going to need to harness some computer modeling effort. We don't have observations of extreme flooding everywhere we would like them, so the only way we can really get at that and make predictions is to harness powerful computer models. And in this discipline over the last 20 years, as we saw in David's talk, remote sensing of terrain data has really revolutionized our ability to do flood modeling over a lot of different scales. Starting out 20 years ago, we started to get the first airborne laser altimeter data, and that completely changed our view of how we built flood models. We moved from a really data-poor science to one where, at least for the terrain, the data were incredibly rich and detailed. This is a reach on the River Severn in the UK that I've worked on a fair bit, and you can see this beautiful geomorphology emerging in the LiDAR data. In the last five years or so we've been able to step that up and start to build global and continental scale models, taking custom-processed versions of the SRTM terrain data set. That's allowed us to build, for example, a one-in-100 year flood hazard map for the whole of Africa at 100 meter spatial resolution. So we can do these things from beautiful, detailed reach-scale models right up to the continental scale. And just not to be outdone by David's animations, this is an animation of a 2D flood model simulation of an urban area in the UK which flooded back in 2005. We built a five meter resolution 2D model of the whole urban area, so this is building-resolving: it actually resolves flows around individual buildings. It's got three rivers coming in, two small ones and a main channel, and you can see the complex progression of the flood wave into a series of flood compartments as defences are overtopped and areas get inundated. If you look closely you can see complex backwater effects as well. So when we've got the data, we can produce really good models. But when we step up to global and continental scales, SRTM is the best we've got, and it has some limitations. And that's where the US comes in, because you guys have really good data over an entire continent. So for the US we can build something that's intermediate between the global models and these detailed local models, and test our ability to produce automated model builds at a whole-continent scale. That's what we've been playing with over the last couple of years, where we've taken our LISFLOOD-FP model and produced a 30 meter flood model of the whole US. Clearly you can't do that basin by basin; you can't do it in anything other than an automated way. So we've written a framework of around about 70,000 lines of MATLAB, Python and R code to automatically build the hydraulic models across the whole continental US and then simulate a number of different return period events. The data we can use here are particularly important. Instead of SRTM, we can use the US National Elevation Dataset. This is LiDAR where it's been collected, and a composite of the best available data at any particular location elsewhere. That comes at 10 meter resolution, but we've simulated at 30 meters for computational efficiency reasons. And we've been able to incorporate all the river channels explicitly, so they all have a width and depth that we can parameterize. We take the boundary conditions from a regional flood frequency analysis. This is a standard index flood methodology, an old-school hydrology technique: we've taken USGS gauge data and produced a regional frequency analysis which tells us the magnitude of the 1-in-T year return period flow anywhere on the continental US river network. We put the levees in from the US Army Corps of Engineers' National Levee Database, and we simulate a whole bunch of different return period events.
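To make the index flood idea concrete, here is a minimal sketch of how a regionalized 1-in-T year flow estimate is typically assembled: an index flood from a regression on catchment descriptors, scaled by a dimensionless regional growth factor. The regression form, coefficients and growth factors below are made-up placeholders, not the regionalization used for the US model.

```python
import numpy as np

# Illustrative regional growth curve: dimensionless factors that scale the
# index flood up to rarer return periods (made-up values for the sketch).
GROWTH_CURVE = {2: 1.00, 10: 1.55, 50: 2.10, 100: 2.35, 500: 2.95}

def index_flood(catchment_area_km2, mean_annual_precip_mm):
    """Hypothetical regression for the index flood at an ungauged site,
    of the form Q_index = a * A**b * P**c, fitted to regional gauge data."""
    a, b, c = 0.05, 0.80, 0.60          # assumed regression coefficients
    return a * catchment_area_km2**b * mean_annual_precip_mm**c

def design_flow(catchment_area_km2, mean_annual_precip_mm, return_period):
    """1-in-T year discharge = index flood x regional growth factor."""
    return index_flood(catchment_area_km2, mean_annual_precip_mm) * GROWTH_CURVE[return_period]

# e.g. an estimated 100-year flow for a 250 km2 catchment with 900 mm annual rainfall
print(design_flow(250.0, 900.0, 100))
```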
So the first point of the presentation is that the data and the software tools now exist to build 2D models near-automatically over very large areas. The question you then start to ask is: okay, how good is that going to be? And we've been able to test it in a number of different ways. We can look at defended and undefended situations. This is Sacramento, a 100 year flood event assuming no flood defences, and when we put the flood defences in, we get a much more correct simulation. So we've tested this in a number of ways. The first way we tested it was to look at individual storm events where we've got really good data on the areas that were inundated. A very nice person at the local newspaper in Houston gave us this data set for the 2015 Memorial Day storm. It's basically a Google map of properties that were flooded during the event. We've simulated this with our whole-US model, and we can look at the number of properties we correctly predicted as being inundated that were identified by the Houston Chronicle. So green is good; red is the ones we got wrong. We're capturing close to 90% of the properties that were observed as flooded in this data set, and in terms of validation for these kinds of data, that's not a bad level of success. There are problems with this type of validation, in that the return period varies in space, and exactly what the return period of the event was is somewhat arbitrary, but it was somewhere between a 100 and a 1000 year event. If we simulate our 1000 year flood, with both fluvial and pluvial flooding simulated, we get this 90% capture rate; the 100 year layers capture around about 70% of the inundated properties, and the event sits somewhere in between. So not too bad on these individual storms.
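The property-level check described above is essentially a point-against-raster comparison: for each property reported as flooded, ask whether the model pixel underneath it is wet. A minimal sketch is below; the grid georeferencing convention is an assumption for illustration, not the actual validation code.

```python
import numpy as np

def property_capture_rate(flooded_property_xy, model_wet, origin, cell_size):
    """Fraction of observed flooded properties that sit on a wet model pixel.

    flooded_property_xy : (N, 2) array of property x, y coordinates (grid CRS)
    model_wet           : 2-D boolean array, True where the model predicts flooding
    origin              : (x_min, y_max) of the grid's top-left corner
    cell_size           : pixel size in the same units as the coordinates
    Assumes every property falls inside the grid.
    """
    xy = np.asarray(flooded_property_xy, dtype=float)
    cols = ((xy[:, 0] - origin[0]) // cell_size).astype(int)
    rows = ((origin[1] - xy[:, 1]) // cell_size).astype(int)
    return model_wet[rows, cols].mean()
```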
Ideally, though, we would like to do some wide-area validation. One way we've done that is to take all the FEMA 1-in-100 year flood maps and ingest those into Google Earth Engine; I think it's around about 2 million different shapefiles ingested. That allows us to do a direct comparison between our prediction of the 100 year floodplain and FEMA's analysis. So we take all those local studies, developed typically by 1D HEC-RAS modeling or more ad hoc methods, and we resample those to the resolution of our model, which is 30 meters. And we have to look at the unexamined areas in the FEMA data: the FEMA coverage is only the areas in red, and there's about 40% of the US that FEMA doesn't currently cover. Another problem with the FEMA data is that when you look at it, it's clear that they've only really done the main rivers and not the headwater areas, but I'll tell you a little bit about how we can work around that. So that's the issue with the headwater areas in FEMA. In blue are the areas predicted as 100 year floodplain by FEMA; in red is the 100 year floodplain from our model. We can see some clear areas where we're over-predicting relative to FEMA, but also other areas where we predict flooding along valley bottoms that it seems to us FEMA simply hasn't modeled, and there probably really is flood risk in those locations; it's just not picked up in the FEMA maps. Therefore, for comparing FEMA and our model, we can only really use the hit rate as a sensible metric; the false alarm rate or critical success index don't really make a lot of sense because of these unexamined areas. If you zoom out a little bit further, you can see the scale of the problem. Some places we do really well, some places we do less well. We're plotting here hits in blue, false alarms in red, misses in black. What we do firstly is buffer the FEMA flood map: we put a one kilometer buffer all the way around it and only calculate our fit metrics within that buffer. The appropriate size of buffer is going to be somewhat scale dependent, and it's a fairly arbitrary way of doing it, and we're still going to get things the metric counts as false alarms that are actually real flood risk correctly predicted by our model. So we're only going to really talk about the hit rate in these analyses. If we do the comparison between the continental model and the FEMA data set, we get a hit rate of 81% across the whole US. That goes up in catchments that are above 80 square kilometers; our model performs less well in catchments below that threshold. And where we only look at areas where FEMA has what it designates as high quality data, typically flood maps computed with a HEC-RAS model, the hit rate goes up to 86%. That's not too bad, because the FEMA data is not truth, and when we compare any sort of bespoke 2D model to, say, satellite inundation data, hit rates of 90% or above are about as good as you ever really get. So we're approaching the skill of bespoke 2D modeling, but probably not quite there. The critical success index, if you want to calculate it, is 59%, but that probably undervalues the model skill a little bit for the reasons I've just described. We also see that the continental model does less well in arid areas, and that's because the return period flows are not so well constrained there: the performance is much better in temperate areas and drops off in continental and arid climates, probably exactly as you would expect. The other way we've validated the continental model is to look at a number of locations where the USGS has built bespoke 1D and 2D models. These are probably built to a higher standard than the FEMA studies and have had a bit more care and attention lavished on them, so we'd expect them to be a bit closer to the truth than the FEMA data. There are 10 of these sites with 100 year simulations, and three further sites where they've got simulations of different return periods as well. If we have a look at these, the hit rates now get a bit higher: the average is about 92%. Albany looks really good. Some places we do less well, but it's still a coherent match. In some places the hit rate is excellent, but there are potential false alarms, shown here in red. And we can go through looking at all of these. So against the USGS models we're doing a pretty good job. And when we look at different return periods, it's the same story. This is Battle Creek, Michigan, a 1-in-500 year event, with a really nice hit rate. You can see it's a big flood in a confined valley, so it's probably quite easy to predict; lots of different methods could probably do this quite well. You'd hope the continental model would do okay, and it does. Again, a smaller flood, but again in a confined valley. And this one is more extensive, but we're still doing pretty well. On the whole it tends to be easier to predict bigger, valley-filling events if you're judging on extent, but this is the kind of test we can do over a wide area. So the conclusion from that part is that the model seems to be working pretty well, and it gives us some faith that we can use it to make inferences about flood risk across the entire US.
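For reference, the buffered map-comparison metrics quoted above (hit rate, false alarm ratio, critical success index) reduce to a simple contingency-table calculation on two binary flood maps. A minimal sketch is below, assuming both maps are boolean arrays on the same 30 meter grid; the dilation-based buffer is an illustrative construction, not the exact procedure used in the study.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def flood_fit_metrics(model_wet, benchmark_wet, buffer_mask):
    """Contingency metrics between two boolean flood maps, evaluated only
    within a buffer zone around the benchmark extent."""
    m = model_wet[buffer_mask]
    b = benchmark_wet[buffer_mask]
    hits = np.sum(m & b)                  # wet in both
    misses = np.sum(~m & b)               # wet in benchmark only
    false_alarms = np.sum(m & ~b)         # wet in model only
    hit_rate = hits / (hits + misses)
    false_alarm_ratio = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    return hit_rate, false_alarm_ratio, csi

# e.g. a ~1 km buffer on a 30 m grid is roughly 33 pixels of dilation:
# buffer_mask = binary_dilation(benchmark_wet, iterations=33)
```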
So that was a bit like David's talk; now if I can weave in a bit that takes up the themes of Susan's talk, it brings the session together. We're now going to do some risk calculations, bringing together the hazard maps with exposure and vulnerability. The exposure data we can get across the whole US are things like the value of buildings within the floodplain. And I'll have to talk to Susan afterwards about intersecting some of our data layers with her social vulnerability data; I think there could be a really neat story in there, too. And then, you know, risk is hazard times exposure times vulnerability, and for the vulnerability we have to have some function to relate the hazard to the potential damages. Typically, what we use in flooding is a depth-damage curve. So let me just walk you through what we do. We take our 30 meter resolution flood hazard model, and then for the population data we take the Environmental Protection Agency's dasymetric population data. This is census data dasymetrically downscaled to 30 meter resolution: it takes the 2010 census and assigns it to 30 meter pixels based on things like land use and slope. We take FEMA's National Structure Inventory, which gives us the locations of around 140 million different structures in the US. It tells us something about their location, their value, their type, the number of stories, and their general characteristics, and we can use those to decide what kind of vulnerability function we should be using. We take the national land use database, which indicates developed areas. And then we do some future projections of socioeconomic change, because the EPA have very cleverly done this project called ICLUS, the Integrated Climate and Land Use Scenarios, which gives projections of population and land use change out to 2100 under different shared socioeconomic pathway scenarios, principally SSP2, which tracks US Census projections, and SSP5, which is a high growth scenario. The vulnerability functions we take from the Corps of Engineers' standard relationships between water depth and percentage damage to a structure. We've got a whole bunch of those for different structure types, which relate to the structure inventory. But we don't have that data, the FEMA NSI data, for the future, so for the future projections we just use one generalized curve.
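The depth-damage step is the only place a formula is really needed: look up the modeled water depth at a structure, read a damage fraction off the curve for that structure type, and multiply by the structure value. Here is a minimal sketch with a single made-up generalized curve; the depth and damage values are illustrative, not the Corps of Engineers relationships used in the study.

```python
import numpy as np

# Illustrative generalized depth-damage curve: fraction of structure value
# lost as a function of water depth in meters (made-up points).
DEPTH_M = np.array([0.0, 0.3, 0.6, 1.0, 2.0, 3.0])
DAMAGE_FRACTION = np.array([0.0, 0.10, 0.20, 0.35, 0.60, 0.75])

def direct_damage(water_depth_m, structure_value):
    """Interpolate the curve at the modeled depth and apply it to the value."""
    fraction = np.interp(water_depth_m, DEPTH_M, DAMAGE_FRACTION)
    return fraction * structure_value

# e.g. potential damage to a $250,000 structure under 1.2 m of water
print(direct_damage(1.2, 250_000.0))
```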
So what does that tell us? Well, when we look at present-day risk in the US and intersect our 1-in-100 and 1-in-500 year hazard layers with the population data set, then for the 1-in-100 year event we find that there are around about 41 million Americans living within the 100 year floodplain. That's around about 13% of the total US population. If we do the same analysis with the FEMA data layer, we only get a value of 13 million. That's partly because 40% of the country isn't covered, but also because there's a lot of risk residing in those headwater areas that are not included in the FEMA maps. If we take another global flood risk data product, the Aqueduct flood risk maps, and again intersect those with the population data, we only get an exposure of about 15 or 16 million. Again, that's because they only consider the larger catchments, I think above about 5 to 10,000 square kilometers. That's present day, and there's a big discrepancy here. So what we're suggesting from this analysis is that it might not be exactly 41 million, it might be plus or minus 5 million on that, but even given the errors, there's a significant difference between the population at risk you would identify with the FEMA data and what we get with this new continental-scale model. If we look at the future exposures, by 2050, just taking the business-as-usual SSP2 pathway, exposure has gone up to 61 million in the 100 year event, and the proportion of the US population that's exposed has gone up from about 13% to over 15%. So that's 20 million more people and 2.3 percentage points more of the total US population in the 100 year floodplain. It's the same story if you go out to 2100: now the number is 74.8 million, which is 34 million up on present day and 3.1 percentage points more in terms of the proportion of the US population. If you look at the value of the assets in the floodplain, the total value of the exposed assets in the 100 year floodplain for present day is about 5.5 trillion US dollars, and if those assets were to get hit, based on the depth-damage curves that we have, the potential damage would be about 1.2 trillion US dollars. The 100 year developed floodplain in the US is currently approximately the land area of Georgia. If we take that forward using these scenarios, then in 2050 the exposed assets go up to 8.1 trillion, the potential damage up to 1.7 trillion, and the 100 year developed floodplain is then approximately the area of South Dakota. By 2100 we're up to the size of Kansas, with 9.8 trillion of assets potentially in harm's way; the difference, the newly developed 100 year floodplain, is approximately the land area of West Virginia. So even without taking into account potential climate change, which is very likely to change the intensity and duration of extreme flooding, both fluvial and pluvial, and just looking at socioeconomic change alone, a lot of which we're already locked into because of trends in population growth, flood risk in the US is very likely to rise in the future, and the proportion of the US population exposed to flooding is also likely to rise. So to conclude this presentation: we've developed a whole-US model which seems to have reasonable skill at predicting inundation. It's not quite there, but it's approaching the skill that we get when we build local bespoke models and compare those against observations of flooding from satellites and air photos. When we intersect those simulations with high resolution population data, we show that the population exposed to flooding in the US is about three times higher than previous estimates using FEMA information. Socioeconomic change alone is going to increase the proportion of the US population exposed to flooding during the 21st century, and climate change will undoubtedly amplify those effects even further. Even if we find it difficult to say exactly where, when and by how much climate change will affect flood risk, we think it probably will, based on simple atmospheric physics like the Clausius-Clapeyron relationship. And lastly, one problem with all these hazard maps is that they present an unrealistic view of flood events: both the FEMA flood map and our flood map are essentially constant return period in space, so we're looking at things that don't actually look like real floods. I think the next thing that we have to do here is to produce some way of stochastically simulating things that look like real flood event footprints, where the return period varies in space. And that's what we're working on. That's it. Thank you very much.

Thanks, Paul. Any questions?
Do you consider that they have levees, drainage structures, all kinds of detention basins in the cities, when you do the flood map?

Yeah, so we put the structures in from the US National Levee Database, but that's a really incomplete source; anecdotally, we've been told people think only about 30% of the levees known to exist are actually in the database. The problem when you're building a continental model is that you can't go hunting for local data everywhere; you have to use the data that's there in national databases. So that's one constraint. Levees are in there to the extent that the national database contains them, but we know that's very incomplete. What we're working on at the moment is a series of analyses which try to predict the likelihood that a given pixel of our model is defended by a levee or some other structure. So we're trying to estimate the probability that there's a levee there, its standard of protection, and the probability that it might fail. You can imagine some kind of fault tree or decision tree that comes out with a probability that a given pixel is defended by a levee, depending on its location, its land use and possibly socioeconomic variables. That seems to work reasonably well to first order to fill in the gaps where the national databases break down, but again, it's not perfect. Okay.

I wanted to ask a quick question. I think you answered one of my questions, which was why you're choosing the hundred year, because one of the sites you chose was in Indiana, and Indiana had two 100 year floods in one year.

Yeah, it can happen.

And the other question is, have you taken into consideration the fact that a lot of the floods in the future, and even today, are not next to a floodplain? They're flash floods, happening without any proximity to a water body at all.

Yeah. So, the models, I wasn't clear about this: we simulate both fluvial and pluvial flooding. For the fluvial part, we take a regional flood frequency analysis on gauged discharge data and use that to work out extreme river discharge. We do the same regional frequency analysis for rainfall using National Weather Service rainfall data, and then we do a rain-on-grid approach, and that allows us to simulate flooding generated by intense rainfall down to catchments of a few square kilometers. There is a slight thing: certainly in the insurance industry, they tend to think of a lot of flood losses as being off-floodplain because they have a really limited conception of what the river network is; they only consider main rivers as part of it.

They're looking at urban drainage as well. I think this model, if you look at it conceptually, it almost looks like you're not taking urban drainage into account.

Yes, we are, but not very well. We make an allowance for an assumed capacity of the sewer network: we assume the sewer network can handle rainfall up to a given return period, and it's the excess rainfall over that which goes onto the grid. So that's not a very detailed way of doing it, but it is taking it into account. But again, I come back to this point that if you're going to build a national model in an automated framework, you can only use data sets that exist and are available. And I think a lot of what the insurance industry thinks of as off-floodplain flooding is actually really connected with the river network; it's just in headwater catchments.
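A minimal sketch of that kind of sewer allowance in a rain-on-grid setup is below: subtract an assumed sewer design rate from the rainfall and apply only the excess to the 2D grid. The design rate and units are assumptions for illustration, not the values used in the US model.

```python
import numpy as np

def effective_rainfall(rain_rate_mm_hr, sewer_design_rate_mm_hr):
    """Rainfall left over after an assumed sewer capacity is subtracted.

    The sewer network is assumed to convey rainfall up to a design intensity;
    only the excess is applied to the 2D model grid as direct rainfall.
    """
    rain = np.asarray(rain_rate_mm_hr, dtype=float)
    return np.clip(rain - sewer_design_rate_mm_hr, 0.0, None)

# e.g. three rainfall intensities against an assumed 20 mm/hr sewer capacity
print(effective_rainfall([10.0, 35.0, 60.0], 20.0))   # -> [ 0. 15. 40.]
```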
Hello, my name is Terry Eidel, I'm with the Open Geospatial Consortium; we had a chance to talk earlier. My question is actually for you, and would have been for Susan also: as you output this really rich data in these maps, do you have any standards that you output them in, so people can ingest the information behind the maps into other systems and be able to use it?

I guess we're using Earth Engine, just because using Google's service to crunch some of these numbers takes the sting out of things, so in so far as that's a standard format, then that's what we're using. Otherwise, we tend to use either ASCII rasters or binary files just to compress the file size. Not very much, I guess, is the answer.

I guess I have two questions. The first is, have you compared your model to the National Water Model that NOAA is running down in Tuscaloosa? And the second is, it seems that for your work and the work of the other speakers, there are two ways to use the model. One is for immediate first response, say there's going to be a storm of a certain magnitude in a certain location; the second is for long-term planning to protect infrastructure. I'm wondering how you see these models, or your model specifically, being used?

Okay, yeah. On the first question, we haven't compared to the National Water Model, but for a flood event in the south of Paris we have compared the HAND inundation mapping methodology to our inundation modeling for that region. There was a paper on that at the European Geosciences Union a couple of months ago; we did it in collaboration with a French insurance company. So there are some interesting conclusions there about which method works best. The HAND method works well, it seems to me, in confined river channels or confined valleys; I think it starts to become more problematic when the particular flow pathway becomes more complex. So in wide floodplains, real hydrodynamics and mass conservation are going to win the day most of the time. Your second question was about how these data are used. At the moment these are set up for the second of your applications, planning purposes. These data are used by people in the insurance industry for deciding on insurance contracts and pricing. You could use them for zoning and planning decisions, and you could use them for forward planning, working out how much the US should be investing every year in flood defences to mitigate the risk. What I'd like to see in the future is that we would hook up these hydraulic models to either numerical weather prediction or climate models, and to land use models, and drive them with output from those. In fact, with GE we're currently linking versions of the LISFLOOD-FP model to the VIC land surface model, and I can see that once you can do it for VIC, you can pretty much do it for anything. I'd really like to see that develop over the next three to five years; I think it'd be really powerful and really interesting.

Okay, let's thank Paul, and we're done.