Good afternoon. My name is Jim Smith. I'm from Princeton University, and I'm chair of the committee that's examining methods for modernizing probable maximum precipitation. I'd like to welcome attendees, speakers, and committee members to this information session. To begin, I'd like to give special thanks to the National Academies staff for the excellent job they've done in organizing this session. Jonathan, Steven, Katrina, and Kyle, thanks to all of you. First, I'd like to quickly have each of the committee members give their name and affiliation, and I'll just go through what I see on my screen. So, John N-G is first on my screen. Okay. John Nielsen-Gammon; I'm in the Department of Atmospheric Sciences at Texas A&M University. And Efi, I see you next. Hi, this is Efi Foufoula-Georgiou. I'm in the Department of Civil and Environmental Engineering at the University of California, Irvine. And Dan Cooley. I'm Dan Cooley, in the Department of Statistics at Colorado State University. Katie. Hello, my name is Katie Holman. I work in the Technical Service Center of the Bureau of Reclamation. And John E. Hi, John England. I'm a lead civil engineer at the Army Corps of Engineers Risk Management Center. Ruby. Hi, I'm Ruby Leung, from the Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory. Chris. I'm Chris Paciorek. I'm a statistician in the Department of Statistics at the University of California, Berkeley. And Russ Schumacher, Department of Atmospheric Science at Colorado State University. Shih-Chieh. Hi, I'm Shih-Chieh Kao, from the Environmental Sciences Division of Oak Ridge National Laboratory. Let's see, did I miss anybody? Robert. Excuse me, Robert Mason. Oh, yes. Robert Mason, formerly U.S. Geological Survey, retired. Thanks, Robert. So we can move forward to the statement of task, just to give the attendees an idea of what the committee is really focusing on. I'll go to the very end. Ultimately, the committee is tasked with recommending approaches for probable maximum precipitation estimation that incorporate the effects of climate change and provide for characterization of uncertainty. That's the bottom line of what the committee is tasked to do. One of the key elements in achieving the goals of that task deals with advances in data used for probable maximum precipitation studies, and that will be the focus of today's information workshop. Can we move to the next slide? There is a collection of questions that forms the initial set of data issues the committee is wrestling with. Roughly, they address data for probable maximum precipitation in the pre-radar era in the US and in the radar era in the US. And then a third set of questions we're working with is how to use reanalysis fields, and downscaling simulations based on reanalysis fields, for probable maximum precipitation. Those will be among the principal themes we deal with in this information workshop. Data and data issues are much broader than that, so one of the things we would like to get from the attendees and from the discussions today is a broader sense of what the community sees as the most important data issues associated with making major advances in probable maximum precipitation. So there's the catch-all at the end. Now, one of the main ways we're going to deal with this today is through presentations and questions for a group of speakers.
And if we could move to the agenda: what we have organized are presentations from six speakers. The presentations will be about 15 minutes in length, and I'll give a two-minute warning about 13 minutes into each presentation, followed by 10 minutes for questions. The questions will initially come from members of the committee, and they'll target the things committee members see as most important to resolve. The attendees will also be able to post questions, and as time permits we'll deal with them today. If we're unable to deal with them today, they will become an important part of the material the committee uses in its further deliberations. We'll have a short break after the first three presentations and then have the final three. There are a variety of ways we're looking to get input. If we could move to the next slide: one way is for attendees to provide questions through Slido. You can do this through the Slido link below the live stream, scan the QR code, or go to the web page. The questions are one way of contributing to the process, but more broadly, we'll be looking for feedback on issues raised today that the community thinks are important to pursue. So if we can move to the next slide: after the presentations, please provide comments, feedback, and issues you think the committee should be considering across all of the tasks, though we're specifically looking for input now on the issues that pertain to data. The agenda will work through our presentations and questions, but we look for longer-term engagement on these issues from the community. So with that, let's move right into our presentations. The first presentation will be from Ken Kunkel of North Carolina State. Ken, you have the virtual floor. Okay. Let me share my presentation and see if you all can see it. Yes. Okay, I'll jump right in then. These are the two questions that were addressed to me for this presentation: one, how can atmospheric water balance variables contribute to modernized methods for computing PMP, and two, what are the key datasets for assessing trends in PMP-magnitude storms? I'll be talking about these in reverse order, and I'm going to show my last slide first so I don't run out of time before getting to my key points. So regarding trends: first of all, there's the question of historical trends, and I would say the COOP network, which is captured within the Global Historical Climatology Network dataset, remains the backbone for looking at trends in extreme precipitation. I'm going to show some areal analyses I have done that provide some insights into causal mechanisms of very large events. Also regarding historical trends, when the original HMRs were done, they had limited tools. We now have many more tools, and one of those is reanalysis, which provides important meteorological insights into dynamical and thermodynamical features of historic events. Regarding future trends, we have large ensembles of global climate model simulations, either many models or multiple simulations from the same model, and I think that forms a key suite of datasets for looking at future trends. Regarding the water vapor question, precipitable water has been a standard metric, and it's highly correlated with extreme event magnitudes.
But some other metrics that appear in the literature, such as integrated vapor transport and moisture convergence, may provide other insights into PMP events, and reanalyses actually make this possible where it wasn't decades ago. So I'm going to hit a few of the research areas I've been working in that relate to these questions. First, datasets for historical purposes. What might the requirements be for appropriate datasets? I would say they need to be temporally long and homogeneous, with good spatial coverage, and the National Weather Service's COOP network does check all of these boxes to some extent. So if we want to perform trend analyses, what do we need for PMP trend analysis? It's a real challenge. PMP, almost by definition, doesn't happen very often; we only approach it rarely, and that presents a statistical challenge. So I've been thinking about how one might look at this from a statistical standpoint, and I'm going to show some results for a metric that is definitely below PMP levels, but where the sample size may be big enough that we can learn something: the all-time record rainfall at a station, and when those records have occurred. This shows the results of an analysis I did based on 856 stations with more than 100 years of record, so they span most of the 20th century up to the present. Basically what I'm showing here is the distribution, across the U.S., of the years in which those records occurred. I've done it for two durations, a one-day record and a three-day accumulated record, and I'm showing it pentad by pentad. There are small differences between three-day and one-day, but if you look at the trend, they're virtually identical, and what you see is an overall upward trend. Generally we've seen relatively more records since 1995 than before that time, and if you look at the period with the most records, the largest percentage of records occurred in the most recent five-year period, 2015 to 2019.
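As a rough illustration of the kind of tally just described, here is a minimal sketch in Python, assuming daily precipitation series with a datetime index have already been assembled for each long-record station; the station container and the 100-year screen are illustrative assumptions, not the actual analysis code.

```python
# Sketch of the record-year tally described above: for each station with
# 100+ years of daily data, find the year of the all-time 1-day maximum,
# then count record years pentad by pentad.
import pandas as pd

def record_year(daily: pd.Series) -> int:
    """Year of a station's all-time daily precipitation record
    (daily is indexed by a DatetimeIndex)."""
    return daily.idxmax().year

def pentad_counts(stations: dict[str, pd.Series]) -> pd.Series:
    """Tally record years in 5-year blocks across long-record stations."""
    years = []
    for sid, series in stations.items():
        series = series.dropna()
        span = series.index.year.max() - series.index.year.min() + 1
        if span >= 100:  # keep only stations with 100+ years of record
            years.append(record_year(series))
    pentads = [5 * (y // 5) for y in years]  # e.g., 2017 -> the 2015-2019 block
    return pd.Series(pentads).value_counts().sort_index()
```

The three-day variant would simply apply a rolling three-day accumulation (for example, `series.rolling(3).sum()`) before taking the maximum.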
Let me show some results for another metric: the largest area-averaged events that have occurred in the coterminous U.S. This analysis shows results for 50,000 square kilometers and four-day duration, comparing historical events over the 70-year period 1949 to 2018. I did this analysis in the wake of Harvey and Florence; that was the motivation for doing it. I took the top 100 events and looked at where they occurred, what caused them, and how they were distributed in time. First, this shows a ranking of the magnitude of these events, and one message here is, hey, Harvey was a pretty bad storm. In fact, it's the largest event under these criteria of area and duration, and the largest by a large amount: 50 percent higher than the second-ranked event. Florence is no slouch for this particular metric either, at rank number seven. If you look at the causes, and here I've shown the top 30 events, they include atmospheric rivers, subtropical lows, fronts, extratropical cyclones, and tropical cyclones. So where are these located in the U.S.? Probably no surprise: the bulk of them are in the southeast quadrant of the U.S., with another tranche along the West Coast. You might think that on the Gulf Coast most or even all of them are tropical cyclones, and quite a few of them are. But actually, the largest number of them, in terms of meteorological cause, are frontal events. And how are they distributed in time, these 100 events? Well, there are more events in the latter part of this period than in the first part; the highest number of them occurred in the last 10-year period that I analyzed in this particular study. Okay, are there other datasets we could use for a trend analysis over the historical period? Stage IV radar immediately comes to mind: great temporal and spatial resolution, ideal for lots of extreme precipitation studies, but for trend analysis I would say the period of record is just too short. What about satellite estimates? They have some real advantages, like fairly uniform coverage, but again the period of record is too short. There are other networks out there. CoCoRaHS is a nice resource for looking at big events, but again, at most 20-plus years of record for the longest stations. RAWS is an interesting one that I've thought about: the longest RAWS stations go back into the 1980s, and may even extend back to the late 1970s, so now we're talking about maybe 40 years of data. While I've never done anything with it, it is an interesting network that perhaps could be mined for more information on trends, specifically in the western U.S. Now let me turn to the water balance variable question. What are potential water balance variables? Here are four of them: dew point temperature and precipitable water, which are the two that have been used (dew point temperature has really always been used as a surrogate for precipitable water in the early part of the record, when we don't have precipitable water observations), plus integrated vapor transport and water vapor convergence, which are others that could provide insights. I want to show some results of an analysis I did using precipitable water. It really isn't for PMP, but for the variables that go into NOAA Atlas 14, so we're talking about the annual maximum series and partial duration series. What I was looking at is how correlated the magnitudes of these events are with precipitable water. I looked at about 3,000 stations, and for each station and each annual maximum value, I used reanalysis to find the precipitable water and the associated vertical velocity at the nearest grid point, and then aggregated it all together. And I came up with a relationship like this: the analysis found a very good, monotonic relationship between precipitable water and the precipitation magnitude of these events. So precipitable water does, at least for these kinds of events, provide a very good metric if you wanted to use it for estimating precipitation magnitude. There's an even stronger relationship with the areal coverage of the big events. This shows the distribution of areas on days in which there was at least one event that was an annual maximum value, and there's actually not just a linear relationship but a nonlinear relationship curving upward, indicating that the areal coverage of large events increases very quickly with the amount of precipitable water.
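The matching step described here, pairing each station's annual maximum with reanalysis precipitable water at the nearest grid point and time, might look something like the following sketch. The "pwat" variable name and the lat/lon/time coordinates are assumptions about the reanalysis file layout, not a specific product's schema.

```python
# Sketch of pairing annual-maximum precipitation events with reanalysis
# precipitable water at the nearest grid point, as described above.
import pandas as pd
import xarray as xr

def match_pwat(events: pd.DataFrame, reanalysis: xr.Dataset) -> pd.DataFrame:
    """events has one row per station annual maximum, with columns
    ['lat', 'lon', 'date', 'precip_mm']."""
    pwat = [
        reanalysis["pwat"]
        .sel(lat=row.lat, lon=row.lon, time=row.date, method="nearest")
        .item()
        for row in events.itertuples()
    ]
    out = events.assign(pwat_mm=pwat)
    # A rank correlation captures the monotonic PW-vs-magnitude relationship.
    print(out[["pwat_mm", "precip_mm"]].corr(method="spearman"))
    return out
```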
So what about integrated vapor transport and water vapor convergence? Those are appearing much more frequently in studies of extreme precipitation. I expect all these variables would be highly correlated, but perhaps integrated vapor transport and water vapor convergence relate more directly than precipitable water to extreme precipitation amounts. I haven't done that kind of research yet. Another issue to address is what's readily available in reanalyses and climate model simulations. Among reanalyses, ERA5 has integrated vapor transport, for example, but not all of them do. If we look at climate model simulations, those are not standard variables that are provided; for that matter, precipitable water is generally not provided. So for climate model simulations, one has a computational task to get at those variables. I want to make one remark about reanalysis that regards a case study I did for the Colorado-New Mexico study that updated PMP values. The question came up about transposition of a particular event, the 1964 Gibson Dam extreme rainfall event, and whether that event should be transposed to Colorado. It had been transposed in an HMR study but not in another study, so there was a question about whether it should be. I used the NCEP/NCAR reanalysis to diagnose vertical motion and moisture transport fields to provide a recommendation on whether it should or shouldn't, and I've provided a bunch of slides at the end of this presentation, so if the committee wants to look in more detail at what I did there, that's available. What about future projections of precipitable water and big events? I'll just point to a study I did a few years ago, an analysis of precipitable water change in CMIP5 models: the models simulate quite large increases in precipitable water in the future, everywhere. Water vapor goes up everywhere across the globe, so it's a very robust relationship. We also recently did an analysis of the CMIP6 archive, looking at very large events over CONUS in those models. Without getting into detail, we looked at a historical period and a future period. This analysis shows the magnitude of the top 10 events in the models for both the historical and future periods. Most of the models produce amounts that are less than observed, but a few models get amounts approximately equal to what's been observed; the last bar there is the observations. And finally, looking at the ratio of future to historical, most of the models simulate increases, with a few exceptions, but overall they simulate roughly a 25% increase in the largest events. So those are my key points. Just a last thought: this is a unique science, and expert judgment underlies many decisions. I ponder the questions: what can nature produce? Are there fundamental limits? If so, what are they? Thank you for the opportunity to talk to the committee. Great. Thanks, Ken. Let's see, I believe John Nielsen-Gammon has a question right off the bat. Am I right, John? Yeah, sure, why not. Hi, Ken. This is not one of the questions you were meant to address, but I think among the people who are going to be talking, you're probably best qualified to answer it. All the datasets you showed are well-validated, quality-controlled datasets, essentially. But for actual PMP analyses, we use data that's almost anecdotal.
It's been through some extensive validation, perhaps, for some past events, but there's no longer really a formal process for validating those sorts of extremes that aren't part of a regular network. And NCEI has this very involved process for saying, yes, that's an official record. It seems odd. Do we need something like that for PMP-type storms, which are going to be so important for regulations? Well, I think, for trend analysis, if you want to look at trends, you do have to have some fairly firm guardrails on what data you allow in. But for evaluation of individual events, given the caveats, you do have to pay attention to and evaluate the quality of those observations, but I think everything should go into establishing the spatial distribution and the amount of a rainfall event that is approaching PMP levels. So I don't know if that's the question you were asking, but in my old days working at the Illinois State Water Survey, when there were big events, the scientists would go out and do bucket surveys to establish in more detail the spatial and temporal morphology of events, and I think those are the kind of data that really shouldn't be lost in terms of the full suite of activities that go on around PMP. Did that answer your question, John? I guess, let me just follow up briefly: should there be some agency tasked with saying officially what the storm total was for a given event, for example? Well, that would be ideal, I think: have experts evaluate all the observations and provide their judgment of what the storm produced. I mean, Harvey's a good example of that, where some of the analyses done, maybe eliminating some of the large outliers in that particular event, were probably in error. Okay, thank you. Let's see, Efi, I believe you have a question, and John England next. So Ken, you mentioned the possibility of using satellite data, but then you said the record is too short. My question is: if we were not looking at historical trends, but at individual events of the order of PMP, could we be open to looking at events that happen outside CONUS, and basically enlarge our knowledge of the limits of nature? Yeah, I would say, if we have in situ observations and radar, those are obviously superior to satellite; that's my own personal evaluation of the nature of the datasets. But in situations where we don't have those, I do think satellite is a resource that could be tapped to understand, in a larger context, what nature can produce. Getting back to my final, maybe semi-philosophical questions: what is nature capable of? If we have the whole globe to evaluate that, I think it increases our knowledge, much as, looking into the future, what I would like to do further is examine in large suites of models and large-ensemble models what nature can produce. We only have one earth and one history to look at, but the models provide a way, perhaps, to examine the limits of what nature can produce. Okay, thank you, Ken. John. Hi, Ken. Thanks for the wonderful presentation; I appreciate your insights and inputs. My question is on the future part: there seems to be a lot of opportunity to really go after IVT and convergence. Just off the cuff, can you comment on our elusive charge of maximization?
So the question is: do we see, observationally, any physical limits so far? Yeah, well, my answer is a little similar to what I just said. If we take the example of a model with 10 realizations out to 100 years in the future, now all of a sudden we have a thousand years, and maybe we get a few other models that have that now. We have these cross-model problems, perhaps comparing different varieties of apples, but I do think, and this is something I've wanted to do for some time but haven't had the bandwidth for, we should see what the true limits are: how large can water vapor get? Can we find situations like some of our big PMP events? Take Harvey, a tropical cyclone that sits in the same place for five, six days: can we find examples in the models where perhaps that duration is longer? That's a lot of what I'm thinking about. We get tropical cyclones, we get frontal events where the front stays stationary for four or five days and produces PMP-level or near-PMP-level precipitation, but what prevents that front from staying around for 10 days with additional waves moving across? What are the limits? Perhaps climate model simulations can provide some insights into that: what are the true outliers that we haven't seen historically but that perhaps can happen? Yeah, thanks. Especially with the clear record you're already showing on precipitation increases, and obviously the temperature: that's our big concern, temperature and moisture. So yeah, thanks. I think we have another question from John, and then Shih-Chieh. Okay, thank you. So Ken, you showed some trends, a lot of them associated with record precipitation, and thinking in terms of the PDF of extremes, that's not PMP storms; PMP is a ways off from there. Yes. How do you propose we go about translating trends in one level of extreme to trends in another extreme? I don't have a solution to that, other than asking whether there is consistency as we go. Essentially, the metric I was showing there is, at a point, a 100-year storm, let's say. I'm looking at the distribution of 100-year-storm-level events, and is that consistent with 50-year storms? I think we could push that out further; I've got some thoughts in my mind about how one could go out to maybe three or four or five hundred-year storms, but we are limited, and I don't have an easy solution to get around that. Are the trends I'm finding in 100-year storms, if we had a long enough stationary time series, the same thing we would see in PMP storms? I can't answer that, I don't think. Okay, thanks. Okay, let's take one last question from Shih-Chieh, and then we'll have to move on to our next speaker. Yeah, thank you so much. First of all, a wonderful presentation. I am especially intrigued by the high correlation you showed between water vapor and rainfall depth, because I think for us that's one of the biggest challenges in doing the PMP calculation. Since you're so familiar with all this data, I basically want to ask your opinion: in your view, what would be the best way, or the best data, to calculate precipitable water? Using reanalysis, or, as has been done conventionally, picking stations, calculating dew point, and using dew point to approximate precipitable water?
So in your view, which would be the better way to do that? I certainly think for the more modern period, let's say from 1950 onward, reanalyses are the best tool to use for that. As we go back before the radiosonde era, essentially, the basis for reanalysis becomes a little more uncertain, but I'd be tempted to still rely on reanalysis for that. Hey, thank you so much. Appreciate it. Thanks very much. And with that, we'll move to our second presentation, from Alexander Ryzhkov of the University of Oklahoma and the National Severe Storms Lab. Alexander, the virtual floor is yours. Okay, thank you, Jim. I hope that you can see my slide, right? Yes, and my cursor as well. All right, I'm Alexander Ryzhkov, and I will talk about the utilization of polarimetric weather radar data for PMP estimation. Here are the couple of questions for discussion I have to address, and both relate to radar rainfall estimates and the errors of rainfall estimation for PMP-magnitude storms. I will start with this table illustrating three generations of radar rainfall estimates. The first generation was before the polarimetric upgrade of NEXRAD in 2013: all rainfall estimates in the various catalogues were made based on the radar reflectivity factor only. The second-generation polarimetric estimates, utilizing the two polarimetric variables differential reflectivity and specific differential phase (ZDR and KDP, respectively), have been practiced since polarimetric capability became available on the operational weather radar networks: the combination of Z and ZDR was utilized for light and moderate rain, and specific differential phase KDP for rain mixed with hail. However, relatively marginal QPE improvement was reported, partially due to problems in the absolute calibration of differential reflectivity. The introduction of the algorithms using specific attenuation A, however, was a real game changer. The MRMS (Multi-Radar Multi-Sensor) group at the National Severe Storms Laboratory has been providing these new rainfall estimates to NCEP since October 2020, although the official NEXRAD QPE product is still based on R(Z, ZDR). Some of the river forecast centers have already started using the MRMS product, which paves its way into the NCEP Stage IV dataset. This slide demonstrates why the R(A) algorithm became a game changer. Specific attenuation, which can be estimated only with polarimetric radar, depends almost linearly on rain rate at S band, and that is why the scatter plot of rain rate versus its estimate from specific attenuation is narrower than for the standard R(Z), R(Z, ZDR), and R(KDP) relations. That means the R(A) relation is less sensitive to DSD variability over a wide range of rain intensities. This next slide demonstrates the same advantage: it shows the fractional mean absolute error of the rain rate estimate as a function of rain rate, with the black curve R(Z), green R(Z, ZDR), blue R(KDP), and red R(A). It's obviously clear that switching to the R(A) algorithm will dramatically improve rainfall estimation, at least up to rain rates of 60 to 70 millimeters per hour; after that, we have to switch to R(KDP), based on specific differential phase. Of course, DSD variability is a primary source of uncertainty in all radar rainfall estimation, and it's great that the R(A) and R(KDP) estimators are less sensitive to DSD variability than the ones based on radar reflectivity and differential reflectivity.
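A hedged sketch of the estimator hierarchy just described: R(A) up to roughly 60 to 70 mm/h, falling back to R(KDP) beyond that. The power-law coefficients below are representative S-band values from the published literature, not the operational MRMS settings, and the fixed rate threshold stands in for the hail-detection logic used in practice.

```python
# Sketch of a blended S-band polarimetric rain-rate estimator, per the
# hierarchy described above. Coefficients are illustrative literature
# values (e.g., R = 4120 * A^1.03), not operational settings.
import numpy as np

def rain_rate_sband(A: np.ndarray, KDP: np.ndarray) -> np.ndarray:
    """A: specific attenuation (dB/km); KDP: specific differential phase
    (deg/km). Returns rain rate in mm/h."""
    r_a = 4120.0 * np.power(np.maximum(A, 0.0), 1.03)
    r_kdp = 44.0 * np.power(np.abs(KDP), 0.822) * np.sign(KDP)
    # Use R(A) for light-to-heavy rain; switch to the phase-based estimator
    # where R(A) exceeds ~65 mm/h (a stand-in for hail/extreme-rate checks).
    return np.where(r_a <= 65.0, r_a, r_kdp)
```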
But there's more to it: as opposed to Z and ZDR, these two variables, A and KDP, are immune to radar miscalibration, partial beam blockage, and attenuation in rain and on a wet radome. This slide shows, on the left panel, the impact of partial beam blockage caused by nearby trees, and this impact is completely eliminated if you utilize phase-based R(A) and R(KDP) measurements in generating rainfall maps. The performance of this latest R(A) and R(KDP) algorithm, in terms of daily rain totals for three PMP-grade storms of summer 2021, is illustrated in this slide: the Tennessee flash flood, and Hurricane Ida both when it came off the Gulf into Louisiana and when it finished its path through the continental United States in the New York City region. On all three occasions the performance of this algorithm is excellent: practically no bias and a very high cross-correlation coefficient. In other words, the bias is usually less than 10% and the correlation coefficient is well above 0.9. The reason polarimetric radar QPE methods are particularly beneficial for estimating something that matches the definition of PMP, very high storm totals or anomalously high rain rates, is that the spatial-temporal integrals of these two variables are large. The first scenario is usually tropical cyclone landfall: hurricanes don't produce extreme instantaneous rain rates, but the rain totals can be extremely high. The second scenario is continental rain associated with MCSs, squall lines, or tornadic hail-bearing supercells, when we have extreme precipitation over a relatively small area over a short period of time. Existing climatological catalogues of rainfall data, such as NOAA AORC or NOAA Atlas 14, contain only surface hourly rain totals. So there is a need to capture the vertical profile of precipitation and understand the microphysical processes of precipitation formation if we want to understand the nature of these PMP storms and predict the climatology of those sorts of events. For this purpose, and not only for this purpose, we developed novel methodologies for processing and representing the radar data, such as quasi-vertical profiles (QVPs) and columnar vertical profiles (CVPs), which are probably not well known by, say, the hydrological community. These have been introduced for better understanding of the mechanisms of precipitation formation. Usually what we do is the range-defined QVP, or RD-QVP: for every volume scan, the polarimetric radar data collected at various elevation angles are azimuthally averaged and projected onto the vertical, and the resulting vertical profiles for successive volume scans are stacked together and presented in a height-versus-time format, which allows us to capture the vertical structure of the storm producing precipitation and also its temporal evolution. The QVP is basically a radar-centric product; the columnar vertical profiles exploit the same idea, but the column can be put anywhere in the field of view of the radar.
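A minimal sketch of the QVP construction just described: azimuthally average one polarimetric field at a given elevation, map slant range to height, and stack successive scans into a height-versus-time section. The array shapes and the simple flat-earth height formula (real processing accounts for beam bending with an effective-earth-radius model) are simplifying assumptions.

```python
# Minimal sketch of QVP construction as described above.
import numpy as np

def qvp_profile(field: np.ndarray, ranges_km: np.ndarray, elev_deg: float):
    """field: (n_azimuths, n_gates) array of, e.g., ZDR at one elevation."""
    profile = np.nanmean(field, axis=0)                    # azimuthal average
    heights_km = ranges_km * np.sin(np.radians(elev_deg))  # slant range -> height
    return heights_km, profile

def qvp_time_height(scans: list[np.ndarray], ranges_km, elev_deg) -> np.ndarray:
    """Stack per-scan profiles into an (n_scans, n_gates) time-height array."""
    return np.stack([qvp_profile(s, ranges_km, elev_deg)[1] for s in scans])
```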
This is a simple example of this novel representation of the vertical structure of precipitation-producing storms and their temporal evolution, for last year's Hurricane Ian. Even without much knowledge of the nitty-gritty details of polarimetric measurements, it's obvious that these multi-parameter measurements of different variables provide very complementary information that elucidates the microphysics of precipitation formation much better than radar reflectivity alone, which is shown in the upper panel. It's also very important to distinguish between the contributions of warm and cold rain to the total amount for a PMP-magnitude storm. Warm rain is rain produced by the collision-coalescence process close to the surface, where ice doesn't play much of a role; cold rain is rain mostly produced from melting graupel and ice. The vertical gradients of radar reflectivity and the two other polarimetric variables, KDP and ZDR, below the melting layer characterize the warm-rain component, whereas KDP above the melting layer, if you look at this very nice signature for this time period of Ian, characterizes the contribution of the cold component. Okay. And we have to figure out how all these climate changes will affect the sources of cold and warm rain. Here are typical RD-QVPs, or CVPs, that we produce for different hurricanes, Hurricane Harvey and Hurricane Florence, just to give you a sense of how they look. On top of that, we also developed polarimetric radar microphysical retrieval techniques to estimate microphysical parameters of clouds and precipitation, such as liquid water content, ice water content, mean volume diameter, and total number concentration, both in rain and in ice. The results of the corresponding retrievals for the same QVPs are demonstrated in this slide: we see how the vertical profiles of ice water content and liquid water content evolve for these two hurricanes, how particle sizes change, and how particle concentrations change. This gives us a fuller understanding of the underlying microphysics, which is very important. Such products can be generated very quickly for the most notable PMP-grade storms, all of which are definitely of PMP magnitude, and we have already suggested augmenting the NOAA Analysis of Record for Calibration (AORC) precipitation dataset by adding vertical profiles of all microphysical parameters of clouds and precipitation for all notable PMP-magnitude storms from the last 10 years, after the polarimetric upgrade of the WSR-88D network. So we can do a reanalysis of this polarimetric NEXRAD data. It's not a very long time interval for predicting long-term tendencies, but it's already a decade of polarimetric weather radar observations, and we have to utilize the most advanced algorithms to quantify precipitation. We have already started building a climatology of vertical profiles of polarimetric radar variables and microphysical parameters of precipitation, and our preliminary effort is described in a recent paper where we examine a large number of storms of different types and generate climatological profiles demonstrating, for example, the profound differences between continental and tropical storms.
For example (two minutes left; yes, okay, I'm close to finishing), the mean volume diameter of ice particles is much larger in continental storms than in tropical storms of rather similar radar reflectivity, and the concentration of ice is much larger in tropical storms as opposed to continental storms. This has a strong impact on, and correlation with, the rain that is actually produced at the surface, so it is very important to understand. Short-duration, small-area rainfall extremes are another big area of research and a big challenge; here we are talking about time scales shorter than three hours and spatial scales smaller than 300 square kilometers. The problem is that the nature and origin of such rain extremes are not well understood. They are commonly associated with deep convective storms, including those capable of producing tornadoes and large hail, but sometimes within these continental deep convective storms we see an obvious core of tropical rain, the warm-rain process, and that's a very interesting and challenging situation for understanding the mechanics of that PMP precipitation. What we also found is that peaks of extreme rain exceeding 100 to 200 millimeters per hour are often underestimated by existing radar QPE techniques, even including polarimetric ones, although those are much better than Z-based methods. One of the hypotheses is that these rain extremes are often coupled with convective downdrafts, where traditional QPE methods underestimate rainfall because they don't take into account the high downdraft velocity. Rainfall is a precipitation flux, a product of mass and vertical velocity, and in downdrafts we have the sum of the terminal velocity of the raindrops in still air and the downward air velocity, so underestimation is inevitable.
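The flux argument can be written out explicitly; here is a hedged formulation, with the symbols defined in the comments rather than taken from the talk.

```latex
% Rain rate as a precipitation flux: the downward volume flux of water
% through a unit area. N(D) is the drop size distribution, v_t(D) the
% terminal fall speed of a drop of diameter D in still air, and w_d the
% downdraft speed (positive downward).
R \;=\; \frac{\pi}{6} \int_0^{D_{\max}} D^{3}\, N(D)\, \bigl[\, v_t(D) + w_d \,\bigr]\, dD
% An estimator tuned to still air recovers only the v_t(D) term, so in a
% strong downdraft (w_d > 0) it underestimates the true surface rain rate.
```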
So here, in this variant of my presentation, I show a couple of examples of these sorts of storms that produce extremes; they're extreme on all counts. For example, the El Reno storm on the outskirts of Oklahoma City, the tornadic storm of May 31, 2013: this storm resulted in an EF5 tornado, 16-centimeter hail, and torrential rain that caused 12 flash flood fatalities. These are reconstructed RHIs and PPIs of the radar reflectivity, specific differential phase, differential reflectivity, and cross-correlation coefficient, and the most notable feature is the anomalously high specific differential phase next to the surface. If you look at a similar storm in Ellicott City, Maryland, on May 27, 2018, which also produced a huge amount of rainfall, you see the same sort of specific differential phase feature, with anomalously high values of 12 degrees per kilometer; if we convert that into rainfall, it would be almost 300 millimeters per hour over a short period of time. So finally, my conclusions. You can read them, but I can repeat them anyway. Polarimetric radar offers significant improvement in the quality of rainfall estimation, particularly for PMP-magnitude storms, and we plan to reanalyze the archived polarimetric radar data back to the date of inception, which is 2013, using the most advanced QPE algorithms. We insist that we have to analyze and examine the vertical structure of PMP-magnitude storms in order to understand the microphysical processes of precipitation formation and predict how those processes respond to climate change; this is the next frontier of research. Another very interesting subject is the detection and estimation of extreme short-duration rainfall associated with deep convective storms. Thanks for your attention. I'll stop at this moment and move on to questions. Let's see; I'll kick off. Alexander, there are a lot of operational radar datasets that go back to the pre-polarimetric era. What are your suggestions on how to proceed in developing storm catalogs of extreme events? Is it this very detailed examination of storms, where you both estimate rainfall and characterize the error structure, or are there ways we can build on some of the operational or reanalysis radar datasets in an effective way? You know, first of all, there are a lot of R(Z) relations that have been used in the past, some of them for tropical rain, some for continental rain; even the pre-polarimetric NEXRAD units used at least five of those relations. Using the polarimetric estimates, which have actually been obtained only starting from 2013, we can probably make recommendations as to which R(Z) relation, out of the multitude of R(Z) relations, should be used in any particular situation, based on a general analysis of the morphology of the storm. Unfortunately, I don't believe we can do much on top of that, at least from my perspective. Let's see, now we've got a question that's come in: how do you see combining rain gauge observations with polarimetric measurements? What's the role of rain gauge observations in developing rainfall estimates? I think that's the gist of the question. Of course, first of all, the rain gauges should be used for validation, but very often you have to trust the radar more than the rain gauge, especially in hurricanes, where hurricane-strength winds lead to huge underestimation of the rainfall measured by rain gauges. But apparently, so far, we don't see anything better than rain gauges for ground validation. John England. Hi, Alexander, thank you for your wonderful presentation. A quick question, besides rain gauges, on the issues of merging multiple radars together to make estimates. Do you have any comments on the challenges? Say you're in Tennessee and you're trying to combine three polarimetric radars to make the best spatial estimate of rainfall. Okay. I can tell you, first of all, that our Multi-Radar Multi-Sensor group at the National Severe Storms Laboratory is already doing that, and I know that a lot of customers prefer MRMS products compared to the official NEXRAD products because they do this sort of compositing. What is good about the estimates based on specific attenuation and KDP is that they are very good for compositing and merging, because they are not affected or biased by, for example, radar miscalibration errors, attenuation effects, and so on. And what we did, with great success: we have our northern neighbor, Canada, which operated a C-band network until recently (now they're changing to an S-band network).
We simply used these KDP- or A-based estimates of rainfall on the Canadian side and on the American side, and we found that the merging and compositing is much, much easier than if you try to do something based on radar reflectivity or differential reflectivity, which requires very careful calibration and also involves errors associated with, for example, attenuation of microwaves at the shorter C-band wavelength. In other words, we may also have a forthcoming network of gap-filling radars operating at X band, a very short wavelength, in the United States, and I do believe that these principles, based on the use of specific attenuation and differential phase, are the perfect platform for integrating the data from different radars. Thanks so much. Let's see, any other questions for Alexander at this point? Let's see, a question basically looking at integrated areal precipitation: given the problems with rain gauges that you noted, is there any utility in bringing in stream gauge observations? Any thoughts on that? You know, I personally have never worked with stream gauges, because I believe that's within the realm of the hydrological sciences. But definitely, stream gauges provide some sort of cumulative amount of the water that falls over a relatively large area, depending on, say, the terrain relief, and it's very important to measure the water that falls over a relatively large area. As I mentioned in one of my slides, all these polarimetric methods work particularly well for areal estimates or long-term estimates; in other words, the scales should be large, or the precipitation should be intense. So I'm sure it's definitely a big help for hydrology, to estimate hydrological impacts on a big scale. Okay, well, thank you very much, Alexander; that's going to be very useful in guiding our thought on how to look at really extreme rain events. We'll switch over now to Dan Wright from the University of Wisconsin. All right, can you see my slides? Yes. And you can hear me okay too, I guess? Okay, great. All right, so I was only given one question to answer, but it's a big one, and I'll do my best here: how should storm catalogs of extreme rainfall events be constructed for modernized PMP analyses? I do want to say a little bit about the perspective I'm coming at this from. My research group has worked for some time now on something we call process-based flood frequency analysis. I'm not going to go into a lot of details here, but I will do a little. What we're doing in our work is generating lots of what I'll call flood recipes: extreme rainfall events combined with other seasonally appropriate initial conditions, so antecedent soil moisture, antecedent snowpack, base flow, etc., and then using those all together in distributed rainfall-runoff models to generate large numbers of flood events in Monte Carlo-style simulations. From that, we look at the outputs for flood quantiles and at this whole chain of events to see what the physical drivers of those quantiles are. Where storm catalogs fit into this is through our use of a technique known as stochastic storm transposition.
It's basically a resampling approach where we create a collection of storms from gridded rainfall data over some relatively large region surrounding a watershed of interest, in order to support our understanding of rainfall frequency and flood frequency over that watershed. We've used a variety of different datasets; essentially, any gridded rainfall data we can get our hands on, we've probably used. This figure is just one example from the work we've done, the example closest to PMP relevance. This was work funded by the Bureau of Reclamation, and you'll see that we were generating recurrence interval estimates out to 10,000 years in this case, using stochastic storm transposition to generate the rainfall events for this flood frequency analysis. I'm going to spare you a lot of the details of what's going on here, but the point is that we were doing this probabilistic work with storm catalogs to get at flood frequencies for rare events, and we backstopped those using a variety of statistical, paleoflood, and deterministic methods to show that the whole story hangs together and that this kind of approach can get at PMP-relevant types of events. So I'll share some perspectives on why to use storm catalogs. I like to say that extreme storms happen all the time. That's not technically true, but it's not so far off the mark as long as you look at a relatively large region; storms are only really rare from the perspective of a watershed or a rain gauge. So the fact that really large storms happen over larger areas can help support your analyses. Second, what I'll call second-order rainfall properties, where first-order is the amount of rain that falls, so second-order being when and where the rain falls within a storm, are really important determinants of flood magnitude. I think most folks on the call are familiar with this in general: floods are the result of very complicated interlocking time and space scales of the rainfall and the watershed it's hitting. Those second-order properties are things that point-scale analyses, such as rain-gauge-based frequency and trend analyses, miss out on. Storm catalogs, on the other hand, especially if you've got good rainfall data going into them, give a really good sampling of this within-storm variability, not only because you've got individual storms but because you've got a whole set of them that you can look at to understand the wide range of what this variability could look like across seasons and across years. Along with that, they can help get around some of the rules of thumb that so much flood frequency and PMP work is tied up in, things around time of concentration and rainfall duration and so on, by actually letting you sample from the real observed variability of rainfall. The third point is what I'll call the arrival properties: thinking not about what's happening within the storms but about what we can learn from looking at the collection of storms in its totality, so when and where these storms occur within a region, how often storms are happening, during what seasons, and whether, over the region we're looking at, there are important differences in the properties of these storms. So where can storm catalogs come from? Well, there are literally storm catalogs; the one pictured, I've got right here. These are basically paper records that have things like isohyetal maps and depth-area-duration data for historical storms,
with emphasis on the historical part, because these are oftentimes quite old records. I'll say a little more about them in a moment; they have some strengths and some limitations. Moving into more modern options, I'll plug a little bit of the software we've created in my group, known as RainyDay. This is the software we use to do our stochastic storm transposition work. To get it started, the user can define a number of things about the rainfall characteristics they want the storm catalog to be based on, and they can also define which gridded rainfall dataset they'd like to use. From there, RainyDay will identify however many of the most extreme storms the user is interested in, whether that's 10 or a thousand. I won't dwell on this too much, but it also produces diagnostic plots: over here you see a couple of plots out of the software of within-storm variability, the second-order rainfall properties I mentioned, and then also some diagnostics of the storm catalog itself. Here we're looking at a rectangular domain and 400 or so storms; the dots are the centroids of those storms, and you can see basically where they're occurring, how often they're occurring, and means and standard deviations. It does a few other things as well.
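The cataloging step RainyDay performs can be reduced, at its core, to scanning a gridded archive for the k largest t-hour accumulations, kept apart in time so one storm isn't counted twice. Here is a minimal sketch mimicking that idea (not RainyDay's actual code; the domain-maximum criterion and the 24-hour separation window are simplifying assumptions).

```python
# Hedged sketch of top-k storm identification from a gridded hourly archive.
import numpy as np
import xarray as xr

def build_catalog(rain: xr.DataArray, duration_h: int, k: int, sep_h: int = 24):
    """rain: hourly precipitation with dims ('time', 'y', 'x')."""
    accum = rain.rolling(time=duration_h).sum()       # running t-hour totals
    series = accum.max(dim=["y", "x"]).fillna(0.0)    # domain max per end time
    vals = series.values.copy()
    catalog = []
    for _ in range(k):
        i = int(np.argmax(vals))
        catalog.append((series.time.values[i],        # storm end time
                        float(vals[i]),               # domain-max depth
                        accum.isel(time=i)))          # storm rainfall field
        vals[max(0, i - sep_h): i + sep_h] = -np.inf  # enforce separation
    return catalog
```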
Some aspects of what we've done have inspired bits of what's happening now within FEMA's Future of Flood Risk Data initiative. FEMA has tasked the Army Corps of Engineers Hydrologic Engineering Center, who has in turn sub-tasked Dewberry, to start creating storm catalogs using the AORC precipitation dataset, which is about 40 years long at this point. You can see some screenshots from their very preliminary prototype of this: here's the larger domain from which they're identifying a storm catalog, to support analyses of a smaller watershed here, and in the future they're going to be working toward things like storm typing and supporting different types of analyses around seasonality as well. This is actually going to be a key input for an initial phase of about two billion dollars of floodplain mapping under that initiative, using HEC-HMS specifically. Okay, so I think storm catalogs are great, but there are certainly some limitations around them, and I want to talk about those for a moment. We're really saying these are collections of rainfall events from a region that can support our analyses, but that instantly raises the question: what should that region be? Here in Wisconsin, I think I can learn something about extreme rainfall that could happen in Madison from things that have happened over the border in Minnesota or in Iowa, for example, but I don't think I should be trying to learn anything from what happened in California or Florida. That suggests that some amount of, let's say, homogeneity across the region over which you're assembling these storm catalogs is necessary, but it's really not obvious what that should actually mean: homogeneity with respect to just rainfall? With respect to water vapor transport or other variables? Some work that's been done in other areas, like rainfall frequency analysis, has some relevance there, but I think when it comes to either the probabilistic work that I'm doing or the PMP work the committee is tasked with, some more focused work is needed. We are doing some work related to this for our rainfall and flood frequency work using stochastic storm transposition, and I just wanted to show you two figures from previous papers of ours. Here, in a paper where we were looking at flood frequency for a watershed in northeastern Iowa, our storm catalog covered this rectangular domain. You might ask why it's a rectangle, and the basic answer is that it doesn't matter very much: you could draw a different rectangle, or probably a circle of somewhat different size, and through sensitivity analysis we showed that for our frequency assessments it didn't matter a whole lot. That's fundamentally because, to a first-order approximation, rainfall is roughly homogeneous across that part of the country. Whereas over here, this was a study of the Big Thompson watershed in Colorado, up here, and in this case, in red, we see the transposition domain we used. I'll spare you the details of how we created it, but essentially it follows the Front Range of the Rockies east of the Continental Divide, so we're trying to restrict our storm catalog to only include storms that are going to have, again as a rough approximation anyway, the same orographic enhancement and other features. (Two minutes.) Okay, I should be able to get there. So we're working on developing a hypothesis-testing sort of approach to developing domains, or regions, for storm catalog development, and in the interest of time I'll move ahead to some of these other issues. Storm typing: I mentioned this before. It's relatively easy for certain types of storms that are fairly clearly defined phenomena in the atmosphere, harder for others, although some work has been done. Different types of storms matter a lot for floods at different scales. I'll try to keep this quick, but here we see some work from my PhD where we were looking at a 110-square-kilometer watershed in Charlotte, North Carolina, whose flood frequency was determined almost entirely by tropical cyclones, while within exactly the same watershed, for a smaller sub-catchment, tropical cyclones didn't matter at all for the upper tail. So different types of storms matter a lot at different scales, and of course climate change will affect these different types of storms unevenly. I think we've already heard about data quality issues, so maybe I will skip that, at least for the more modern datasets. As for the paper records, they're limited to older time periods, and they're disconnected and inconsistent in the types and amount of information provided, including whether certain storms in some parts of the country are being left out of those records for one reason or another. So I'll close with five recommendations; this is my last slide. I mentioned that flood response is this complex result of interlocking spatial and temporal scales, and so the fact that storm catalogs can provide you with multiple different storms with different rainfall properties, even if you're not doing a full-blown probabilistic analysis, means some degree of sampling from that catalog can help remove some of the guesswork around what the critical time scales and spatial scales might be.
Second, while I acknowledge some of the limitations that were pointed out about the pre-polarimetric or non-polarimetric data in earlier records, I do think there's a lot of value in these, and we've demonstrated that in a number of our research efforts. Third, I think there's also value in the earlier storm reports, and some of their shortcomings, particularly with respect to inconsistent sampling, are not as important for PMP as they are for flood frequency analysis. Fourth, I do think that if we're going to develop storm catalogs, we need ways of defining what sorts of regions they can be based on, based not only on precipitation properties but also on other atmospheric fields from reanalysis; I think we'll hear some more relevant perspectives on that later. And fifth, I would suggest looking at the related federal efforts going on, primarily through that Future of Flood Risk Data initiative, although on regional homogeneity I suspect that Atlas 14 and 15 have some things to offer as well. So I will stop there. Jim, I think you're muted. Efi. Thanks, Dan, for the very interesting presentation, and thank you for bringing us back to the point that we do not care for PMP just for the sake of PMP; we care about the flooding, and therefore the storm types, the orientation, and the intricate space-time properties are very important. I can have the same storm in a different orientation over the watershed and it can make a big difference. I just wanted to point out that, yes, our task is PMP, but we should not forget all the properties we need to look at for the things we care about, the floods. One thing: do you think, or do you have any evidence, that the homogeneous areas for storm transposition may have major changes, in some cases, under a warming climate?
Yeah, that's a good question. We have not looked at that very closely. Well, I have, a bit, with some longer-term gridded rain gauge records. There are gridded rain gauge data sets that are 75 years long or more, and I've done some split-sample sorts of things, and you can find very big differences in what might otherwise seem reasonable in terms of these homogeneity metrics, although it's not something I've taken far enough to be publishable, for example. It's also difficult to know how much of that really is a climate change signal and how much is just natural variability. So I'm not sure. Two questions queued up: Shih-Chieh and then John E. Yeah, thank you so much for your presentation; it's really interesting. My question is about your experience with AORC, because it's a more recent data set, and I'm wondering whether you can comment on it. In particular, there are some other gridded databases that have been used by the community, although they are at a daily scale, like PRISM, Daymet, and others. So the question is really: what would be your recommendation moving forward? Should we use those gridded databases, or should we always go back to COOP and start from there? Thank you. Yeah, great question. As far as AORC goes, I actually don't have a ton of experience with it. My group is gearing up with a bunch of fresh faces and fresh projects, and we are going to be using AORC going forward. In the past we've concentrated a lot on Stage IV and on NLDAS-2, both of which have some strengths and weaknesses, particularly depending on which part of the country you're looking at, but they both have relatively high temporal resolution, and, depending on which data set you're talking about, spatial resolution too. But the point is, you asked about daily data, and I think it is very important that at some point in this process of going from the atmosphere to the flooding, you have to be able to inject high-resolution information. Meaning, I don't think we can just rely on daily rainfall data to carry us through to flooding. There can be a lot of valuable things to learn from daily rainfall data, but I would really encourage a focus on higher-resolution data when it comes to actually generating the inputs to hydrologic and hydraulic modeling. Thank you. You're welcome. John. Thanks, Dan, for that wonderful presentation, as usual. My question is on the flooding, your first bullet, and the ties to older PMP ideas, especially in the eastern U.S. and HMR 51. The question is: what are your thoughts on mixtures of rainfall signatures to cause those floods? Everybody, hopefully, is aware that HMR 51 uses a composite of all the rain events, including MCSs and convection and cyclones. What are your thoughts on still doing that? Yeah, I must admit I'm not as widely read as you are on some of those things. So when you say mixtures, you're thinking about mixtures on just the rainfall side of things? Just the rainfall. So it uses a single event, essentially, 72 hours, incorporating a six-hour duration, all of these things.
With the western U.S., they broke it out into shorter durations, and some of the newer statewide studies go down this path of mixtures and separating out by type. Yeah, it's a good question. That idea of mixing events together to create some catch-all event gives me some pause; I'm not sure how to do it, and I'm sure a lot of good thought went into it back in the day. My other thought, though, is that I don't think we need to do that anymore, for a couple of reasons. One is that, between the data resources that exist now, these gridded data sets, and particularly the fact that the Army Corps and FEMA will be developing storm catalogs that are ready for HEC-HMS, for example, plus the fact that HEC-HMS and, to some degree or another, other modern hydrologic modeling software have all the tools needed to ingest these sorts of data, there's no reason why we can't be thinking about individual events: if not thousands of runs like my group does, then a dozen or a hundred, which can typically be handled in some batch fashion in modern modeling software. Hopefully I answered that question properly. Great, thanks so much. Let's see. Dan, a follow-up on the high value in AORC and Stage IV, and thinking of Stage IV and stochastic storm transposition, or transposition for PMP: doesn't it essentially just depend on the largest value? And if so, shouldn't we really focus our efforts on getting the largest value right, and is this the right vehicle in that case? Yeah, it's a good question. For PMP, well, how should I say this: that largest value will drive any sort of probabilistic analysis, like stochastic storm transposition, in the extreme upper tail, so that is true. What I would still encourage folks to think about is whether that second-order variability I described, when and where the rain is falling within storms, can still be useful from smaller storm events. There is some diminishment of the importance of that second-order variability as you move to rarer and rarer events, but it by no means goes away, at least as far as I've been able to see. The other point, and maybe we don't have time to get into a larger discussion of this, is that the seasonality issues a storm catalog approach can reveal can also be pretty important, in terms of revealing possibilities around snowpack and soil moisture conditions as well: just understanding how the land surface is lining up seasonally with when those biggest storms might be possible. Good. Let's see, any other questions? Okay, at this point we'll take a five-minute break and reconvene with our second session.

Okay, we should be back up and running. Okay, excellent. Thanks for being patient, everybody, and of course I get all that time back at the end, right? I'm sure. All right. We'll have some more discussion on this as we go through some of these processes, but the bottom line is that in the SPAS catalog development we've been doing over the years, all these aspects are part of the process, with real incremental improvements through time: being able to utilize that storm data for temporal evaluations and spatial evaluations. We've talked already about transposition limits and how important that is in understanding things.
Quantifying uncertainty of the parameters that go into the SPAS process, all of these things, should be part of the storm catalog going forward, and they are part of the process we use now. One of the key aspects of the SPAS process is having a way to incorporate old, pre-radar information with new and really good information utilizing NEXRAD and the polarimetric radar we talked about earlier: putting all those pieces into one database, in a consistent way, that can be utilized for PMP development and other analyses. As part of that process, and this is not our first rodeo, these are all PMP studies we've done around the world, and the only reason I'm showing this slide is for two aspects. One is that we have been fortunate enough to deal with storms, extreme rainfall, and PMP-type events in just about every meteorological setting you can imagine, from equatorial regions to arctic regions and everything in between. Why does that matter? Because it gives us a really good sense of just how important it is to quantify rainfall accumulation in time, space, and magnitude in a consistent way for use in design and in development of PMP, by storm type, by season, and so on, and to see what issues come out of all of those aspects. Every location has unique challenges, unique meteorology, and unique questions that have to be answered for storm characterization and storm transposition limits, and putting all those pieces together into an end-to-end process that actually produces PMP used in design is a critical part of it; having that context and understanding is very important. So having done that work on PMP development all over the world really helps put the SPAS catalog in a place that's usable going forward for these types of discussions.

That was a little bit of background. All of you, hopefully, are familiar with the HMRs and what's gone into them. These are just examples of the storms that were used in the various HMRs: HMR 51, HMR 55A, and then of course 57 and 59. Because these are all storm-based approaches, there really was a dearth of information available for a lot of those studies: very much a lack of storm data in the coastal mountains and central areas of California, in the eastern parts of Washington and Oregon, and at high elevations of the Rockies. To think that we could come up with the information we did in those HMRs is pretty incredible; there were some very smart people doing a lot of great work with limited data. Of course, now we have so much more information and so many more ways to incorporate it: reanalysis data, NEXRAD weather radar data, the work that Dan's group was doing at Wisconsin, all put together into one big database. Our part of the puzzle is the SPAS database. We've analyzed nearly a thousand storms over the last 20 years, since 2002, and the point is that we've done it in a consistent manner the whole time. There have, of course, been improvements through time, incorporating NEXRAD weather radar, for example, and storm typing and so on, but that's part of the big catalog and database that needs to be built. So we want to answer the questions of what's next and what's going forward. Well, the first thing is that we can't really know what's next until we know what we've already done.
So, for quick reference: the SPAS process started in 2002, and we have a consistent algorithm that we use to analyze rainfall. The key point is that we utilize all of the above. There have been a lot of discussions today about which data sets are best, how to use each one, and how they work individually. We're agnostic; we want all the data, and when we take it in, we want to use the best of each data source and put that together as one type of output. So we're using observational data; we're doing our own bucket surveys at times; we're using NEXRAD weather radar that's been dynamically adjusted on an hourly basis; we're using model reanalysis data, when available, to help distribute things; and we're using satellite remote sensing information to help identify storms and spatially distribute processes. Everything gets thrown into the bucket, and to me that's a key component of any storm catalog going forward: use the best of all the data sets that have been and will be talked about. One key component that's so important for PMP development is to understand the limitations of each piece and how those affect the outcomes. When you talk about reanalysis or AORC, those are great for identifying individual events, or seasonality, or just general conditions across a region, but they're not very good when it comes to the explicit accuracy you might need for PMP design and evaluation, when you're designing for critical infrastructure. You have to understand that. NEXRAD weather radar, or the polarimetric radar we talked about before, with uncorrected Z-R relationships, doesn't do a great job, especially with extreme rainfall; when it has not been properly calibrated and adjusted to observational data, you're going to have issues with the outcomes. Unfortunately, observational data also has its own issues and uncertainty; there's no perfect metric out there, so you have to use the best of all worlds to come up with an answer. This is just an example, on the right side here, of a single hour of accumulation: the blue line is the default Z-R relationship for this particular storm event, which is actually from September 2009 in Georgia, where we had extreme flooding and a couple of dam failures. The blue dots are the actual observational data, and the black line is an exponential best fit of those data. You can see the difference, for that single reflectivity scan, between the two relationships, and if you carry that over the entire storm time frame, you have a huge difference in the accumulation implied by the standard Z-R relationship versus a SPAS-corrected Z-R relationship. You can imagine what difference that makes in the magnitude of rainfall accumulation. So that's just one example of how the SPAS process corrects for that and takes all the data and information available to come up with a much more accurate data set.
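To make the Z-R adjustment idea concrete, here is a minimal sketch of fitting a storm-specific Z-R relationship to collocated gauge and radar data and comparing it against the default NEXRAD convention Z = 300 R^1.4. The data are synthetic and the procedure simplified; the actual SPAS adjustment also spreads hourly bias corrections spatially across the whole domain.

```python
import numpy as np

# Synthetic collocated gauge/radar pairs for one hour (illustrative only).
rng = np.random.default_rng(0)
R_gauge = rng.gamma(2.0, 10.0, size=60)              # gauge rain rate, mm/h
Z_true = 250.0 * R_gauge ** 1.3                      # assumed "true" storm Z-R
dBZ = 10 * np.log10(Z_true) + rng.normal(0, 1, 60)   # noisy observed reflectivity

# Default NWS convention: Z = 300 * R^1.4  =>  R = (Z / 300)^(1 / 1.4)
Z_lin = 10 ** (dBZ / 10)
R_default = (Z_lin / 300.0) ** (1 / 1.4)

# Storm-specific fit: regress log10(Z) on log10(R) to get Z = a * R^b,
# then invert it, analogous to the hourly adjustment described above.
b, log_a = np.polyfit(np.log10(R_gauge), np.log10(Z_lin), 1)
a = 10 ** log_a
R_fitted = (Z_lin / a) ** (1 / b)

print(f"fitted Z-R: Z = {a:.0f} * R^{b:.2f}")
print(f"mean abs error, default Z-R: {np.abs(R_default - R_gauge).mean():.2f} mm/h")
print(f"mean abs error, fitted  Z-R: {np.abs(R_fitted - R_gauge).mean():.2f} mm/h")
```

Even this toy version shows how a default relationship can systematically misestimate rain rates for a storm whose drop-size characteristics differ from the convention.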
Now, we can say that and wave our hands all we want, but we still have to prove that the SPAS process works, and if we're going to use it in a storm catalog type of analysis, we have to have proof of concept. So where did SPAS come from? Like most good science, it came out of necessity. When we started doing these PMP studies, we had to have a way to utilize the DADs, the depth-area-duration data, from the HMRs together with updated information, and to produce depth-area-duration information for PMP development. So it was built following the guidance and processes described in Technical Paper No. 1, from 1946, and onward from the Bureau of Reclamation, the Army Corps of Engineers, and so on, in producing their DADs, and then we combined that with current state-of-the-science information and processes: GIS, computing power, NEXRAD weather radar, and so on. We tried to make sure the processes developed were consistent between the old and the new, so we could continue to use the old data with the new information. We prove that through functionality testing, where we have a known answer and a known process, and we can see how the SPAS process computes those known answers through time, how it degrades, and whether it is doing exactly what it's programmed to do; that went through an NRC verification and validation process back in 2017. And then, of course, there's comparing SPAS output and DADs to older storms. One of the first things we did, way back in 2002 and 2003, was to take the previously published DAD for the 1955 Westfield, Massachusetts storm, from Hurricane Diane, I believe, and compare it to a SPAS DAD, to see what the differences were in space and time and to make sure we had consistency in the outcomes. We've now done that for hundreds of storms where we have both previous DADs and SPAS DADs, to make sure we have the consistency needed for usability going forward.
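For readers unfamiliar with DADs, a crude depth-area-duration computation from a gridded storm might look like the sketch below. It approximates the deepest area of a given size by sorting cells rather than requiring contiguity, which real DAD procedures do, so treat it as a rough upper bound; all numbers and grids are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy gridded storm: hourly rainfall (mm) on a grid, 72 hours of data.
hours, ny, nx = 72, 50, 50
cell_km2 = 16.0                                  # roughly 4 km grid cells
rain = rng.gamma(0.3, 8.0, size=(hours, ny, nx))

def dad_table(rain, durations, areas_km2, cell_km2):
    """Crude depth-area-duration table: for each duration, find each cell's
    deepest running-sum window, then average the deepest cells up to each
    area. (Real DADs use contiguous areas around the storm center; sorting
    cells is only a rough upper-bound approximation.)"""
    out = {}
    csum = np.cumsum(rain, axis=0)
    for d in durations:
        # Accumulations over every d-hour window, maximized per cell.
        windows = csum[d:] - csum[:-d] if d < len(rain) else csum[-1:]
        depth = np.concatenate([csum[d - 1:d], windows]).max(axis=0)
        ranked = np.sort(depth.ravel())[::-1]
        for a in areas_km2:
            k = max(1, int(a / cell_km2))
            out[(d, a)] = ranked[:k].mean()
    return out

table = dad_table(rain, durations=(6, 24, 72), areas_km2=(16, 160, 1600),
                  cell_km2=cell_km2)
for (d, a), depth in sorted(table.items()):
    print(f"{d:3d} h, {a:6.0f} km^2 : {depth:6.1f} mm")
```

Comparing a table like this, computed from a modern gridded analysis, against the DAD published for the same storm decades ago is essentially the consistency check described above.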
So, going forward, the questions become: how can we continue to improve this process? How can we utilize all those SPAS-analyzed storms that have already been used for PMP development, precipitation frequency development, model calibration, and so on, and continue to build on them? How can we integrate the other data sets that have been and will be talked about today to make one overall data set that's usable for everybody and produces outputs accurate and reliable enough to use in PMP design for critical and high-hazard infrastructure? Well, there are many things we've already talked about. You can't have too many quality observations, especially hourly and sub-hourly. We talked a lot about daily data, but there are many basins around the country whose PMF, whose flood, responds on a sub-daily and even sub-hourly basis, so we have to have good information at those levels. NEXRAD weather radar does a great job of helping with spatial information and timing, down to five minutes, but there are other issues with it, as I showed: when it's uncorrected we have a lot of problems, and there's beam blockage, radar coverage issues, and so on. Then we have coverage from satellite remote sensing, which has its own issues in magnitude and timing. So the value of overcoming those challenges by having more observations on the ground cannot be overstated. Consistency in the way that precipitation information is processed and analyzed, and consistency of the outputs, is critical. Then accuracy and quantification of uncertainty: just think about how much uncertainty there is in what we consider a great rain gauge observation, in very windy conditions, or when it's affected by frozen precipitation or hail, and then how that gets carried through the entire data set. That's why you want to use all the different pieces of information to best quantify the magnitude of accumulation, not rely on one aspect alone; getting better quantification of the uncertainty of the rain gauge observations themselves is ongoing research that should continue, so we can understand and quantify it going forward. There's been a lot of work in the past to quantify and identify extreme rainfall events, and that needs to continue so we can have consistency and utilize those old events; they're very important for setting the level of PMP throughout the country. And then, of course, that database needs to be accessible and updatable going forward, so new storms and information can continue to be added and adjustments can be made. Transposition limits, as discussed earlier, are still a key unknown. I really don't know how we're going to answer that; it's very subjective and varies by storm type, season, topography, and other aspects, but having that storm catalog will certainly put us in the right place to answer those transposition questions. So, what's next? Really, all the things this committee and the people we've been talking about are doing. There's so much great work out there; hopefully it can be integrated, taking the best components of each piece to continue to improve the process. We've done a ton of work to develop storms and storm information directly for PMP development, but we certainly don't pretend to know all the answers or have perfect information, so let's get all those pieces together, from the modeling side, the reanalysis side, and the observational side, using radar data and satellite remote sensing, and take the best of all of them to answer these questions and come up with a storm catalog that really answers the bell and can continue to be updated through time. Anyway, hopefully that helps. Obviously 15 minutes isn't a lot of time to get into the background detail, so we'll be happy to answer questions on the SPAS process and how it's been developed and used for PMP through time. Thank you for your attention. Thank you, Bill. Let's see, as questions queue up, let me just start with one on the rainfall processing: the dynamic adjustment of the radar rainfall estimates at the hourly timescale. Can you talk a little more about what the key issues are in effectively carrying out adjustments at that short time scale? Yeah. Obviously, the bias correction, the adjustment that drives the best-fit Z-R equation, is going to be based on the number of observations you have for any given radar scan or any given hour. In any given hour, over the entire SPAS domain, we take in all the observational information we have, and we ask: how much rainfall is falling at each of those points, how does that relate to the reflectivity over the same point, what bias correction needs to be applied at that point, and how should it be applied spatially throughout the entire domain to get a best fit among all those points? Of course, the problem is a couple of things.
One, we're assuming that the rain gauge observation at that given hour is accurate, that it is ground truth. Two, we're assuming that the radar reflectivity over that rain gauge is reflective of what happened on the ground right there. Obviously, depending on the cloud base, the wind, and so on, the reflectivity grid above that observation may actually correspond to rainfall upstream or downstream of where it was measured on the ground. But we know those things; you recognize them, and you try to overcome them through the power of big data: lots of observations and good statistical fits throughout the entire domain, every hour, making those corrections. So that's the process: using as many observations as possible, over a large domain, over the entire storm event, and making a correction every hour that fits the observational data. Great. One additional question along those lines, veering into application of the storm catalog: what are the key lessons you've learned about how to objectively carry out storm transposition? And then we have a question from Effie to follow up. Yeah. Storm transpositioning is obviously one of the biggest things we deal with in all the PMP studies we do, and it's an iterative process. There's a feel, as a meteorologist, for understanding the storm type, the seasonality, the interactions of topography, coastal convergence, and other factors that come into how a storm developed, what it looked like in its overall synoptic characteristics, and how it fits the storm types of the region you're looking at. Then you look at the data analysis, the results, when you move those storms around. For example, let's say we're doing a study of the state of Pennsylvania. Jim, you know this very well: the uniqueness of the incredible rainfall events that happen in the Appalachians, such as Smethport, such as June of 1995, such as Redbank in 1996. There's a very particular combination of topography, moisture availability, and atmospheric dynamics in the north-central and central Appalachians that causes these extreme localized rainfalls. You have to ask yourself: where else in the region could that type of event happen, with that same combination? That's a subjective question, but you can look at what's happened around the region to figure it out: the differences in topography, access to moisture, seasonality, synoptic conditions, and so on. Then, as you're doing a PMP study, you put those pieces together to see how they fit, and if you're seeing unnatural meteorological gradients across the region, you shouldn't ignore that. It's probably telling you that you've moved a storm too far, or not far enough, or that you simply haven't yet observed a storm in your database to fill in that gap, and you have to make those kinds of decisions. The whole point of the storm catalog process is to have enough information, to have identified enough storms, to fill in those gaps of subjectivity and unknowns, so you don't have to move storms around further than necessary: you have enough data in a given region.
You have data you feel confident is usable for that region to drive your PMP estimates, and then, of course, you can peel back the layer of conservatism that you may have applied by moving storms further than they should have gone. Thanks. Effie. Hi, Bill. Thanks a lot for the very interesting presentation. You emphasized using the best of all data sets, which I completely agree with, and you showed some impressive examples of the uncertainties in the Z-R relationship and the point gauge observations. But I missed a little what approaches you use to propagate all these uncertainties into the uncertainty quantification of the PMP. You see what I mean? Yes, we have all the observations, we have their uncertainties, we have the transposition, et cetera, but at the end of the day, how do I put all this together? It's not trivial. Yeah, that's a great question, and obviously today I wasn't able to show that, but based on the Micovic paper from 2015 we have developed a whole process to quantify the range of uncertainty of each of the components that go into PMP development. So we have a range of uncertainty for the rain gauge observations, the SPAS analysis, the dew point climatology, the storm transpositioning, the storm efficiency, et cetera. We analyze each of those individual components of PMP development to come up with a range of potential outcomes, and then you put them all together as one overall uncertainty distribution. Say our deterministic best estimate of PMP is a depth of 20 inches in 24 hours at a given basin; if you put all those ranges of uncertainty together into one histogram, the result could fall anywhere between, say, 15 inches and 35 inches. So where does your best estimate fall within that overall range of uncertainty? We do that in all of our studies now, because it's very informative to understand that each of these components of PMP development has its own range of uncertainty.
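A minimal sketch of the kind of Monte Carlo combination Bill describes: each component of the PMP computation gets its own uncertainty distribution, and their product yields a histogram around the deterministic best estimate. The distributions and spreads below are invented purely for illustration; the actual components and ranges would come from an analysis like the Micovic et al. (2015) framework he cites.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Deterministic best estimate of 24-hour basin PMP (illustrative): 20 in.
pmp_best = 20.0

# Hypothetical multiplicative uncertainty factors for each component
# (all distributions and widths are made up for this sketch).
gauge     = rng.normal(1.00, 0.05, n)                # rain gauge observation error
spas      = rng.normal(1.00, 0.07, n)                # storm analysis (SPAS) error
dewpoint  = rng.normal(1.00, 0.08, n)                # dew point climatology / maximization
transpose = rng.triangular(0.85, 1.00, 1.25, n)      # storm transposition judgment

# Combine: assume independent multiplicative factors on the best estimate.
pmp_samples = pmp_best * gauge * spas * dewpoint * transpose

lo, hi = np.quantile(pmp_samples, [0.05, 0.95])
print(f"best estimate: {pmp_best:.1f} in")
print(f"90% range    : {lo:.1f} - {hi:.1f} in")
```

The independence assumption is itself a modeling choice; correlated components would widen or narrow the histogram, which is part of what makes the propagation nontrivial.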
Yes, and the SPAS process is just one of those components; how they all fit together we can discuss another time, because, exactly right, I was going to say, propagating only the uncertainty of the observations is not enough. It's a probabilistic concept. In the storm transposition, for example, you could attach a probability to transposing it. But we'll discuss another time. Thank you. That's right, that's right, and I'll just add to that. We've got a question from Chris, but sure, let's move on. I guess, just briefly, to follow up on that: I need to look at the Micovic paper, but do you have anything written up that describes your characterization of uncertainty that we could take a look at? We do; I will share that with you. We have a couple of real-world examples of actual PMP development where we applied it and where it led to a change in the PMP estimate based on the outcomes. Okay, thanks. And then, with regard to the question about combining data sets, I should say, for context, that I'm just getting up to speed on how all this is done; as a statistician, I'm seeing this for the first time. I agree that it makes sense to try to use the best available data, but it also seems like that raises the danger of very bespoke analyses for each individual case. So I'm not quite sure what my question is, except to ask: is the idea that you're developing your own storm catalog, where you're doing very careful analysis of each individual event that you look at, and for each individual event you're trying to bring in and make use of the best data available for that event? Is that the bespoke nature of what you're doing here? Yeah, you've characterized that well. But we still have the issue of trying to understand what we consider ground truth. Right now we consider rain gauge observations to be the most accurate available data set, but even those have their own uncertainty and issues. We have to have something we're ground-truthing to as we make these adjustments and try to bring in the best pieces from other data sets to come up with a final picture. Okay, thanks; that helps. Great. Thanks, Bill, and with that we'll move on to our next presentation, from Laura Slivinski of NOAA. Laura.

All right. Well, thank you for having me. I come from the NOAA Physical Sciences Laboratory. I'm not a PMP expert, but I was one of the co-leads on the 20th Century Reanalysis, so today I'm going to try to give a perspective on reanalysis and answer as many of the questions as I can. The two questions I was posed are these. First, are reanalysis fields, and in particular the 20th Century Reanalysis, suitable for historical reconstructions of storm environments for PMP-magnitude storms? My answer, which feels like a cop-out but is true, is that they can be, but this is a research question; it's not a yes or no, and I'll go into my take on it. The second question is how the suitability of reanalysis data changes over time, and the short answer is that the accuracy of reanalysis data really depends on the observing network at the time you're interested in. I'll also talk a little about trend studies, and when and how reanalyses can be used for them; basically, they can, sometimes, if you're careful. Just to provide some fundamentals on reanalysis for those who are not experts: reanalyses provide a consistent gridded record of weather and climate by assimilating historical observations into a modern weather forecast model. To achieve consistency, and here I mean consistency in time of the reanalysis data set, we fix a forecast model, our weather prediction model, and we fix a data assimilation algorithm; to some degree, you can also fix the observing network to make your reanalysis even more consistent. To that end, I'll mention two terms I'll use a few times in this talk: full-input and sparse-input reanalyses. Full-input reanalyses include ERA-Interim, ERA5, MERRA, and MERRA-2. These assimilate most observations that are available, in situ as well as satellite, upper-air, and aircraft. They generally only cover the latter half of the 20th century, to avoid spurious trends and signals that can arise from significant changes in the observing system: if you tried to do a 100- or 150-year reanalysis and added all the observations as they came online, you'd see some pretty intense trends that are not correct. Even so, these reanalyses can still be impacted by a single instrument coming online, and I'll show an example of that happening later. On the other side of the spectrum, we have sparse-input reanalyses. These include the 20th Century Reanalysis versions 2c and 3, the latter being what I co-led, as well as the European Centre's ERA-20C, and these assimilate only surface observations.
The 20th Century Reanalysis assimilates only surface pressure; the European reanalysis also assimilates marine winds. Because we're not trying to assimilate satellite or upper-air data, we can extend 100 years or even further into the past without as much impact on the estimates from changes in the observing network. A little more detail on the 20th Century Reanalysis: again, we assimilate only surface pressure observations, but we use a modern weather model to spread the information from those observations in a physically meaningful way. Version 3 of the reanalysis is on roughly a three-quarter-degree grid, about 75 kilometers, and is available every three hours from 1806 to 2015. There are efforts underway to extend it further, and as we develop future versions, keeping it more closely up to date is something we're going to consider. I skipped over this, but I should also mention that we prescribe sea surface temperatures, sea ice concentration, and radiative forcings; that's because we're an atmospheric reanalysis, whereas in a coupled ocean-atmosphere reanalysis the sea surface temperature would be handled differently. I'm not going to go into the details of the data assimilation method, except to mention that the algorithm we use has 80 ensemble members to quantify uncertainty. I think this is really important, and I'll touch on it a few times: having that large ensemble to quantify the uncertainty can be really useful. As an example, on the left here are the synoptic conditions from the great blizzard of March 1888. You can see the outline of the US in the top left. The contours show the ensemble-mean sea level pressure from our reanalysis, and you can see the storm here. The teal dots are all the surface pressure observations we assimilated, and the shading shows our measure of confidence: in the yellow and white colors we're more confident, and you can tell that we're more confident where we have more observations and less confident, as over the Pacific Ocean, where we have fewer. The top right panel is the 500-millibar geopotential height and its confidence; the bottom left shows the ensemble-mean 2-meter air temperature, and the bottom right the precipitation, showing that we are getting the cold and the wet. Finally, our data is publicly available, and this figure is also at that website. So, getting to the questions I was asked, on the suitability of reanalysis for reconstructing storm environments: the first thing I'll say is that a full-input reanalysis like ERA5 might be more accurate for reconstructing individual storms, because it assimilates most available observations. I'll also mention that ERA5 is now available back to 1940, as of just a couple of weeks ago, I think, and it's available hourly at quarter-degree resolution in the atmosphere. If you want to go earlier than 1940, or you want a longer sample in time, you would need a sparse-input reanalysis like 20CR, and you may want to use downscaling techniques to increase the resolution, as in Mahoney et al. (2022), which I think we'll hear more about in the next talk. Again, ensemble-based methods like what 20CR uses are key for measuring uncertainty and confidence in its estimates. Ideally, individual ensemble members should be used for nonlinear calculations, since the mean of a nonlinear function is not the same as the nonlinear function of the mean, and I think Mahoney et al. also noted that you get less extreme extremes if you use the ensemble mean rather than individual members.
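Laura's point about nonlinear statistics is easy to demonstrate numerically. In the sketch below, with synthetic data standing in for ensemble precipitation, the maximum of the ensemble-mean series is far smaller than the typical maximum computed member by member.

```python
import numpy as np

rng = np.random.default_rng(3)

# 80 ensemble members of (say) 6-hourly precipitation at one grid point.
members = rng.gamma(0.4, 12.0, size=(80, 240))   # 80 members x 240 time steps

# Nonlinear statistic: the maximum 6-hourly amount over the period.
max_of_members = members.max(axis=1)             # one maximum per member
max_of_mean = members.mean(axis=0).max()         # maximum of the ensemble mean

print(f"mean of member maxima: {max_of_members.mean():.1f} mm")
print(f"max of ensemble mean : {max_of_mean:.1f} mm")
# The ensemble-mean series is much smoother, so its maximum is far smaller:
# extremes must be computed per member, then summarized across members.
```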
One thing I do need to mention is model biases. Because reanalyses combine models with observations, they're not free from some of the biases in the models. What I'm showing here is a figure from the 2021 paper on 20CRv3 showing some of the biases in precipitation. On the left are 20CRv3 biases relative to GPCP, which is a satellite-station blend, and, on the bottom, relative to CRU TS, which is a station-based reconstruction. You can see that there are biases against both and that the signs don't always match; in other words, the 20CR estimate lies in between the two instrumental reconstructions. On the right I show the same biases for ERA5, and we see somewhat similar magnitudes, actually, and also differing signs: the sign of ERA5's bias is not always the same relative to the two reference data sets. So biases are something we can't completely get away from, and they really need to be kept in mind. Despite that, I do want to point out that 20CR captures variability in precipitation surprisingly well in certain cases. On the top here is a time series of January precipitation over the western US, all the way from 1836 out to 2015; the red curve is 20CRv3, and the correlations with the station and satellite reconstructions in the 20th and 21st centuries are all above 0.9, so we're capturing that January variability quite well. Less so in July, which is something we expected: the summer estimates would not be as good. The other thing I want to point out in this figure is our uncertainty estimate, based on our ensemble spread: it increases further back in time, as you have fewer observations, exactly as you'd expect. That's the shading, quite wide in the 19th century and much narrower in the 20th and 21st centuries. I also said I'd mention the suitability of reanalysis for trend analysis, and here I want to give a cautionary tale of a spurious trend that was seen in MERRA. This is precipitation from MERRA, the red curve on the bottom left, and at this orange vertical line you see that not only does the magnitude of precipitation change, but the variability changes pretty drastically too. What actually happened is that in 1998 the ATOVS instruments started being assimilated, and potentially what happened is that MERRA's model had a dry bias that was effectively corrected once those instruments came online. So you see this pretty bad signal in MERRA if you're trying to analyze precipitation from that data set. 20CRv3 is in blue: it doesn't have that trend, because it never assimilates any radiances.
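A simple first screen for the kind of observing-system discontinuity Laura describes might compare the series before and after a known instrument date, as in the sketch below. The series is synthetic, and a real analysis would use a formal change-point method and account for autocorrelation and seasonality rather than a bare two-sample statistic.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative monthly precipitation series, 1980-2015, with an artificial
# jump inserted in 1998 to mimic an observing-system change (like ATOVS).
years = np.arange(1980, 2016, 1 / 12)
precip = rng.gamma(4.0, 20.0, size=years.size)
precip[years >= 1998] += 25.0                    # the spurious shift

# Compare means before/after the known instrument date with a Welch-style
# two-sample t statistic.
before, after = precip[years < 1998], precip[years >= 1998]
pooled_se = np.sqrt(before.var(ddof=1) / before.size +
                    after.var(ddof=1) / after.size)
t_stat = (after.mean() - before.mean()) / pooled_se
print(f"mean before: {before.mean():.1f}   mean after: {after.mean():.1f}")
print(f"Welch t statistic for a shift at 1998: {t_stat:.1f}")
```

A large statistic at a date that coincides with a known assimilation change is a red flag that the trend is an artifact of the observing system rather than of the climate.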
Even though that's a cautionary tale, it's something that producers of reanalyses are always looking out for and trying to fix; ERA5 and MERRA-2 don't have this particular issue, because the producers became aware of it and bias-corrected the model and the observations. But that's not to say there aren't other issues. So, coming back to my answer that this is a research question: there are a lot of things to keep an eye on as you're trying to use these data sets. To return to the questions: are reanalysis fields suitable for historical reconstructions? Again, they can be, but it's a research question. In general, 20CR captures precipitation variability well, though there are biases, and 20CR fields have been used for this purpose with downscaling methods, with results that depend on the storm situation. For some storms it can work pretty well; for others maybe it doesn't, and the ensemble method for quantifying uncertainty can be really useful there. Does the suitability of reanalysis data change over time, and how? Again, the accuracy depends on the observing network at the time. In more recent years, with more observations, your reanalysis should be more accurate; going back to the 19th century, it will be less accurate, but where there are observations you can assess even the spatial quality of your uncertainty using ensemble-based methods. Finally, on trend studies, I gave the cautionary tale: significant observing-network changes can lead to spurious trends and discontinuities. The benefit of 20CR is that it only ever assimilates one type of observation, so it's not as subject to those discontinuities. I kind of raced through that. Well, we'll see if we can get Laura back online, but her presentation does queue up the next presentation, from Gary Lackmann. So why don't we move into the next presentation and save the questions for Gary and for Laura until the end. Gary. Okay, can you hear me all right? Yes. Okay, great. Well, thanks for the opportunity; I'm really happy to be able to speak to the committee, and I'll be talking more about downscaling and reanalysis, as Jim mentioned. Just a little about me, since I don't know all of you: I'm a dyed-in-the-wool atmospheric scientist, not a hydrologist, so I'm just learning about PMP, but I study synoptic and mesoscale atmospheric dynamics and weather modeling and prediction, and for many years I've studied the role of moist dynamics in weather systems. It was a natural extension to take that work into the climate change and extreme weather realm, and I've looked at a lot of these synoptic-scale weather systems in that context. I also want to point out some of the current and former students whose work I'll be sharing here: Allison Michaelis is now at Northern Illinois, Katie Hollinger is a current PhD student, and Chun Young Jung is at Argonne National Lab. Gary, we're not seeing your screen. Oh, thanks for pointing that out. Let's try; are you seeing it now? Yes. Okay, great. Let me hide the video controls. Sorry about that. So, what got me into this in the first place: again, the blizzard of 1888. Allison Michaelis was an undergraduate student who wanted to do a project on snowstorms, and the 20th Century Reanalysis had just come out, so we wondered: could we use it to initialize the Weather Research and Forecasting model, look at the dynamics, and look at things no one had ever seen, like simulated radar for this storm? Somewhat to our surprise, and this was using version 2 of the 20th Century Reanalysis, we were able to capture the storm.
It even had an area of accumulation that exceeded 40 inches. It wasn't in exactly the right place, but considering that we initialized with the ensemble mean, we were pretty pleased with it, and we published a paper in GRL presenting the results; it was a proof-of-concept type of study. Now, on to my questions. Another former NC State student, Kelly Mahoney, and I'm really impressed by all that she's doing in this area, published this "blast from the past" paper, and that led to the question: are downscaling simulations of extreme rainfall events using these reanalyses useful for the reconstruction of PMP-type storms? I think yes. It is a research question, as Laura indicated, but I also think there's an opportunity to more directly account for climate change, and I'll elaborate on that in a minute. My second question was: what are the settings where these downscaling reconstructions will have the most potential utility for enhancing the storm catalogs? I think there are opportunities in many settings. I asked for clarification on whether we meant meteorological versus geographical or temporal settings. For meteorological settings, as the Mahoney et al. paper demonstrated, if you have well-organized large-scale forcing for a cool-season event, you'll have much better model outcomes than for localized convective summertime events, though a lot of times those warm-season events, when there's more vapor, are exactly the PMP-type storms. For geographical settings, I think complex terrain, where you benefit more from model resolution, is where downscaling can help, and also locations where you don't have as good a period of record, where these reanalyses could add more benefit. I'll expound on these things as we go. A little about downscaling, since it hasn't been talked about too much. We all know that GCMs just don't have the resolution; I would say that GCMs are not tropical-cyclone-allowing, and in the southeast and eastern US, so many of the PMP-type events are tropical-cyclone-related. The HighResMIP simulations are just barely getting there, but you're still not getting full-strength storms. To address this, my former student, now Northern Illinois professor, Allison Michaelis did some really nice, computationally demanding time-slice simulations with the Model for Prediction Across Scales (MPAS), on a 15-kilometer Northern Hemisphere mesh, and that could simulate full-strength tropical storms. There's also interesting work that's been done at NCAR with a long-term, large-domain pseudo-global-warming, or PGW, experiment; many, many papers have been published using that dataset. It was a 4-kilometer grid mesh, a 13-year present day with a 13-year future counterpart, and you can use it to look at changes in storm character and the like. This PGW method we've used a lot for individual case studies. It's sort of transposition in time, if you will, and it goes back to the mid-1990s with Christoph Schär at ETH; several groups in Japan have done this as well. With this method you can get a fully consistent dynamical replication of an event, and you can study the physical processes, how the event changes, and why. And then, of course, there are many statistical downscaling methods, which I won't get into today. Just a little more about pseudo-global warming, which is consistent with the storyline framing of Shepherd (2016) and the "tales of future weather" of Hazeleger et al.
You take your best reanalyses or analyses and simulate an extreme event; for example, here's Hurricane Sandy. You run an ensemble and compare it to observations, and then you can use historical or future projections to calculate a delta, apply that to the initial and boundary condition data, and re-simulate the event in a different thermodynamic environment. You can go either way: you can simulate a past event or a future event, and you could use this to take a historical event and bring it into the present day. You're accounting for the thermodynamic changes, not fully accounting for larger-scale circulation changes, but you're asking: if this storm were to happen in this different thermodynamic environment, how would its structure, characteristics, and impacts change? For Hurricane Sandy we did this, and the present-day ensemble worked out pretty well compared to observations. We did a historical Sandy, because I was getting lots of calls from reporters asking how much of this was climate change, so we dialed it back: using the historical GCM runs, we calculated a delta and simulated a pre-industrial Sandy. We also went forward and did a future Sandy, using the CMIP3 GCMs and a high-end emission scenario. Again, we apply the temperature delta and hold relative humidity constant; that way, your moisture delta has synoptic-scale structure, because you get more vapor increase where you're warmer and less where you're colder. So we were able to simulate the storm in different environments; we published a Bulletin paper, and the group at Connecticut looked at the implications this would have for power grid impacts, for example. Because you have a full ensemble of physically consistent model simulations, you can analyze the changes and their causes. It's a way of looking at the same event with transposition in time, if you will, using the PMP nomenclature, but again there are limitations, because you're not capturing changes due to larger-scale shifts, for example in the jet or the storm tracks. We've done this for a variety of types of events, some idealized, some actual case studies, and the nice thing is that you can look at precipitation rates at high temporal and spatial resolution, and you can look at compound hazards, the combination of wind and precipitation. We're currently doing some sequential-storm analysis in the Appalachians with Frances and Ivan in 2004. The results are consistent and realistic to the extent that the model's governing equations and physics parameterizations are realistic, so the comparison to observations for the present-day cases is critical. But there are limitations, and one question that comes up is this: if you're picking individual case studies and looking for the future-climate version of them, what if different future patterns produce events that are even more extreme than the historical cases? I think this is relevant for PMP as well.
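The thermodynamic core of the PGW delta Gary describes, warming the environment while holding relative humidity fixed, can be sketched in a few lines. The Bolton (1980) saturation vapor pressure approximation is used here; a real PGW experiment perturbs full three-dimensional initial and boundary conditions rather than a single level, so this is only the single-point arithmetic.

```python
import numpy as np

def sat_vapor_pressure_hpa(t_c):
    """Saturation vapor pressure (hPa), Bolton (1980) approximation."""
    return 6.112 * np.exp(17.67 * t_c / (t_c + 243.5))

def pgw_moisture(t_c, q_kgkg, p_hpa, delta_t):
    """Warm (or cool) one level by delta_t while holding relative humidity
    fixed, returning the adjusted specific humidity, as in the PGW delta
    approach (a sketch of the thermodynamics, not a full PGW workflow)."""
    e = q_kgkg * p_hpa / (0.622 + 0.378 * q_kgkg)     # vapor pressure from q
    rh = e / sat_vapor_pressure_hpa(t_c)              # relative humidity
    t_new = t_c + delta_t
    e_new = rh * sat_vapor_pressure_hpa(t_new)        # same RH, warmer air
    return 0.622 * e_new / (p_hpa - 0.378 * e_new)

# Example: near-surface air at 25 C, 1000 hPa, q = 15 g/kg, warmed by 3 K.
q0 = 0.015
q1 = pgw_moisture(25.0, q0, 1000.0, 3.0)
print(f"moisture increase: {100 * (q1 / q0 - 1):.1f}%")   # ~+20%, i.e. 6-7%/K
```

Because the delta field varies in space, warmer regions gain more vapor than cooler ones, which is exactly the synoptic-scale structure in the moisture delta that the talk mentions.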
You can use other methods to get at that question, for example high-resolution time-slice simulations that try to sample a range of natural variability. You can avoid the pseudo-global-warming boundary-condition issues, account more for large-scale circulation changes, and look at questions such as frequency and representativeness; you can basically simulate what the future climate would look like at higher resolution, without the expense of a full-blown GCM, because you're just looking at a short period of time. Allison Michaelis did this for her PhD dissertation using the MPAS model with a 15-kilometer grid in the Northern Hemisphere, not quite convection-allowing but certainly tropical-cyclone-allowing. We haven't fully analyzed the precipitation data from that, but you can do a sort of model-climate PMP by looking at the maximum six-hourly precipitation rate for the present-day simulations, the future simulations, and the difference fields. If we had a bigger model run, with a longer integration period and, say, an ensemble, you could really build a complete storm catalog using a method like this. There's also a current project we have here with the North Carolina Department of Transportation; Ken Kunkel is also involved, and it's led by Jared Bowden and Kathie Dello, the state climatologist. I see a lot of parallels with the NOAA PMP effort, which is why I wanted to bring it up. They're looking at federal highway requirements for downscaling, they're asking us for design storms, or stress-test storms, and they're interested in combining this with the statistical downscaling they're using: taking the gridded precipitation data from the model simulations, running it through HEC-RAS and their hydrologic models to look at inundation, and trying to find ways to improve the resilience of transportation infrastructure. So I see potential coordination with that effort. As part of it, Katie Hollinger did these PGW simulations for Hurricane Matthew in 2016. Here's the Stage IV analysis, and here's the present-day simulated ensemble mean; we usually use the probability-matched mean, but this is the ensemble mean, from a 4-kilometer Weather Research and Forecasting (WRF) ensemble. Then we can look at things like the histograms of rain rates for the present and future versions of the storm, and the difference histograms, and we can do heat maps of rain rates exceeding certain thresholds; these are the kinds of things the DOT was interested in seeing, and in how they change. Katie Hollinger is also working on the hydrodynamics, okay, thank you, with Antonia Sebastian, a new professor at UNC Chapel Hill, who has expertise in how land use affects runoff, flooding, and inundation. She was on the attribution study of Hurricane Harvey, looking at how urban land use in Houston and the bayou configuration affected the flooding there. I've been learning a lot, because that's not my typical area, but Katie has been working with Tony and learning about what happens when the rain hits the ground, as Katie likes to say, and this will be a component of her dissertation: taking the output from these simulations and looking at the hydrology at very high resolution. Recently Ken had a PhD student, Geneva Gray, who did her defense just a week or so ago, and she argued that if you're doing these pseudo-global-warming downscaling studies, you need to supply the statistical context. So Katie made this diagram comparing the Atlas 14 return periods, with the 90th-percentile range, to our model simulations. You can see that the present-day Matthew was maybe a 500-year event by Atlas 14 standards, while the future Matthew was off beyond the chart; both of these were still below the hourly, 10-square-mile PMP. I think this kind of context and consistency between these datasets is helpful.
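Placing a simulated storm depth on a precipitation-frequency curve, as in the diagram described, amounts to interpolating, or extrapolating, the frequency curve in return-period space. Here is a hedged sketch with invented Atlas 14-style numbers; none of the values are from the actual study.

```python
import numpy as np

# Hypothetical 24-hour precipitation-frequency estimates (inches) for one
# location, in the style of Atlas 14 (all values invented for illustration).
return_periods = np.array([2, 5, 10, 25, 50, 100, 200, 500, 1000])
depths = np.array([3.6, 4.7, 5.6, 6.9, 8.0, 9.2, 10.5, 12.4, 14.0])

def return_period_of(depth):
    """Estimate the return period of a simulated storm depth by log-linear
    interpolation (and tail extrapolation) of the frequency curve."""
    log_rp = np.log(return_periods)
    if depth >= depths[-1]:                       # beyond the published curve
        slope = (log_rp[-1] - log_rp[-2]) / (depths[-1] - depths[-2])
        return float(np.exp(log_rp[-1] + slope * (depth - depths[-1])))
    return float(np.exp(np.interp(depth, depths, log_rp)))

# E.g., a present-day simulated storm at 12.5 in and a PGW future storm at 15 in.
for d in (12.5, 15.0):
    print(f"{d:.1f} in  ->  roughly a {return_period_of(d):,.0f}-year event")
```

Extrapolating past the published curve, as the "off beyond the chart" future storm requires, is of course the least trustworthy part of any such comparison.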
So, to try to wind things up: I think Mahoney et al. made a compelling case for the use of historical simulations, and I think you could also transpose those cases in time, to ask how the precipitation characteristics of those historical storms would change if they happened in the present climate. For me, as I learn about PMP, when I look at some of these things, the moisture adjustment and the moisture maximization, I think that's really pretty crude. The reference is from 1947; I tracked it down, and that's pre-gridded data, so I think we could do a lot better now, with all due respect to the smart people who came up with those methods. The dynamical models, due to their physical consistency, if used carefully, can add a lot of value and can also help us better understand the relations among the variables of the type used in these HMRs, including integrated vapor transport, dew point, et cetera. So, to return to the prompt questions: are these useful? I think yes, but they might be even more useful with a direct accounting of climate change. And with these large-domain, long-duration pseudo-global-warming or time-slice simulations, if you had a really big computer and a lot of resources, you could make a more complete PMP-type catalog that's valid not only today but into the future. In terms of the settings, I mentioned this before, so I won't repeat it all, but some events are much harder to model than others, as Mahoney et al. demonstrated; organized convection and events with large-scale forcing lend themselves to better model results, there are some places where the existing catalog is probably weaker, and we know that high-resolution models benefit us in areas of complex terrain. So I'll quickly say thanks for the opportunity again, pop up these references for some of the papers I mentioned, and leave it there, happy to take questions as time permits. Gary, I'll start off with, well, I think Ruby's jumped in, so we'll go ahead. Would you like to ask first? Yeah, I'll go second. Okay, Ruby, all right. Hey, Gary, thank you very much for the presentation; very interesting. So I have a question. I know that people have done downscaling to look at PMP for the present day by maximizing the moisture, and then you talked about these PGW-type experiments where we account for the non-stationarity of the climate by adding the temperature change and then the moisture change. But I'm not aware of studies that actually combine the two, because what we need to know is this: under a non-stationary climate, in the future, we still need to maximize what might happen, because the moisture change may not simply scale with temperature. So we still need to maximize. I'm wondering, do you have any thoughts about that, or are you aware of studies that combine looking at non-stationarity with maximization of moisture? Yeah, that's a great question, Ruby, and I don't know of studies that have really accounted for the maximization part in that way. As I said, I'm learning about PMP now, and the moisture maximization part makes me a little queasy, though it does have the word probable, I guess; the first P in PMP is probable. But the technique they use for that, running a moist adiabat through the dew point, I think you could probably do better than that.
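For reference, the classic in-place moisture maximization that Gary calls crude is, at its core, a ratio adjustment: scale the observed storm depth by the ratio of maximum to storm-representative precipitable water, with the ratio conventionally capped (often near 1.5). The sketch below uses illustrative numbers only.

```python
def moisture_maximize(storm_depth_in, pw_storm_in, pw_max_in, cap=1.5):
    """HMR-style in-place moisture maximization: scale the observed storm
    depth by the ratio of maximum precipitable water (from the dew point
    climatology) to the storm's representative precipitable water. The cap
    value here is a commonly cited convention, not a universal rule."""
    ratio = min(pw_max_in / pw_storm_in, cap)
    return storm_depth_in * ratio, ratio

# Illustrative numbers: a storm that fell with 2.1 in of precipitable water
# at a location whose climatological maximum is 2.6 in.
depth_max, r = moisture_maximize(storm_depth_in=18.0,
                                 pw_storm_in=2.1, pw_max_in=2.6)
print(f"maximization ratio {r:.2f} -> maximized depth {depth_max:.1f} in")
```

The linear-in-precipitable-water assumption built into that ratio is exactly what the questions that follow push back on.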
The way to really account for this, possibly, is that if you had a big enough computer simulation, maybe we wouldn't need to maximize moisture at all: with enough ensemble members, enough resolution, and a long enough period of integration, that could be a potential replacement for the maximization, but it would take a really large ensemble. Exactly. But down the road that may be possible. All right, thank you very much. Yeah, thanks, Gary; my question was similar, but looking at the current environment, not even thinking about future climates. Moisture maximization was and is one of the key ingredients of current PMP methods, and there's a long line of research suggesting that extreme precipitation doesn't work that way: it doesn't scale with precipitable water. The question I was wondering about is whether you have thoughts on how you would go about thinking about maximization of precipitation. Take Harvey: what would be the computing procedures you would use to try to assess what it would take to make Harvey worse? Are there realistic bounds on Harvey? Do you have thoughts on that? Yeah, that's a great question, and the first thought that comes to mind is the ensemble sensitivity analysis methods used by people like Ryan Torn at Albany and Jim Doyle and Carolyn Reynolds at NRL, where you can use the ensemble to figure out how a change in a certain part of the domain would affect some variable, some metric; it could be rainfall in a given watershed, for example. So you could use ensemble sensitivity to get at that, basically to say, okay, let's make this storm as bad as it can be, and there are related adjoint-based methods for figuring out where the upstream sensitivities are. So there may be a way to maximize storm impact that isn't just about moisture: what could you do to maximize the impact from a dynamical standpoint? There may be other variables, in addition to or besides moisture, that would lead to a more intense storm. This isn't exactly my area of expertise, but I know people who work in that area, as I mentioned, Ryan and Jim and Carolyn, who would really have ideas, and I'm sure it's doable. I think that would be the most systematic way to address it, and I do think it's possible to do it in a way that would improve upon the original moisture maximization methods.
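Ensemble sensitivity analysis, in its usual regression form in the style of Torn and Hakim (2008), estimates how a scalar response such as basin rainfall depends on the earlier model state by regressing across ensemble members. The sketch below uses synthetic data in place of real model output, and the variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy ensemble: 80 members, each with an earlier-time "state" field (e.g.,
# moisture transport, flattened to 500 grid points) and a scalar response
# (e.g., 24-h basin rainfall). Synthetic data stand in for model output.
n_mem, n_pts = 80, 500
x0 = rng.normal(size=(n_mem, n_pts))               # initial-condition fields
true_weights = np.zeros(n_pts)
true_weights[40:60] = 2.0                          # hidden sensitive region
J = x0 @ true_weights + rng.normal(0, 1, n_mem)    # response metric per member

# Ensemble sensitivity: regress J on each state element across members,
#   dJ/dx_i ~ cov(J, x_i) / var(x_i).
x_anom = x0 - x0.mean(axis=0)
J_anom = J - J.mean()
sens = (x_anom * J_anom[:, None]).mean(axis=0) / x_anom.var(axis=0)

top = np.argsort(np.abs(sens))[::-1][:5]
print("grid points with largest |dJ/dx|:", top)    # should fall in 40..59
```

In a real application, the sensitive region would tell you which upstream perturbations most efficiently intensify the storm, which is one systematic route to the "make it as bad as it can be" question.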
Great. Shih-Chieh, you have a question? Hi, yes. I actually have a question for both speakers, but I think they're somewhat related. For Laura: I'm a hydrologist, and for us there's usually a big shock the first time we confront a reanalysis and compare its rainfall directly to the gauges; then we realize we should not do that. But if, moving forward, we're going to use reanalysis as part of the toolset to support PMP, that's the kind of question we need to think about: how can we really use it? I understand that most reanalyses basically don't assimilate rain gauge stations, so you will probably still see a big bias there, though I also hear that some, like MERRA, have started to assimilate part of the precipitation. Anyway, it's as much a comment as a question: if we're going to use rainfall depth output from a reanalysis, what would be the best practice? And a similar thing for Gary: we also did some downscaling work ourselves, and the lesson we've learned so far is that we cannot just take the raw output, plug it into a hydrological model, and do the simulation, because the biases are still very big from the hydrological model's point of view. So what would be your recommendation if one is going to use the output for further H&H (hydrology and hydraulics) simulation? Thank you.

So, Laura first, then Gary on that one. Okay, we have Laura back. You're pretty garbled; can you turn off the camera? Can you hear me okay? Good, go for it.

Well, the first thing I was going to mention is that ERA5 actually assimilates rain gauge and radar composites over the US, if I remember correctly, so there are reanalyses that are using that data. As far as the biases, from my point of view they've always just been something to be aware of, and I don't know that I would be the person to make a recommendation for how to deal with the inconsistencies between the observations and the reanalysis, other than, potentially, if there's some sort of systematic bias that you could identify, doing some sort of post-processing to get from the reanalysis field to your actual estimate. That would be my first thought.
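Laura's suggestion, finding a systematic bias and post-processing it out, is often implemented as empirical quantile mapping: replace each reanalysis value with the observed value at the same quantile. A minimal sketch on synthetic placeholder data rather than real gauge or reanalysis series:

    import numpy as np

    # Synthetic daily precipitation: a "gauge" record and a drizzlier,
    # lower-intensity "reanalysis" record standing in for real data.
    rng = np.random.default_rng(1)
    gauge = rng.gamma(shape=0.6, scale=12.0, size=5000)
    reanalysis = rng.gamma(shape=0.9, scale=6.0, size=5000)

    # Fit the mapping on matched quantiles of the two distributions.
    q = np.linspace(0.01, 0.99, 99)
    rea_q = np.quantile(reanalysis, q)
    obs_q = np.quantile(gauge, q)

    def quantile_map(values):
        """Map reanalysis values onto the gauge distribution.

        np.interp clamps outside the fitted quantile range, so the far
        upper tail (exactly where PMP-scale events live) is the weak spot.
        """
        return np.interp(values, rea_q, obs_q)

    corrected = quantile_map(reanalysis)

The tail caveat in the docstring is the important one for this committee's purposes: any correction fitted to the observed record is least trustworthy at PMP magnitudes.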
Yeah, and for the second part: I think what Mahoney et al. showed is that if you take the reanalysis, which is relatively low resolution in the 20th Century Reanalysis, and simulate the event with an ensemble of high-resolution model runs, using WRF for example, then you can add value. You're adding physical consistency, and you can check the result against the available observations to see if it's in the ballpark. For everything we do with models, we always run an ensemble, usually a physics ensemble and maybe also a physics-and-initial-condition ensemble, so that you're really accounting for the uncertainty in the model atmosphere. Comparing the raw reanalysis output against output that's been run through a mesoscale weather model would be an interesting comparison to make, but the model will give you an accounting of terrain influences and coastlines, that higher-resolution focus on the event, and if it's a small-scale event, a convective storm for example, I think you'd have to have that. Running the ensemble is really important to account for the variability. Thank you.

Let's see, we're open to questions for both Gary and Laura at this point. Gary, one more question on the coastlines bit. The state of Texas recently had an update to its precipitation frequency atlas, and there's this really sharp gradient, a sharp maximum, in the Houston metropolitan area in southeast Texas, and then some more subtle features along the Balcones Escarpment. Is it asking too much of model simulations to capture that level of detail for the really high-end storms?

I think where models can really add value is with exactly those kinds of coastal and terrain influences. If you give a model high-resolution terrain, it will give you a consistent solution, and the quantitative precipitation forecasting studies have shown that you benefit from resolution when there's a topographic element. That doesn't mean the result is necessarily right, but in principle the model should be able to represent topographic and coastal features to the extent that the geographic data in the model are realistic. I think it's within reach.

Let's see, we've got John E. and then Effie. Laura and Gary, thanks for the wonderful presentations. This question is, I guess, more for Gary, following on the high resolution. What are your thoughts on exploiting the models to really dive into the physics, so we can answer some modern-day synoptic and mesoscale questions? You showed the equation from PMP; what we're sort of after now, when we include wind and various terrain effects, is what the models can tell us about physical maxima, looking retrospectively and maybe in future simulations with PGW, even in the ones you've analyzed, even a snowstorm. What are your thoughts on using those to look at some of those physical considerations?

Well, that's sort of what got me into this in the first place. We wanted to understand why the storms were changing, and we were really looking at the moist dynamics, the latent heat release, and how that was affecting the storm dynamics. With a model you can output the physics tendencies and really get into the processes, so that's definitely doable. In terms of how it relates to PMP, how you could make the perfect storm using the model and understand how you would change the storm to make it the perfect storm, that's a good question; I've never tried doing it. Again, I think maybe ensemble sensitivity would be a systematic way to do it. But I think that is also within reach, if that's really the goal. The model is programmed with the laws of physics, which doesn't mean models can't produce horrible errors, because they can, but what is the most rain you could get out of a given event? I've never tried to answer that, but I'm sure it could be done, and doing it that way, where you can analyze the physics of what's happening in the model and why the changes are making heavier precipitation, might be a good approach. I may be overly optimistic, but like I said, I haven't tried it myself. Thanks very much.

Yeah, thank you. I've been listening to everything today, including the realization of the complexities in the thermodynamics and the physical properties. The non-linearities are real: for whatever we come up with as the perfect storm, there is the uncertainty, and there are the non-linearities in converting it to a flood. It just occurred to me to ask: are we basically taking the path of the single PMP with uncertainty quantification around it, or do we want to promote the concept of ensemble PMPs? You see, they're different things. It's not one perfect storm and how certain I am around it, but a whole ensemble, one that, projected into the future for example, would have some ensemble members capturing the thermodynamics and other physical parameters differently than others. It's the same point that was made earlier: I cannot take an ensemble mean and filter it through non-linear operations and get something meaningful; I have to filter every single ensemble member through that transformation. So the concept of ensemble PMPs just occurred to me, and I wanted to pass it by you.

Certainly, and Laura, feel free to chime in. I'm a huge fan of probabilistic predictions, ensembles, and quantifying uncertainty, and to me it's ironic that PMP is a deterministic product when the first P is "probable." I don't know whether an actual upper limit really exists for given durations, so I'm in favor of a probabilistic or ensemble-type approach, just because I think that's more scientifically realistic.
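Effie's point about non-linear operations can be made concrete: for a non-linear rainfall-to-flood transformation, transforming the ensemble mean is not the same as averaging the transformed members (Jensen's inequality). A toy sketch, with an invented threshold runoff function standing in for a real hydrologic model:

    import numpy as np

    # An ensemble of storm rainfall totals (mm); synthetic placeholder values.
    rng = np.random.default_rng(2)
    storm_rain = rng.normal(250.0, 60.0, size=100)

    def runoff(rain_mm, loss_mm=100.0):
        """Toy non-linear transformation: runoff only above an initial loss."""
        return np.maximum(rain_mm - loss_mm, 0.0) ** 1.2

    flood_from_mean = runoff(storm_rain.mean())    # transform the mean once
    mean_of_floods = runoff(storm_rain).mean()     # transform every member

    # The two differ; with a convex transformation the member-by-member
    # route gives the larger value, which is why each ensemble member must
    # be routed through the hydrologic transformation individually.
    print(flood_from_mean, mean_of_floods)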
No, I don't question the deterministic versus probabilistic choice, just whether we present one storm with plus-or-minus uncertainty around it or an ensemble of storms. But we can discuss this more. Okay. Well, an ensemble of storms, yes; I think the more members that are brought in, the better. Laura, do you want to comment on that one?

If I'm understanding correctly, what I would add, beyond the benefit of 20CR having an ensemble representation, is that because it's such a long time series, you also get a larger sample. And I think maybe that's what you're getting at, Effie: having more possible PMP-magnitude storms. Not just more storms: storms in all their space-time characteristics, an ensemble of possible storms.

We're getting to the end of our scheduled time, and I would just ask now whether any committee members have additional questions for any of the speakers who are still around. Yes, Katie. Hi Jim, is Bill still on the line? I have a question, and I'd like to tap into some of his expertise. I'm here, I'm here. Bill, you showed a figure of all the PMP studies that you've done across the world, and something really struck me. In all of your years of experience, do you know which aspects of your methodological chain, this large chain that you follow when you compute these PMP estimates, which step, which decision, impacts your PMP estimates the most?

Yes, I do: it's the transposition process that is the most impactful. I just sent a couple of papers with examples where we've applied uncertainty quantification to each major step of the PMP process, and you can see the range there. You as a committee will be getting those from our group soon, so you can read through the examples and see exactly what that answer is, Katie. You'll see that in the rainfall analysis, SPAS might be plus or minus 20 percent, and the dew point and sea surface temperature climatology might be plus or minus 10 percent, but the choice of where and how to move storms, and the adjustments applied, is the biggest factor in the process, and that's consistent across the studies we've looked at so far. Okay, thank you. Though I will say, the one difference is in other parts of the world: in studies like those in South America and Southeast Asia, where we have limited observational rain gauge data and limited period of record, those limitations become much more important. In the US that's not as big a deal, because you have really excellent rain gauge observations, NEXRAD weather radar coverage, and so on. So it varies depending on where you are in the world. Thanks.
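Bill's per-step ranges suggest a simple way to see how such uncertainties combine: treat each major step as a multiplicative factor and propagate them by Monte Carlo. A sketch using the plus-or-minus 20 and 10 percent figures he quotes; the uniform distributions, the nominal base depth, and the wider transposition range are assumptions for illustration only:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000
    base_pmp_mm = 800.0                            # nominal deterministic PMP, illustrative

    # Multiplicative uncertainty factors for each major step, centered on 1.
    spas_analysis = rng.uniform(0.80, 1.20, n)     # rainfall analysis, about +/-20%
    dewpoint_sst = rng.uniform(0.90, 1.10, n)      # moisture climatology, about +/-10%
    transposition = rng.uniform(0.70, 1.40, n)     # assumed dominant, widest range

    pmp_samples = base_pmp_mm * spas_analysis * dewpoint_sst * transposition

    lo, hi = np.percentile(pmp_samples, [5, 95])
    print(f"5th-95th percentile PMP range: {lo:.0f}-{hi:.0f} mm")

Even this crude product form shows why the widest factor, transposition, dominates the spread, consistent with Bill's answer.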
Let's see. I hear other questions percolating, but I don't see them, so I think at this point we will thank all of our speakers for very useful contributions to the committee's work, and we encourage the community to respond and weigh in on the issues we've been wrestling with today through the forums on the study's site. With that, I think we can declare victory and march forward on modernizing probable maximum precipitation. Thank you, everyone. Great, thank you. Bye. Thanks to all the speakers.