Thanks, Jared, and thanks for inviting me. I'm going to talk today about the National Hydrologic Model. Can you all hear me? Okay, good. I've been working on this for a long time, along with a lot of other people I've acknowledged here, and I'll try to call out their names as I go through the presentation.

So why did we develop a National Hydrologic Model? Probably 10 or so years ago, Steve Markstrom and I got a proposal funded to look at integrated watershed-scale response to climate change for basins across the United States. We said, hey, this would be really cool: let's pull together all the modelers in the USGS who have developed a PRMS model, bring those models together, and downscale some GCMs to them. Maybe we can provide the foundation for hydrologically based climate change studies across the nation. The group I used to run, the Modeling of Watershed Systems group in the research program (which is no longer there), spent a lot of its time developing models for people to use on the ground, so every one of those modelers had probably worked with us to develop their model. But each of them developed it for a different reason, at a different scale, parameterized it differently, and calibrated it differently. When we actually tried to compare things, we realized we couldn't, because they weren't defining things the same way, they weren't selecting the same process modules (you can select all kinds of different combinations of process modules), and they all used different calibration strategies. So we decided what we really needed was a nationally consistent, stakeholder-relevant, locally informed model for the entire country that people could pull from and use. That's where this all came from, about 10 years ago.

So we call this the National Hydrologic Model, and this is its infrastructure. It consists of what we call a geospatial fabric, which holds the hydrologic response units (the modeling units), the stream network, and the parameter values. What we did was take NHDPlus version 1 (this was quite a long time ago) and aggregate it based on points of interest: gauges, forecasting points, lakes we decided were important, confluences, all kinds of things. We took the three million catchments from NHDPlus and aggregated them into what we thought were good-size modeling units for the country, about 110,000 HRUs, and when you look at them they're really defined largely by drainage density. These are the modeling units we start with, that we model the country with. We then parameterize them with static spatial information, using our best knowledge to develop algorithms that give us decent distributed parameters. We also have the ability to run with dynamic parameters, because nothing remains static: you can start off with static parameters, but in a lot of cases there's change and we need to represent it. Here's just an example of some of the dynamic land cover we use from EROS that I'll touch on later. We use a lot of dynamic land cover into the future as well, and we combine that with climate change.
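To make the aggregation idea concrete, here is a minimal, hypothetical sketch of grouping fine-scale catchments into larger modeling units by walking each catchment downstream to the nearest point of interest (POI). The function, the dictionary-based network, and the toy IDs are all illustrative assumptions; the actual geospatial fabric workflow is a GIS process on NHDPlus and is considerably more involved.

```python
def aggregate_to_pois(downstream, poi_catchments):
    """Assign each catchment to the first POI reached walking downstream.

    downstream     -- dict: catchment ID -> next catchment downstream (None at an outlet)
    poi_catchments -- set of catchment IDs flagged as POIs (gauges, forecast
                      points, important lakes, confluences)
    """
    unit_of = {}
    for cat in downstream:
        cur = cat
        # Walk downstream until we hit a POI (or fall off the network).
        while cur is not None and cur not in poi_catchments:
            cur = downstream.get(cur)
        unit_of[cat] = cur          # None means no POI is ever reached
    return unit_of

# Toy network: 1 -> 2 -> 3 (gauge) -> 4 (outlet), and 5 -> 3
net = {1: 2, 2: 3, 3: 4, 4: None, 5: 3}
print(aggregate_to_pois(net, poi_catchments={3}))
# catchments 1, 2, 3, and 5 group into the unit draining to the gauge at 3
```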
The next piece is the model input data. We're testing this national-scale model with all the different gridded datasets we can find. You could probably pull a local model and do a better job with your climate inputs if you spent some time on it, but nobody seems to; everybody wants everything done for them these days. Well, that's just my experience. So we're testing all these different datasets in the model so we can tell you what we think works where, and why.

The last part of the infrastructure is the physical models. We have daily and monthly models running within it: the monthly water balance model and the Precipitation-Runoff Modeling System (PRMS), and within PRMS we also have a stream temperature model running for the country. The monthly water balance model is a very simple five-parameter model; we developed a parameter regionalization scheme for it for the country a couple of years back, and we've also developed uncertainty estimates associated with the monthly runoff, which I'll touch on later. The water balance model has been run for current and future conditions, and there's a portal that accesses the monthly water balance database where you can get current and future conditions for 235 climate scenarios. You basically click on a location, it delineates the area, and you can get local or basin-derived monthly water balance outputs. That's the website. It's no longer supported — it's still running, but it's not being supported because it has the words "climate" and "model" in it. Maybe that'll change.

Anyway, 235 climate scenarios were processed for the water balance model. One of the things I get asked all the time is, which climate scenario should I be using? It really depends on what you're doing. But because putting out 235 of them seemed a little crazy to me, I applied a two-sample Kolmogorov-Smirnov statistical test. All the downscaled GCMs we're using are trained with the same dataset, so I ran the model with that training dataset for current conditions, then ran it with each GCM for current conditions, and I just wanted to see if they could match the cumulative distribution. That's about as simple a test as you can get, because you're not going to match day to day or month to month, but you would hope your distribution was the same. This is monthly as well. Here are some of the results. If we split those scenarios into CMIP3 and CMIP5, these are the GCMs: if it's pink, zero of them represent current conditions, and if it's blue, 100% of them do, for precipitation, temperature, and runoff. It's actually kind of interesting — you can see the resolution of the GCMs showing through here. But you do see a big difference between CMIP3 and CMIP5, which is encouraging; they're getting better. And really this is driven by the precip: they're not able to get the precip. California still has problems — none of the GCMs can represent current conditions there.
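For anyone who wants to reproduce that kind of screening, here is a minimal sketch of the two-sample Kolmogorov-Smirnov check described above: comparing the distribution of monthly values from a GCM-driven run against the run driven by the GCMs' common training dataset. The synthetic series, variable names, and the 0.05 significance level are illustrative assumptions, not the actual screening setup.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
# 40 years of monthly runoff: a stand-in for the training-data run and one GCM-driven run
baseline_runoff = rng.gamma(shape=2.0, scale=15.0, size=480)
gcm_runoff      = rng.gamma(shape=2.2, scale=14.0, size=480)

stat, p_value = ks_2samp(baseline_runoff, gcm_runoff)
represents_current = p_value > 0.05   # fail to reject: distributions are indistinguishable
print(f"KS statistic={stat:.3f}, p={p_value:.3f}, matches current conditions: {represents_current}")
```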
So you might want to think twice about what you're using, but you can actually subset with the Kolmogorov-Smirnov test in the portal to get just the GCMs that represent your conditions.

Now PRMS, the Precipitation-Runoff Modeling System. This is a daily time step, deterministic, distributed model. It's been around for a very long time, and there have been a lot of changes to it over the course of its life. Conceptually it's really simple: there's a basin, you consider a surface, subsurface, and groundwater portion of that basin, and you divide it into hydrologic response units. You've got infiltration to the subsurface and then recharge to the groundwater — a very simplistic representation — and the combination of the three gives you streamflow. And we do have it running for the country. I always show this so people believe we have it running for the country, but that doesn't mean it's right. People love showing visualizations; I put this up here, but it doesn't mean it's correct, it just looks good. So I thought it looked good.

We have a tool we call Bandit, which accesses the National Hydrologic Model infrastructure and subsets models for you. You can pick any location in the country and it will give you a local model — the inputs, the parameters, the executable, and the control file — so you can be running a model within minutes of getting the outputs. That's Parker Norton, who developed Bandit, proudly displaying his Bandit t-shirt. What you do then is say, okay, I want this location, you pull the model, and maybe you decide this is great, but I don't really like the resolution in this one area — there's been a lot of change going on there. So what usually happens is people want to nest finer-resolution HRUs into the area that's important. Instead of running hyper-resolution across the whole country, you run at a resolution that's manageable, that we can parameterize, and then you nest the fine resolution where you need the information. That's usually where it becomes locally informed.

As an example of calibrating a pulled model, we pulled a model for the prairie pothole area in North Dakota and looked at surface-water depression storage. PRMS uses a merge, fill, and spill concept: within a response unit, all the surface depressions are treated as one, which is the representation here, and that really bothers the ecologists. But you have to start somewhere — if you're parameterizing the country, you can't consider every depression by itself, right? It was kind of cool, because when I pulled the prairie pothole model and calibrated it to streamflow, the HRU fractional surface depression storage — the purple line here — just went straight across. My streamflow looked fine, but my storage had nothing to do with what was really going on. If instead I calibrated the HRU fractional storage to normalized lake elevations nearby, I was able to get good streamflow and follow the normalized lake elevations. And then after the fact I overlaid the remotely sensed surface-water depression area for that HRU — these green triangles. That's pretty darn cool. So you can get the right answer for the wrong reason.
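As a side note on the fill-and-spill idea just mentioned, here is a minimal sketch of the concept: all depressions in an HRU are lumped into one conceptual store that fills with inflow, loses water to evaporation, and spills to the stream only once it exceeds its capacity. The function, parameter names, and numbers are illustrative assumptions, not PRMS variables.

```python
def depression_step(storage, inflow, evap, capacity):
    """One time step for a lumped depression store (units, e.g., mm over the HRU)."""
    storage = max(storage + inflow - evap, 0.0)   # fill, with evaporative loss
    spill = max(storage - capacity, 0.0)          # spill only above capacity
    return storage - spill, spill

storage, capacity = 5.0, 20.0
for inflow, evap in [(8.0, 1.0), (12.0, 1.5), (2.0, 3.0)]:
    storage, spill = depression_step(storage, inflow, evap, capacity)
    print(f"storage={storage:.1f}, spill={spill:.1f}")
```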
But if you start bringing in these alternative datasets, there's a lot we can do with this — we can get these surface-water depression areas dynamically from remotely sensed information to calibrate our models.

That brings me to continental-domain parameter estimation and what we're doing for it. I've always thought about calibration as first looking at volume and then looking at timing, just like you would for a small basin. But that wasn't going to work for the CONUS, so this is my scheme. I know it's very colorful; don't worry, it's actually pretty straightforward — other people hate it when I say that. First, I think about volume. I used a FAST parameter sensitivity analysis that looks at dominant processes and parameter sensitivity for every response unit across the country and tells me which parameters we should be calibrating to begin with. Then I calibrate each response unit as its own model — I have 109,951 models — and I calibrate them to what I call calibration or baseline datasets that I've derived. I have information for runoff, snow-covered area, AET, soil moisture, and recharge, and these can all be improved upon. The calibration order is also determined by the FAST analysis, because I want to calibrate the dominant process in each HRU first: having run the FAST sensitivity analysis, I know which process is dominant, and I order the calibration based on that. This map shows which process is dominant: it makes sense that snow-covered area is important here, recharge is important in the prairie pothole region, and runoff seems to dominate most everywhere else. Anyway, what I'm looking for is the sweet spot in these parameter ranges — and you'll notice I'm not using observed streamflow — I'm trying to fit all these things at the same time.

Here are my baseline calibration datasets, and it would be great if we improved upon them. I threw them together, and I have a lot of ideas about how to make them better. Basically, either I have something with uncertainty estimates defining a range, or I bring in three different model outputs or remote sensing datasets and use their spread as my uncertainty bound when I calibrate. So I fit to a range, not to a value, and some of those ranges are pretty darn big. Here's the mean of those five datasets, and here's the range; that might not be all that exciting to look at, but here's the mean divided by the range. A value of one means the mean and the range are about equal; less than one means your range is a lot bigger than your mean, which means your uncertainty is huge, right? That's interesting to keep in mind later when you're evaluating the model. Take AET, for example — the AET mean divided by the range. We're seeing this big area with a lot of uncertainty, and that's because we're using MOD16, SSEBop, and the monthly water balance model AET. If you correlate them to each other monthly, they agree great here, but in these areas the correlations are actually below zero — the products are showing opposite signals. So it's going to look like you're fitting AET great there, simply because your range is huge.
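Here is a minimal sketch of the "fit to a range, not a value" idea described above: the error at a time step is zero when the simulated value falls inside the baseline uncertainty range, and is the distance to the nearest bound otherwise. The function name, the averaging choice, and the example numbers are illustrative assumptions, not the actual objective function used.

```python
import numpy as np

def range_objective(sim, lower, upper):
    """Mean distance outside the baseline [lower, upper] range (0 = always inside)."""
    sim, lower, upper = map(np.asarray, (sim, lower, upper))
    below = np.clip(lower - sim, 0.0, None)
    above = np.clip(sim - upper, 0.0, None)
    return float(np.mean(below + above))

# Example: a monthly AET baseline range (say, the min/max of several products) vs a simulation
lower = np.array([20, 35, 60, 80])
upper = np.array([40, 70, 110, 130])
sim   = np.array([25, 75, 100, 70])      # the 2nd and 4th months fall outside the range
print(range_objective(sim, lower, upper))  # mean penalty of 3.75 for this toy case
```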
So you need to keep that in mind, but that's the information we have, and we need to improve upon what these baselines are telling us.

Here's how the calibration works — or an example of what it does. Take one of the parameters, soil_rechr_max_frac, which is a soil-zone storage parameter, and look at the six calibration steps for a randomly chosen HRU; this is the order of the six steps based on the FAST analysis for that HRU. The y-axis is the value of that parameter during calibration, and the x-axis is the objective function value for that HRU. In the first step it calibrated runoff: these are the objective function values, these are the parameter values, this was the starting range, and this is the top 25% of those values — the best objective function values. It takes those, moves to the next step, and calibrates to AET; it takes that range again (the red dots) and moves on to the next step, soil moisture. Okay, yay — it's a soil-zone storage parameter, and soil moisture nails it. It's actually pretty cool: in the earlier steps there isn't a lot of parameter sensitivity, so the range only narrows a little, but generally when you hit the variable the parameter matters for, it narrows right down. You can look at all the different parameters, and this is generally what happens with them. So that's how the calibration works — this is just one parameter pulled out, but they're all calibrated together.

Here are the calibrated parameter values — an example of four maps of parameter values. You might think they'd come out looking like buckshot, but because of all these datasets you're holding the model to, you actually end up with some pretty nice spatial patterns, even though you're calibrating each HRU by itself — at least for sensitive parameters. If a parameter isn't very sensitive, or the baseline dataset you're calibrating to doesn't carry much information, you're not going to get a good pattern.

So, how many simulations fall within the baseline range? Here are the five baselines and the percent of simulations that fall within the range — and again, it matters how big your range was. Here's the spatial distribution: pink means 100% of them fall within it, black means none do. You have to think a little about what this is telling you, because it says, oh wow, look, we're doing great here with AET — but remember, that's the area where the range is huge because the data were telling you two different things. So let's look a little closer at AET. There's the baseline mean divided by the range, remember? Anything less than one — the orange, red, and pinks — is an area with a very big range, right? Here are the final objective function values: good is lower values, bad is the red to pink values. Your objective function values actually look better in the middle of the country here. But really what you want to do is look at a combination of how big the range is and what your objective function value is, right?
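Before getting into the evaluation side, here is a rough sketch of the stepwise narrowing just walked through: sample a parameter within its current range, score it against one baseline dataset, keep the top 25% of samples, and hand the narrowed range to the next calibration step (in the order given by the FAST analysis). Everything here — the sampling scheme, the toy objective functions, the single-parameter framing — is an illustrative assumption; the real procedure calibrates many parameters jointly.

```python
import numpy as np

rng = np.random.default_rng(0)

def calibrate_steps(objectives, lo, hi, n_samples=200, keep_frac=0.25):
    """objectives: list of functions f(param_value) -> error, in FAST order."""
    for step, objective in enumerate(objectives, start=1):
        samples = rng.uniform(lo, hi, n_samples)
        errors = np.array([objective(p) for p in samples])
        best = samples[np.argsort(errors)[: int(keep_frac * n_samples)]]
        lo, hi = best.min(), best.max()          # narrowed range passed to the next step
        print(f"step {step}: range narrowed to [{lo:.3f}, {hi:.3f}]")
    return lo, hi

# Toy objectives: the parameter barely matters for "runoff" and "AET",
# but "soil moisture" is highly sensitive to it (optimum near 0.6).
steps = [lambda p: abs(p - 0.6) * 0.1 + rng.normal(0, 0.05),   # runoff (weakly sensitive, noisy)
         lambda p: abs(p - 0.6) * 0.2 + rng.normal(0, 0.05),   # AET
         lambda p: abs(p - 0.6) * 5.0]                         # soil moisture (strongly sensitive)
calibrate_steps(steps, lo=0.0, hi=1.0)
```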
So if you take your mean-over-range and categorize it on the y-axis, take your normalized root mean square error and categorize it into these colors, and then plot them together, it really shows you that you're actually doing much better in the Southeast: you've got a really tight range and a low objective function value. And in the area that had the really wide ranges, it now shows that maybe it really isn't as good, because what you want is to be up here. So you really have to look at more than one thing when you're evaluating this.

Another way to look at that is with runoff. Here's how people like to look at it: Nash-Sutcliffe. This is the Nash-Sutcliffe using the baseline runoff dataset from the monthly water balance model. Bad is pink — less than zero — and one is good. This is the Nash-Sutcliffe computed against the value the water balance model puts out, but remember, that's not what I calibrated to: I calibrated to the range that came out of that, with uncertainty. If I only count error when the simulation falls outside the range, I'm using as much information as I can out of that runoff product. Now, I would actually think somebody was lying if they showed me this, because you don't get ones for Nash-Sutcliffe — but the range is so wide that you can fit it. So we have to narrow the runoff ranges when we're doing this.

That's the volume calibration, and it's relatively model agnostic. I set it up for some GSFLOW models a month or so ago, with 100,000 and 60,000 grid cells, and used the same concept to distribute the parameters for GSFLOW. It's a nice approach: a lot of times people say they have a distributed model but their parameters actually aren't distributed, and this is a good way to get distributed parameter values as a starting point. (GSFLOW, in case anybody was wondering, is an integration of MODFLOW and PRMS — a groundwater/surface-water interaction model.)

Next step: I take the optimized parameters by HRU and say, okay, that was my volume, now I want to look at timing. For timing, I take my HRUs for the country, use my Bandit tool, and pull all the headwater basins in the country that are less than 3,000 square kilometers. That's 7,265 headwaters, so now I have 7,265 models. Then I have daily time series that Will Farmer developed using pooled ordinary kriging at every one of those headwater basins, so I have daily streamflow timing with uncertainty associated with it. I calibrate the model to timing, and I can look at all the different routing options we have: no routing, Muskingum, and a kinematic-wave option. We can test a lot of really interesting things when it's set up like this. Then at the very end I bring in observed streamflow. Here are the headwater basins, and the headwater basins that have gauges — 1,417 of them — are the red areas that are left. Those areas I can fine-tune with observations if somebody wants to pull a model at that location. So how does this work? Well, it depends where you go.
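For reference, here is a minimal sketch of the Nash-Sutcliffe efficiency used in these evaluations, plus the log-flow variant that shows up in the example that follows. The array values are placeholders, not model output, and the epsilon guard is an illustrative assumption.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is perfect, below 0 is worse than the mean of obs."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def log_nse(sim, obs, eps=0.01):
    # eps guards against log(0) on zero-flow days
    return nse(np.log(np.asarray(sim, float) + eps), np.log(np.asarray(obs, float) + eps))

obs = np.array([12.0, 30.0, 55.0, 20.0, 9.0, 4.0])   # daily streamflow
sim = np.array([10.0, 33.0, 50.0, 22.0, 8.0, 5.0])
print(f"NSE = {nse(sim, obs):.2f}, log-NSE = {log_nse(sim, obs):.2f}")
```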
Actually, this was a great example because it worked really well here. This is the Gulf Coastal Plains and Ozarks Landscape Conservation Cooperative. We pulled this model out of the national model and calibrated it by HRU and by headwater to observations. So there are our Nash-Sutcliffe values for daily streamflow and for the log of daily streamflow. This was a cool application because we developed the model for this area and then ran historical and future land cover and climate through it for that LCC — about 15 different GCMs and four different RCPs. We also looked at a bunch of statistics, because these folks wanted statistics for future conditions. One of the things Jacob LaFontaine did was use that KS test to check the statistics they were asking for, categorized by the duration, frequency, magnitude, rate of change, and timing they're associated with. Those little Xs show whether, for current conditions, we were able to replicate that statistic — and for all the daily statistics everybody wants for the future, we're not even able to come close to replicating them for current conditions. So you really have to think about what you're looking at in future conditions before you just start running all this stuff. They have a Landscape Conservation Planning Atlas they put all this into, and you can look at all the statistics and subset them by the KS test. This worked out nicely, and the Fish and Wildlife Service has asked us to extend it to Region 4, so we're producing Region 4 for them right now with current and future land cover and climate conditions. That's a good example.

Here's another example, with surface depression storage, that we worked on in that same area. When you put future climate and future land cover together, you think, hey, this is great, we're doing the right thing here — but the land cover change we get doesn't include anything that represents the change in surface storage. If you look at the land cover change, there are huge increases in impervious area, which means people have to build surface depressions to capture the surface runoff. So people run land cover change with climate change and don't consider the change in surface storage that gets built to catch that runoff. What we did here was pretty cool: we ran the model and then backed out how much depression storage you would have to build in order to capture the surface runoff that occurs because of the change in impervious area. There are a lot of really interesting things you can do with this, but you can't just apply this stuff blindly — you have to ask, okay, what is the land cover giving us, and what isn't it giving us that can really affect our model?

Here's another application using Bandit. The Park Service last year wanted a model pulled for every single national park, with current and future conditions, and we said, sure, we can do that. Bill Battaglin and Colin Penn in the Colorado Water Science Center developed the tools to take the park boundaries — which are weird, right? There's a park boundary here, and it's here, here, and here; there are little yellow areas.
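To illustrate the depression-storage point above, here is a back-of-the-envelope sketch: if the impervious fraction of an HRU grows, roughly how much extra storage would be needed to capture the additional surface runoff from a given rainfall depth? All the numbers and the simplifying assumption (new impervious area converts rainfall directly to runoff) are illustrative, not the actual back-calculation that was done.

```python
hru_area_km2      = 25.0
impervious_before = 0.05     # 5% impervious
impervious_after  = 0.20     # 20% impervious after land-cover change
design_rain_mm    = 50.0     # rainfall depth to be captured

added_impervious_km2 = hru_area_km2 * (impervious_after - impervious_before)
extra_storage_m3 = added_impervious_km2 * 1e6 * (design_rain_mm / 1000.0)
print(f"Extra depression storage needed: ~{extra_storage_m3:,.0f} m^3")
# ~187,500 m^3 for this hypothetical HRU and storm
```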
We give them maps just to show how the park boundary looks with respect to our modeling units — this is just one park example. Then we give them all the components of the water balance for current conditions for every single park, along with a lot of fun graphics. This one is just for Yellowstone; I pulled out the minimum temperature because it's kind of interesting that even back to 1980 you can see the increase in minimum temperature. We also give them statistics — and as I said, statistics for future conditions always make me a little nervous — and we give them streamflow at gauges. The Park Service didn't even know which stream gauges were in or near their parks, and we have all that information in our infrastructure. So we pull the model and give them pages of plots like this for gauges in or near their basins, with observed versus simulated, and we give them all the supporting information: whether it's a reference gauge or not, the disturbance index, and the comparison between the drainage area we compute and what NWIS reports. Some of these match pretty well and some don't. This is actually an uncalibrated version of the model, so we're redoing it with a longer time series and will give them better output.

We also give them future conditions from the Monthly Water Balance Futures portal. Greg McCabe and I took all the outputs from the monthly water balance model and developed the P5 through P95 percentile changes for seasonal temperature, precipitation, and runoff on the geospatial fabric. So when we pull the Park Service models, for every single national park we can give them those percentile changes for precipitation seasonally, the changes for average temperature seasonally, and the changes in runoff seasonally. We also developed the same thing for trends — the percentiles of the change so you can see the range of the trends — and we give them the trends for every single park. So we can pump out a lot of information very quickly with this type of infrastructure. There are the trends in precipitation seasonally, and here are the trends in runoff.

Last but not least is the stream network temperature model. This is a model, SNTemp, developed a while back. The way it was set up was really ideal for us, because we can associate temperatures with the surface, subsurface, and groundwater flow components, figure out the channel characteristics, and then it's easily run for the country. A lot of this probably needs to be calibrated, but we haven't worked on that lately. It is running, though, and you can pull a model and locally calibrate it. One of the things we're doing with this is working with the Forest Service and BLM on the AREMP project, which has been monitoring 215 watersheds within the Northwest Forest Plan domain. These are all the watersheds they're monitoring; they've been monitoring them for 30 years, but they never monitored streamflow, so they don't know what the streamflow is. So we're giving them streamflow.
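Here is a minimal sketch of summarizing an ensemble the way described above: compute the 5th through 95th percentiles of the projected seasonal change across many climate scenarios for one HRU (or one park). The synthetic numbers stand in for real model output, and the variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n_scenarios = 235
# Hypothetical change in mean summer runoff (%) for one HRU, one value per scenario
summer_runoff_change = rng.normal(loc=-8.0, scale=12.0, size=n_scenarios)

pcts = [5, 25, 50, 75, 95]
for p, v in zip(pcts, np.percentile(summer_runoff_change, pcts)):
    print(f"P{p:02d}: {v:+.1f}%")
```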
They also have super cool information on land cover change at a really fine resolution, and we're now incorporating that change, including forest structure, into our model so we can look at it — because what they think they're seeing in this area is a big change in low flows, and what is that low-flow change due to? Could it be that they killed all the beaver? Could it be that climate is changing? There are a lot of things we can test here, and one of the big things they want is stream temperature, so in the coming year we're going to incorporate stream temperature at all of these sites as well.

So anyway, that's my talk. We have a lot of stuff we would love to share, and we'd love to collaborate with anybody who's interested in baseline information, process representation, model evaluation, or model calibration. It's all there, and we're willing to share it with anybody who's interested. Thank you.

Terrific, thanks so much, Lauren. We have a break next, but we'll take a couple of questions before the break. Who has questions for Lauren?

She asked if there's information out there for using the monthly water balance model as a teaching tool, and I would say yes. I believe the Bureau of Reclamation has it running in R and uses it as a teaching tool, and I know Greg McCabe and Steve Markstrom put out a paper, gosh, five or ten years ago, with a GUI interface you can use as a teaching tool as well. I think there are actually a lot of versions of it out there being used as teaching tools. If you want to send me a note, I can probably give you the different references.