She is from the University of Texas at Austin, and her talk is about grid and sub-grid analysis of river systems and their change, at the intersection of modeling and high-resolution data analysis. Paola, go ahead. Thanks so much, Albert, and thank you for the invitation to present here. By the way, Kerri's algorithm works great if you're interested in doing depressions on LiDAR. We have used it for flood mapping, it works like a charm, so I strongly recommend it. So we're going to be talking about LiDAR a little bit, or anyway the intersection of modeling and data, and think about how these two can essentially be used together as we think about modeling systems for understanding future change. So this is kind of an old problem, and I'm not only saying that because I worked on it when I started grad school, but also because, in a way, data and models have followed parallel paths, right? It used to be that we didn't have data, and we did not have the computational capabilities to really run at high resolution. And it was more like the situation that you see here on the right, where you have a system that you know, but when you put the mesh on that system, you can't really adapt the mesh, or when you get an estimate of the flows, it's not really that accurate with respect to what you want to do. And there's always this compromise in our models between what we can resolve, which is really what's larger than the grid size, and what we cannot resolve, which is what is below our grid size. Things, of course, are improving, right? We're moving more and more towards a situation in which we actually have data, because the resolution of the terrain data, the imagery and so forth has gone up and up, and of course our computational capabilities have also gotten much better.
We're not there yet, though, where these two things really talk to each other. Still, to this day, if you put a data analyst and a modeler in the same room and ask them what's high resolution, the number is going to be different, right? At least by probably a factor of ten, I would say. And so what we're going to be looking at then: I'm going to present two examples. We're going to address two questions. And this is all very recent work. None of this is published, it's all in preparation. It's actually the first time I present this work, which is quite exciting. And so I'm going to present two scenarios, one in which we do have data. We have LiDAR, and the question that I'm going to ask is a little bit different than what we've done before. It used to be, well, we didn't have the computational capabilities to run at high resolution. But now we have the data, and perhaps we can run the model. So the question is, as the resolution of the data improves, do we really need to keep on increasing the resolution of the model? Well, not necessarily, right? It's going to depend on the question that we want to address. So we're going to look at that problem first. And then the second scenario is one where you cannot actually have data. And this is often the case with bathymetry, where it's usually much harder to have a lot of good information. So we're going to start thinking about how we can use high-resolution imagery to fill in the gaps. Can I come up with a technique on how to do it, right? How are you going to fill in these gaps based on imagery? So we're going to start with the first one, and I'm going to acknowledge the work of both my group and David Mohrig's group. This is work that we've done collaboratively. And it's a field and modeling investigation of river-floodplain connectivity. We're doing this in the Trinity River in Texas.
So you see the location of the basin within Texas and the location of the part of the system we're looking at. This is an interesting river, and we have a lot of LiDAR data here. And it has all these interesting features: these levee channels, floodplain channels. What we're trying to understand is what role these features play. And we started looking at the interaction of river and floodplain during events, during storms. In particular, I would say we're not looking only at flow from fluvial sources, but we're also looking at the role of rainfall as it establishes this connectivity, which is something that we really don't know very well when we look at river-floodplain connectivity studies: the role of the rainfall and its interaction with the fluvial component. And so we had set instruments in the field; that was the summer of 2019. We retrieved them in February 2020, which was convenient because it was right before the pandemic started. There were six locations; for this talk I'm actually just going to show you two of these locations. This is a floodplain channel that you see. It's a few meters wide. At all these locations we installed tilt current meters and water level loggers. The second location is actually a little different in terms of morphology; it's more of a delta-like depositional site, so much wider than the channel that you saw earlier. And what you see here is also the domain of the model. The solid line is the full domain, and the dashed line is the domain within which we're going to increase the resolution. The background resolution is all 20 meters, and what we're going to do inside is to try out a two, a five, and a ten meter resolution, and of course 20, to see what's happening and how we can capture the patterns that we're observing at the sensors. And so, as I mentioned, this was a coupled rainfall-fluvial event.
So while the instruments were in the field, Tropical Storm Imelda hit Texas at the beginning of fall 2019, and so we actually captured the storm. As you see, there's an interesting pattern of three big pulses of rain initially, and then the fluvial component picks up, right? So we really want to understand connectivity of the river and the floodplain. And we started seeing evidence from the sensors that this is actually more complex than a fluvial event only, because you can have flow inversions and so forth. So these are some of the results from the model at the two sites that I showed earlier. The one on the left is the channel, which is just a few meters wide, and the one on the right is the larger site. And so, of course, the resolution is going to play a different role, because these are sites that have different characteristics and different scales and forms. The thick solid line is what is measured in terms of water level at the gauge, at the sensor, and then all the other lines represent different resolutions of the model. And so, first of all, you see that the 20 meter resolution almost doesn't see anything, right? It essentially sees very small variations in the water level. But as we increase the resolution, we actually do a nicer job, particularly for the rainfall part, in matching what the sensor saw. At the broader site, resolution really doesn't play a huge role. And that's what you would kind of expect, because it's much wider and so effectively increasing the resolution doesn't make a big difference. You will also notice that in both cases we're missing an amount of water, and that can be 20 to 30 centimeters. There's a couple of things to keep in mind. This LiDAR is a few years old, it's from 2017. And also these channels and these features that I show you usually have water in them, and that water is very turbid.
So this is an interesting case in which we have data, right? And we can resolve the channels, but we actually cannot resolve the full cross-section. What we think happened is that, of course, as long as the water is turbid, the LiDAR cannot penetrate, so we're actually not seeing the full cross-section of the channel. What I wanted to show you also, and advertise a little bit, is this tool that we have developed in my group; it came out last year. It's called dorado, and it allows you to do Lagrangian transport. The interesting thing is that you can seed the particles where you want. So I'm showing you here an example of seeding at site four. For example, if you wanna understand whether particles get here, you can think of it like solute transport: what do they do, and what dynamics do these particles experience? And you can also look at residence times of the particles at that location. But you can do this at multiple locations; you can do it over the whole floodplain, and we're using the residence time distributions to essentially quantify these patterns of connectivity between the river and the floodplain. And so the residence time distribution is another place where you can see what the role of resolution is. What we see, for example, is that if you wanna capture the patterns of site four, then you really need higher resolution, right? You can see that the 10 meter run here results in a simulation in which essentially you don't have enough velocity, and not even enough water at that site, to really allow these particles to move out. Essentially the particles get there and they're stuck there. You see this residence time distribution is just gonna continue in this direction, while the two and five meter runs do a very nice job.
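To make the residence-time idea concrete, here is a minimal sketch of how a residence time distribution could be computed from particle entry and exit times at a site. This is not dorado's actual API; the function names and the toy data are hypothetical, and particles that never leave the site, as at the 10 meter resolution, are treated as censored.

```python
import numpy as np

def residence_times(entry_t, exit_t):
    """Residence time of each particle in a region of interest.

    Particles that never leave (exit time is NaN) are censored:
    they stayed at least until the end of the simulation.
    """
    entry_t = np.asarray(entry_t, dtype=float)
    exit_t = np.asarray(exit_t, dtype=float)
    stuck = np.isnan(exit_t)           # particles trapped at the site
    rt = exit_t - entry_t
    return rt[~stuck], int(stuck.sum())

def exceedance(rt):
    """Exceedance probability P(T >= t) of the residence times."""
    rt = np.sort(rt)
    p = 1.0 - np.arange(len(rt)) / len(rt)
    return rt, p

# toy example: 5 particles seeded at a site; one never drains out
rts, n_stuck = residence_times([0, 0, 1, 2, 3], [4, 6, 3, np.nan, 9])
```

A heavy censored fraction, like the one particle here, is exactly the signature of the too-coarse run where particles arrive and get stuck.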
But of course, if you look at the overall floodplain, so let's say your question is to look at overall residence times on the floodplain, then maybe it's not gonna matter that much, because the regional pathways, the regional patterns of the floodplain, are gonna be dominated by all the rest of the space, right? Outside of some of these channels, the majority of the floodplain can be captured at lower resolutions as well. And the other thing, which I'm not gonna talk about a lot in this talk but just wanted to show you a little bit, is this interplay between rainfall and discharge, which is something that currently is not really studied in this type of analysis of river-floodplain connectivity. What you can also do is think about how this residence time is changing over time. So, for example, these curves, let's start with the right one first: these are particles that are in the floodplain, but they're essentially tracked through time. At what time were they seeded on the floodplain, depending on where we are in the hydrograph, the rainfall pattern and the discharge event? And you see, for example, that the lower curves here are the later parts of the event; these are the ones when the rainfall has finished. And so there are certainly parts of the floodplain that get isolated, and essentially those particles don't have a chance to drain out of the floodplain. It's less interesting, in a way, to look at that temporal pattern for the particles that initiate in the river and get into the floodplain, because that pattern of overbank flow is much more consistent throughout the event once the overbank flow starts. Okay, so we're gonna switch gears. We just saw a case in which we have LiDAR data. In principle, we can resolve these channels, right? And so you can decide where you want your mesh to be finer and where you want your mesh to be coarser.
It really depends on what question you're trying to address. Now I'm gonna show you a different case, in which we actually don't have data, right? And this is a collaboration with Johan and Sergio at Boston University, and it's part of a larger-scale project, a mission called Delta-X, which is funded by NASA. Essentially, what Delta-X looks at is combining new remote sensing instruments that are currently being prototyped and flown on aircraft, for water level and sediment concentration, with field observations and numerical modeling. We just had a field campaign this spring, and there's gonna be another one in the fall. The idea is that we have simultaneous field measurements and remote sensing; we're gonna use this information to improve the model, and then the model is used to make predictions into the future, particularly in the context of soil accretion and the controls on soil accretion, for example network structure and so forth. And so this is where capturing those patterns of connectivity becomes really important, right? Because we're talking about soil accretion, we need to make sure that the flux propagation into these wetlands is actually done right. And this is where the problem comes in, because if you start looking at the bathymetry and you zoom in, you're gonna start seeing areas that are actually disrupted, right? The bathymetry is put together from multiple sources, and I think all of us that work with bathymetric data know that there are a lot of limitations on what data we can access, how often it's updated and so forth. All these disconnections are actually a problem when you're trying to get the fluxes into the wetlands. So you can see, for example, here, this is our modeling domain. It was pretty highly instrumented.
Some of these dots that you see are actually permanent sites, and the other ones were just placed for the field campaign. This is actually not a Delta-X campaign; this was a pre-Delta-X campaign in 2016, but we don't have the new data yet. But the point is that when you run the hydrodynamic model, you actually do a pretty nice job in channels, and that's probably not surprising, because we can model channels pretty easily. But when you start getting fluxes into these sites in the wetland, it gets complicated, right? So you can see, for example, here, site 6042, which is 15 meters wide. The model is run at 10 meters, so this is something that we can see at least in part, but you can see that this channel essentially almost doesn't see anything. The solid red line here is the modeled water level, right? Nothing compared to the black line, which is what is seen at the gauge. So obviously we have an issue, because if you're trying to get a better modeling of soil accretion and we can't get the water there, if you can't get the hydrodynamics right, then getting sediment transport right definitely does not work. And so the idea that Johan and Sergio had was to say, okay, what about using imagery? For example, we have Sentinel-2, and we have the National Wetlands Inventory, which I think is actually at one meter resolution. So there's imagery, and we see the channels, right? The channels are missing in the bathymetry. So you can say, well, I can use the imagery to carve the bathymetry: where I see the water mask, I carve the channels in, and I have an improved bathymetric dataset that potentially can improve my model. The thing is, you can't just do this blindly, because what you see here is that there's kind of an interplay between how much volume you remove, which is the red line here underneath, and what happens to the tidal prism, which is really the volume of water that gets in and out of the system during one tidal cycle.
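As a sketch of the carving step, assuming a gridded depth array and a boolean water mask derived from imagery (the names and the toy grid are illustrative, not the actual workflow code): cells that the mask identifies as channel, but that are shallower than the chosen carving depth, are deepened to that depth, and the removed volume, which is what drives the tidal prism up, is tracked.

```python
import numpy as np

def carve_bathymetry(depth, water_mask, carve_depth, cell_area=100.0):
    """Carve imagery-detected channels into a bathymetry grid.

    depth: 2-D array of water depths in meters (positive down).
    water_mask: boolean array, True where the imagery shows water.
    carve_depth: cells in the mask shallower than this are deepened
        to this depth; already-deep channels are left untouched.
    cell_area: grid cell area in m^2, to report the removed volume.
    """
    carved = depth.copy()
    target = water_mask & (depth < carve_depth)
    carved[target] = carve_depth
    volume_removed = float((carved - depth).sum()) * cell_area
    return carved, volume_removed

# toy 2x2 grid: one cell is already deeper than the 2 m carve depth
depth = np.array([[0.2, 1.5],
                  [3.0, 0.0]])
mask = np.ones_like(depth, dtype=bool)
carved, vol = carve_bathymetry(depth, mask, carve_depth=2.0, cell_area=1.0)
```

Reporting the removed volume alongside the carved grid is what lets carving be treated as a constrained choice rather than a blind edit.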
So if you carve a lot, the problem is that you're gonna have a tidal prism that is totally exaggerated, and then essentially your flux propagation is gonna be unphysical. And so what we proposed to Johan, and this is work of my PhD student, Kyle, was to say, okay, what if we come up with a way to inform where we should carve and how much we should carve? And the way I thought about this was to say, how about I think about a cost function and a fast-marching approach? You can think of asking, what's my cost of traveling to a certain location? Now, these fast-marching, geodesic approaches are based on cost functions that you define based on what you're interested in and what process you're trying to capture. In this case, we're trying to capture fluxes, right? Tidal propagation. So our cost function is gonna be a celerity. You can think of it as the ease of traveling to certain parts of the network. So let's say this is your actual network in the field, but the original DEM that you're given is something like this. If you're trying to reach this location in the wetland, you're kind of stuck, because the channel stops there. This is reflected in your travel time to the location. You can clearly see that travel time is very fast, you move fast, as long as you have the channel, and then you slow down quite a bit as soon as you land in this area where you don't have a channel carved into the DEM. So we can carve the bathymetry, modify it, and get an improved travel time. But as I mentioned before, this carving has a cost, right? And that cost is associated with the tidal prism. So this becomes an optimization problem in which you can put your constraints on two axes and say, well, I need to remove mass, but I don't wanna remove too much mass, because that becomes totally unphysical. But I also wanna improve my reduction in travel time to the gauges.
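The travel-time idea can be illustrated on a small grid. The actual work uses a fast-marching solver on a celerity-based cost function; as a simple stand-in with the same logic, the sketch below runs Dijkstra on a grid where channel cells are cheap to cross and unchanneled wetland cells are expensive. All names and numbers here are illustrative.

```python
import heapq

def travel_time(cost, source):
    """Minimum travel time from `source` to every cell of a grid,
    where cost[i][j] is the time to step into cell (i, j).
    Dijkstra on the 4-connected grid: a simple stand-in for a
    fast-marching solver built on the same cost-function idea.
    """
    n, m = len(cost), len(cost[0])
    t = [[float("inf")] * m for _ in range(n)]
    t[source[0]][source[1]] = 0.0
    pq = [(0.0, source)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > t[i][j]:
            continue  # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < m and d + cost[ni][nj] < t[ni][nj]:
                t[ni][nj] = d + cost[ni][nj]
                heapq.heappush(pq, (t[ni][nj], (ni, nj)))
    return t

# channel cells (cost 1) are fast; unchanneled wetland (cost 10) is slow
cost = [[1, 1, 1, 10],
        [10, 10, 1, 10],
        [10, 10, 1, 1]]
t = travel_time(cost, (0, 0))
```

A cell reachable only through high-cost wetland shows a large travel time, which is exactly the signature of a channel that stops short in the DEM.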
I need the water to get there, and I need the water to get there at a speed that is similar to what I see in the field. You can't achieve the best of both; it's just not gonna happen. But the point is that you have a compromise, a kind of boundary here, and these are all possible solutions. And so you look for solutions on these Pareto fronts and say, okay, I can do different types of carving, I can get all my travel times, I can compare all these curves and understand which one provides the optimal level of modification of the bathymetry, right? And so this is what you see here, and I'm sparing you some details in this talk, but we also tried different types of imagery. So, for example, this refers to a combination of Sentinel-2 and the National Wetlands Inventory. The network definitely plays a role in terms of the performance, and the combination of Sentinel-2 and the National Wetlands Inventory seems better. But the point is that all of these Pareto fronts are for different levels of carving applied to different channel types, let's say. The point is that if you have a channel that is already deep, you don't necessarily need to make it deeper, right? So you can decide to carve only whatever is shallower than one meter, or two meters, or five meters. The numbers that you see are all different carving depths, if you like, and all these curves have a sort of inflection point. The initial increase in improvement in travel times comes at a fairly low cost in terms of mass removed, but then you reach a point, which is usually at two meters, at which any further improvement in travel times is associated with a larger mass removed. And so based on this we can say, you know what? This network is better, and also consistently carving at two meters seems to be the best compromise between these two criteria, one that leads to a physical solution while still achieving better travel times.
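The scenario selection described here can be sketched as a standard Pareto filter over (mass removed, travel time) pairs, where smaller is better on both axes. The numbers below are made up for illustration, not the actual results:

```python
def pareto_front(points):
    """Keep the non-dominated (mass_removed, travel_time) pairs.

    A carving scenario is dominated if another scenario removes no
    more mass AND achieves a travel time at least as good, i.e. is
    smaller or equal on both axes (and differs somewhere).
    """
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return sorted(front)

# hypothetical carving scenarios: (mass removed, travel time to gauge)
scenarios = [(1.0, 9.0), (2.0, 5.0), (3.0, 4.5), (2.5, 6.0), (5.0, 4.4)]
front = pareto_front(scenarios)
```

Picking a point near the knee of this front, the inflection around two meters in the talk, is the "best compromise" step.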
And you see an example here of three different carved bathymetries, and these are travel times. You see that in the original bathymetry that we have, it's really hard to reach the wetlands, right? There are no pathways, it's very slow. This is very much a problem if you're trying to understand the behavior of the coast through time and soil accretion. But here, with these carved bathymetries, you actually reach these locations at a much faster speed. And so the modeling improves. This is an efficiency metric, and this is a root mean square error, so kind of typical ways of comparing observations and models; for the efficiency, roughly above 0.5-0.6 is better, and all of these are different levels of carving. So we said that two meters is optimal, but with the model, and this is actually Delft3D modeling done at Boston University, we tried from 0.5 to 10 meters of carving. And you see that the M5 scenario, which is essentially the two meter carving, results in improvement at most of the gauges, but not really at an optimal level. The root mean square error is mostly consistently going down. So it actually turns out that in this application it is not only the depth carving but also the channel widening that works well. These are other scenarios: the one on the left is again efficiency and root mean square error for the model that is only widened. So you take the channels and you make them wider; this is particularly important for those channels that are one pixel in our model. And the case on the right is carving of two meters plus the widening. And you see that with a little bit of widening, the results are already much better. So it's a combination of optimal depth carving and widening that achieves better performance. Okay, so I'm gonna wrap up these two stories. I presented them as very distinct, but you can see there are actually a lot of elements of similarity between the two.
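The two comparison metrics mentioned here, model efficiency (Nash-Sutcliffe) and root mean square error, are standard; a minimal implementation, with made-up water levels for the example, looks like this:

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect match, and values
    above roughly 0.5-0.6 are usually considered acceptable."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def rmse(obs, sim):
    """Root mean square error, in the units of the data (meters)."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

# hypothetical observed and modeled water levels at one gauge
obs = [0.0, 0.2, 0.5, 0.4, 0.1]
sim = [0.0, 0.1, 0.4, 0.4, 0.2]
```

NSE rewards capturing the variability of the signal (a flat model line at a responsive gauge scores near or below zero), while RMSE tracks the absolute offset, which is why the two can move somewhat independently across carving scenarios.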
The case in which we have LiDAR is a case in which we can decide, because we can have meshes that are adaptable and make them finer or coarser; we're using ANUGA, so it's pretty easy to do that with the mesh, and we can decide what we want to capture. And the point is that for some of those local patterns at sites, understanding particle behavior and residence time, which potentially has implications for sediment transport, you need that resolution to be there. But perhaps if you're looking at overall floodplain behavior and overall residence time distributions, that's gonna play less of a role, so perhaps the resolution could be coarser there. The second story that I presented is a case in which bathymetric data is a limitation, which is often the case, but we can start thinking about ways of optimizing the use of high-resolution imagery to improve our bathymetric data so that we can improve the performance of our model. And with that, I wanna thank you all. Thank you for the invitation. Thanks for being here. I'll also mention that this data-model integration is very much a topic of interest for a research coordination network funded by NSF that I have with Nancy Glenn at Boise State and Chris Crosby at UNAVCO. We've been delaying the second workshop to have it in person, and we're planning it for spring 2022. So if you're interested in these ideas, that's gonna be a good opportunity for continuing this type of discussion. And with that, maybe I have a minute for questions. Thank you, Paola, that was wonderful. And this is very relevant; just what you mentioned in the last part is very close to what CSDMS wants to accomplish as well, coupling between data and models. If there are any questions, please either raise your hand or type it in the chat. We have a minute for that. I see Julio, do you have a question? Julio, can you chat? Can you unmute, Julio? Did you type it? Hi.
So if I understood correctly, you want to make this extra channel so that you allow the flow, but you don't want to add extra tidal prism. So what if you make the channels very shallow but with insanely low friction, so that it would allow water to go through but doesn't add much volume? I mean, because you are trying to add something that isn't necessarily the real thing; you just want to have the resulting effect, correct? Yeah, that is correct. I mean, I would say that roughness is one of the things that we're considering. The more we model, I think the more we think that roughness plays a role, but not a huge role in this system, for the channels in particular; of course, when you get into the wetland, that's a different story. So I don't think we have tried what you are suggesting as a possibility. We kind of focused on optimizing roughness, and that's happening through the combination with remote sensing and then looking at further improvements, but it would be interesting. I don't have a metric to show for the scenario that you're thinking about. And I think the steps that we're following are essentially to say, given the remote sensing, let's try to calibrate the roughness coefficients and so forth as much as we can, and get the model that we think is best. I think what we're seeing here is really a limitation of not only the local behavior, I would say, but also that you're missing the connectors between the main channels and the smaller channels. Sometimes there was a site, for example 6008, where neither the depth improvement nor the width improvement changes things, and that's because the barriers are actually upstream of the location.