potentially very dramatic, this integration event. Thank you, Jenny. We will stay with fluid flow problems, in a little bit different setting though, because our next speaker is Robert Weiss from Virginia Tech. He works on tsunamis and has been coming to the CSDMS meeting several times in a row, so we're excited to have him as a keynote speaker and show us some of the work that he does at Virginia Tech. Thank you very much. Thank you, organizers, for inviting me to present. We're going to talk about very different things now than we just saw, and it will be comparatively simple. But before we start, I would like to give you a little overview. I want to express a few thoughts about hazard modeling as it pertains to tsunamis, and also to comment on a few things that I've heard during the last few days. Then I would like to tell you a little bit about the difference between tsunamis and storms, because that's a discussion we have to have when we talk about tsunami hazards. And then some modeling approaches for boulder transport and for sandy tsunami deposits. I heard a lot of people saying that all models are wrong, but some are useful. Actually, I think that's not really right as it pertains to the modeling that we do. That saying comes out of statistics, where we may have a random process that we don't necessarily understand, which blurs the picture. But with our type of modeling, we usually have a good idea what the physics are: the physical processes are guided by very principal physics, like for example gravity pointing downwards, this kind of thing. And in that regard, we know where the target is. We just may not hit it really well. We may not have numerical methods that are very precise, but we know where the target is. That's the difference. And the same is true for data: we know what the data should look like, but maybe it doesn't. So we need to think about that a little more as we move forward. 
What I'd like to say about models, or my understanding of models, is related to the Mona Lisa, which is a model in itself because it's a picture of a woman; let's assume that woman was real. You can look on the internet and find all sorts of algorithms that tell you how to draw a Mona Lisa. And if you're really good at that, you have a good model: you learned how to draw the Mona Lisa. If I draw it, it still looks like this, but the thing is, I learned something. I learned something about the process, and about the governing processes that resulted in this picture. And even though mine is not as good, I can do 70 million of these in a day, whereas the original could be painted only once in a lifetime. Then we have to discuss the difference between deterministic modeling and stochastic modeling. That is the transition I'm going through right now in my research, from deterministic to stochastic modeling: giving up some accuracy or precision, and hoping that by picking the right distributions over the parameters, or the parameter space, we depict the Mona Lisa on average. The other thing I've been thinking about a lot is model coupling, which is probably the essence of CSDMS as I've come to understand it over the past few years. I have a little child that is growing up, and what I noticed is that early on, children play in parallel: child one, model one, builds a block tower; child two, model two, takes the blocks apart and builds another tower. That is sometimes what we do with model coupling. Say we use a three-dimensional model to calculate the velocity, but we only need a two-dimensional velocity in the next model. So we take the 3D velocity, average it out, and hand it over. 
But that doesn't mean we leverage the advantages of the 3D model, of the first model, in the second model. It's blind coupling: parallel play, if you will. But then something really astonishing happens at the age of three, three and a half, when all of a sudden one child takes another child's block and does something with it to improve it, or gives it another purpose, another insight. And sometimes we have to do exactly that when we think about model coupling. We not only have to couple models just so they exchange information; we also need to couple models with each other that are appropriate to couple, so that the sum of them is greater than the parts. That's the goal. And that's also the transition I'm in right now, even though I'm well past that age; some things just take a lot longer. Then there was a lot of talk about data, about model-data integration. That's really important, especially for tsunamis, and for data-driven science, because both models and data carry a lot of uncertainty. Ideally, we have a model, let's say of the ocean or the earth, and a function f, the governing equations, that generates the data that we see as model output; call that d_m, the model data. If you compare that to data that we measure, d_o, the observed data, then ideally d_m is approximately d_o. But what we often forget is that the data is a model too: we have to assume certain things about the data. That is something we need to understand and be honest about. Both are exposed to uncertainties. And what is definitely true is: rubbish in equals rubbish out. You cannot expect, just because you have a good model, that your result will be good if the data you compare to is bad, and the other way around. So we need to be honest about the errors and uncertainties in the data. 
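The parallel-play handoff I just described, taking a 3D velocity, averaging it out, and passing it to a 2D model, can be sketched in a few lines. This is only a toy illustration; the grid and layer thicknesses here are hypothetical, not from any particular model pair:

```python
import numpy as np

def depth_average(u3d, dz):
    """Collapse a 3D velocity field u3d(z, y, x) to a 2D field by
    depth-averaging -- the 'blind coupling' handoff described above.
    dz holds the thickness of each vertical layer."""
    # thickness-weighted mean over the vertical axis;
    # all vertical structure the 3D model resolved is lost here
    return np.average(u3d, axis=0, weights=dz)

# hypothetical example: 4 vertical layers on a 3 x 3 horizontal grid
u3d = np.random.rand(4, 3, 3)
dz = np.array([0.5, 1.0, 1.0, 2.0])   # a thicker layer near the bottom
u2d = depth_average(u3d, dz)
print(u2d.shape)   # the 2D model only ever sees this averaged field
```

The point of the sketch is exactly the criticism above: the 2D model receives a convex combination of the 3D values and can never recover the vertical profile.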
And obviously we have to be honest about, and aware of, the uncertainties and errors of the model. Right. Now we're getting to the real stuff: tsunami hazard from deposits. If you look at the earth, there are lots of areas where tsunamis can strike, but there are also a lot of areas where storms can strike. Both are agents of dramatic coastal change, and their impacts are dramatic. Financially, the most devastating disasters globally are related to these two processes, in loss of life and you name it. The problem is, fortunately or unfortunately, depending on which side you are on, neither of these processes, storms nor tsunamis, occurs frequently enough that we can use the measured record to project future impacts. So we have to go into the geological past and interrogate the geological record. And that is a problem, because it's not a direct measurement: we have to invert what is saved in the deposits to learn about the causative process. But before we even start doing that, we need to understand the major differences between storms and tsunamis. This is the 2004 tsunami, a tide-gauge measurement of the water surface, I think from somewhere in the Indian Ocean, close to Africa. You see the waves: a few waves dominate the system, and the amplitudes start to decrease toward six hours. So just the first few waves dominate the tsunami hazard, meaning tsunami sediment transport and destruction. If you look at a storm, it's a very different situation. You have a build-up of the storm surge and the waves, and then it goes down again; that's the wave height, or wave amplitude. And if you look at the periods, they are vastly different in terms of how they act. So now imagine you are a sediment grain exposed to a storm or a tsunami. You need to think about how many waves the tsunami has. 
Let's say four or five are important. Now guess how many waves it takes to erode a sand grain of, say, two millimeters, to move that sand grain, in a regular storm, an average storm. Is it hundreds? Who's for hundreds? How about 1,000? Let's go in orders of magnitude: 10,000? 100,000? There were so many people here that didn't answer, so maybe it's 33,000 waves. And you can imagine that the deposits generated where potentially 30,000 pulses can cause sediment transport would look different from those generated by just a few. So here's the question: which one is the tsunami deposit? Okay, first, who thinks the right one is the tsunami deposit? Sorry, that's your left, right? My right. So who's for the other one? Brave souls. They're both tsunami deposits. And I could show you the same for storm deposits; you would not be able to distinguish them. So everybody who says they can distinguish tsunami from storm deposits is confident in a very wrong way, because you can't, you really can't. That's where context is important, and that's where we really need to study how these sediments are formed. I had come to the conclusion, with deterministic modeling, that if we knew the precise physics of a tsunami and then looked at precisely how sediment is moved, we could explain not only the top layer, which is the 2004 tsunami's, but all of these other bright layers underneath. I no longer think that's possible. I think we need to look into other, much more statistical ways to do that. And I think that's an important transition the tsunami community is going through right now: the acknowledgement, and the beginning quantification, of the uncertainties around deposits. 
And I'm talking about the generation processes, not about the processes that take place after the tsunami deposit was laid down, what we call post-depositional processes. They alter tsunami deposits as well, but that is a hugely complex problem that I'm afraid we will not solve soon. When we talk about tsunami deposits, we all know about sands, but we also talk about something like this. The height of the yellow plus the height of the white is one meter, so I don't want to be near that thing when it moves. But that, too, is transported by a tsunami, and it tells us something about the causative process as well. I'm going to tell you a little bit about the motion of both of these deposit types. Let's talk about boulders first. Boulders are anything above 25 centimeters, so it's a wide range of possible grain sizes. We did a little study, and when I say we, I mean mostly my students: a little study with a three-dimensional hydrodynamic code called GPUSPH, which we coupled with one of these gaming physics engines that do very fancy particle motions and are actually physically quite accurate. So we decided to look into that, and we exposed the boulders we placed in this experimental setup to different water depths, different sizes, and so forth; we created a little parameter study. And then we said, let's do a scaling analysis to maybe see a difference between boulders moved by tsunamis and boulders moved by storms. The parameter study is very limited, because the runtime of the model is about a week, so you need to plan ahead with these things, and unfortunately the sample size is not very large. Here are some snapshots, and you can see very complex motions, how boulders are sliding and rolling. It's really, really cool. 
And then if you analyze the data in a systematic way and look at the geometry of the parameter space you just created — we were assuming we would see a simple geometry, in the sense that we could discriminate between boulders moved by tsunamis and by storms. But that is not the case. They all sit together in a really, really complex parameter space. And that was the first hint, for me, that physically speaking we may not be able to see a difference, may not be able to discriminate. And I thought, well, that's a really nice complex model, but I want to use a simpler model. So that's what I did. I assumed a spherical boulder, which is about as realistic as a spherical cow, placed it on a slope, put some roughness in front of it, and then assumed that the boulder is dislodged when it passes a critical angle. This kind of simple approach has been tried many times; there are models around that do similar things, but they just assume that if the sum of the forces is larger than zero, the boulder is dislodged. I argue that's not the case. If the boulder doesn't reach the dislodgment angle, it will move back into its original position, and we'll come back to that in a second. It has to reach the dislodgment position for us to know that the boulder was actually dislodged. So we used the simplest equation there is, Newton's second law of motion, and we sum the forces; it's a very standard approach. The seven-fifths factor in the first equation comes from the inertia of the rolling sphere, and we also have to account for the water the boulder displaces; there are some issues there. But in a sense it's a very cheap equation, very simple, nothing fancy. Yet it fundamentally changes how we think about recognizing boulder dislodgment. 
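The rocking-boulder criterion can be sketched in a few lines. This is a minimal toy version of the idea, not the study's actual model: every parameter value below (mass, radius, critical angle, the simple torque balance) is hypothetical, chosen only to show why a net force above the static threshold is not enough:

```python
import math

G = 9.81  # gravity, m/s^2

def dislodged(force, duration, m=500.0, r=0.5,
              theta_crit=math.radians(30), dt=1e-4):
    """Integrate Newton's second law for rotation about the downstream
    contact point. The boulder counts as dislodged only if its tilt angle
    passes theta_crit; if the force stops earlier, it rocks back, so
    'sum of forces > 0' alone is not a dislodgment criterion."""
    I = (7.0 / 5.0) * m * r * r      # rolling-sphere inertia about the pivot
    theta, omega, t = 0.0, 0.0, 0.0
    while t < duration:
        # flow torque minus gravity's restoring torque; the restoring torque
        # vanishes once the center of mass passes over the pivot
        torque = force * r - m * G * r * math.sin(theta_crit - theta)
        omega += (torque / I) * dt
        theta += omega * dt
        if theta <= 0.0:
            theta, omega = 0.0, 0.0  # resting against the bed again
        if theta >= theta_crit:
            return True              # past the tipping point: dislodged
        t += dt
    return False                     # force ended first: boulder rocks back

# the static threshold (net torque > 0 at rest) is about 2.45 kN here,
# yet a 3 kN pulse that is too short still fails to dislodge the boulder
print(dislodged(3000.0, 0.05))   # short pulse: rocks back
print(dislodged(3000.0, 5.0))    # sustained force: dislodged
```

This is also where the timing argument comes from: reaching the dislodgment angle takes a finite time, so the duration of the forcing, not just its peak, decides the outcome.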
So that's the starting point, and as you can see here, the boulder has to overcome this little step before we can recognize that it's dislodged. At some point, the sum of the forces becomes positive and the boulder starts moving. But what would happen if the forces stopped now? The boulder would move back into its original position, and we would not be able to recognize that it ever moved. We can continue this until about here; now the boulder can dislodge, and now we can recognize in the field that it actually has moved. Compared to previous models that just used the sum of the forces being larger than zero as the indication of motion, that threshold introduces a 30 to 70 percent error in the estimate of the wave height. That's not good. And the unfortunate part is that we cannot predict which it is, 30 or 70; it depends on a lot of components. But the good thing is, as you noticed, it takes some time for the boulder to move up there. So maybe we can use this to discriminate between storms and tsunamis, because dislodgment is no longer linked only to the maximum amplitude but also to a timing, and we know that storm wave periods are orders of magnitude smaller than tsunami periods. So I was very happy about that: yeah, cool, let's move on. I also thought that the monochromatic sine wave assumption we used to make for storm and tsunami waves is, again, about as realistic as a spherical cow; it's not really workable. So I started using nonlinear waves and frequency-domain models. Instead of using one wave to describe a storm, we use many, many waves and their perturbations. We used the triad interaction equations and a very simple slope, placed the boulder there, and then simulated the wave evolution toward the shore with this frequency-domain model. What we see here in the upper portion, in panels A, B, and C, are different water depths. 
From left to right it goes from 20 meters of water depth down to five meters. As you can see, the spectrum changes significantly on the low-frequency side, in what we call the infragravity domain, meaning longer waves. As the waves move ashore, more and more wave energy is pushed into the longer components. That's important for boulder transport. If you look at the resulting time series, you see that in greater water depth they are very symmetric and there are not a lot of spikes. But toward the shore, the wave signal becomes asymmetric: the crests are higher than the troughs, and there are more spikes because of wave-wave interaction. That could lead to boulder transport, and exactly that takes place. For different realizations of the same spectrum, we see that the same boulder mass is not always dislodged, due to the nonlinear interactions between the different wave components. For a really small mass, say 255 kilograms, almost all realizations dislodge the boulder; as we move toward 7.5 tons, fewer and fewer realizations are able to. That's great, because it gives us a dislodgment frequency. If you plot that over the entire spectrum — and there are about 80 million model runs in there — you can see, quite nicely, that larger boulders need larger waves to be dislodged. And you see there is a transition zone: it sometimes takes a considerable range of mass to move from blue to red, red meaning all realizations dislodge the boulder and blue meaning none do. So that is great: we could bring this into the realm of statistics, of likelihood. We are no longer bound to a deterministic description of boulder dislodgment. 
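The idea of a dislodgment frequency from many realizations of one spectrum can be sketched like this. Everything here is a stand-in: the spectrum shape, the mass-to-threshold scaling, and the realization count are all hypothetical, and a real run would use the triad-evolved nearshore spectrum and the rocking-boulder model rather than a simple amplitude threshold:

```python
import numpy as np

rng = np.random.default_rng(42)

def realization(amps, freqs, t):
    """One random-phase realization of a wave spectrum: same amplitudes,
    different phases, hence a different time series every time."""
    phases = rng.uniform(0, 2 * np.pi, size=len(amps))
    return sum(a * np.cos(2 * np.pi * f * t + p)
               for a, f, p in zip(amps, freqs, phases))

def dislodgment_frequency(mass, amps, freqs, n_real=200):
    """Fraction of realizations whose peak exceeds the (toy) threshold
    amplitude for a boulder of the given mass; the mass -> threshold
    relation used here is purely illustrative."""
    t = np.linspace(0.0, 600.0, 4096)
    threshold = 0.8 * mass ** (1.0 / 3.0)  # toy: bigger boulder, bigger wave
    hits = sum(realization(amps, freqs, t).max() > threshold
               for _ in range(n_real))
    return hits / n_real

# hypothetical storm-like spectrum: many components clustered near 0.1 Hz
freqs = np.linspace(0.05, 0.2, 30)
amps = 2.0 * np.exp(-((freqs - 0.1) / 0.03) ** 2)

for mass in (255.0, 2500.0, 7500.0):
    print(mass, dislodgment_frequency(mass, amps, freqs))
```

Small masses come out near a frequency of one and large masses near zero, with the interesting, partly-dislodged transition zone in between — exactly the blue-to-red band in the figure.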
Now if you compare boulders moved by storms, with classical storm amplitudes and periods, and those moved by tsunamis, with classical tsunami wave characteristics, and divide the amplitudes by each other, you get this plot. And this is really discouraging. Why? Because it's the ratio between the two amplitudes needed to move the same boulder, and the maximum difference is 36 percent. My feeling is that might not be enough, given all the uncertainties we have in the field: not knowing where the boulder came from, not knowing how large the roughness was that the boulder had to go over, not knowing how far the boulder moved, or whether this was its original size or it broke up during the process. It might not be enough to say this is a difference we can actually work with. So we're looking at other ways to discriminate between boulders moved by tsunamis and by storms, but my hopes are not high at this stage. Now let's talk about sand in the last seven minutes; this is going to be a real roller-coaster ride. This is what I call the honeymoon equation. Those of you who know my wife, this will not come as a surprise: this is what we did on our honeymoon. We were sailing down the Nile; others enjoyed the scenery, and we worked out these equations, which are very similar to what some of the people sitting over there work on. Anyway, it was actually very romantic. But that's obviously an equation we cannot solve in the context of a stochastic system, because it is way too expensive, certainly in the sense of an inversion, where you want to retrieve information from the deposits. We also ventured out, and this is thanks to my student who sits back there: we developed a model called Strike that will eventually, hopefully, replace the forward model in our inversion system. 
And here's a comparison to Johnson's data — he sits somewhere over there too. It actually looks really good, but it's still way too expensive to put into the context of an inversion system. An inversion system consists of data, a forward model, and an inversion technique, and oftentimes our inversion technique is nothing more than trial and error against the data; I'll show you a real inversion in a moment if I get to it. For tsunami deposits, what we usually assume is an equilibrium profile, and we have known since the 1930s or so that larger grain sizes concentrate closer to the bed than smaller ones. It's a very simple, physically intuitive situation, and it can be described by the Rouse equation. If you assume that, you find that you get a normally graded deposit: large grains are at the base, because they were closer to the bed than the others. If you then take the same approach and know the result, you can place the respective grain sizes in the water column at positions that would create a certain grading, be it normal or inverse, in the deposit just by settling out. That's essentially the forward model. And we do the inversion by simple trial and error: try one flow, settle it out, it doesn't fit; try another one. Obviously it's a little more quantitative than that. We've done this for several deposits; this is for deposits in India from the Indian Ocean tsunami, and what comes out is actually kind of good. But we cannot really tell anything more than this. This is the inverted flow speed, and it is broadly confirmed by measurements. We can give a minimum, a maximum, and sort of an average, but this minimum and maximum are not in the traditional statistical sense, because there is no distribution involved; the result is just bounded by these physically informed values. 
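The try-one-flow, settle-it-out, try-another loop can be sketched with the Rouse profile. To be clear, this is a bare-bones illustration, not the actual inversion: the grain classes, reference height, and flow depth below are hypothetical, and the "deposit" is reduced to a mean-settling-velocity profile:

```python
import numpy as np

KAPPA = 0.4  # von Karman constant

def rouse_profile(ws, u_star, z, h, z_a=0.05):
    """Rouse equation: relative suspended-sediment concentration C(z)/C(z_a)
    for a grain class with settling velocity ws under shear velocity u_star."""
    p = ws / (KAPPA * u_star)                       # Rouse number
    return ((h - z) / z * z_a / (h - z_a)) ** p

def forward_deposit(u_star, ws_classes, h=5.0, n=20):
    """Toy forward model: concentration-weighted mean settling velocity of
    the suspension at each height; settling this column out produces the
    graded deposit (coarse grains near the bed -> normal grading)."""
    z = np.linspace(0.1, h - 0.1, n)
    conc = np.array([rouse_profile(ws, u_star, z, h) for ws in ws_classes])
    return (conc * ws_classes[:, None]).sum(axis=0) / conc.sum(axis=0)

def invert_u_star(observed, ws_classes, candidates):
    """Trial-and-error inversion: try a flow, settle it out, compare,
    try another -- keep the candidate with the smallest misfit."""
    misfit = [np.abs(forward_deposit(u, ws_classes) - observed).sum()
              for u in candidates]
    return candidates[int(np.argmin(misfit))]

# hypothetical grain classes (settling velocities, m/s) and a synthetic
# 'observed' deposit generated with a known shear velocity
ws_classes = np.array([0.005, 0.02, 0.08])
observed = forward_deposit(0.12, ws_classes)

candidates = np.linspace(0.02, 0.3, 57)
print(invert_u_star(observed, ws_classes, candidates))  # recovers ~0.12
```

Because the synthetic observation was made with the same forward model, the loop recovers the flow exactly; with a real deposit the misfit never reaches zero, which is where the physically informed minimum and maximum bounds come from.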
We applied this to storm deposits as well, and you can see here that we have to cut out some portions of the grain-size distribution, simply because we cannot deal with them. This is for Typhoon Haiyan, I believe, in 2013, and the model does a reasonable job. So we are fairly convinced that this model can be applied to certain storm situations as well, and we can even extrapolate. But now we want to talk about real, real inversions. We now have a data-informed inversion technique: we implemented, again thanks to Hui and his friend, the ensemble Kalman filter technique. That's what we've done now, but we need a lot more information from the deposits to inform this technique, because there is a time-dependent situation in there. The EnKF is often used in weather forecasting, and you have noticed that a weather forecast becomes a lot more accurate as you come closer to a certain date; it's the same here. We use the sedimentation flux, I believe, as the way of informing it. I don't want to go on too long about this, and this is my last slide on it. What has this allowed us to do? I could show you 35 slides on how cool this is, but the really cool thing is that we can now actually look at the errors. With artificial data we can put an error model on the data and study the impact of the different errors on the result. Obviously we also have model errors, which we can likewise control. If you do such an error analysis, I can tell you that rubbish in, rubbish out is a pretty good assumption, a pretty good model: if you have really accurate data and your model is not good, your results will not be good. That doesn't come as a surprise, right? And it's the same the other way around. But here is what I found fascinating. What you see here is a sampling frequency. 
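For readers unfamiliar with the EnKF, here is a minimal, textbook-style analysis step; this is not the authors' implementation, and the scalar "flow depth" state, prior, and observation values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, obs, obs_err, H):
    """One stochastic-EnKF analysis step: nudge each ensemble member toward
    a perturbed observation, weighted by the Kalman gain estimated from
    ensemble sample covariances."""
    n = ensemble.shape[1]                        # ensemble size
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    HX = H @ ensemble                            # ensemble in observation space
    HXp = HX - HX.mean(axis=1, keepdims=True)
    # sample covariances from the ensemble anomalies
    Pxy = X @ HXp.T / (n - 1)
    Pyy = HXp @ HXp.T / (n - 1) + obs_err**2 * np.eye(len(obs))
    K = Pxy @ np.linalg.inv(Pyy)                 # Kalman gain
    # perturbed observations keep the posterior ensemble spread consistent
    obs_pert = obs[:, None] + obs_err * rng.standard_normal((len(obs), n))
    return ensemble + K @ (obs_pert - HX)

# hypothetical scalar state (say, flow depth) observed directly
H = np.eye(1)
prior = 2.0 + 1.0 * rng.standard_normal((1, 500))  # prior: mean 2, std 1
obs = np.array([3.0])                              # a single observation
post = enkf_update(prior, obs, obs_err=0.5, H=H)
print(prior.mean(), post.mean())  # posterior mean pulled toward the observation
```

With a prior variance of 1 and an observation variance of 0.25, the gain is about 0.8, so the posterior mean lands roughly four fifths of the way from the prior toward the observation — the same mechanism that, fed with sedimentation flux over time, sharpens the deposit inversion.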
That's a tsunami deposit, and you take a different number of samples from the base to the top; we start out with five and end up with 30. Now, if you talk to field people, they will tell you that any number above 15 is sort of insane. At least I've been told I'm insane when I proposed a large number, and I guess it is very difficult to do. But what this shows you is that the error in the inversion has a sweet spot in the number of samples. If you take, let's say, 11 samples, the overall error of the inversion is minimized; for the other panel it's nine, I believe. I think the left one is for the inversion of flow depth and the right one for the inversion of flow speed. So that's good. Now we can say: we have shown that if you take some number of samples larger than five but smaller than 15 — because beyond that the error in the sampling technique takes over — you actually minimize the error. And that shows we can actually inform field work, which is, I think, probably one of the most important outcomes of this. I think I have some final thoughts on my last slide. We obviously need better models, but that's always true. We also need better data, especially data that comes from deposits, which is not directly measured but inverted. We need to think carefully about how these deposits are analyzed, and how the model output is analyzed, so that we can actually compare apples to apples. It doesn't mean anything if you take a deposit in the field and put a lot of work into taking samples from different layers, but in the model you have just a depth-averaged situation. 
We obviously need better integration between field work and modeling, so that people who do modeling understand the boundaries and problems in the field, and vice versa, because oftentimes I feel I'm talking to people who don't understand the difficulties we often face. And especially when it comes to inferences from deposits, we need to make sure we understand the uncertainties, communicate what those uncertainties mean, and maybe move to techniques that allow us to study those uncertainties more specifically. And with that, I leave you. Thank you very much. Thank you, Robert, for a thought-provoking talk. We're starting to run a little over schedule, but we'll take one question. I see Katie there in the back. Thank you for a lovely talk. I think a challenge in bringing data — field data, what have you — into this sort of inversion framework is that really what you want is some sort of prior distribution on that data. If you're collecting a tsunami deposit, how, as a modeler or a field practitioner, do you actually think about assigning a prior? That kind of data is really challenging. So I wanted to see if you had any insight from your work on how to put the kinds of mathematical descriptions onto the kinds of data you need in this model-data comparison framework. I would just like to have reliable data that tells me, at each height above the base of the deposit, what the grain size was, in a very robust, statistically robust way; that would be awesome. I'm not even asking for priors in a statistical sense. There are so many uncertainties associated with the modeling that we have to be very brave in our definition of distributions for certain parameters anyway. Just very reliable and standardized data, because tsunami deposits can be this thick or this thick, generated by the same tsunami, about half a meter apart. 
So I'd like a standard way that allows me to look at the robustness of the data that is measured, or of the measurement procedure. Because what we did in our modeling, oftentimes, is try to create a situation where we can sample the model deposits in the same way as the field deposits, but we don't have to deal with the dirt, and we can have any number of grains, because we can just shove more in there, right? In the field, you collect a finite amount of grains, and at some point that impacts your statistics. Just standardizing that would be enough, without even thinking about it in a more advanced way. And also making the people who collect this data understand that it's actually important to be precise and accurate when collecting it. No offense to any field person. Thank you, Robert.