With all this complexity of the seismic assessment of tsunami potential, people quite logically thought: why don't we start using tsunami data itself to assess the tsunami potential? One of the first to do that was Kenji Satake, back in the mid-1980s. He published one of the first papers, and there is a series of papers, where he used tsunami measurements at the coast, and there were only a handful available at that time. The tide gauges on the coast measured wave amplitudes, and they measured tsunami amplitudes pretty well. So he used those tsunami records to invert for the source. Kenji is a seismologist; I mean, he's both, he's a very well-known tsunami scientist too, but I think his first hat was seismology, so he was mostly interested in the earthquake source. He was trying to see if he could learn something about the earthquake by measuring the tsunami.

The way he did it is fairly similar to the finite fault solution that is now employed for seismic waves. He subdivided the potential fault into subsections and generated a separate tsunami solution, using a tsunami model, for each of these subsections. These are Green's functions, if you will. Then he tried to approximate the observations on land as a linear combination of these Green's functions. If you do it with several tide gauges and several subfault sections, you come up with the system of linear equations that you see at the bottom here, b_i = Σ_j A_ij x_j, where the b_i are the measured time series (these are known), the matrix A is the matrix of Green's functions, and you need to solve these equations for the x_j, which are the slip magnitudes of the subfaults. So instead of just one CMT solution, which is a point-source solution, you can find the slip distribution along the finite fault. That was quite a novel approach at that time.

The 1983 Sea of Japan tsunami was his first attempt to do that. As you can see on the upper right, there is the comparison of the data, which he massaged pretty well, because none of this data was digital; it was all paper recordings that he had to work with. That is compared with the best-fit combination of the Green's functions, his best solution. He solved it with a least-squares approach for the x_j, and he could come up with the distribution of the source along the fault. It's just a great approach. But there were three main problems with it. Again, this was the mid-1980s: personal computers had just started to take off, and computer power was limited, so the matrix size for the inversion was limited. That's problem one. Problem number two is the data itself: the tide gauges were all analog stations at the time, definitely not real-time data. But the main problem with this data is that tide gauges are usually placed deep inside harbors.
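To make the structure of this waveform inversion concrete, here is a minimal sketch (not Satake's actual code): stack the tide-gauge time series from several stations into one vector b, stack each subfault's modeled Green's-function waveforms into the columns of A, and solve b = A x for the subfault slips x by least squares. All waveforms below are synthetic placeholders.

```python
# Sketch of Satake-style tsunami waveform inversion with synthetic data.
import numpy as np

rng = np.random.default_rng(7)
n_gauges, n_samples, n_subfaults = 3, 400, 6

# green[k, :, j] : modeled waveform at gauge k from unit slip on subfault j.
green = rng.standard_normal((n_gauges, n_samples, n_subfaults))
true_slip = np.array([0.0, 1.5, 3.0, 2.0, 0.5, 0.0])   # meters, hypothetical

# Observed records = linear combination of Green's functions + noise.
obs = green @ true_slip + rng.normal(0.0, 0.05, (n_gauges, n_samples))

# Stack all gauges: A is (n_gauges*n_samples, n_subfaults); b matches it.
A = green.reshape(-1, n_subfaults)
b = obs.reshape(-1)

# Least-squares solve for the slip distribution along the fault.
slip, *_ = np.linalg.lstsq(A, b, rcond=None)
print("recovered slip per subfault (m):", slip.round(2))
```

With real data, each column of A would come from running the propagation model once per subfault, which is exactly why the Green's functions are worth pre-computing.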
The reason for that is that they want to shield the tide gauge from the force of wind waves, which can be powerful enough to simply destroy the instrument, but it's the worst place you'd want to measure a tsunami. Yes, that's where the tsunami amplifies, but harbors are very enclosed bodies of water that resonate at their own frequencies. So they act as a filter, if you will; there is a filter embedded in every single time series that you try to invert. And the third problem, of course, is that it's not real time, as I mentioned already, so you cannot use this method for real-time assessment. For me, that's a fairly important point: while real time is not the only goal of tsunami inversion problems, it's a very important part of tsunami data assimilation and data inversion. And there is actually one more problem: the quality of the inversion solution depends on the quality of the model used to generate the Green's functions, which is the direct tsunami propagation problem. Those models were fairly crude, I have to say. That's exactly when I started to get into tsunami science; modeling was my specialty, and it still is, and I know the models were fairly crude. But the community definitely saw the benefits of this approach, and it really took it to heart to perfect it, to take these three problems and improve on them: the models, the data, and the speed of the calculations.

So there's this little graph that I put together, a timeline of tsunami forecast evolution. Back in 1975, the state-of-the-art model that I like to show is the one you see on the left, an animation of a tsunami wave hitting the island of Hawaii. You can see from the resolution what "state of the art" actually meant then. The author is Eddie Bernard, who was director of our lab for a long time, and you can see how much data the computers of the day could hold. Fast forward to the end of the century, around 2000: the same type of model, using the shallow water wave equations. In terms of the mathematics the model is virtually the same, but if you improve the numerical solution, the data, and the visualization too, you can come up with a model that looks very much like a real tsunami, which you see on the right. That's a simulation of the 1993 tsunami in Japan that killed over 200 people around the little Aonae peninsula; that's the tsunami animation you see there.

What did it take the community to move from the 1975-type models to the 1999 ones? The computers, yes, definitely; Moore's law holds pretty well still, with computer capacity doubling about every two years, so you can put more data, and more resolution, into the model. But it also took a lot of data that the tsunami community started to dig for during what was called the International Decade for Natural Disaster Reduction, which started in 1990. The tsunami community took it to heart to collect all possible data from any tsunami it could get its hands on.
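Since the talk leans on the shallow water wave equations without writing them out, here is a minimal sketch, in one dimension with illustrative parameters, of the same mathematics behind both the 1975- and 2000-era models: linear long-wave propagation on a staggered grid. This is not the operational model, just the core idea.

```python
# 1-D linear shallow-water propagation: du/dt = -g*d(eta)/dx,
# d(eta)/dt = -depth*du/dx, leapfrogged on a staggered grid.
import numpy as np

g = 9.81                      # gravity, m/s^2
depth = 4000.0                # uniform ocean depth, m (illustrative)
dx = 1000.0                   # grid spacing, m
nx = 2000                     # number of grid cells
c = np.sqrt(g * depth)        # long-wave speed, ~200 m/s in 4 km of water
dt = 0.5 * dx / c             # time step satisfying the CFL condition

x = np.arange(nx) * dx
eta = np.exp(-((x - 500e3) / 50e3) ** 2)   # initial sea-surface hump, m
u = np.zeros(nx + 1)          # velocities on staggered cell faces; walls at ends

def step(eta, u):
    """Advance one step: momentum update, then continuity update."""
    u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx
    eta -= dt * depth * (u[1:] - u[:-1]) / dx
    return eta, u

for _ in range(2000):         # wave fronts travel at ~c in both directions
    eta, u = step(eta, u)
print("max amplitude after propagation:", round(eta.max(), 3))
```

The wave speed sqrt(g*depth) is also why deep-ocean sensors, discussed below, see the tsunami long before it reaches any coast.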
These little red dots along the timeline are tsunami events. The tsunami community formed a fast-response team: after every sizable tsunami, a group of scientists would go and collect all the data possible, so that we could benchmark the models and make sure the models behave in such a way that they can simulate the actual events. And that paid off. The model you see on the right not only looks much better than the previous-century, or previous-decade, models; it actually compares pretty well with the data. The reason is that we collected, and I was part of this fast-response team for some of the events, all possible data, and this data is fairly perishable. You actually go along the coast and record all the tsunami amplitudes that can be inferred from the marks the tsunami leaves on the coastlines, and these are very perishable, so we had to go there fast. So there was a huge advancement.

At the same time, the real-time data side was being developed too. The timeline on the bottom here shows the development of the tsunami-specific measurement device that was developed in our lab; it was really our lab's initiative at first, starting in the early 1990s. It is an instrument that you put in deep water that can detect a tsunami and provide measurements useful for the models. It's called Deep-ocean Assessment and Reporting of Tsunamis, or DART for short. It's a very sophisticated design; it would probably take a whole separate presentation to describe it. But it took a long time.

So the Decade for Natural Disaster Reduction drove us to improve the accuracy of the models. The next decade, or the decade after that, was really the decade that put all this accuracy, all these new sophisticated models, into action, to provide forecasts that are not only accurate but fast. And the reason for that was the big tsunami that occurred in 2004 offshore Sumatra: the Indian Ocean tsunami, the Boxing Day tsunami of December 26th. It engulfed the whole Indian Ocean, but actually it reached the whole world ocean. What you see here is the animation of the model that we came up with in our shop very shortly after the tsunami. Well, "very shortly" at that time was several hours after the earthquake. And even that was based on pretty much one data point. There was some scattered, uncertain earthquake data; actually, there was a CMT solution by then, but the CMT solution estimated about ten times less energy than the earthquake actually had, because it was a huge earthquake. The eventual moment magnitude, which Emile Okal estimated from long-period data, was about 9.3, and the initial estimates were about 8.2 or something like that. I remember it very well because I was there trying to model it. So the model that you see here was scaled by just one data point: the Cocos Island tide gauge, not far from the epicenter, reported the tsunami wave in real time, and I used that to scale the solution. Even then it wasn't known how accurate the solution was; it was a very crude model, but the demand for this model was huge, which showed that the forecast is really needed, and that we actually can do it.
What was missing was the data. The only data that somewhat verified this forecast was the satellite altimeter data, which was obtained very fortuitously about two hours after the event. The Jason-1 and TOPEX/Poseidon satellites, two paired satellites, flew over the area; you see the satellite track on top and the altimeter data they provided in black. That black line on the plot is very much massaged data: a lot of filtering, detiding, and other processing was done before it became useful for comparison. In fact, we used it for version two of the source, but first for the comparison, and the blue line shows the model. It looks like in the deep ocean we match the data pretty well, and as it turned out later, this was a fairly good model of the event, generally speaking.

So what this showed is that we do have models that can be used; we didn't have data that could be used for real-time forecasting. And that's when the development of the Deep-ocean Assessment and Reporting of Tsunamis, the DART system, started to pay off. It was really just a PMEL initiative, but there is now worldwide demand for this data. The instrument actually measures the static pressure at the bottom of the ocean. A tsunami wave, since it involves the movement of the whole column of water, changes that static pressure, and the pressure sensor is sensitive enough to detect even a one-centimeter tsunami. Not only detect it, but resolve it with good enough accuracy that we can use it in data assimilation.

Now, this data is only as good as the model that can use it. The way we use it is very similar to what Satake suggested for the tide gauges: you build the Green's functions, you approximate the DART data with a linear combination of those Green's functions, and then, using some minimization technique, you minimize the mismatch between the Green's function combination and the actual measurements. You can use different minimization techniques, but the least-squares method provides a very fast estimate, and the L2 norm has very good qualities that let you make the minimization robust and fast. Fast is the keyword here, because DART was really designed for fast estimates. So it's pretty much the same least-squares method that Satake used for the tide gauges; we use it with the DART data to come up with the source. And again, we try to do it fast. The science is great, but we want to use the science for actual practical applications. To reduce the time, we pre-compute all the Green's functions, because every Green's function is actually the full propagation solution from one portion of the fault. The red squares that you see (they are magnified here) are the portions of the fault used to compute the Green's functions, and each Green's function is the full global propagation of the tsunami from that portion. We call these Green's functions "unit sources," because they are sources with unit slip. We ended up with rectangles of about 100 by 50 kilometers, which corresponds to approximately a magnitude 7.5 earthquake. You combine and scale those by using the DART data, which are the yellow triangles here.
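Two pieces of arithmetic sit behind that description: the hydrostatic conversion that makes a bottom-pressure sensor a tsunami gauge, and the moment magnitude implied by a combination of roughly 100 km by 50 km unit sources. Here is a hedged sketch; the rigidity value is a typical textbook assumption, not a number quoted in the talk, and the slips are hypothetical.

```python
# Hydrostatic pressure-to-height conversion and unit-source moment arithmetic.
import math

RHO, G = 1027.0, 9.81        # seawater density (kg/m^3), gravity (m/s^2)

# (1) A 1 cm tsunami over the sensor changes bottom pressure by rho*g*h,
# which is what lets DART resolve centimeter-scale waves.
dp = RHO * G * 0.01
print(f"1 cm of tsunami ~ {dp:.0f} Pa of bottom pressure")   # ~100 Pa

# (2) One unit source: a 100 km x 50 km fault patch with 1 m of slip.
area = 100e3 * 50e3          # m^2
mu = 4.0e10                  # rigidity, Pa (illustrative assumption)
m0_unit = mu * area * 1.0    # seismic moment of one unit source, N*m
mw_unit = (math.log10(m0_unit) - 9.1) / 1.5
print(f"one unit source ~ Mw {mw_unit:.1f}")                 # ~Mw 7.5

# Slips recovered by the least-squares fit (see the earlier sketch) simply
# scale and sum these moments: M0_total = mu * area * sum(slip_j).
slips = [2.0, 5.0, 1.0]      # hypothetical inversion result, meters
m0_total = mu * area * sum(slips)
print(f"combined source ~ Mw {(math.log10(m0_total) - 9.1) / 1.5:.1f}")
```

This is why a handful of scaled unit sources can stand in for one big, complex rupture in real time.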
And this is the modern constellation of DART buoys; almost all of them are shown here. With them, you can come up with the source of the tsunami fairly quickly. The reason the DART data is so useful is that the tsunami wave propagates very fast in deep water. The buoys sit in deep water, fairly far from the potential sources, but in most cases the DARTs record the tsunami before it hits land anywhere, because it propagates so much faster in the deep ocean.

And that's how the idea of tsunami forecasting came about, combining a few things. First, historical knowledge of where potentially tsunamigenic earthquakes occur; since we're talking about earthquakes, the lines here are the known big faults, which are usually the plate boundaries, and from the historical database we see where tsunamis have occurred in the last 3,000 years or so. They do occur along these lines. That's where you pre-compute your Green's functions first. And of course you want to put your detection close to the sources, so that we can catch the tsunami early and get the data early. Over the years, and again it started around 2004, in these 15 years we went through four generations of the DART buoys. The principle is still the same, but they are a much better fit for the problem now, with all kinds of improvements. This data is used for the inversion against the Green's functions, these black lines; there are actually many more of those now. We pre-compute these potential sources everywhere we can, so that we save computing time during an event, and then we use the least-squares inversion technique to come up with the solution.

But it's not over yet. The forecast is only as good as it is accurate at the coastline, and at the coastline (remember the movie that I showed you at the beginning) the wave becomes very nonlinear. The linearity that we used to come up with the source inversion does not hold anymore; you cannot assume linearity close to the shore. So we have a two-step, actually a three-step, approach: when you come up with a source and with the propagation scenario of the combined source, you use this scenario as the initial conditions for the near-shore computations of inundation. It sounds very complex, but it is integrated into a forecast system that gives you continuous verification with each tsunami. Since the start of our development of the system that uses data inversion for forecasting (this is, again, a slide of our data), we have accumulated almost a hundred different events that we have data for, and we use every event to improve the system. We tweak the inversion algorithm; it's the key to assessing the source, because if you assess the source correctly, then the rest is as good as the models are. There has been a lot of effort to improve the models, and that's again a separate talk, but for both propagation and inundation models, special effort has been put into improving them, with a standard set of benchmarks established. So we know which models can simulate tsunamis well enough to be used for the inversion.
Putting it all together took a lot of time, and a lot of my time has gone into developing this overarching system that includes everything, where the inversion is actually key. I'm a modeler, I still consider myself a modeler, and "garbage in, garbage out" is a very true statement for modeling: the models are only as good as the data that goes into them, and the inversion provides that key step. So again, it's a three-step approach. You get the tsunami data; actually, the first data is not from the DARTs shown here but from the seismic network, and we assume the seismic inversion is already there. Then, by inverting this data, you get an approximation of the tsunami source, which is the permanent displacement I was talking about before. And you use that as the initial condition for the near-coast simulation to provide the flooding forecast. Flooding is the main parameter that we're looking for.

To show you what exactly we are trying to forecast, I'm going to show you this video of the March 11, 2011 Japan tsunami. This is what tsunami flooding looks like. First of all, it doesn't look like the nice curly wave that you usually see as a tsunami depiction. Second, you see how much debris the tsunami picks up as it propagates. So it's a fairly complex phenomenon to simulate, this inundation portion. And with all the debris, the force is so tremendous that you have no way of defending against it other than evacuating people. To save lives, you definitely need to evacuate people from the entire inundation zone, and this video is a very good illustration of why you want to evacuate everybody from the area of estimated flooding. As you can see, the wave just sweeps away everything in its path, and the destruction is phenomenal. You can see the tsunami can even carry fires inland. This shows it very well. It's a long video, but maybe just a little bit more here, to see the power of this wave; very little can withstand it. So again, the modeling that comes after the tsunami inversion, and proper estimation of the flooding zone, become the key to saving lives. If you can estimate where you need to evacuate people from, you can save all lives, as Eddie Bernard has challenged us to do.

Now this tsunami, and I'll focus on it for a few minutes, created a lot of challenges. We had been developing this system and we thought we were on the right path, but it exposed problems, for example with the tide gauge stations, at least for real-time inversion. By that time, many tide gauge stations reported real-time digital data that you could theoretically use for tsunami inversion. But for large tsunamis like that, you can see that on the east coast of Japan many tide gauge records flat-line: the gauges were simply destroyed by the tsunami. So it's not very reliable data, especially for the large tsunamis, which are exactly the ones you really need this data for. Engineering approaches, as you could see from the previous video, also face a lot of challenges with a wave this forceful. Japan has invested very heavily in engineering defenses against tsunamis: pretty much the whole coast of Japan is fortified with special tsunami walls that surround the coastlines.
You can see the remnants of such a wall in Kamaishi city here at the bottom. These were definitely under-designed. And if a wall is designed and then overtopped, the wave in some ways becomes even worse, because the flow pouring over the wall can be more destructive than if there had been no wall at all; although that's not entirely true in every case, there is something to it. So anyway, the design of tsunami defenses is critical. But as they say, there's always a bigger fish: a larger tsunami can always come along, and the cost-benefit becomes very challenging for the engineering approach to tsunami defense.

Now, the hazard assessment, the modeling assessment of the tsunami, is what I want to focus on, and it was a big challenge in this tsunami too. What you see here in blue is what was inferred to be the tsunami danger zone for two locations on the east coast of Japan; the pink area is what was actually flooded during this event. So people evacuated from the danger zone, some people did, only to get flooded by the wave. That's definitely not what we want from the modeling assessments. We want the inverse picture to be true: ideally these two lines would coincide, but if nothing else the forecast should be conservative, with the estimated danger zone larger than the one that actually occurs.

At that time, in 2011, we had our inversion scheme already in test operations at the tsunami warning centers, so we were actually testing it during this event. There were several DART stations, international DART stations, in the area. The first DART that recorded this wave, and we were watching it in real time, still holds the absolute record for measured deep-ocean tsunami amplitude: almost 1.7 meters, and it was actually larger than that; the detiding in this plot took a few centimeters off the amplitude, which reached almost two meters. When we saw this wave, we thought there was something wrong with the gauge, because we had never imagined a tsunami wave that high in the deep ocean. But then the second DART detected an almost half-meter tsunami, also never seen before. If you remember my previous slide on the Indian Ocean tsunami, the deep-ocean satellite altimetry showed about a 60-centimeter tsunami, and that was a catastrophic tsunami propagating across the Indian Ocean. So we definitely saw something at least that large, probably larger, going on. And we had our least-squares inversion already set up, so we were able to run the inversion based on this early data from these two DARTs. That gave us this combination of unit functions; you see the squares, these are the pre-computed Green's functions, the unit sources. The combination of these unit sources gave us the solution in red, which compares pretty well with the data; it's a fairly good fit. When the DARTs further out later reported the tsunami, that was already a verification of the solution, because the solution was based on just the first two DARTs, the US and Russian ones. So that gave us the source, and that happened a little more than an hour, about an hour and twenty minutes, into the event.
In fact, at that time the magnitude from the seismic inversion still stood at something like 8.4, almost a unit of magnitude less than the actual magnitude of this event. But since with the tsunami data we are actually measuring the phenomenon we want to model, we were able to do better. Several other inversions were done later, so we compared the inversion done with the DARTs against the seismic inversion that I talked about, and against the GPS inversion; there is a lot of GPS data in Japan. In fact, the GPS inversion used the DART data also. It was not done in real time; it was a proof of concept done years later, but we used it for comparison, to see what we could potentially get from a GPS inversion, for example in combination with DART data, and then use for the forecast. What you see here is the vertical displacement; that's the tsunami source inverted from the GPS data. The finite fault solution was also available, but again, it was such a complex event that the first finite fault solution was available about a day later, about 24 hours, and it was updated even later to a more robust solution. We compared the three; even though two of them were not real time, we wanted to see what was available. And even with all the data and analysis that went into the GPS and finite fault solutions, the DART inversion, obtained within the first two hours, was in many ways a better fit in terms of the tsunami model. That's not really too much of a surprise, because we use tsunami data to infer the tsunami source, whereas the GPS and seismic data look at the earthquake; it's not a direct data comparison, of course.

Let's see, I think there was a comparison here. Well, there is an independent comparison for this data. We used several recordings close to the coast of Japan; this was done probably a year later. These three columns show the comparison of the three different methods, and while all three of them showed fairly good flooding forecasts, the DART inversion was the fastest and still provided the best amplitude estimates among the three. Again, I wouldn't be too surprised that it did, because we use tsunami data for the tsunami model; it's as simple as that. If you just focus on where the highest amplitudes were, you see that the finite fault predicted the highest amplitudes in not quite the right locations; it's difficult to get the asperities right for such a large, complex source. The GPS-based inversion came up with a somewhat better distribution of the amplitudes, but it took a lot of massaging of the data, and we're still not sure how to use it. We experimented with it; we still don't have it operational. With the DART data, on the other hand, we were able to get the forecast for the US coastlines, which is a much easier problem because we had a few hours in hand: it was about six hours of tsunami propagation time before the waves hit the first US territory, in Hawaii. And I want you to remember: when I talk about tsunami forecasting, we want to forecast the flooding, the impact at the coastline. That's the forecast.
Forecasting the wave at the next DART location, or the source of the tsunami, is great, scientifically fantastic, but practically it is not really valuable until you get the forecast for the coastline. And we've done that. Remember, this is the third step, where we actually run models in real time, because the problem becomes fairly nonlinear as the wave gets closer to the coast, and you have higher- and higher-resolution nested grids that zoom down to the harbor level. What you see here on the lower row is the highest-resolution model in several harbors along the Hawaiian islands, and the comparison; that's the forecast, what's highlighted is actually the forecast, and all the rest was the means of getting to it. The comparison with the tide gauges installed in all these harbors (one of the criteria for our forecast sites was that there should be a tide gauge to verify the forecast) was pretty good, actually. So we were fairly happy with the far-field forecast that we provided. And it was done for the whole Pacific Ocean: while the tsunami was propagating, this two-DART inversion provided a pretty good assessment of the Pacific-wide tsunami propagation.

But it highlighted a problem that we have, and I'll maybe talk a little about it at the end: this inversion is based on just two DART records, just two, and the reason it works is probably that we make a lot of assumptions, and these assumptions worked well for this case. What it highlights is that our tsunami inversion is a very data-poor problem. We have very little data; even though we now have almost a hundred DART stations around the world, it's still only one, two, at most three DARTs that we use for the inversion in real time, because we just don't have time for more. It's changing slowly, and I'll show a little of that, but it's still a very data-poor problem, so we have to make a lot of assumptions.

But it worked well for that tsunami. For the US, we ran all our forecast models for the US coastlines; you see the comparisons here, a very busy plot with a lot of lines. Just to summarize: we developed a specific metric for the goodness of the forecast, a forecast skill, and we see about 70% accuracy in terms of the amplitude assessment at the tide gauges. That's the kind of numerical assessment we can get. The good thing is that the larger the tsunami, the better the accuracy. The reason is that for the small signals that we see at some of the tide gauges, there is a lot of noise in the tide signal. Remember, I was talking about the problem with tide gauge signals for the inversion, because there's a lot of filtering going on, which we call noise; for us tsunami scientists, it's noise. And the smaller the signal, the worse this problem becomes. And what this shows is a snapshot of the global model of the March 11, 2011 tsunami, showing the maximum computed amplitude at every point of the model around the world. So we compute the propagation and take the maximum at every point.
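The talk doesn't spell out the operational skill metric, so here is a hedged sketch of an amplitude-skill computation in that spirit: compare forecast versus observed maximum amplitudes at tide gauges and report a mean relative accuracy. The gauge values below are invented placeholders; the talk quotes roughly 70% accuracy across many events.

```python
# Illustrative amplitude forecast-skill metric over a set of tide gauges.
import numpy as np

observed_max = np.array([0.45, 1.20, 0.80, 2.10])   # m, observed maxima
forecast_max = np.array([0.50, 1.00, 0.95, 1.90])   # m, modeled maxima

# Relative amplitude error at each gauge, capped at 100% so one bad
# small-signal gauge (where tide noise dominates) cannot wreck the average;
# this cap mirrors why skill improves for larger tsunamis.
rel_err = np.minimum(np.abs(forecast_max - observed_max) / observed_max, 1.0)
skill = 100.0 * (1.0 - rel_err.mean())
print(f"amplitude forecast skill: {skill:.0f}%")
```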
That snapshot shows that the energy does not propagate evenly around the globe; it has very narrow paths along which the highest energy is directed toward particular coastlines. We've done this for many tsunamis. Like I said, it's a built-in feature that every tsunami gives you the opportunity to perfect the whole sequence, but mainly the inversion technique. It's still least squares that we use, but we've added a lot of additional constraints to it to improve our forecasts. And that works fairly well, as you could see. We're now pretty confident that if the tsunami is far away, with the coastlines a few hours away, we can predict the tsunami amplitudes pretty well.

But the main destruction and the main problem is near the source. The tsunami in Japan showed that very well. You see the flooding on the left: this is the model telescoping to higher and higher resolution that provides the flooding estimates, and it is the same model that uses the inversion from two DARTs. So in principle it works pretty well in terms of accuracy, because the comparison with data (the measured flooding is the white line, and the color field shows the model) is fairly good. The problem, again, is the timing. Can we do this inversion fast enough? By the way, this flooding occurred about an hour after the event, so we have some time, not much, but some. How fast can we produce the forecast, meaning the flooding model that we have to run after the inversion?

If you just do a little mental exercise: it takes about three to fifteen minutes just to realize that an earthquake of potential danger is going on. Just collecting the seismic data to get a rough estimate of the magnitude and location takes about three minutes, and to be certain, about fifteen. You cannot get away from that; it's a limit set by the data itself, by the speed of seismic wave propagation. Then it takes some time to do the inversion; or say you do just the seismic inversion, put it into the source, do the propagation, and only then run the nonlinear models. That's another ten to fifteen minutes. You're already more than thirty minutes in, even without any complications, administrative complications and such; you're thirty minutes into the event, and the wave has started to hit coastlines. And only then do you probably start getting data from the DARTs as they are located right now. So that's the problem: we need to reduce the time.

The best way to reduce the time would be to predict when the earthquake will occur. If we knew a few hours before the earthquake, we would be golden: we could run this whole cycle ahead of time and have a perfect forecast well before the wave hits. We're not there yet; hopefully we can get good earthquake prediction capabilities sometime, but not yet. In practical terms, though, this future is actually now; we are almost there. Again, we can't get away from the three-to-fifteen-minute initial assessment of the earthquake, but the rest can be reduced quite a bit. Computers, again: Moore's law is on our side, and the computers get faster and faster. We have actually reduced our computation time to seconds.
So the computations are not the bottleneck of this process anymore; we can do them very fast. The first assessment can be out in a minute, and if you place the detectors (DART is one example I showed, but there can be others) in strategic locations closer to the sources, you can get it even faster. I'm trying to see if I'm good on time. We can get the forecast technically in 10 or 15 minutes, which is before even the closest coastline is hit. And if you automate the whole process, we can have the forecast before any wave hits the coastline, warn people, and get them out of harm's way.

That's the theoretical exercise. In practice, just to give you some flavor of the problems we're facing: take the tide gauges. There are so many tide gauges that it really would be good to throw them into the inversion routine in addition and see if they give a better solution. We've tried; it doesn't really work very well. What you see here on the left is the deep-ocean detector, just one, and how good the solution is if we invert just this one DART for the source. When we get a good solution for the DARTs, the same solution computed for the tide gauge (this is for one event in the Pacific, Samoa in 2009) is very stable as well. That is not the case if you use the tide gauges for the same thing. If you invert the tide gauge signal using the same least-squares technique, you get a stable solution for the tide gauge fairly fast, but it doesn't guarantee a stable solution for the DART, which means your propagation solution is still very uncertain.

Putting the detectors, say the DARTs, close to the source has its own problems too, because the pressure signal that the DART measures records the shaking of the earthquake very well, apparently. If the buoy is far from the source, the seismic waves are separated in time from the tsunami; but if you put it closer, you see on the right, in the middle frame, the black line, which is the raw signal of a pressure detector somewhere near Japan during the 2011 tsunami: the tsunami signal is completely masked by the seismic one. Well, we can deal with this: the new fourth-generation DARTs, the DART 4G, were designed with a built-in filter that seems to work pretty well.

There is other data too. I was saying that we have a very data-poor problem; that is changing slowly, but it is changing. In Japan, for example, S-net is a massive cable system with the same kind of pressure sensors. Instead of sending the data up to a buoy and on to a satellite, they use the cable; it's a much more expensive proposition, but you can send much more data through a cable than through a satellite, and they have a lot of data. The east coast of Japan is fully instrumented; you can see the number of detectors there is enormous. The west coast of the US, the Pacific coast, has started to be instrumented in the same way, nowhere near the number of sensors in Japan, but the number of sensors is increasing.
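To make the near-field de-noising problem concrete, here is a hedged sketch: suppress the seismic-band signal in a bottom-pressure record so that the tsunami band survives. This is not the DART 4G's actual onboard filter (that design isn't described in the talk); it's a generic zero-phase low-pass, assuming tsunami energy sits below roughly 2 mHz and seismic energy well above it. All signals are synthetic.

```python
# Separate a long-period tsunami from a short-period seismic burst in a
# synthetic bottom-pressure record using a zero-phase Butterworth low-pass.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1.0                              # sample rate, Hz (1-second records)
t = np.arange(0.0, 3600.0, 1.0 / fs)  # one hour of synthetic data

tsunami = 0.5 * np.sin(2 * np.pi * t / 900.0)     # 15-minute-period wave, m
seismic = 0.3 * np.sin(2 * np.pi * 0.05 * t)      # 20-second-period shaking
seismic *= np.exp(-((t - 300.0) / 120.0) ** 2)    # burst near the origin time
record = tsunami + seismic + np.random.default_rng(1).normal(0, 0.01, t.size)

# 4th-order low-pass at 2 mHz, applied forward-backward (sosfiltfilt) so
# the apparent tsunami arrival time is not shifted by the filter.
sos = butter(4, 0.002, btype="low", fs=fs, output="sos")
cleaned = sosfiltfilt(sos, record)

# Residual mismatch to the true tsunami (filter distortion + leakage).
print(f"max residual after filtering: {np.abs(cleaned - tsunami).max():.3f} m")
```

In reality the two bands overlap more than in this toy example, which is part of why near-field pressure data is genuinely hard to use.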
As for the satellite data, we're still investigating, and so far, close but no cigar in terms of satellites looking down and getting you tsunami data. It's very, very difficult to deal with. We don't yet have geostationary satellites looking down at the ocean; the existing satellites are for different purposes. The altimetry data (this is, I think, the fourth generation of altimetry satellites orbiting now) requires the orbit to pass over the right place at the right time, which doesn't happen often, and the data is very noisy for tsunami purposes. It's very good for other oceanographic applications, but not for tsunamis yet.

But with all this new data potentially coming in, we have started to look beyond just the least-squares fit. With machine learning applications we are only at the beginning of the road. There are a few studies looking at this, most of them focused on Japan, where this huge number of data points is now available. The good thing about machine learning applications is that you don't care so much what data you throw into them, or how much; the more data, the merrier, which is not exactly the case for least squares. And some early results are encouraging; not mind-blowing, but encouraging. You see on the right frame the forecast in red against the actual observation in blue. It still takes about 40 minutes, with all this massive amount of data, to get a robust forecast of inundation. But it's a fast-developing field, and we are looking into the future of machine learning applications.

I want to go back a little on the tsunami. Well, Vasily, I'm very sorry, we have only 10 minutes left, so please start wrapping up your talk, because there are a few questions. Ten minutes? Okay, that's pretty good; I'm almost there. I'll just zoom through the meteotsunami part. Yeah, great, thank you. I'll just go to the conclusion slide and questions. The pressure wave can actually be used too: on the left is the pressure wave, on the right is the tsunami it generates as it propagates, and you can actually do some inversion with this as well, but that's just the summary; I'll skip the meteotsunami material. You can write to me for the details and I can send you the paper that we've done on that.

In summary: the nature of tsunamis, which are very long waves that you can model with the shallow water wave equations and that behave pretty linearly in the open ocean, opens up good opportunities for data inversion techniques, many efficient data inversion techniques, so much so that they can be used in real time. What I showed you is how we use them so far. There is a lot more to be done to get to zero casualties from tsunamis, and data inversion is the key to that. And a lot of new data is coming: GNSS data, satellite altimetry, and machine learning algorithms. So that concludes my talk: if you have to make a forecast, just add data. And if you have questions, I will try to answer them. Okay, thank you very much, Vasily. It's a big topic and you covered a lot; it's exciting. We now have some time for questions; I see some questions in the chat box.
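To illustrate the machine-learning idea just mentioned, here is a hedged sketch: learn a map from dense offshore gauge amplitudes to peak coastal inundation, trained on a pre-computed scenario database, so that at event time no simulation needs to run. Everything here is synthetic; the real Japanese studies use S-net-scale data and far richer models, and this is only one plausible formulation.

```python
# Toy scenario-database regression: offshore gauge maxima -> coastal maxima.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_scenarios, n_gauges, n_coastal = 2000, 150, 10

# Synthetic database: random "source" parameters drive both offshore gauge
# maxima (features) and coastal inundation heights (targets).
sources = rng.uniform(0.0, 10.0, (n_scenarios, 5))
gauge_max = sources @ rng.uniform(0, 1, (5, n_gauges)) \
    + rng.normal(0, 0.1, (n_scenarios, n_gauges))
coastal_max = sources @ rng.uniform(0, 2, (5, n_coastal))

X_tr, X_te, y_tr, y_te = train_test_split(gauge_max, coastal_max,
                                          random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_tr, y_tr)

# At event time, feed in observed gauge maxima; prediction is instantaneous,
# which is the main speed appeal of this approach over running models live.
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```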
In particular, it is written: isn't it possible to simulate numerous models in advance and select a suitable one according to the scenario, to save time, just like the meteorology people do? Ensemble modeling, absolutely. There are many tsunami models, but ensemble modeling is not yet a big thing in tsunami modeling. For hurricane models it's all about ensemble modeling; it provides great results and you get the uncertainty very well. The field of tsunami forecast uncertainty is wide open so far. The problem is the very limited time you have to assess it, and ensemble modeling is probably the way to do it. For now, we have optimized one model so much that it runs in seconds; you would need to do the same for several other models so that you can run an ensemble. So yes, good question; we're not there yet.

Okay, another question relates to what we discussed briefly at the end of your first part, about the earthquake and tsunami magnitude graph. It is written that we can see that an earthquake of a given magnitude can generate a tsunami of negative magnitude as well as positive magnitude; what can explain this variation? Oh, good question; I just zoomed through that slide. A lot of things. I'll go back to this quote of Eddie Bernard, whom I mentioned a couple of times: he often compares tsunami generation by earthquakes to a scrambled-egg problem. There are several eggs in there and they're all scrambled. There's a lot going on: there may be huge submarine mass failures involved; the uncertainty of the mechanism, which Alik was referring to, is another egg; the uncertainty of the data is another egg too. And the big earthquakes are especially complex omelettes, because the rupture may go on for up to 10 minutes, as it did, for example, for the Indian Ocean tsunami, yet you have to provide the forecast within those same 10 minutes. So there's a lot going on beyond the fairly simplified notion of elastic deformation that we use for the tsunami source. That's my general answer; Alik may actually add some more, because he's an expert in this area.

Okay. You know, I don't see a question in the chat now, but I have a question and a comment. When we look at some specific piece of science, we also look at its applications, and in your case it is a direct application to saving lives and saving property; well, property is not so easy to save. This morning we had another lecture, related to volcanic ash propagation and cloud propagation, and this was also discussed in terms of data and the modeling of the specific parts: a source model, a propagation model, a data assimilation model, et cetera. But anyway, my understanding of your lecture is the following: we have methods which can handle the problem; the point is time. And that's why you use as much available data as possible, meaning you are not waiting until the full range of data becomes available. You assimilate the data you have, rather than generate a perfect model later, because the aim of such modeling is to deliver information within a very short period of time.
Still, you mentioned that a very important issue is the reduction of the time related to the source. But here again, this is probably related to the first question we discussed. If we know the source area properly, for example in your Japanese cases where the subduction zone is very close, and if we divide this subduction zone into cells, numerically speaking, and if we consider that earthquakes with magnitude eight and above can generate dangerous waves of one meter and higher, is it possible to pre-generate the models? It comes back to ensemble models, though not truly ensemble models; you mentioned machine learning. For example, artificial intelligence using pattern recognition to recognize which scenario it is and what the inundation will be, and so on. Is this the way to think about the future, or what are the real perspectives right now, besides what you mentioned about the short time to determine the source? And what are the other directions of thinking in tsunami science?

That's a good question. As you said, the timing is what makes the problem so specific and difficult. It's not the timing per se that's the problem; it's the fact that you have to come up with the inversion in such a short amount of time. And it's an interesting phenomenon, because you are essentially trying to find the balance between accuracy and speed. You can be very fast but inaccurate, or very accurate but very, very slow. You definitely want to combine the two, and for real-time tsunami forecasting, neither extreme works. You don't want fast and inaccurate; in fact, that was exactly the case in Japan, where the warning issued within the first three minutes significantly underestimated the magnitude, and that created a slew of problems. And you don't want an accurate forecast that's late; obviously, you want it soon. So the problem that we deal with is how to make the accurate forecast fast, and that comes down to the data. It does come down to the data. Like I said, ours is a data-poor problem: for every particular source there are only a few data points to work with. That's why, as you just mentioned, we apply so many different assumptions, to substitute our prior knowledge for the missing data. It would be great to have the data instead, because our prior knowledge can fail; we think we're smart, but we may not be that smart, and nature always outsmarts us. So it's best to have a lot of data available. Japan is going this way, and they are not looking back: they are spending huge amounts of money to instrument the coastline to the teeth, so to say. But that requires new algorithms as well. New data will bring a new view of tsunami inversion, and machine learning comes to mind. So far we are at the very beginning of that road, and the results I've seen were not overwhelming, but not underwhelming either; they're somewhere in between. So that's where I see it: there will be a lot more data coming into the tsunami field soon, and we need new algorithms to deal with this data. That's what I'd say. Thank you. Karim, do you have a question?
Yeah, thank you very much, Vasily. That was really nice, really interesting. Using the words of my friend Emile Okal, whom you know very well, even better than me: here the picture is quite positive, to tell the truth. As far as the far field is concerned, I think that since the 2004 disaster the situation is encouraging, but the near field remains an issue, along the lines of what Alik was discussing a little. One thing you did not mention, but I'm sure you're very positive about it, is the implementation of the W-phase inversion in seismology, which, as far as developing countries are concerned, has really helped with the estimates of the magnitude, and this has a really promising impact on far-field tsunami warning. And I see GNSS as the future, of course, but if you take an area like the Makran, I doubt that we will get permanent GPS stations distributed in Iran and Pakistan with an open data policy and all that stuff that would really allow us to do the right work. And it seems to me that for the near field, when it comes to the most important contribution to the survival of people, maybe education remains the issue. I think that science is doing its best, but those of us working with developing countries feel that open data policy and all this stuff are still an issue for the near field.

Very good points, very good points, all agreed. Just a few comments on a couple of those. On education: I haven't talked about tsunami warning as a system in itself; I was talking only about one portion of it, the forecast, which is important but cannot solve the problem by itself. Education is definitely key. However, I should say that education is not a silver bullet either; it all has to work in concert. If we issue the warning and people don't know what to do, people die. If people know what to do but don't get the warning (and the warning can be the shaking itself, for example), people die. Only if it comes together, when we have a good system, do we get zero casualties from tsunamis. So it has to come from both sides. The tsunami in Nicaragua in 1992 was an example showing that education alone is not a silver bullet: the reason people died there was that nobody even felt the earthquake. It doesn't matter how educated people are if they didn't feel the earthquake. It was an unusually slow event with very mild shaking, and it created a huge tsunami. So ideally you have to have all the components. But if you go for the bang for the buck, education is definitely a great way to start.

Absolutely, I agree with you. And it's not only Nicaragua, not only developing countries, but developed countries too. Take Germany, where about 200 people lost their lives because of flooding. And why? Because they were not educated, not informed about the possibility of flooding and flash flooding in that region. This is a really important issue all over the world. It's not only about the basic science, which is very important because it brings us fundamental knowledge about the event, about this phenomenon, but also about educating people, even about the fact that we are always living with risk. We should not be afraid of living with risk.
Sometimes the policy makers don't tell people about it. Why not? We are living with risk, with viruses and so on. We should know about it and know how to really manage our lives. And there is a really good question raised by one of my real concerns when I saw the DART system you showed: why is there not much of it in the Atlantic? What happens if the 1755 Lisbon earthquake comes again?

Yeah, a good question, and I don't have an answer to how detection would work there with the DARTs. Well, I guess that's risk management. You look at the potential sources in the Atlantic, and you see there was just one big event, back in 1755, and a couple more in the Caribbean; there are some other events there too, mostly prehistoric. So if you have to prioritize, you prioritize your funding toward the Pacific, and that's what happened. But actually, earlier this year there was an earthquake and a pretty big tsunami at the South Sandwich Islands, which went completely unnoticed and almost undetected, only because no people live there. But it was a big tsunami, and it's probably the first tsunami since 2004 that was detected in all three oceans; well, all four oceans. The South Sandwich Islands are far south of South America, next to Antarctica, a very remote area, but they can generate a pretty big tsunami, and I'm actually wondering what happened in South Africa, which would have been well positioned to feel something from it. So yeah, you're right. We're struggling, in science and especially in applications, with managing risk; that's where managing risk comes up. You have to understand the risk. And I was so encouraged by the example from NASA of how they came up with a risk assessment for meteor impacts, which never occur; but when they do occur, you can actually measure the risk, a measurable risk in terms of fatalities per year. And then you can start funding development and everything that's needed for the detection part.

I see no further questions. Vasily, I would like to thank you very much for this excellent lecture; we learned a lot. Some of our participants will probably also contact you about the papers, particularly the ones we mentioned about meteotsunamis, which are becoming quite important nowadays; at the very least it's an interesting phenomenon to understand in some more detail. Karim, I would like to... Yes, I have just one piece of information. I think that Vasily gave a really very good overview. I would like to invite the participants to check the website of ICTP, where we had recent workshops in which the physics of tsunamis was tackled in great detail by both Synolakis and Emile Okal. You can find plenty of material there; you can even see the equations, the developments, plenty of slides where Vasily appears as well, with all his contributions to tsunami science. Please make use of this: there are videos, there are PDFs of the PowerPoints, and feel free to use them. And the PowerPoint of Vasily's talk will also be available for everyone on the website. With this, thank you very much, Vasily, and we'll keep in touch; I hope we will have time. Yeah, thank you very much. And I would like to remind you that tomorrow we will have two sessions.
The morning session will be given by Professor Fichtner from ETH Zurich, about seismic inverse problems with smart methods. Truly smart: he's smart and his methods are smart too. And the next presentation will be in the early evening, as today, from five to seven p.m. Central European Summer Time. It will be given by an early-career scientist, Gil Virts. I think Gil, I saw his, oh yeah, he's still here. Could you open your video? Oh, hi. Hi, Gil, yeah. And he will give a presentation related to understanding and predicting geomagnetic radiation cycles with data assimilation. Okay, great. That is a short announcement about tomorrow's events. And now good evening, good afternoon, and good morning to everybody. Okay, bye-bye then. Bye-bye.