Hello and welcome to this tutorial on using the GeoClaw software for modeling tsunamis, storm surge, dam break problems, and other flows over topography. This is a general software package that I'll tell you a little bit about before going through a short tutorial on how to use it. My name is Randy LeVeque. I'm one of the core developers of GeoClaw, and I'm speaking to you from the University of Washington in Seattle. If you're not familiar with this area, Seattle is here on Puget Sound, and one reason we worry a lot about tsunamis in this region is that every few hundred years there's a magnitude 9 earthquake on the Cascadia subduction zone offshore, and the next time that happens the coast of Washington is going to be in serious trouble. So a lot of what we do in my group is model the coast of Washington, assess it for tsunami hazards, and help design vertical evacuation structures, and we've been using GeoClaw pretty heavily in these applications. I'm going to start with some slides to introduce the methods and the software a little bit, and then go into a hands-on demo. The software is based on the Clawpack package, which stands for "conservation laws package." This is a package we started developing in 1994, with continuous development ever since, that solves systems of hyperbolic partial differential equations modeling wave propagation in various application areas. The tsunami and GeoClaw aspect of it started around 2004, around the time of the big Sumatra earthquake and tsunami, in thesis work by David George. He applied Clawpack to tsunamis and started TsunamiClaw, and then, as we realized it was useful for many other sorts of overland flow, it morphed into GeoClaw.
David George still works on GeoClaw, and on something called D-Claw for debris flow, at the USGS Cascades Volcano Observatory near Portland, Oregon. He works on a version of the code that can also be used for landslide and debris flow modeling, but I won't talk about that here. Some features of the GeoClaw software in general: it solves the two-dimensional shallow water equations, typically over topography, and it handles the shoreline, or the edge of the flow, by allowing some grid cells to be dry. If the depth of water is zero, they're dry; if it's positive, they're wet, and that can change dynamically from time step to time step. So we don't impose boundary conditions at a moving boundary; instead, we just handle the interface between wet and dry cells. Another big component of GeoClaw is that it uses adaptive mesh refinement, in a block-structured manner, which I'll show you some examples of in a moment. Within each adaptive grid it uses shock-capturing finite volume methods, high-resolution methods that have been developed over the last 30 years or so, which effectively capture the flow even if discontinuities form, a hydraulic jump, for example, as a wave comes on shore and becomes highly nonlinear. We don't have massive parallelization built into the GeoClaw software, but it does work with OpenMP on shared memory machines. So I do a lot of tsunami modeling just on my laptop using four cores, for example, or on nodes of a cluster using maybe 12 or 24 cores; we have used it with 100 cores or so on some problems. The main Clawpack software, including the adaptive mesh refinement code, is all written in Fortran, but we use Python heavily for the interface: for specifying the input parameters and for plotting the solution. All of the development of Clawpack, and GeoClaw as part of Clawpack, is done on GitHub, so you can follow what's going on through the pull requests and discussions there.
We use continuous integration with Travis, so the tests get run on every pull request. There are a number of people who have been heavily involved in developing different aspects of Clawpack and GeoClaw; here are just a few who have been core developers over the last few years. Marsha Berger developed the adaptive mesh refinement portions of Clawpack originally and is still involved. Kyle Mandli, a former student, now at Columbia, has worked a lot on storm surge modeling in GeoClaw, and David Ketcheson at KAUST has also contributed a lot to the Clawpack software. If you're interested in how we go about developing this large software package, we have a paper from a couple of years ago that describes a lot of our development approach, which may be interesting to you.

GeoClaw is specific to the shallow water equations. There are also some multi-layer shallow water equations in GeoClaw that Kyle has been developing, and, as I said, David George has been looking at more complicated rheologies for landslides and debris flows. But the basic code that we use for tsunami modeling in particular solves the two-dimensional shallow water equations. In these equations, h is the depth of the fluid, and u and v are the velocities in the two horizontal directions; they're depth-averaged velocities, so there's only a single velocity at each point. The one-half g h squared appearing in the momentum equations is the hydrostatic pressure. And B on the right-hand side, capital B, is the bathymetry or topography, describing the terrain that the flow is going over. If B is negative, it's under water and gives the bathymetry; if B is positive, it's onshore topography, although as the wave inundates dry land, the depth h can be positive even in regions of positive topography. As I mentioned, there are a number of applications that GeoClaw has been used for.
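Written out, the equations just described take the following form, with h the depth, (u, v) the depth-averaged velocities, g the gravitational constant, and B the topography/bathymetry:

```latex
\begin{align}
h_t + (hu)_x + (hv)_y &= 0,\\
(hu)_t + \left(hu^2 + \tfrac{1}{2}gh^2\right)_x + (huv)_y &= -g h B_x,\\
(hv)_t + (huv)_x + \left(hv^2 + \tfrac{1}{2}gh^2\right)_y &= -g h B_y.
\end{align}
```

Note the one-half g h squared hydrostatic pressure in the momentum fluxes and the bathymetry gradient source terms on the right-hand sides; the balance between these two is what the well-balancing discussed later must preserve.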
I'll show a couple of examples, starting with a dam break problem, just because it nicely illustrates the adaptive mesh refinement and the wetting and drying at the margin of the flow, and then a couple of examples from tsunami modeling, which, as I said, I mostly work on. If you're interested in that, the animation that I showed at the very beginning and a number of other animations and reports on different projects can be found at this link. All of these links in the PDF file should be clickable in the version that's shared. I won't talk about storm surge in particular, but here's one paper by Kyle and Clint Dawson comparing GeoClaw to ADCIRC for storm surge simulation.

So, to start showing you how the adaptive refinement works: this is a test problem that David George ran many years ago now, the Malpasset dam failure from 1959. This dam in France was filled up, and shortly thereafter it failed, and there was a large flood that went down the valley. The dam was up here at the top of the valley, the flood went down this valley, and there's a fair bit of information about exactly how it progressed down the valley as it knocked out power stations along the way. So it's been used as a test problem for various overland flooding codes. These plots came from his work in 2011, which you can find in this paper. It illustrates that we start with a Cartesian grid; in this case, I think he just used coordinates in meters, though often we use longitude and latitude, but these are still logically rectangular grids. And we typically start with a very coarse grid in the regions where nothing is happening; dry land doesn't need to be modeled at any great resolution when there's no flow over it. So we start with a very coarse grid; he used 400 meter grid cells.
What you see here is a level-two grid, which has been refined by a factor of eight in each direction, in this case from 400 meters down to 50 meters. And you see the original reservoir behind the dam, which is here. At this initial time there are only two levels, but then, as the flow starts to move, additional levels appear: a level-three grid was added here, where the grid lines aren't shown, but it's at 12 meter resolution, and then these even smaller patches inside that are at three meter resolution. As the water flows down the valley, the regions of refinement automatically adapt to follow the flow. There's a criterion built into the code, and actually several that the user can choose from, to specify how to do the adaptive refinement. But the basic idea is that at each regridding time, which might be every several time steps on each level, you flag a certain set of cells as needing refinement, in this case, perhaps, because there's water there. So a certain set of wet cells gets flagged, and then those flagged cells get clustered into rectangular patches. All of these little rectangles that you see here are level-four patches. They're all at the same resolution, but they've been chosen to try to cover the area that was flagged while refining relatively few other cells that hadn't been flagged. At the same time, you don't want too many individual grid patches, because you have to exchange information between patches every time step to supply boundary conditions for the explicit method that's used to advance the solution. So there's a criterion, and a parameter that can be set in the input, that controls how many patches it tends to make. Then, as the flow evolves, it will refine more and more of the region. I wanted to mention one other problem of this nature, if you're interested in dam break problems or overland flow, because this paper just came out last week.
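As a rough sketch of the flag-and-cluster step just described: the toy Python below (purely illustrative, not GeoClaw code; the function names are made up) flags wet cells and clusters them into a single rectangular patch. GeoClaw's actual regridding offers several flagging criteria and uses the Berger-Rigoutsos clustering algorithm, which recursively splits boxes to balance coverage efficiency against the number of patches.

```python
def flag_cells(h, dry_tol=1e-3):
    """Flag cells needing refinement.  Real GeoClaw offers several criteria
    (wave height, undivided differences, user-specified regions); this toy
    version simply flags every wet cell (depth above a dry tolerance)."""
    return [(i, j)
            for i, row in enumerate(h)
            for j, depth in enumerate(row)
            if depth > dry_tol]

def bounding_patch(flagged):
    """Cluster flagged cells into one rectangular patch (their bounding box).
    GeoClaw's clustering instead splits boxes so that not too many unflagged
    cells are refined, while keeping the number of patches manageable."""
    imin = min(i for i, _ in flagged)
    imax = max(i for i, _ in flagged)
    jmin = min(j for _, j in flagged)
    jmax = max(j for _, j in flagged)
    return (imin, imax, jmin, jmax)
```

On a 3x3 grid with three wet cells, this flags exactly those cells and returns the rectangle covering them, refining one dry cell along the way, the same efficiency trade-off the real clustering algorithm manages.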
In fact, just today it was featured on the EOS AGU website as an editor's highlight, so I'm very happy about that. This is a paper on work by Michael Turzewski, a PhD student in Earth and Space Sciences here, working with Kate Huntington and myself on the GeoClaw modeling. He modeled a lake that was created by a landslide in the Himalayas in the year 2000. That landslide dam failed and drained the lake, and there was a big flood that went hundreds of kilometers downstream. He modeled a 400 kilometer stretch of this river, down to a resolution of 15 meter grid cells in some places, compared different resolutions and different friction factors, and compared the numerical results coming out of GeoClaw with field data from field work they've done in this area. So there are nice results, and a number of animations in the supplementary materials along with the paper; I recommend taking a look at that if you're interested in this type of problem.

Going back to the shallow water equations and tsunami modeling: I mentioned that adaptive refinement is crucial. It lets us follow, for example, complicated river valleys with fine grids only where there's water, essentially, and we can also handle the wetting and drying. The other thing I wanted to point out, the first issue listed here, is that in tsunami modeling or storm surge, where you're looking at long-wavelength waves over the ocean, you typically have waves that, out in the deep ocean, have very small amplitude compared to the depth of the ocean. And we're typically solving on an ocean at rest initially, before the tsunami starts things moving.
So if you look at these equations in the case where u and v, the velocities, are both zero, then everything drops out of the momentum equations except the hydrostatic pressure, the one-half g h squared, and the right-hand side, which depends on the variations in B. Those two terms can both be very large, because you have variations in the depth of the ocean, and you need to maintain a balance between them. So the numerical methods have to be implemented in a manner that we call well balanced: this balance, in a steady-state situation, is exactly preserved by the numerical method. That also has to happen when you're using adaptive mesh refinement, because you have patch boundaries all over the ocean, in regions where there is no tsunami yet, for example. Getting that to work properly with the adaptive mesh refinement, together with the wetting and drying as flow inundates onto land, was one of the original major challenges, but that's working well now.

Just to give you an idea of the scales I'm talking about, this is an example that we'll look at as a test problem: the 2010 tsunami generated by an earthquake off the coast of Chile. This is three hours after the earthquake, when the wave has propagated out to this point that's indicated here with a number, which is what's called a DART buoy. DART stands for Deep-ocean Assessment and Reporting of Tsunamis: these are pressure gauges at various locations on the sea floor that can measure the hydrostatic pressure of the water well enough to sense the tsunami going by, which is a major part of the early warning system for tsunamis in the world. It's also a great source of data for doing comparisons and validation, as I'll show.
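To see why well balancing matters, here is a small self-contained Python experiment (not GeoClaw code, which achieves this within its Riemann solvers). For an ocean at rest, the surface eta = h + B is constant, and the momentum equation reduces to the pressure-gradient term plus the bathymetry source term. A naive centered discretization fails to cancel them exactly over varying bathymetry, generating spurious waves comparable to a real tsunami signal, while evaluating the source with a matching average of h cancels them to machine precision:

```python
def momentum_residual(B, dx, g=9.81, balanced=False):
    """Steady-ocean-at-rest test: with surface eta = h + B = 0 (so h = -B
    over water, where B < 0), the x-momentum equation reduces to
        d/dx(0.5*g*h**2) + g*h*dB/dx = 0.
    Discretize both terms with centered differences at interior cells.
    With balanced=False the source term uses the cell-center h, and the
    residual is nonzero over varying bathymetry; with balanced=True it
    uses the stencil average of h and the two terms cancel exactly --
    the idea behind a well-balanced method."""
    h = [-b for b in B]
    residuals = []
    for i in range(1, len(B) - 1):
        pressure_grad = 0.5 * g * (h[i+1]**2 - h[i-1]**2) / (2.0 * dx)
        h_src = 0.5 * (h[i+1] + h[i-1]) if balanced else h[i]
        source = g * h_src * (B[i+1] - B[i-1]) / (2.0 * dx)
        residuals.append(pressure_grad + source)
    return residuals
```

With bathymetry jumping by hundreds of meters per cell, as along the Chile transect below, the naive residual is large even though the true solution is a flat ocean at rest.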
So in this example we're going to put a computational gauge at that location to see what tsunami is recorded there and how it compares to the actual tsunami. But at the moment, what I want to show you is what it looks like along this transect if we take a cross section of the topography. Down on the bottom here, this is shown at roughly the resolution that we might be using for the tsunami out in the ocean, so it's a piecewise constant approximation to the sea floor. This is the coast of Chile on the right, and it goes down to a trench here that's more than six kilometers deep; the average depth of the ocean is about 4000 meters or so. You can see that from one grid cell to the next you can have jumps of hundreds of meters. On the other hand, if you look at the surface here, well, it looks like nothing is happening, but if you zoom in on the surface in the top plot, over the top 20 centimeters, what you see is the tsunami wave. For scale here, this is 500 kilometers, so the wavelength of these tsunamis is very long. The amplitude, on the other hand, is very small compared to the depth of the ocean. But as the wave approaches the shore, it can amplify due to shoaling and give much bigger run-up in some areas, especially in the near-shore region, the part of this wave that is actually hitting Chile. The part that's traveling across the ocean, though, typically has less than a meter of amplitude and a very long wavelength. It's very important to be able to model these small changes in the surface relative to the huge changes in the topography underneath, and we're solving, again, the shallow water equations with this piecewise constant bathymetry underneath. So this well balancing is a critical aspect of doing tsunami or storm surge simulation. Here's one more example of adaptive mesh refinement in the context of a tsunami model.
This was a test problem from a benchmarking workshop a few years ago on modeling currents in harbors. Modeling the currents generated by tsunamis can be even more sensitive and difficult than modeling the inundation depth. There was a workshop in Portland a couple of years ago, sponsored by the National Tsunami Hazard Mitigation Program, where a dozen different tsunami modeling groups came together and tested our codes on a number of different benchmark problems. This is the North Island of New Zealand you see here in the center of the tsunami region where we have the adaptive refinement, 15 and a half hours after the Tohoku earthquake off the coast of Japan in 2011. Zooming in further, what you see on the right here is the entrance to Tauranga Harbour in New Zealand, where there were some gauges that measured both the amplitude and the velocity of the waves as they came into the harbor. Here's a zoom on the entrance of that harbor. There was an ADCP, an acoustic Doppler current profiler, in the entrance here that measured current velocities; there's also a gauge at a point out here and another, a tide gauge, in the harbor where the surface elevation was measured. What you see on the right is the actual data from those locations after removing the tide; you always have to detide the data that comes out of these sorts of gauges, or DART buoy data. So at this gauge out here, the black is the actual observed surface elevation after detiding at that point, while the red is the GeoClaw simulation over the same time period. And again, remember, this is 15 and a half hours after the earthquake; we've propagated all the way across the ocean, and we're capturing waves that are less than half a meter high at this location. At the tide gauge inside the harbor we get similar sorts of results. The velocities of the current, as I said, are harder to capture accurately, but we still get at least the right magnitude.
These are the east-west and north-south components of velocity, and then the total speed. So that was the test problem we were looking at in this case. Going back to the adaptive mesh refinement: as you can see, we have a very fine grid here near the harbor, and a relatively fine grid around the North Island of New Zealand. But going back out to the ocean, we see that Australia is not very well resolved, North America is not resolved, and Japan doesn't even appear at this resolution. If we went back to the initial time, though, or say three hours after the earthquake, then our adaptive mesh refinement is centered around Japan, where the tsunami was generated, and at this resolution we're not resolving New Zealand at all, so we don't see anything in the zoomed pictures. Then, as the tsunami propagates, we use the adaptive mesh refinement to follow the waves; in this case, we're interested in the ones headed towards New Zealand, so we give the code some guidance to follow the portion of the waves headed in that direction. Nine hours after the earthquake, using adaptive mesh refinement, we've calculated the tsunami up to this point using only about three minutes of wall time on a quad-core MacBook laptop. Propagating across the ocean is cheap, since we don't need terribly fine grids in the ocean, and if we use the adaptive mesh refinement judiciously, we can solve the tsunami propagation problem quite quickly. Then things start to bog down once we start refining near the destination. Twelve hours after the earthquake, the waves are starting to approach New Zealand and we're starting to refine around the North Island; we still don't have much resolution of the harbor, and it's taken five minutes of elapsed time now. Then, as we start to refine the harbor, you see the computing time starts to go up dramatically: 19 minutes at this point, where we're now resolving the harbor fully.
And then three hours of computing time to get up to 15 hours of simulated time, and almost all of that work is in computing what's going on right around this harbor. But by using adaptive refinement, we only need to put those very fine grids right there. Typically, in doing inundation studies or modeling currents in harbors, we want to go down to one-third arc-second resolution, which is about 10 meters, and data at that resolution is available for many locations. That's very, very fine compared to the grids that we're using out in the ocean.

Okay, so let me turn now to saying a little bit about how to use GeoClaw. The first thing to do is to get it installed. I'm not going to go through that in detail, but we do have a recently revamped documentation page on installing Clawpack that we hope makes it a little easier to follow, and in many cases you can simply do it with a pip install. Basically this downloads a version of the code and sets the Python path and some other paths that are needed to find things in Clawpack. There's also a version of Clawpack called PyClaw, which is kind of a pure Python version, though it still uses Fortran Riemann solvers, the basic building blocks of finite volume methods for hyperbolic problems; it pre-compiles the Riemann solvers using f2py so they can be called from Python. But that's not used for GeoClaw directly; GeoClaw typically uses a much bigger set of Fortran routines. There are various other options for installing Clawpack besides pip. You can clone the repository and then do a pip install locally, or just use the PYTHONPATH environment variable. You can also use Docker; we have a Dockerfile that makes it relatively easy to use Clawpack by bringing down all of the dependencies it needs, GeoClaw as well as several different Python packages, so that you don't have to install anything else on your laptop if you have Docker installed.
There are also options like Binder for Jupyter notebooks, which we'll look at in a moment. Anyway, if you do install it on your own computer, what you'll find when you look in the main clawpack directory is that, the way it's organized on Git, there are actually several sub-repositories, since there are often different people working on developing things in different pieces. So we've split the whole Clawpack project into several interrelated sub-repositories that typically use each other. There's a pyclaw repository, and a classic repository, which is the Fortran code working on a single grid without adaptive mesh refinement. The riemann repository has Riemann solvers for many different hyperbolic problems: acoustics, advection, the Euler equations, and other compressible flows. clawutil has some utilities that are used all over the place. And then visclaw is a set of visualization tools written in Python for visualizing what comes out of PyClaw, AMRClaw, or GeoClaw. amrclaw is the adaptive mesh refinement version of Clawpack; it's more generic and can be used for, say, gas dynamics, while GeoClaw uses many of the routines directly from AMRClaw but has its own versions of many routines in order to properly handle geophysical flows. There's also a doc repository where we have all of our documentation, which we develop using Sphinx; that's pushed to another repository that displays it on the website. When you install basic Clawpack, you don't automatically get the apps repository, but that's another repository with a number of different applications that we've developed and that other people have contributed. In particular, there's a tsunami subdirectory, and a storm surge examples subdirectory, which is actually a sub-repository of apps.
There's a notebooks directory that has some additional Jupyter notebooks illustrating how to use some of the tools in Clawpack and GeoClaw. Within GeoClaw, if you go into that directory, what you find is documented at clawpack.org, and the GeoClaw documentation describes all of these things in more detail. But the main clawpack/geoclaw directory contains a source subdirectory, src, that has in particular the Fortran source code for the shallow water equations. It also has a Python subdirectory with a number of Python tools that are specific to working with data that goes into or comes out of GeoClaw. In particular, there's a module called topotools that is designed to help work with topography files, the DEM files that describe the topography, which go in as one of the inputs to GeoClaw. And there's dtopotools, where dtopo refers to moving topography, changes in topography: for example, when an earthquake happens, the sea floor is uplifted, and there's a standard way of generating the sea floor displacement from slip on a fault plane, called the Okada model, that basically solves an elastic half-space problem and computes the uplift of the surface due to a dislocation deep in the interior. That Okada model is implemented in the dtopotools module, so you can read in fault parameters for earthquakes in several different formats and calculate the sea floor deformation. Also within clawpack/geoclaw, and in the other repositories like amrclaw and classic, there's a tests directory that has some basic tests. Travis CI is a continuous integration system that gets automatically invoked each time we do a pull request on GitHub; it runs all of these tests and reports if any of them fail, which is one way to help make sure that we don't break things inadvertently as we're modifying the code.
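As a small illustration of the kind of file topotools deals with, here is a sketch (not the topotools API, which is what you should use in practice) of parsing the header of a topo_type 3 topography file, which uses an ASCII-raster layout: six header lines, each pairing a value with its name, followed by nrows lines of elevation values:

```python
def read_topo_header(lines):
    """Parse the 6-line header of a GeoClaw topo_type 3 (ASCII raster)
    topography file.  Each header line pairs a value with its name
    (ncols, nrows, xllcorner, yllcorner, cellsize, nodata_value); both
    'value name' and 'name value' orderings show up in files in the
    wild, so accept either."""
    header = {}
    for line in lines[:6]:
        first, second = line.split()
        try:
            float(first)            # 'value name' ordering
            value, name = first, second
        except ValueError:          # 'name value' ordering
            name, value = first, second
        name = name.lower()
        header[name] = int(value) if name in ('ncols', 'nrows') else float(value)
    return header
```

In real use, clawpack.geoclaw.topotools reads the whole file, including the elevation array, and can plot it or crop it; this sketch only shows the header layout that the module understands.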
These tests are really important, since we have these different repositories coupled together that may get changed independently, and so some of the tests are interdependent between repositories as well. Many of the Clawpack repositories, including GeoClaw, have an examples directory, which has within it several subdirectories with different examples to help with your study. The examples we're going to look at today are based on the Chile 2010 example that comes built into GeoClaw, although I was dismayed to find the other day that the maketopo.py script in there is currently broken, because a URL it uses is not getting properly resolved. maketopo.py downloads some topography data that's needed to run this example, so if you actually download GeoClaw, go into that directory, and try to run the example, it doesn't work at the moment. The version I'll show you, in the GitHub repository specific to this tutorial, has that fixed, and you can use that maketopo.py, or the one that's in the master branch on GitHub, if you want to run the example that's in the current 5.5.0 release, which hopefully we will be updating soon.

As I mentioned, in GeoClaw, mostly what we're running is Fortran code, and we have a Makefile system to help check dependencies, so that you don't have to recompile all of the routines of GeoClaw every time you change one routine, and so that, if you've only changed the specification of how you want to make the plots, you don't have to rerun the code in order to remake the plots. It's a little bit complicated, and there's some documentation that you might want to read through, but the basic idea that's important to understand is that our basic Makefile defines a set of targets that start with a dot.
When you do make .data, for example, it takes setrun.py, which I'll talk about in a moment, a Python script that sets up the input parameters for GeoClaw in a relatively user-friendly way, and runs that code to create a set of files, typically ending in .data, that are read in by the Fortran code. So the Fortran code actually reads a simpler format than what's specified in setrun.py, and doing make .data runs setrun.py to create the input for the Fortran. It also creates a file called .data, which is a hidden file on Linux, so if you type ls you won't see it; with ls -a you'll see the hidden files. Make uses the creation date of that file for checking dependencies, so if we later rerun the code, it knows whether it has to rerun setrun.py in order to refresh the data, or whether it's already up to date. There are similar targets: make .exe compiles the Fortran code, checking dependencies so that a routine isn't recompiled if it's already up to date, by comparing the dates of the .o files against the .f or .f90 files. And make .output runs the code, but it first checks the dependencies to make sure that the Fortran executable is up to date, and that the data is up to date, before running. Similarly, make .plots uses another Python script, setplot.py, that specifies what sort of plots you want, but before doing that it checks that the output is actually up to date. If you do make plots, without the dot, it just runs the plotting routines based on whatever output is currently there, without checking whether it's up to date with setrun.py, for example. So setrun.py, as I mentioned, and we'll look at one in a moment, is a Python script that allows you to set up the input data, and it has lots of comments in it.
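Here's a heavily abbreviated sketch of what a setrun.py looks like. The specific domain numbers are hypothetical placeholders, and a real setrun.py, like the one in the Chile 2010 example, sets many more parameters; this is a configuration fragment, not a complete runnable script:

```python
from clawpack.clawutil import data

def setrun(claw_pkg='geoclaw'):
    # Container for all run-time parameters; "make .data" calls this and
    # writes the result out as *.data files for the Fortran code to read.
    rundata = data.ClawRunData(claw_pkg, num_dim=2)

    clawdata = rundata.clawdata
    clawdata.lower[0] = -120.0    # hypothetical domain: longitude bounds
    clawdata.upper[0] = -60.0
    clawdata.lower[1] = -60.0     # latitude bounds
    clawdata.upper[1] = 0.0
    clawdata.num_cells[0] = 30    # coarsest (level 1) grid
    clawdata.num_cells[1] = 30

    clawdata.num_output_times = 12      # how many fort.tXXXX frames to write
    clawdata.tfinal = 6 * 3600.         # six hours of simulated time

    amrdata = rundata.amrdata
    amrdata.amr_levels_max = 3          # allow up to 3 AMR levels
    amrdata.refinement_ratios_x = [2, 6]
    amrdata.refinement_ratios_y = [2, 6]
    amrdata.refinement_ratios_t = [2, 6]

    return rundata
```

In practice you would also register topography files, dtopo files, gauges, and refinement regions on rundata before returning it; copying a working example and editing it, as suggested below, is the intended workflow.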
Typically, if you're starting a new example using GeoClaw, you don't want to write this from scratch; you want to find an example that does something similar to what you're trying to do, copy it over, and modify what you need to change. And again, when you execute setrun.py, or type make .data, it creates the set of files that are actually read in by the Fortran code. Then, when you run the code, it creates an output directory; typically we use _output as the name of that directory, but that's specified in the Makefile, so you can change it if you want to. Within that directory there's a set of files with names like fort.t0000, fort.t0001, and so on, one for each output time; how many output times you want is specified in setrun.py. For each output time, there's a very short file that just has some basic information about that particular output time: what the time is, how many AMR grids there are at that time, and a bit of other information. Then, if you're creating ASCII output, there's a set of fort.q files that go along with it that actually have the solution on all of the grids at the corresponding time. You also have the option, when you run the code, to specify binary output, which is a little bit smaller and faster to read. In that case the fort.q files only have the headers for each grid, which tell what size each grid is and what the grid resolution is, and the actual solution values on each grid end up in corresponding binary fort.b files. In setplot.py, you then specify whether you're reading in ASCII or binary, consistent with what you output. Typically, at each output time, if you're using adaptive mesh refinement, there may be many grids, there can be thousands of grids at each time, so there will be a header for each grid, followed by the data. And then we also have plotting tools.
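The short fort.tXXXX files just described are simple enough to parse by hand. Here is a sketch of doing so in plain Python (for real work, the reading tools in Clawpack handle all of this; the exact set of fields varies somewhat between versions, so this parser is generic):

```python
def read_fort_t(lines):
    """Parse a fort.tXXXX file: one short line per quantity, a value
    followed by its name (time, meqn = number of equations, ngrids =
    number of AMR grids at this time, naux, ndim, ...).  Returns a dict
    with 'time' as a float and the counts as ints."""
    info = {}
    for line in lines:
        parts = line.split()
        if len(parts) != 2:
            continue                 # skip blank or unexpected lines
        value, name = parts
        info[name] = float(value) if name == 'time' else int(value)
    return info
```

Knowing that ngrids can be in the thousands at a single output time explains why the companion fort.q file is structured as a sequence of per-grid headers, each followed by that grid's data.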
There's a file in each application or example directory called setplot.py that sets up how you want to do the plotting. We came up with our own little language for specifying the plots, because we're usually plotting adaptive mesh refinement data: we don't just have a single grid of data to plot, we typically have hundreds or thousands of grids that are overlapping each other and at many different grid resolutions. In some areas you want to show color maps appropriate for water, and in other areas for dry land, if you have water intruding on top of land. Trying to write the Python code to do all of that and loop over all the grids gets a little bit messy, so we've tried to simplify that for the user by putting all of that code into the Clawpack package and then providing a language for specifying what kind of plots you want, and it will hopefully produce the plots that you want. The basic idea in setplot is that for each frame, each output time where you have data, you can specify one or more figures that you want to appear at that particular time. For each figure, you can specify one or more axes that should appear on it, maybe just one axis, or maybe a plot with a two-by-two array of axes. Within each axes, you specify what plot items you want to appear, each typically a line plot, contour plot, pseudocolor plot, or something like that. And there are various attributes you can set for each of these items to specify what the contour levels are, what colors should be used, and so on. Again, there are a lot of examples already in the geoclaw and clawpack repositories, in the examples directories, and some documentation and examples in the main Clawpack documentation, so you can often find something that's very similar to what you want to do and copy it with adjustments.
So this is kind of the basic, simplest version of a setplot function. It takes plotdata, which is a particular Python object defined by the ClawPlotData class. That has a method called new_plotfigure that you would call as many times as you want to create different figures, and then within each figure, you would call new_plotaxes as many times as you want to create the different axes that should go in that figure. And then within each axis, the plot axes object has a method called new_plotitem that you can call multiple times to specify different types of items to be plotted on top of each other. So we'll briefly look at an example, but we won't have time to go through it in much detail. And then to actually produce the plots: if you type make .plots, it checks the dependencies, and once it has up-to-date output it uses setplot to produce a set of plots. Make .plots actually creates a set of web pages. It makes PNG files for each of the different figures that you specify at each of the output times, and then it assembles those into web pages that make it easy to kind of browse through them. It also automatically creates some animations that just loop through the PNG files, in JavaScript basically. So that's kind of the easiest way to make the plots. On the other hand, you can't really zoom in dynamically or explore the data. Often when I'm developing code, instead, I'll use the IPython shell and do things more interactively, and in that case we have another set of tools: the Iplotclaw interactive plotting tool, which allows you to take the same setplot.py but loop through it and display one frame at a time in a way that you can interact with the plot and zoom in on it.
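For interactive browsing, a session might look roughly like this (a sketch assuming a Clawpack installation, run from the application directory in IPython; the keyword names and commands are from memory of the Iplotclaw interface, so verify them against the VisClaw documentation):

```python
# Interactive frame-by-frame plotting with Iplotclaw (requires clawpack)
from clawpack.visclaw.Iplotclaw import Iplotclaw

ip = Iplotclaw(outdir='_output', setplot='setplot.py')
ip.plotloop()  # then type commands like n (next frame), p (previous),
               # or ? to list the available commands
```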
So if you start that up and then start the plotloop and type a question mark, it lists the different command names that are valid at that point, and there's a description of these in the Clawpack documentation. Okay, so that's the end of the slides, and what I'd like to do now is switch to actually running some code and showing some plots. If you want to follow along, the examples I'll be using are in this GitHub repository, which also has this tiny URL down here. Just this morning I also posted HTML versions of the two notebooks that we'll be looking at, so you can also look at them just as HTML files; you don't have to be running anything, but you can still follow through the description of what I'm doing, and some animations are embedded that should work even in the HTML versions. So I'm going to switch over: here's the GitHub repository. Use the tiny URL, or if you go to this repository and click this button, you'll get an HTML version of the notebooks. What's in this repository is a couple of versions of this Chile 2010 example that we were looking at. They walk you through how you change the number of levels of refinement, how you add additional regions where you say you do or don't want refinement, how you add gauges, and that kind of thing. I'll go quickly through some of this, since we don't have too much time left, but I hope these notebooks will help you walk through things. Another option for running the notebooks is to launch Binder; if you do that, it actually starts up a server in the cloud that has all of the dependencies installed and this repository cloned, so you can run the notebooks there. But what I'll be looking at here is just on my own laptop. This is the version on my laptop, so if you go to that repository, what you'll find is those two directories, the geoclaw examples and the notebooks.
Both of them have the same two examples in them, but the geoclaw examples version is more the version we would normally use, running things on the command line. The notebooks directory is kind of a tutorial version that walks you through it in the framework of a notebook. So maybe first I will just show you how we would run things normally out of a terminal. Here's a terminal window; I'm in the geoclaw examples Chile 2010b directory, the more interesting one. At the moment, if I do ls -a, what you see is what you should get if you clone the repository: setrun.py, setplot.py, a Makefile, maketopo.py, and a notebook, which can be useful if you're running things in Docker and then want to do the plotting on your laptop outside the container. There's also 32412_notide.txt, which is some de-tided DART gauge data. We could just do make .plots, but let's go through it step by step. Make .exe creates the executable; in this case most of the code is already compiled, so it's basically just linking the .o files together, and it makes this xgeoclaw executable. If I do make .data, it runs setrun.py and creates a bunch of data files, and it prints a bunch of things about what it's doing. It also, by the way, the way this is set up, makes some KML files, which are often useful. You can open these in Google Earth and it shows, for example, the region where the dtopo, the earthquake source, is specified. The etopo file is global topography at 10-minute resolution that's being used as our background topography. And for the gauges, there's also a gauges.kml. So once we've made the data, we now have a bunch of files that end in .data. We can look at what those look like, but they're basically just lists of parameters that are read in by the Fortran code.
Oh, also, if we do ls -a, we now see this hidden file .data has been created; its time stamp is used so that if I do make .data again, it will say the data is up to date. This other warning about target "all" is just because we redefined the target all in this particular Makefile. The Makefile specifies where the setrun file is, where the output should go, and various other things, and then down at the bottom it specifies some particular source code that's needed for this GeoClaw application. We've also defined this make topo target. If you do make topo, it runs maketopo.py, which actually downloads the topography that you need and creates the dtopo file using the Okada model. These files are stored in a scratch directory, so if they're already there, we don't have to generate them again; otherwise it just downloads some topography and creates the dtopo file. And then if you do make .output, it actually runs the code, which produces all the output files, and reports the timing information at the end. And then make .plots uses the information that's in setplot.py to create a bunch of frames of output, which eventually go into the _plots directory. So once that's run, if you open up the plot index page in _plots, it shows all of the frames that were created for the two different figures that were specified here, one showing the full domain and one zooming in on the coast of Peru. You can either look at all of the figures, or zoom in on a particular one by clicking on it and stepping through it, or go back to the index page.
If you go to the JS movies, the JavaScript version of the movies, it just loops through all of the different frames and shows you the animation. So that's how we typically use GeoClaw, along with editing setrun.py in whatever editor you like. If you go in there, you'll see that there are many different things specified: the domain, the number of grid cells, lots of other parameters. But rather than trying to walk through that file, there are these two notebooks; that was the Binder version I was testing out this morning. The Chile 2010a notebook that's in the repository, in the notebooks directory, walks you through an example like this. There are some instructions on what to do if you are working at the command line, but we also have some tools that allow you to do the same kind of thing within the notebook, which we use mostly for making tutorials like this, but they could be handy for making other sorts of files that illustrate how you created a GeoClaw example. So what we did on the command line was just make topo, compile the code with make .exe, run it, and create the plots with make .plots. In this first version, Chile 2010a, it's set up to use only a single coarse grid with no adaptive refinement at all. It runs very quickly, but it produces a very coarse resolution, as we see here. So there are then some instructions in here about what you would change in the setrun.py file. You could either make it finer by increasing the resolution of this coarsest grid, or, what I suggest here instead, change the number of AMR levels from one to two, so there are two levels to start with, and then we'll add more below. And already in this file, there are some lines that look like this that set the refinement factor in x and y, and also in t. So we refine by a factor of two going from level one to level two in both x and y.
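The lines in question look roughly like this in setrun.py (a config fragment; the attribute names follow Clawpack's amrdata object, and the values are the ones discussed here):

```python
# AMR settings in setrun.py: two levels, refining by 2 from level 1 to 2
amrdata = rundata.amrdata
amrdata.amr_levels_max = 2        # was 1 in the Chile 2010a starting point
amrdata.refinement_ratios_x = [2]  # one entry per level transition
amrdata.refinement_ratios_y = [2]
amrdata.refinement_ratios_t = [2]
```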
And actually in the t direction, there's another parameter that tells it to dynamically choose the refinement ratio in time. Typically for these explicit methods, if you're refining by some factor in each spatial dimension, you also have to refine by the same factor in time in order to maintain the stability of the explicit method. But because the wave speed depends on the depth of the water, if you have very fine grids that are only near shore, then you may not have to refine as much in time as in space, because the wave speed may be much smaller on those fine grids than it is on the coarser grids out in the ocean. And so we have another parameter, not mentioned here, that can be set to tell GeoClaw to adaptively choose the time step on each level based on the wave speed. So, adding the second level: if you redo make .plots and then look at the plots that get produced again, you should see something like this. It's added a second level, refining by a factor of two. But it doesn't look like it's done a very good job. You see that it's not really capturing all of the wave as it propagates out. That's because there's a refinement criterion being used to determine which cells need to be refined to the next level, and I've purposely chosen a value for that tolerance that's too large here, so that much of the wave is getting lost as it propagates out. So the next experiment would be to go in and change this wave tolerance parameter from 0.1 to 0.02, for example, and then rerun the code. In that case, what comes out, again using only two levels, is now refining pretty much everywhere that there are waves. It flags points for refinement and then clusters them into rectangles; here that's one big rectangle that covers most of the domain, since these are such coarse grids in this example. And then the rest of this notebook walks through adding a third level.
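The depth dependence of the wave speed is easy to check with the long-wave speed formula sqrt(g*h); the depths below are illustrative numbers, not from the talk:

```python
import math

def wave_speed(depth, g=9.81):
    """Long-wave (shallow water) propagation speed sqrt(g*h) in m/s."""
    return math.sqrt(g * depth)

# Illustrative: deep ocean (~4000 m) vs. a fine nearshore grid (~40 m)
deep = wave_speed(4000.0)    # roughly 198 m/s
shallow = wave_speed(40.0)   # roughly 20 m/s, a factor of 10 slower

# An explicit method's stable time step scales like dx / speed, so a
# nearshore grid where the wave is 10x slower can take a relatively
# larger dt than pure spatial refinement would suggest -- which is why
# letting GeoClaw choose the time refinement adaptively pays off.
```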
So if we have three levels, then we have to tell it how to refine going from level one to two and also from level two to three. So we need to add another component to these refinement ratios. Here I've again set a factor of two, but it could be a factor of four or eight, or even an odd factor; in some applications we use factors of 10 or 20 going from one level to the next. If we're modeling a tsunami across the ocean and then zooming in on a coastal location, we may have refinement factors of more than 10,000 going from the coarsest level to the finest level, over six levels of refinement. In this case, I've added a third level with additional refinement by two. And in the setplot.py, you can specify that on this finest level we don't want to plot the grid lines; otherwise it would just be solid black in there. So this lets us see the waves. I'll also slow the animation down a bit here to see what's happening. So now we see that the level-three grid is really following the waves, refining everywhere the amplitude is above the wave tolerance of 0.02. It's just looking at the amplitude of the surface of the ocean relative to sea level and flagging wherever it's above two centimeters. Now you might say, well, that's great up here, but we really didn't want to refine this wave that's heading towards Antarctica and off into the Atlantic. So there are also capabilities in GeoClaw to specify that in certain regions we want to only allow two levels, or to force it to always have three levels regardless of the tolerance, and combinations of those. You can specify what we call regions, refinement regions, which are rectangular space-time regions. You can say that over one time period we need to refine this rectangle, and over a later time period a different rectangle, if we want to follow the waves as they propagate across the ocean.
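In setrun.py, those refinement regions are specified as lists of eight numbers: minimum level, maximum level, a time window, and a rectangle in longitude and latitude. The coordinates below are made-up placeholders, not values from the talk:

```python
# Rectangular space-time refinement regions (config fragment for setrun.py).
# Each entry: [minlevel, maxlevel, t1, t2, x1, x2, y1, y2]
regions = rundata.regiondata.regions

# Over the whole run, allow at most 2 levels anywhere in this large box:
regions.append([1, 2, 0., 1e9, -120., -60., -60., 0.])

# After one hour, force 3 levels in a smaller box that follows the wave
# northward up the coast (illustrative coordinates):
regions.append([3, 3, 3600., 1e9, -90., -70., -20., 0.])
```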
And so this next example shows how you might do that; in this case, it's refining the waves that are propagating northward up the coast. Okay, well, we're about out of time. So I think I won't go through the other one, Chile 2010b, but if you take a look at that one, it walks you through adding some gauges to the simulation and then comparing the results at a simulated gauge with some of the actual gauge data, DART buoy data in this case, and also how you can, in Python, add an additional gauge someplace, read in the gauge data, and plot not only the surface elevation but also the velocities. So these examples should work in this repository. I was having a little trouble getting it all working on Binder the other day with the Docker files and all, but I think it's working now. If something isn't working for you, let me know and I'll try to fix it. I hope these will be a useful introduction to how to get started with GeoClaw. If you run into problems, we do have a mailing list, the claw-users mailing list on Google Groups, which you can find from the Clawpack documentation. We also have issues enabled on the Clawpack repositories, where you can raise an issue if you've found a bug or run into problems. So thank you very much for joining this webinar, and I hope this has been useful.